Code and results for our corresponding research paper, which was published and presented at the 2024 IEEE/ACM Symposium on Edge Computing (SEC).
To view our experimental results, check out the publicly available webpage of our framework for Sustainable and Trustworthy Reporting, which offers the EdgeAcc results alongside several other evaluation databases.
Note that we continue to advance our software. It is work in progress and subject to change, so you might encounter delays, downtime, and slight differences from our paper.
If you want to run the exploration tool locally, make sure you have the following Python packages installed:
numpy, pandas, pint, scipy, dash, dash_bootstrap_components, Pillow, reportlab, fitz, frontend, plotly
Then start our app via main.py and open the webpage. Besides the interactive plots, you can also inspect the PDF files in our paper_results directory.
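The setup boils down to two commands; a minimal sketch (note that the `fitz` module imported in Python code is typically provided by the PyMuPDF package):

```shell
# Install the required packages, then launch the Dash app.
pip install numpy pandas pint scipy dash dash_bootstrap_components Pillow reportlab PyMuPDF plotly
python main.py
```

By default, Dash apps serve on http://127.0.0.1:8050/.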
If you appreciate our work and code, please cite our paper as given by IEEE:
A. Van Der Staay, R. Fischer and S. Buschjäger, "Stress-Testing USB Accelerators for Efficient Edge Inference", 2024 IEEE/ACM Symposium on Edge Computing (SEC), Rome, Italy, 2024, pp. 1-14, doi: 10.1109/SEC62691.2024.00015.
or use the BibTeX entry below:
@INPROCEEDINGS{10818191,
author={Van Der Staay, Alexander and Fischer, Raphael and Buschjäger, Sebastian},
booktitle={2024 IEEE/ACM Symposium on Edge Computing (SEC)},
title={Stress-Testing USB Accelerators for Efficient Edge Inference},
year={2024},
volume={},
number={},
pages={1-14},
doi={10.1109/SEC62691.2024.00015}
}
Models compared for Imagenet Classification:
'DenseNet121' 'DenseNet169' 'DenseNet201' 'EfficientNetB0' 'EfficientNetB1' 'EfficientNetB2' 'EfficientNetB3' 'EfficientNetB4' 'EfficientNetB5' 'EfficientNetB6' 'EfficientNetB7' 'EfficientNetV2B0' 'EfficientNetV2B1' 'EfficientNetV2B2' 'EfficientNetV2B3' 'EfficientNetV2L' 'EfficientNetV2M' 'EfficientNetV2S' 'InceptionResNetV2' 'InceptionV3' 'MobileNet' 'MobileNetV2' 'NASNetLarge' 'NASNetMobile' 'ResNet101' 'ResNet152' 'ResNet50' 'ResNet101V2' 'ResNet152V2' 'ResNet50V2' 'VGG16' 'VGG19' 'Xception' 'MobileNetV3Large' 'MobileNetV3Small'
Models compared for Imagenet Segmentation:
'yolov8s', 'yolov8n', 'yolov8m', 'yolov8x', 'yolov8l'
- `batchsize_comparison_results` contains the tables that we used to compare the performance of different batch sizes on our three host systems.
- `classification_database` and `segmentation_database` contain the results and configurations of our experiments. These two directories are accessed by `main.py` to create the paper results.
- `creator_scripts` contains the Python scripts used to create the datasets for the experiments, as well as the model files in the formats required by the TPU and NCS accelerators.
- `helper_scripts` contains miscellaneous scripts that are used by other scripts or that can be used to clean up experimental result files. Model metadata that we use for analysis is collected with the corresponding scripts. Our batch size comparison, which determines the batch size used in our CPU experiments, is included here.
- `paper_results` contains our result graphs as PDF files.
- `result_databases` contains the pickled pandas dataframes created by `load_experiment_logs.py`, which are further merged with the `merge_all_databases.py` script.
- `strep` contains scripts used to create our interactive model results.
- Make sure to include a directory that holds the model input data as well as all model files, created using the scripts from `creator_scripts`.
- `main.py` can be executed to create the interactive model results based on the content of `classification_database` and `segmentation_database`. It uses the `paper_results.py` script, where the graph specifications are coded.
- `load_experiment_logs.py` merges an experiment's monitoring directory into a pandas dataframe and then pickles it for further use.
- `merge_all_databases.py` merges all of the chosen dataframes from `result_databases` into the `classification_database` and `segmentation_database` directories for the final results.
- `pycoral_classification.py`, `pycoral_segmentation.py`, and `pycoral_classification_raspi.py` are the scripts that run the actual experiments. They are run by the `run_all` scripts.
- The `run_all` scripts define one run of an experiment depending on the environment used. We ran these experiments multiple times for our paper results.
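To illustrate the pickling step, here is a minimal sketch (the file and column names are made up for illustration, not the actual log schema):

```python
import pandas as pd

# Stand-in for a monitoring-directory merge: load_experiment_logs.py
# produces a DataFrame along these lines and pickles it into result_databases.
df = pd.DataFrame({"model": ["MobileNet", "ResNet50"], "latency_ms": [4.2, 11.0]})
df.to_pickle("example_run.pkl")

# Any pickled database can later be inspected directly with pandas.
restored = pd.read_pickle("example_run.pkl")
print(restored.shape)  # → (2, 2)
```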
- Follow guide: https://coral.ai/docs/accelerator/get-started/
- Setup EdgeTPU Compiler for TPU Conversion: https://coral.ai/docs/edgetpu/compiler/
- Setup pycoral: https://coral.ai/docs/edgetpu/tflite-python/#update-existing-tf-lite-code-for-the-edge-tpu
- Follow guide: https://docs.openvino.ai/2022.3/openvino_docs_install_guides_installing_openvino_from_archive_linux.html
- Step 2 (environment config) needs to be performed in each session!
- The udev rules must be adjusted and your user must be added to the `users` group: https://docs.openvino.ai/2022.3/openvino_docs_install_guides_configurations_for_ncs2.html#ncs-guide
- pip install openvino
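For the per-session environment configuration, the archive installation provides a `setupvars.sh` script; a sketch assuming the default unpack location from the OpenVINO guide (adjust the path to your installation):

```shell
# Must be sourced in every new shell session before running NCS experiments.
source /opt/intel/openvino_2022/setupvars.sh
```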
- Make sure you created your model and data directory as used in our scripts; we use the directory name `mnt_data`.
- In `mnt_data`, we have the subfolders `unpacked` and `staay`. In `unpacked`, we downloaded the 1% ImageNet database (https://www.image-net.org/).
- In `staay`, we have the `imagenet_data` directory where our preprocessed data is saved, as well as the `models` directory where the models are saved in the corresponding subdirectories `edgetpu_models`, `openVINO`, `saved_models`, and `tflite_models`. These directories get filled by the scripts in the `creator_scripts` directory.
- Also in `staay`, the files `coco.yaml` and `coco128-seg.yaml` should be included for segmentation with YOLO models (follow: https://docs.ultralytics.com/datasets/detect/coco/#applications).
- If you choose to change the naming of your files, make sure to adjust all occurrences of `mnt_data` in this repository's scripts.
- Using the `create_imagenet_dataset.py` script, create the preprocessed ImageNet data for the models that you want to compare. You may choose any of the TensorFlow 2 models listed in this readme. The script saves the model-specific preprocessed datasets as numpy arrays in the `imagenet_data` directory.
- Use the `export_tflite_models.py` script to export the models into TensorFlow Lite models. These can then be converted into TPU-compatible models using the `export_edgetpu_models.py` script, which uses the EdgeTPU Compiler.
- Export the saved_models into NCS-compatible models using `export_NCS_models.py`. This uses the model optimizer `mo` provided by openVINO 2022.3. Make sure to save the created models in the correct directory (`models/openVINO`).
- Execute the `create_YOLO_models.py` script to create YOLO models as saved_model, edgeTPU-compatible model, and openVINO model. Save accordingly.
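The conversion steps above boil down to tool invocations along these lines (paths and model names are placeholders, not the exact commands from our scripts):

```shell
# Compile a TensorFlow Lite model for the EdgeTPU
# (the step performed by export_edgetpu_models.py).
edgetpu_compiler --out_dir edgetpu_models/ tflite_models/MobileNet.tflite

# Convert a saved_model for the NCS with the OpenVINO 2022.3 model optimizer
# (the step performed by export_NCS_models.py).
mo --saved_model_dir saved_models/MobileNet --output_dir models/openVINO/MobileNet
```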
- Now you can execute single runs with the `pycoral_` scripts. Adjust all flags to your liking.
- You can also test all of your created models by using the `run_all` scripts, adjusting the flags to your liking.
- Either way, a directory with the experiment's logging (the monitoring directory) is created.
- Adjust the path at the end of `load_experiment_logs.py` to your monitoring directory and execute the script. This will merge the directory into one dataframe that is saved in `result_databases`.
- In `merge_all_databases.py`, adjust all of the databases you want to include from `result_databases`. This will create the final databases in the `classification_database` and `segmentation_database` directories.
- Now `main.py` can be run with the newly included experiment results.
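Conceptually, the merge step concatenates the selected per-run dataframes; a sketch with made-up data (the real script additionally splits results into the classification and segmentation databases):

```python
import pandas as pd

# Two hypothetical per-experiment result dataframes from result_databases.
run_a = pd.DataFrame({"model": ["MobileNet"], "accuracy": [0.70]})
run_b = pd.DataFrame({"model": ["ResNet50"], "accuracy": [0.75]})

# Merge them into one database for the final analysis.
merged = pd.concat([run_a, run_b], ignore_index=True)
print(len(merged))  # → 2
```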
- Disconnect and reconnect the EdgeTPU.
- Try a different USB cable!
Copyright (c) 2025 Raphael Fischer, Alexander van der Staay, Sebastian Buschjäger
