# Design Principles
The benchmarking platform is designed around the following principles:

The platform supports any ML framework and any backend without invasive changes to its core. All changes can be restricted to files under the frameworks or platforms directories, following a flexible Python API. Framework and backend developers can contribute to the platform in whatever way they deem most fit; this is far more flexible than providing a rigid external API that frameworks and backends must bind to.
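As a rough illustration of this extension model, a new framework could be added as a single self-contained file. The class and method names below are a hypothetical sketch of the kind of Python API the platform exposes, not its actual interface:

```python
# Hypothetical sketch; class and method names are illustrative, not
# FAI-PEP's actual interface.

class FrameworkBase:
    """Minimal interface a framework adapter is assumed to implement."""

    def getName(self):
        raise NotImplementedError

    def runBenchmark(self, cmd, platform):
        raise NotImplementedError


class MyFramework(FrameworkBase):
    """Adapter for a new ML framework; lives in one file, no core changes."""

    def getName(self):
        return "my_framework"

    def runBenchmark(self, cmd, platform):
        # Delegate execution to the target platform object (assumed API) and
        # return the raw output for the metric-collection stage to parse.
        return platform.runCommand(cmd)
```

Because the adapter is looked up by name and confined to its own file, supporting a new framework never requires touching the core of the platform.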
From the user's perspective, explicitness takes priority over convenience, even at the cost of verbosity. The goal is to explicitly specify everything that might affect the benchmark result, so users never need to guess or hunt for an answer. For example, the test JSON file is specified explicitly on the command line, and within the JSON file every attribute that affects the benchmark result is specified explicitly. There are no defaults that can silently affect the benchmark result.
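A benchmark file in this spirit spells out every attribute that influences the result. The snippet below is illustrative; the field names follow the general shape of FAI-PEP benchmark specifications but should not be read as the authoritative schema:

```json
{
  "model": {
    "name": "squeezenet",
    "format": "caffe2",
    "files": {
      "graph": {
        "filename": "predict_net.pb",
        "location": "https://example.com/squeezenet/predict_net.pb",
        "md5": "replace-with-model-md5"
      }
    }
  },
  "tests": [
    {
      "identifier": "squeezenet_delay",
      "metric": "delay",
      "warmup": 10,
      "iter": 50,
      "inputs": {
        "data": {
          "shapes": [[1, 3, 224, 224]],
          "type": "float"
        }
      }
    }
  ]
}
```

Note that the warmup count, iteration count, and input shapes and types are all stated explicitly rather than left to defaults.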
Code performing different functionalities is clearly separated, and composability is considered more important than efficiency.
Without invasive changes, the platform can be extended to support (see the sketch after this list):
- new metrics
- new pre-/post-processing routines
- new blocks in the flow
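For instance, a new pre-processing routine could slot into the flow as an independent block. Everything below (the registry, the decorator, the `process` signature) is a hypothetical sketch of how such an extension point might look in Python, not the platform's actual API:

```python
# Hypothetical extension sketch; the registry and decorator are illustrative,
# not the platform's actual API.

PROCESSOR_REGISTRY = {}

def register_processor(name):
    """Register a pre-/post-processing block under an explicit lookup name."""
    def wrap(cls):
        PROCESSOR_REGISTRY[name] = cls
        return cls
    return wrap


@register_processor("normalize_image")
class NormalizeImage:
    """Example pre-processing block: scale raw pixel values into [0, 1]."""

    def process(self, data):
        return [x / 255.0 for x in data]
```

Because each block is selected by an explicit name, the benchmark JSON can compose it into the flow without any change to the core code.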
It is easy to follow what happens in the benchmarking process directly from the command line (an example invocation appears after this list):
- From `--platform`, one can find the target device as well as the exact build process.
- From `--benchmark_file`, one can find the entire benchmarking process: the model, the pre-/post-processing, the runtime conditions, etc.
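A typical invocation makes both of these visible in one place. The script and model paths below are illustrative of the repository layout, not a prescribed command:

```bash
python benchmarking/run_bench.py \
  --platform android \
  --benchmark_file specifications/models/caffe2/squeezenet/squeezenet.json
```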