
Design Principles

The benchmarking platform is designed under the following principles:

Generalizability

The benchmarking platform is designed to support any ML framework and any backend without invasive changes to the core of the platform. All changes can be restricted to files under the framework or platform directories, following a flexible Python API.

Framework and backend developers can contribute to the benchmarking platform in the ways they deem most fit. This is much more flexible than providing a rigid external API that the frameworks and backends need to bind to.
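As a rough illustration of what such a contribution could look like, the sketch below shows a hypothetical framework plugin written in Python. The class and method names (FrameworkBase-style base class, getName, runBenchmark) and the metric format are assumptions chosen for illustration, not the platform's actual API.

```python
# Hypothetical sketch of a framework plugin placed under a frameworks/
# directory. All names here are illustrative assumptions, not the
# platform's real API.


class FrameworkBase(object):
    """Minimal interface a framework plugin might expose."""

    def getName(self):
        raise NotImplementedError

    def runBenchmark(self, model, test, platform):
        """Run one test of one model on one platform and return metrics."""
        raise NotImplementedError


class MyNewFramework(FrameworkBase):
    def getName(self):
        return "my_new_framework"

    def runBenchmark(self, model, test, platform):
        # Build and launch the framework-specific binary on the target
        # platform, collect its raw output, and convert it into a common
        # metric format (e.g., metric name -> list of measured values).
        return {"latency_ms": [12.3, 12.1, 12.4]}
```

Because the plugin lives entirely in its own files, adding a new framework or backend does not require touching the core of the platform.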

Explicitness

From the user's perspective, explicitness takes higher priority than convenience, even at the cost of being more verbose.

The goal is to explicitly specify everything that might affect the benchmark results, so users do not need to guess or search for the answer.

For example, the benchmark JSON file is explicitly specified on the command line, and within the JSON file, all attributes that affect the benchmark results are explicitly specified.

There are no defaults that may affect the benchmark results.
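As a hedged illustration of this principle, a benchmark specification might resemble the snippet below. The field names and values are assumptions chosen to show the spirit of the idea, not the platform's actual schema.

```json
{
  "model": {
    "name": "squeezenet",
    "format": "caffe2",
    "files": { "init": "init_net.pb", "predict": "predict_net.pb" }
  },
  "tests": [
    {
      "identifier": "latency_224x224",
      "metric": "delay",
      "iter": 50,
      "warmup": 5,
      "inputs": { "data": { "shapes": [[1, 3, 224, 224]] } }
    }
  ]
}
```

Every attribute that influences the result (iteration count, warmup runs, input shapes, model files) appears explicitly in the file, so two runs driven by the same file are directly comparable.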

Composability

The code performing different functionalities is clearly separated.

Composability is considered more important than efficiency.

Extensibility

Without invasive changes, the platform can be extended to support the following (a sketch of a hypothetical extension follows the list):

  • new metrics
  • new pre-/post-processing routines
  • new blocks in the flow
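The sketch below shows what one such extension, a post-processing routine that turns raw per-iteration output into summary metrics, might look like. The function name, file format, and metric names are illustrative assumptions, not the platform's actual extension API.

```python
# Hypothetical post-processing extension. It assumes the framework wrote
# one latency value (in milliseconds) per line to a plain-text file; the
# hook name and output keys are illustrative only.


def aggregate_latency(raw_output_path):
    """Convert raw per-iteration latencies into summary metrics."""
    with open(raw_output_path) as f:
        latencies = sorted(float(line) for line in f if line.strip())
    n = len(latencies)
    return {
        "latency_ms_p50": latencies[n // 2],
        "latency_ms_p90": latencies[int(n * 0.9)],
        "latency_ms_mean": sum(latencies) / n,
    }
```

A new metric or processing step of this kind can live alongside the existing ones and be selected from the benchmark specification, without modifying the core flow.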

Centralization

It is easy to start from the command line and trace everything that happens in the benchmarking process (an example invocation is sketched after this list):

  • From --platform, one can find the target device as well as the exact build process.
  • From --benchmark_file, one can find the entire benchmarking process: the model, the pre-/post-processing steps, the runtime conditions, etc.
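For instance, an invocation could look like the line below. The entry-point script name and the argument values are assumptions for illustration; --platform and --benchmark_file are the flags described above.

```bash
# Hypothetical invocation; the script name and argument values are illustrative.
python run_bench.py \
  --platform android \
  --benchmark_file specs/squeezenet_latency.json
```

Everything needed to reproduce the run is reachable from these two arguments: the platform identifies the device and build, and the benchmark file centralizes the rest of the specification.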