- PyTorch training, evaluation, inference and benchmarking code with SOTA practices (supports wandb.ai logging)
- ONNX conversion, calibration and inference
- TensorRT conversion and inference
- Example notebook
- C++ Inference (Future release)
- FastAPI (`fastapi` branch) [+ Heroku deployment]
- Triton Inference Server (`triton` branch)
In this project, for a given image classification task, you can run a large number of experiments just by changing the `params.json` file.
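As an illustration of that workflow, tweaking a config programmatically might look like the sketch below. Only `model_name` and `finetune_layer` are key names this README mentions; the other keys and the `experiments/<model>/params.json` folder layout are assumptions.

```python
import json
from pathlib import Path

# Load an existing experiment config; the experiments/<model>/params.json
# layout follows this README, the other keys are hypothetical placeholders.
params = json.loads(Path("experiments/resnet18/params.json").read_text())

params["model_name"] = "efficientnet_b0"  # swap the timm backbone
params["learning_rate"] = 3e-4            # hypothetical hyperparameter key

out_dir = Path("experiments/efficientnet_b0")
out_dir.mkdir(parents=True, exist_ok=True)
(out_dir / "params.json").write_text(json.dumps(params, indent=2))
```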
The project supports pretraining and finetuning of `timm` models. The training code leaves room for plenty of customization, for example adding more optimizers in the `_get_optimizers` function or more schedulers in `_get_scheduler`.
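As a sketch of that kind of extension (the `_get_optimizers` name comes from this README, but the signature and the `optimizer`/`learning_rate` config keys are assumptions), adding a new optimizer is just another branch:

```python
import torch

def _get_optimizers(model, params):
    """Pick an optimizer from the `optimizer` key of params.json.

    A minimal sketch: the actual function in this repo may have a
    different signature and more options.
    """
    name = params.get("optimizer", "adam").lower()
    lr = params.get("learning_rate", 1e-3)

    if name == "adam":
        return torch.optim.Adam(model.parameters(), lr=lr)
    if name == "sgd":
        return torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    # Extending support is a matter of adding another branch:
    if name == "adamw":
        return torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=1e-2)
    raise ValueError(f"Unknown optimizer: {name}")
```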
It also contains options to convert the model to ONNX and TensorRT, with reference inference scripts for each model format.
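For reference, a minimal ONNX export plus a sanity check might look like the following. The model name, input shape, and file names are illustrative; the repo's own conversion script is authoritative.

```python
import timm
import torch
import onnxruntime as ort

# Illustrative model and input shape; not necessarily what this repo uses.
model = timm.create_model("resnet18", pretrained=True, num_classes=10).eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},  # variable batch size
    opset_version=13,
)

# Quick sanity check with ONNX Runtime.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
logits = sess.run(None, {"input": dummy.numpy()})[0]
print(logits.shape)  # (1, 10)
```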
- How to run with a custom dataset?
  - Replace `datasets_to_df` in `utils.py` with a function that returns a dataframe with two columns: image file paths in a column named `file` and labels in a column named `label` (a sketch follows this list).
  - Check that `prepare_df` in `main.py` is compatible.
- Create many different models and experiments just by changing `model_name` in `params.json` (creating an appropriate folder for each model under the `experiments` folder), the `finetune_layer` parameter, or any other hyperparameter in the JSON file.
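Here is a minimal sketch of a replacement `datasets_to_df`. The class-per-folder layout is an assumption; only the `file` and `label` column names come from this README.

```python
from pathlib import Path

import pandas as pd

def datasets_to_df(data_dir: str) -> pd.DataFrame:
    """Return the dataframe main.py expects: one row per image, with the
    image path in a column named `file` and its class in a column named
    `label`. Assumes a <data_dir>/<class_name>/<image> layout; adapt the
    walk to however your dataset is stored."""
    rows = [
        {"file": str(p), "label": p.parent.name}
        for p in Path(data_dir).glob("*/*")
        if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".bmp"}
    ]
    return pd.DataFrame(rows)
```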
The `Notebooks` folder contains a sample notebook that runs the `cifar10` dataset end to end.
Docker container:
```bash
sudo docker build -t e2e .       # build the image
sudo chmod +x run_container.sh   # make the launch script executable
./run_container.sh               # start the container
python3 main_cifar10.py          # run the cifar10 pipeline end to end
```
To run TensorRT inference, build its corresponding Docker image and set `do_trt_inference` to `True` in `main_cifar10.py`.
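In other words, the switch is just a boolean in the script (illustrative placement; only the flag name comes from this README):

```python
# In main_cifar10.py -- illustrative placement; only the flag name is from this README.
do_trt_inference = True  # when True, the script runs the TensorRT inference path
```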