Add yolov6 to object detection #1192

Open · wants to merge 1 commit into base: master

15 changes: 11 additions & 4 deletions python/app/fedcv/.gitignore
@@ -1,5 +1,12 @@
__pycache__
wandb
runs
*.cache
*.zip
__MACOSX/
__pycache__/
*.tmp
mpi_host_file
*.jpg
*.png
mlops
config/exp*
.idea
.DS_Store
devops
153 changes: 12 additions & 141 deletions python/app/fedcv/README.md
@@ -1,28 +1,17 @@
# FedCV: A Federated Learning Framework for Diverse Computer Vision Tasks

## Introduction

![](fedcv_arch.jpg)

Federated Learning (FL) is a distributed learning paradigm that can learn a global or personalized model from decentralized datasets on edge devices. However, in the computer vision domain, model performance in FL is far behind centralized training due to the lack of exploration in diverse tasks with a unified FL framework. FL has rarely been demonstrated effectively in advanced computer vision tasks such as object detection and image segmentation. To bridge the gap and facilitate the development of FL for computer vision tasks, in this work, we propose a federated learning library and benchmarking framework, named FedCV, to evaluate FL on the three most representative computer vision tasks: image classification, image segmentation, and object detection. We provide non-I.I.D. benchmarking datasets, models, and various reference FL algorithms. Our benchmark study suggests that there are multiple challenges that deserve future exploration: centralized training tricks may not be directly applied to FL; the non-I.I.D. dataset actually downgrades the model accuracy to some degree in different tasks; improving the system efficiency of federated training is challenging given the huge number of parameters and the per-client memory cost. We believe that such a library and benchmark, along with comparable evaluation settings, is necessary to make meaningful progress in FL on computer vision tasks.
# FedCV - Object Detection

## Prerequisites & Installation

```bash
pip install fedml --upgrade
```

Some tasks require additional dependencies to be installed.
## Prepare YOLOv6

```bash
git clone https://github.com/FedML-AI/FedML
cd FedML/python/app/fedcv/[image_classification, image_segmentation, object_detection]
Download the YOLOv6-S6 checkpoint from `https://github.com/meituan/YOLOv6` and add the checkpoint path to `./YOLOv6/configs/yolov6s6_finetune.py`.

cd config/
bash bootstrap.sh

cd ..
```
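The YOLOv6 finetune configs are plain Python files, so adding the checkpoint path means pointing the config's pretrained-weight entry at the downloaded file. Below is a minimal sketch assuming the upstream layout with a `pretrained` field inside `model = dict(...)`; the `./weights/yolov6s6.pt` path is a placeholder for wherever you saved the checkpoint.

```python
# Excerpt of ./YOLOv6/configs/yolov6s6_finetune.py -- illustrative values only.
# Only the pretrained entry is shown; keep the rest of the upstream config as-is.
model = dict(
    # Placeholder path: point this at the downloaded YOLOv6-S6 checkpoint.
    pretrained="./weights/yolov6s6.pt",
)
```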
## Prepare VOC dataset
Download the VOC dataset from `https://yolov6-docs.readthedocs.io/zh_CN/latest/%E5%85%A8%E6%B5%81%E7%A8%8B%E4%BD%BF%E7%94%A8%E6%8C%87%E5%8D%97/%E8%AE%AD%E7%BB%83%E8%AF%84%E4%BC%B0%E6%8E%A8%E7%90%86%E6%B5%81%E7%A8%8B.html#id2` and run `python ./YOLOv6/yolov6/data/voc2yolo.py --voc_path your_path/to/VOCdevkit`. Then, fill in the dataset path in `./YOLOv6/data/voc.yaml`.
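For orientation, `voc2yolo.py` rewrites VOC's XML annotations into YOLO-style text labels, one normalized `class x_center y_center width height` line per box. The sketch below only illustrates that per-file conversion, with a hypothetical class list and helper name; it is an explanation of the label format, not a replacement for the upstream script.

```python
import xml.etree.ElementTree as ET

# Hypothetical subset of the VOC class names; the upstream script covers all 20.
VOC_CLASSES = ["aeroplane", "bicycle", "bird", "boat", "bottle"]


def voc_xml_to_yolo_lines(xml_path):
    """Convert one VOC annotation file into YOLO label lines (normalized coords)."""
    root = ET.parse(xml_path).getroot()
    size = root.find("size")
    img_w, img_h = float(size.find("width").text), float(size.find("height").text)

    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in VOC_CLASSES:
            continue
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO format: class id plus box center/size, all normalized to [0, 1].
        x_c = (xmin + xmax) / 2.0 / img_w
        y_c = (ymin + ymax) / 2.0 / img_h
        w, h = (xmax - xmin) / img_w, (ymax - ymin) / img_h
        lines.append(f"{VOC_CLASSES.index(name)} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
    return lines
```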

### Run the MPI simulation

@@ -38,50 +27,30 @@ train_args:
client_id_list:
client_num_in_total: 2 # change here!
client_num_per_round: 2 # change here!
comm_round: 20
epochs: 5
batch_size: 1
comm_round: 10000
epochs: 1
steps: 8
batch_size: 8
```

### Run the server and client using MQTT

If you want to run the edge server and client using MQTT, you need to run the following commands.

> !!IMPORTANT!! To avoid crosstalk between runs, it is strongly recommended to change the `run_id` in `run_server.sh` and `run_client.sh` so that it does not conflict with other runs.

```bash
bash run_server.sh your_run_id

# in a new terminal window

# run the client 1
bash run_client.sh 1 your_run_id
bash run_client.sh [CLIENT_ID] your_run_id

# run the client with client_id
bash run_client.sh [CLIENT_ID] your_run_id
```

To customize the number of clients, you can change the following variables in `config/fedml_config.yaml`:

```yaml
train_args:
federated_optimizer: "FedAvg"
client_id_list:
client_num_in_total: 2 # change here!
client_num_per_round: 2 # change here!
comm_round: 20
epochs: 5
batch_size: 1
```

### Run the application using MLOps

You just need to select the YOLOv5 Object Detection application and start a new run.

Run the following command to log in to MLOps.

```bash
fedml login [ACCOUNT_ID]
```

### Build your own application

1. Build package
@@ -92,101 +61,3 @@ bash build_mlops_pkg.sh
```

2. Create an application and upload the package in the `mlops` folder to MLOps

## FedCV Experiments

1. [Image Classification](#image-classification)

Model:

- CNN
- DenseNet
- MobileNetv3
- EfficientNet

Dataset:

- CIFAR-10
- CIFAR-100
- CINIC-10
- FedCIFAR-100
- FederatedEMNIST
- ImageNet
- Landmark
- MNIST

2. [Image Segmentation](#image-segmentation)

Model:

- UNet
- DeeplabV3
- TransUnet

Dataset:

- Cityscapes
- COCO
- PascalVOC

3. [Object Detection](#object-detection)

Model:

- YOLOv5

Dataset:

- COCO
- COCO128

## How to Add Your Own Model?

Our framework supports `PyTorch` based models. To add your own specific model,

1. Create a `PyTorch` model and place it under `model` folder.
2. Prepare a `trainer module` by inheriting the base class `ClientTrainer` (a minimal sketch follows this list).
3. Prepare an experiment file similar to `fedml_*.py` and shell script similar to `run_*.sh`.
4. Adjust the `fedml_config.yaml` file with the model-specific parameters.
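Step 2 above is the piece most specific to FedML. The sketch below shows what a minimal trainer might look like, assuming the `ClientTrainer` base class at `fedml.core.alg_frame.client_trainer.ClientTrainer` (see the code structure notes below) and a model whose training-mode forward pass returns a scalar loss. The class name, optimizer choice, and `args` fields are illustrative, not the YOLOv6 trainer shipped with this app.

```python
import torch
from fedml.core.alg_frame.client_trainer import ClientTrainer


class MyDetectionTrainer(ClientTrainer):
    """Illustrative trainer sketch; adapt the loss and optimizer to your model."""

    def get_model_params(self):
        # The weights returned here are what the server aggregates.
        return self.model.cpu().state_dict()

    def set_model_params(self, model_parameters):
        # Load the aggregated global weights before the next local round.
        self.model.load_state_dict(model_parameters)

    def train(self, train_data, device, args):
        model = self.model.to(device)
        model.train()
        optimizer = torch.optim.SGD(model.parameters(), lr=args.learning_rate)
        for _ in range(args.epochs):
            for images, targets in train_data:
                images, targets = images.to(device), targets.to(device)
                optimizer.zero_grad()
                # Assumes the model computes and returns a scalar loss in train mode.
                loss = model(images, targets)
                loss.backward()
                optimizer.step()
```

The experiment file from step 3 would then construct this trainer with the model and pass it to the FedML runner, with the hyperparameters referenced through `args` expected to come from `fedml_config.yaml` (step 4).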

## How to Add More Datasets, Domain-Specific Splits & Non-I.I.D.ness Generation Mechanisms?

Create a new folder for your dataset under the `data/` folder and provide utilities that prepare your new dataset before feeding it to the federated pre-processing utilities listed in `data/data_loader.py`.

Splitting and non-I.I.D.-ness generation methods specific to each task are also located in `data/data_loader.py`. By default, we provide I.I.D. sampling and non-I.I.D. sampling based on a Dirichlet distribution over the dataset's sample size. To create a custom splitting method, you can add a new function or modify the `load_partition_data_*` function.
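For reference, here is a self-contained sketch of Dirichlet-based non-I.I.D. partitioning over sample indices. The function name and signature are illustrative and do not match the exact `load_partition_data_*` interfaces in `data/data_loader.py`; it only shows the sampling idea described above.

```python
import numpy as np


def dirichlet_partition(labels, client_num, alpha=0.5, seed=0):
    """Split sample indices across clients with a per-class Dirichlet prior.

    Smaller alpha -> more skewed (more non-I.I.D.) label distributions per client.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(client_num)]
    for cls in np.unique(labels):
        cls_idx = np.where(labels == cls)[0]
        rng.shuffle(cls_idx)
        # Draw this class's split proportions across all clients.
        proportions = rng.dirichlet(alpha * np.ones(client_num))
        cut_points = (np.cumsum(proportions) * len(cls_idx)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(cls_idx, cut_points)):
            client_indices[client_id].extend(part.tolist())
    return {cid: sorted(idx) for cid, idx in enumerate(client_indices)}


# Toy usage with 2 clients, mirroring client_num_in_total: 2 in fedml_config.yaml.
if __name__ == "__main__":
    toy_labels = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
    print(dirichlet_partition(toy_labels, client_num=2, alpha=0.5))
```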

## Code Structure of FedCV

- `config`: Experiment and GPU mapping configurations.

- `data`: Provides data downloading scripts and stores the downloaded datasets. FedCV supports more advanced datasets and models for federated training of computer vision tasks.
- `model`: Advanced CV models.
- `trainer`: Define your own `trainer.py` by inheriting the base class `fedml.core.alg_frame.client_trainer.ClientTrainer`. Some tasks can share the same trainer.
- `utils`: Utility functions.

You can see the `README.md` file in each folder for more details.

## Citation

Please cite our FedML and FedCV papers if they help your research.

```text
@article{he2021fedcv,
title={Fedcv: a federated learning framework for diverse computer vision tasks},
author={He, Chaoyang and Shah, Alay Dilipbhai and Tang, Zhenheng and Fan, Di and Sivashunmugam, Adarshan Naiynar and Bhogaraju, Keerti and Shimpi, Mita and Shen, Li and Chu, Xiaowen and Soltanolkotabi, Mahdi and Avestimehr, Salman},
journal={arXiv preprint arXiv:2111.11066},
year={2021}
}
@misc{he2020fedml,
title={FedML: A Research Library and Benchmark for Federated Machine Learning},
author={Chaoyang He and Songze Li and Jinhyun So and Xiao Zeng and Mi Zhang and Hongyi Wang and Xiaoyang Wang and Praneeth Vepakomma and Abhishek Singh and Hang Qiu and Xinghua Zhu and Jianzong Wang and Li Shen and Peilin Zhao and Yan Kang and Yang Liu and Ramesh Raskar and Qiang Yang and Murali Annavaram and Salman Avestimehr},
year={2020},
eprint={2007.13518},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```

## Contact

Please find contact information at the homepage.
1 change: 1 addition & 0 deletions python/app/fedcv/YOLOv6/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1 @@
blank_issues_enabled: false
@@ -1,29 +1,29 @@
name: 🚀 Feature Request
description: Suggest a YOLOv5 idea
description: Suggest a YOLOv6 idea
# title: " "
labels: [enhancement]
body:
- type: markdown
attributes:
value: |
Thank you for submitting a YOLOv5 🚀 Feature Request!
Thank you for submitting a YOLOv6 Feature Request!

- type: checkboxes
attributes:
label: Search before asking
description: >
Please search the [issues](https://github.com/ultralytics/yolov5/issues) to see if a similar feature request already exists.
Please search the [issues](https://github.com/meituan/YOLOv6/issues) to see if a similar feature request already exists.
options:
- label: >
I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
I have searched the YOLOv6 [issues](https://github.com/meituan/YOLOv6/issues) and found no similar feature requests.
required: true

- type: textarea
attributes:
label: Description
description: A short description of your feature.
placeholder: |
What new feature would you like to see in YOLOv5?
What new feature would you like to see in YOLOv6?
validations:
required: true

@@ -33,7 +33,7 @@ body:
description: |
Describe the use case of your feature request. It will help us understand and prioritize the feature request.
placeholder: |
How would this feature be used, and who would use it?
How would this feature be used, and who would use it? (Please describe the use case for this new feature.)

- type: textarea
attributes:
@@ -44,7 +44,6 @@
attributes:
label: Are you willing to submit a PR?
description: >
(Optional) We encourage you to submit a [Pull Request](https://github.com/ultralytics/yolov5/pulls) (PR) to help improve YOLOv5 for everyone, especially if you have a good understanding of how to implement a fix or feature.
See the YOLOv5 [Contributing Guide](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) to get started.
(Optional) We encourage you to submit a [Pull Request](https://github.com/meituan/YOLOv6/pulls) (PR) to help improve YOLOv6 for everyone, especially if you have a good understanding of how to implement a fix or feature.
options:
- label: Yes I'd like to help by submitting a PR!
54 changes: 54 additions & 0 deletions python/app/fedcv/YOLOv6/.github/ISSUE_TEMPLATE/question.yml
@@ -0,0 +1,54 @@
name: ❓ Question
description: Ask a YOLOv6 question
# title: " "
labels: [question]
body:
- type: markdown
attributes:
value: |
Thanks for your attention. We will try our best to solve your problem, but more concrete information is needed to reproduce it.
- type: checkboxes
attributes:
label: Before Asking
description: >
Please check and try the following methods to solve the problem yourself
options:
- label: >
I have read the [README](https://github.com/meituan/YOLOv6/blob/main/README.md) carefully.
required: true
- label: >
I want to train my custom dataset, and I have read the [tutorials for training your custom data](https://github.com/meituan/YOLOv6/blob/main/docs/Train_custom_data.md) carefully and organized my dataset correctly.
(FYI: We recommend using the xx_finetune.py config files when training a custom dataset.)
required: false
- label: >
I have pulled the latest code of the main branch and run it again, but the problem still exists.
required: true


- type: checkboxes
attributes:
label: Search before asking
description: >
Please search the [issues](https://github.com/meituan/YOLOv6/issues) to see if a similar question already exists.
options:
- label: >
I have searched the YOLOv6 [issues](https://github.com/meituan/YOLOv6/issues) and found no similar questions.
required: true

- type: textarea
attributes:
label: Question
description: What is your question?
placeholder: |
💡 ProTip! Include as much information as possible (screenshots, logs, tracebacks, training commands etc.) to receive the most helpful response.
(Please troubleshoot using the information above first; if the problem persists, describe it in as much detail as possible and provide the relevant commands, hyperparameter configurations, error logs, and screenshots so the issue can be located and resolved faster.)
validations:
required: true

- type: textarea
attributes:
label: Additional
description: Anything else you would like to share?