
DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation

🚀 If this repo helps you, please give it a star! ⭐

A novel real-time model for semantic segmentation (TCSVT 2024).

Paper (PDF): https://arxiv.org/abs/2406.03702

This is the implementation of DSNet. DSNetV2 is currently under development; it will offer a better balance between speed and accuracy, as well as a more comprehensive DSNet family ranging from small to large models.

git clone https://github.com/takaniwa/DSNet.git

News 2024/7/28

We extended the MSAF and MSA experiments for classification by applying them to models like ResNet18. On the ImageNet1K task, this led to a 3.3% accuracy improvement with only a 1% increase in computational cost! We will include a more detailed explanation of this module in our paper for submission to a high-quality journal.

Performance of MSA and MSAF on ResNet. "Insert" means the module is applied after the feature fusion (residual addition) of a BasicBlock, and "Fusion" means MSAF or an AFF-series module performs the feature fusion itself. r is the channel compression ratio. GFLOPs are measured with a 256 × 3 × 224 × 224 input. A sketch of the two placements follows the table.

| Method      | Type   | Model    | r  | #Params (M) | GFLOPs | Top-1 acc (%) |
|-------------|--------|----------|----|-------------|--------|---------------|
| Add         | None   | ResNet18 | -  | 11.7        | 434.9  | 69.7          |
| Add         | None   | ResNet34 | -  | 21.8        | 877.1  | 72.9          |
| SE          | Insert | ResNet18 | 16 | 11.8        | 435.1  | 71.2          |
| MSA (Ours)  | Insert | ResNet18 | 16 | 12.1        | 441.1  | 72.2          |
| MSA (Ours)  | Insert | ResNet18 | 4  | 13.1        | 455.5  | 72.9          |
| AFF         | Fusion | ResNet18 | 4  | 12.4        | 448.3  | 72.0          |
| iAFF        | Fusion | ResNet18 | 4  | 12.8        | 461.7  | 60.2          |
| MSAF (Ours) | Fusion | ResNet18 | 16 | 12.1        | 441.1  | 72.3          |
| MSAF (Ours) | Fusion | ResNet18 | 4  | 13.1        | 455.5  | 73.2          |
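
The MSA/MSAF modules themselves are defined in the repository; the sketch below only illustrates the two placements compared above, with a generic attention/fusion callable standing in for MSA/MSAF. It is a minimal illustration, not the repository's implementation.

```python
# Minimal sketch of the "Insert" vs "Fusion" placements described above.
# `attn` / `fusion` are hypothetical stand-ins for MSA / MSAF.
import torch
import torch.nn as nn


class BasicBlockInsert(nn.Module):
    """BasicBlock with an attention module applied after the residual
    addition ("Insert" placement)."""

    def __init__(self, channels, attn):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.attn = attn  # e.g. an MSA-style channel-attention module

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity          # feature fusion (plain addition)
        out = self.attn(out)          # "Insert": module applied after fusion
        return self.relu(out)


class BasicBlockFusion(nn.Module):
    """BasicBlock where the fusion module combines the two branches
    ("Fusion" placement, MSAF / AFF style)."""

    def __init__(self, channels, fusion):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.fusion = fusion  # e.g. fusion(residual, identity) -> fused features

    def forward(self, x):
        identity = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.fusion(out, identity)  # "Fusion": learned combination
        return self.relu(out)
```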

Environment:

PyTorch 1.10

Python 3.8

4 × RTX 4090 or 8 × RTX 4090

pip install -r requirements.txt
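
Before launching a run, it can help to confirm that the interpreter sees the expected PyTorch build and GPU count. The check below is generic and not part of the repository.

```python
# Quick environment sanity check before launching distributed training.
import torch

print("PyTorch:", torch.__version__)            # tested with 1.10
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())  # 4 or 8 RTX 4090s were used
```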

Highlights

Figure: Params vs. mIoU on the Cityscapes val set and on ADE20K.

• We revisited the design of atrous convolutions in CNNs and summarized three empirical guidelines for their use. Based on these guidelines, we propose a novel dual-branch network.

• DSNet achieves a new state-of-the-art trade-off between accuracy and speed on ADE20K, Cityscapes, and BDD10K.

Overview:

An overview of the basic architecture of our proposed DSNet.


Diagram of Multi-Scale Fusion Atrous Convolutional Block (MFACB).
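
The exact MFACB definition lives in the repository's model code; the sketch below only illustrates the general idea of combining 3×3 atrous convolutions with different dilation rates and fusing the multi-scale outputs. It is an assumption-laden illustration, not the repository's implementation (branch count, rates, and fusion details differ).

```python
# Hypothetical sketch of a multi-scale atrous block: parallel 3x3 convolutions
# with different dilation rates whose outputs are concatenated and fused.
import torch
import torch.nn as nn


class AtrousBranch(nn.Sequential):
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__(
            nn.Conv2d(in_ch, out_ch, 3, padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )


class MultiScaleAtrousBlock(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            AtrousBranch(channels, channels, d) for d in dilations
        )
        # 1x1 convolution fuses the concatenated multi-scale features
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * len(dilations), channels, 1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        out = self.fuse(torch.cat(feats, dim=1))
        return out + x  # residual connection


if __name__ == "__main__":
    block = MultiScaleAtrousBlock(64)
    print(block(torch.randn(1, 64, 128, 128)).shape)  # torch.Size([1, 64, 128, 128])
```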

Train and Inference speed:

This implementation is based on HRNet-Semantic-Segmentation and PIDNet; please refer to those repositories for installation and dataset preparation. Inference speed is measured on a single RTX 3090 or RTX 4090. BDD10K is not covered by the links above; its expected storage format is shown below (download link: web page), and a path-pairing sketch follows the listing.

  • bdd
    • seg
      • color_labels
        • train
        • val
      • images
        • train
        • val
        • test
      • labels
        • train
        • val
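
Purely as an illustration of how this layout pairs images with labels: the repository ships its own dataset classes, and the label filename suffix below is an assumption to be adjusted to your download.

```python
# Illustration only: pairing BDD images with segmentation labels by mirroring
# the directory layout shown above. Not the repository's dataset class.
from pathlib import Path


def list_bdd_pairs(root, split="train", label_suffix=".png"):
    """Yield (image_path, label_path) pairs for bdd/seg/<split>.

    `label_suffix` is an assumption; adjust it to the naming used in your
    BDD download (some exports append an extra id suffix to label files).
    """
    img_dir = Path(root) / "seg" / "images" / split
    lbl_dir = Path(root) / "seg" / "labels" / split
    for img_path in sorted(img_dir.glob("*.jpg")):
        lbl_path = lbl_dir / (img_path.stem + label_suffix)
        if lbl_path.exists():
            yield img_path, lbl_path


# Example:
# for img, lbl in list_bdd_pairs("bdd", "val"):
#     print(img, lbl)
```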

Train

python -m torch.distributed.launch --nproc_per_node=4 DSNet/tools/train.py

Inference speed

python DSNet/models/speed/dsnet_speed.py
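
The script above is the one to use for reported numbers. Purely as an illustration of the usual measurement pattern (warm-up passes, CUDA synchronization around the timed loop, averaging over many iterations), here is a generic sketch that is not the repository's script; the input size is an assumption.

```python
# Generic FPS measurement pattern: warm up, synchronize the GPU around the
# timed region, and average many forward passes.
import time
import torch


@torch.no_grad()
def measure_fps(model, input_size=(1, 3, 1024, 2048), warmup=50, iters=200):
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):                # warm-up iterations (not timed)
        model(x)
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        model(x)
    torch.cuda.synchronize()
    return iters / (time.time() - start)   # frames per second
```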

Weights

DSNet-Base:

DSNet_Base_imagenet: Baidu Drive, Google Drive

ADE20K: 43.44% mIoU: Baidu Drive, Google Drive

BDD10K: 64.6% mIoU: Baidu Drive, Google Drive

CamVid (pretrained on the Cityscapes train set): 83.32% mIoU: Baidu Drive, Google Drive

Cityscapes: 82.0% mIoU: Google Drive

DSNet:

DSNet_imagenet: Baidu Drive, Google Drive

ADE20K: 40.0% mIoU: Baidu Drive, Google Drive

BDD10K: 62.8% mIoU: Baidu Drive, Google Drive

Cityscapes: 80.4% mIoU: Google Drive
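
The exact checkpoint format is not documented here; assuming the downloaded file is a standard PyTorch state_dict (possibly saved from DistributedDataParallel), a typical loading pattern looks like the sketch below. `load_pretrained` is a hypothetical helper, not part of the repository.

```python
# Typical loading pattern for a downloaded checkpoint, assuming a standard
# PyTorch state_dict (keys may be prefixed with "module." if saved from DDP).
import torch


def load_pretrained(model, ckpt_path):
    state = torch.load(ckpt_path, map_location="cpu")
    if isinstance(state, dict) and "state_dict" in state:
        state = state["state_dict"]
    # strip a possible DistributedDataParallel prefix
    state = {k.replace("module.", "", 1): v for k, v in state.items()}
    missing, unexpected = model.load_state_dict(state, strict=False)
    print("missing keys:", missing)
    print("unexpected keys:", unexpected)
    return model
```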

Citation:

@article{guo2024dsnet,
  title={DSNet: A Novel Way to Use Atrous Convolutions in Semantic Segmentation},
  author={Guo, Zilu and Bian, Liuyang and Wei, Hu and Li, Jingyu and Ni, Huasheng and Huang, Xuan},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  year={2024},
  publisher={IEEE}
}
