
# Time and Tokens: Benchmarking End-to-End Speech Dysfluency Detection


## Datasets

We open-sourced our simulated dataset VCTK-Token.

The download link will be added soon.

## Environment configuration

Please refer to `environment.yml`.

If you have Miniconda/Anaconda installed, you can create the environment directly with: `conda env create -f environment.yml`
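A minimal setup sketch, assuming a Unix-like shell; the environment name below is a placeholder, since the actual name is defined inside `environment.yml`:

```bash
# Create the conda environment from the provided spec
conda env create -f environment.yml

# Activate it; replace <env-name> with the name declared in environment.yml
conda activate <env-name>

# (Optional) confirm the environment was created
conda env list
```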

## Inference

We open-sourced our inference code and checkpoints. Here are the steps to perform inference:

1. Clone this repository.

2. Download the checkpoints, create a folder named `pretrained`, and put all downloaded models into it.

3. We also provide a testing dataset for quick inference; download it here. Put the `testingset` folder at the same level as `inference.ipynb`.

4. Run `inference.ipynb` to perform inference step by step (see the directory sketch below for the expected layout).
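A hedged sketch of the setup commands and expected layout, assuming a Unix-like shell; `<repo-url>` and `<repo-name>` are placeholders rather than links given in this README, and the checkpoint and testing-set downloads are the ones referenced in the steps above:

```bash
# Clone the repository and enter it (<repo-url>/<repo-name> are placeholders)
git clone <repo-url>
cd <repo-name>

# Create the folder that will hold the downloaded checkpoints
mkdir -p pretrained
# ...move all downloaded model files into pretrained/ ...

# Expected layout before running the notebook:
# .
# ├── inference.ipynb
# ├── pretrained/
# │   └── <downloaded checkpoints>
# └── testingset/
#     └── <provided test data>
```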

## Citation

If you find our paper helpful, please cite it as:

```bibtex
@misc{zhou2024timetokensbenchmarkingendtoend,
      title={Time and Tokens: Benchmarking End-to-End Speech Dysfluency Detection},
      author={Xuanru Zhou and Jiachen Lian and Cheol Jun Cho and Jingwen Liu and Zongli Ye and Jinming Zhang and Brittany Morin and David Baquirin and Jet Vonk and Zoe Ezzes and Zachary Miller and Maria Luisa Gorno Tempini and Gopala Anumanchipalli},
      year={2024},
      eprint={2409.13582},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2409.13582},
}
```