I am new to this topic and have downloaded and deployed the code. The only adjustment I made to the docker-compose file was setting the model to large and the engine to faster_whisper. In that case, the model downloaded was large-v3.pt. On a second machine, however, I used only the quick-usage command:
docker run -d -p 9000:9000 -e ASR_MODEL=large -e ASR_ENGINE=faster_whisper onerahmet/openai-whisper-asr-webservice:latest
I noticed that it used a different model (models--Systran--faster-whisper-large-v3). Can someone explain why there is a difference?
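For reference, the docker-compose change on the first machine looked roughly like this (a minimal sketch — the service name, port mapping, and compose layout are assumptions; only `ASR_MODEL` and `ASR_ENGINE` were actually changed):

```yaml
# Hypothetical docker-compose fragment; service name and ports are assumptions.
services:
  whisper-asr:
    image: onerahmet/openai-whisper-asr-webservice:latest
    ports:
      - "9000:9000"
    environment:
      - ASR_MODEL=large           # downloaded large-v3.pt on this machine
      - ASR_ENGINE=faster_whisper
```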