The current version uses OpenAI models to transcribe the audio and an OpenAI LLM to transform the text. It would be great if we could add options to use open-source LLMs and transcription models.
Did you have any specific LLMs in mind that you already know could be suitable?
How do I test your service? Do you have integration tests?
I've roughly looked into the code and can see that the transcriptions controller handles the transcription-to-form step by calling the fill_form function of the OpenAI API. Can you point out the other AI-interaction-relevant sections in the code?
I'd suggest extracting the AI interactions into a provider-based pattern that allows swapping models via environment configuration, so users can pick one model for transcription and another for completion.
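To illustrate the idea, here is a minimal sketch in Python of what such a provider abstraction could look like. It is not based on this repo's actual code; the class names, environment variables, and local endpoint are hypothetical.

```python
import os
from abc import ABC, abstractmethod


class TranscriptionProvider(ABC):
    """Common interface so transcription backends can be swapped via config."""

    @abstractmethod
    def transcribe(self, audio_path: str) -> str: ...


class OpenAITranscription(TranscriptionProvider):
    """Backend using the official OpenAI SDK."""

    def transcribe(self, audio_path: str) -> str:
        from openai import OpenAI
        client = OpenAI()
        with open(audio_path, "rb") as f:
            result = client.audio.transcriptions.create(model="whisper-1", file=f)
        return result.text


class LocalWhisperTranscription(TranscriptionProvider):
    """Hypothetical open-source backend, e.g. a self-hosted Whisper server."""

    def transcribe(self, audio_path: str) -> str:
        import requests
        url = os.environ["LOCAL_WHISPER_URL"]  # hypothetical endpoint
        with open(audio_path, "rb") as f:
            resp = requests.post(url, files={"file": f})
        resp.raise_for_status()
        return resp.json()["text"]


def transcription_provider() -> TranscriptionProvider:
    """Pick the backend from an environment variable (names are illustrative)."""
    backend = os.environ.get("TRANSCRIPTION_PROVIDER", "openai")
    providers = {
        "openai": OpenAITranscription,
        "local_whisper": LocalWhisperTranscription,
    }
    return providers[backend]()
```

A completion provider could follow the same shape with its own environment variable, so the transcription and completion models can be chosen independently.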