I have just started experimenting with your plugin and it seems very promising, interesting and powerful.
However, one thing that really bothers me is how slowly the text is typed out once you submit the query to GPT. Would it not be possible to just display the text at whatever speed the API returns it? The OpenAI API does have streaming functionality.
Yeah, as I've used the plugin more and more, it's been annoying me too. I'm aware of the stream functionality, but when I was hacking this out 3 weeks ago I got stuck getting it to work properly, for reasons I can't remember now.
In release 1.1.4, the typing speed is faster as a temporary fix. When I get some time later this weekend, I'll look into this. Or, if you'd like, I'd welcome a pull request!
Well, just pasting the full text instantly once the response arrives, without streaming, would also be a solution, and that might be easier to implement. Streaming could be added later, of course. I implemented streaming myself; see the sketch below.
This might not be helpful at all because my code is messy, and the structure is probably very different from your plugin since mine is just a CLI app.
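As a rough sketch (assuming the official `openai` Python package, v1.x; the model name and prompt here are just placeholders), the streaming loop boils down to something like this:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask for a streamed chat completion instead of waiting for the full blob.
stream = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)

# Each chunk carries a small delta of the response text; print it as it arrives.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```

In a plugin you would append each delta to the buffer instead of printing it, but the idea is the same: consume the chunks as they come in rather than waiting for the whole completion.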