Add support for capturing interimResults (Real Time Hypothesis) #101
Comments
If you're asking why I would want real-time hypotheses, it's because it would allow... basically, a way to indicate that speech recognition is working. If you're asking why I would want to add this feature once I have time, it's because I like this library and I want to see it have more features that other developers and I would use. |
That reply has nothing to do with this thread. @TalAter I don't know if the above comment is from a bot or a person trolling. In either case I will stop responding to it and defer to you to moderate. |
@alanjames1987 This is now possible in annyang v2.0.0. Since v2.0.0, the result callback is passed an array of the possible sentences that were recognized. So you can do something like:

annyang.addCallback('result', function(phrases) {
  console.log('Speech recognized. Possible sentences said:');
  console.log(phrases);
}); |
Does the 2.0.0 release allow for interim results? I don't see a way to enable that. |
Sorry, I was meaning interimResults. |
No. I think I didn't add it originally because I didn't want to try and match interim results with commands. But with the new callbacks that can be called with parameters, we could maybe implement interim results as a callback. We'll need to check if we can enable it without affecting current functionality... What do you think about this approach? |
I think that would be a great approach. |
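For illustration only, here is a rough sketch of how interim and final results could be separated inside a SpeechRecognition onresult handler, which is roughly what a callback-based design would need to do internally. The onInterim and onFinal callback names are hypothetical and are not part of annyang's API:

// Sketch: split one SpeechRecognition result event into interim and final
// text, so a library could fire a separate callback for each kind.
// onInterim and onFinal are hypothetical callbacks, not annyang API.
function splitResults(event, onInterim, onFinal) {
  var interim_text = '';
  var final_text = '';
  for (var i = event.resultIndex; i < event.results.length; ++i) {
    var transcript = event.results[i][0].transcript;
    if (event.results[i].isFinal) {
      final_text += transcript;   // the recognizer has committed to this text
    } else {
      interim_text += transcript; // still a live, changing hypothesis
    }
  }
  if (interim_text && onInterim) { onInterim(interim_text); }
  if (final_text && onFinal) { onFinal(final_text); }
}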
Hi, I wanted to use your awesome library, but I need interimResults for my project :/ EDIT: I found a workaround, but I think implementing interim results as a callback is a better solution.

var recognition = annyang.getSpeechRecognizer(); // annyang's underlying SpeechRecognition object
var final_transcript = '';
recognition.interimResults = true; // ask the browser for live hypotheses
annyang.start();
// Note: overriding onresult replaces annyang's own result handling,
// which is why annyang.trigger() is called manually below.
recognition.onresult = function(event) {
  var interim_transcript = '';
  final_transcript = '';
  for (var i = event.resultIndex; i < event.results.length; ++i) {
    if (event.results[i].isFinal) {
      final_transcript += event.results[i][0].transcript;
      console.log("final_transcript");
      console.log(final_transcript);
      annyang.trigger(final_transcript); // the sentence is "final" for the Web Speech API, so try to trigger it so annyang can match it against commands
    } else {
      interim_transcript += event.results[i][0].transcript;
      console.log("interim_transcript");
      console.log(interim_transcript);
    }
  }
  // capitalize, linebreak, final_span and interim_span are not defined in this
  // snippet; see the sketch after this comment for one way to fill them in.
  final_transcript = capitalize(final_transcript);
  final_span.innerHTML = linebreak(final_transcript);
  interim_span.innerHTML = linebreak(interim_transcript);
}; |
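The snippet above leans on a few pieces it does not define: the capitalize and linebreak helpers and the final_span / interim_span elements, which appear to come from Google's Web Speech API demo page. A minimal sketch of those assumed pieces, with placeholder element IDs, so the workaround can run end to end:

// Assumed helpers and page elements for the snippet above; the element IDs
// are placeholders, adjust them to match your own page.
var final_span = document.getElementById('final_span');
var interim_span = document.getElementById('interim_span');

function capitalize(s) {
  // Upper-case the first non-whitespace character.
  return s.replace(/\S/, function (m) { return m.toUpperCase(); });
}

function linebreak(s) {
  // Turn blank lines into paragraphs and single newlines into <br> for display.
  return s.replace(/\n\n/g, '<p></p>').replace(/\n/g, '<br>');
}

The matching markup would be something like <span id="final_span"></span> and <span id="interim_span"></span>.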
Thanks @kant73! I used that code to implement interim results as well and it's working great. |
The @kant73 solution for this issue worked perfectly for me. |
I'm trying to get interim results. I see two people here who say they used the above method and it worked, but I can't seem to get it to work. Has anything changed since this was posted? Here is my exact code. I had to change it slightly from Kant73's original, because I think he only posted a snippet.
Thanks. |
Argh. Sorry, I figured it out 2 minutes after posting this. The interim text went by so fast that it only showed up in the console but never in the interim_span element. |
Thanks!!!! It saved me a lot of time :) |
It would be great to have an event listener that lets me get the recognition hypothesis in real time, so the user can see what they are saying while they say it, similar to how Android's speech input works.
I know webkitSpeechRecognition allows this, and I would be willing to add it once I have time.
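For reference, the browser feature the issue is asking about can be seen with webkitSpeechRecognition on its own; a minimal sketch, without annyang, assuming Chrome's prefixed implementation:

// Plain webkitSpeechRecognition with interim results enabled, no annyang.
var recognition = new webkitSpeechRecognition();
recognition.continuous = true;      // keep listening across phrases
recognition.interimResults = true;  // deliver hypotheses while the user is still speaking
recognition.onresult = function (event) {
  for (var i = event.resultIndex; i < event.results.length; ++i) {
    var text = event.results[i][0].transcript;
    console.log((event.results[i].isFinal ? 'final: ' : 'interim: ') + text);
  }
};
recognition.start();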