Demo: https://youtu.be/2XsfQsN-P2M.
- Get a Windows computer with Visual Studio and an NVIDIA graphics card. I use an RTX 3090, but a lower-end card would work too. Get headphones and a microphone (or perhaps you have a speaker with a microphone array that subtracts the speaker signal from the microphone input? Because this software is not capable of doing that, sorry!)
- Download an LLM in GGUF format. You can get one from HuggingFace: just search the models for "gguf". I like MistralLite 7B and Mistral 11B - they are fast. User TheBloke does an excellent job quantizing them. I use the Q5_K_M quantization, but you can use Q4_K_M if your graphics card is lower end.
- In program.cs, replace `static string LLM_Model_Path = @"D:\Llama\models\mistral-11b-omnimix-bf16.Q5_K_M.gguf";` with the path to the GGUF file you downloaded.
- Start the application, wait for the prompt "I am listening! Press Esc to quit.", and start asking questions. The AI will answer once you stop speaking. You can interrupt the AI.
The first start is slower because the application downloads the Whisper speech-to-text model `en-us-base.ggml`, which is ~141 MB. Subsequent restarts are fast.
- The LLM returns garbage? Perhaps it cannot hear you. Make sure Whisper downloaded `en-us-base.ggml` to your binary folder and that the file is ~141 MB, not a few kilobytes. Also, make sure you see the text "Loud" when you speak and "Quiet" when you stop speaking. Adjust the `mic_threshold` value to match your microphone sensitivity: for example, if you never see "Loud", increase this value 2x until "Loud" appears every time you speak. The software prints the recognized text to the console.
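The "Loud"/"Quiet" decision comes down to comparing the level of each microphone buffer against the threshold. A minimal, self-contained sketch of that check (the peak computation, the field name `micThreshold`, and the sample values are illustrative; the real logic lives in AudioLevelMonitor):

```csharp
using System;

class MicLevelDemo
{
    // Illustrative default; the project's actual value is the mic_threshold field.
    static float micThreshold = 0.02f;

    // Returns "Loud" when the buffer's peak amplitude exceeds the threshold, else "Quiet".
    static string Classify(float[] samples)
    {
        float peak = 0f;
        foreach (var s in samples)
            peak = Math.Max(peak, Math.Abs(s));
        return peak > micThreshold ? "Loud" : "Quiet";
    }

    static void Main()
    {
        var silence = new float[] { 0.001f, -0.002f, 0.0005f };
        var speech  = new float[] { 0.10f, -0.30f, 0.25f };
        Console.WriteLine(Classify(silence)); // prints "Quiet"
        Console.WriteLine(Classify(speech));  // prints "Loud"
    }
}
```

Doubling `micThreshold` makes the detector less sensitive, which is why the adjustment above is done in 2x steps.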
- The LLM does not fit into graphics card memory? Reduce the number of layers loaded onto the GPU in ChatAI.cs (`GpuLayerCount = 51`).
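For reference, `GpuLayerCount` is a property on LLamaSharp's `ModelParams`. A sketch of dialing it down (the path and numbers are placeholders, and the project's actual code is in ChatAI.cs; assumes a recent LLamaSharp version):

```csharp
using LLama;
using LLama.Common;

// Sketch only: fewer GPU layers means less VRAM use, at the cost of slower inference.
var parameters = new ModelParams(@"D:\Llama\models\your-model.Q5_K_M.gguf")
{
    GpuLayerCount = 20  // lower this until the model fits in VRAM (the project ships with 51)
};
using var model = LLamaWeights.LoadFromFile(parameters);
```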
- Want to test the LLM in text mode? The excellent LLamaSharp project works with GGUF models.
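A minimal text-mode chat loop with LLamaSharp might look like this (a sketch assuming a recent LLamaSharp release; exact type and member names vary between versions, and the model path is a placeholder):

```csharp
using LLama;
using LLama.Common;

// Load the same GGUF model the voice assistant uses, but chat with it in the console.
var parameters = new ModelParams(@"D:\Llama\models\your-model.Q5_K_M.gguf");
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
var executor = new InteractiveExecutor(context);

var inferenceParams = new InferenceParams { AntiPrompts = new[] { "User:" } };
Console.Write("User: ");
string prompt = Console.ReadLine() ?? "";
await foreach (var token in executor.InferAsync(prompt, inferenceParams))
    Console.Write(token); // the response streams token by token, like the voice pipeline
```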
AudioLevelMonitor acquires audio from the microphone with the NAudio library and analyzes it for user speech: it waits for the user to start and then stop speaking. Then Whisperer (SpeechToText.cs) converts the audio to text. Then the LLM (ChatAI.cs) processes it and starts streaming the response, which goes to TextToSpeech, the Microsoft synthesizer.
If you start speaking before it has finished, the previous response is aborted and a new one starts. I.e., you are welcome to interrupt the AI at any time.
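Interruption like this is commonly implemented with a `CancellationTokenSource`: when new speech is detected, the token for the in-flight response is cancelled and a fresh one is created. A self-contained sketch of the pattern (the method names and the fake word stream are illustrative, not the project's actual code):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class InterruptDemo
{
    static CancellationTokenSource? _cts;

    // Cancels whatever response is still streaming, then starts a new one.
    static async Task RespondAsync(string question)
    {
        _cts?.Cancel();                      // abort the previous response, if any
        _cts = new CancellationTokenSource();
        var token = _cts.Token;
        try
        {
            foreach (var word in new[] { "Hello,", "I", "am", "still", "talking" })
            {
                token.ThrowIfCancellationRequested();
                Console.Write(word + " ");
                await Task.Delay(100, token); // simulates the pace of streamed TTS output
            }
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("[interrupted]");
        }
    }

    static async Task Main()
    {
        var first = RespondAsync("first question");
        await Task.Delay(150);                  // user starts speaking again...
        await RespondAsync("second question");  // ...which interrupts the first response
        await first;
    }
}
```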
The AI's name is "David" because I use the "David" voice from Microsoft. You are welcome to change the name. You can also use other voices and other languages.