This project aims to facilitate human-building interaction in smart buildings using open-source LLMs such as LLaMA 3. The AI assistant provides smart, personalized assistance to occupants through a web app. Users can communicate with the virtual assistant via text or voice input to control building facilities, adjust setpoints for specific smart building systems, or turn systems on and off as needed. The assistant also provides real-time information on indoor environmental conditions by accessing live sensor readings from IoT devices. Speech-to-Text (STT) and Text-to-Speech (TTS) are powered by open-source models, Whisper and Piper respectively.
Click on the image to view the demo video.
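As a rough illustration of how live sensor data could be surfaced to the model, the sketch below injects a sensor reading into the prompt of a locally served LLaMA model via the llama-cpp-python bindings. This is a minimal, hypothetical example, not the project's actual code: the model path and the `read_indoor_conditions()` helper are placeholders standing in for the real Raspberry Pi/IoT integration.

```python
# Minimal sketch (assumptions: llama-cpp-python is installed and a GGUF model
# file is available; read_indoor_conditions() is a hypothetical placeholder
# for the Raspberry Pi / IoT sensor query used by the project).
from llama_cpp import Llama


def read_indoor_conditions() -> dict:
    """Hypothetical stand-in for a live query to the IoT sensors."""
    return {"temperature_c": 23.4, "humidity_pct": 41.0, "co2_ppm": 612}


# Load the local LLaMA model (path is a placeholder).
llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=2048)

reading = read_indoor_conditions()
reply = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": f"You are a smart-building assistant. Current sensor data: {reading}",
        },
        {"role": "user", "content": "How warm is it in the office right now?"},
    ],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```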
- Open-source large language model (e.g., LLaMA 3)
- Generative AI inference tool, llama.cpp
- Python 3.10
- Raspberry Pi and IoT sensors
- Open-source Speech-to-Text (STT) model, Whisper
- Open-source Text-to-Speech (TTS) model, Piper (see the voice pipeline sketch below)
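The sketch below shows one possible voice round-trip under these components: the occupant's recorded command is transcribed with the openai-whisper Python package, and the assistant's reply is synthesized by invoking the Piper CLI. File names, the Whisper model size, and the Piper voice model are assumptions for illustration; the LLM step in between is omitted (see the sketch above).

```python
# Minimal sketch of the STT -> (LLM) -> TTS voice path (assumptions: the
# openai-whisper package and the piper CLI are installed; audio file names
# and the voice model path are placeholders).
import subprocess

import whisper

# Speech-to-Text: transcribe the occupant's recorded voice command.
stt_model = whisper.load_model("base")
command_text = stt_model.transcribe("voice_command.wav")["text"]
print("Occupant said:", command_text)

# ...the command text would be passed to the LLM here to produce a reply...
assistant_reply = "The ventilation setpoint has been updated."

# Text-to-Speech: synthesize the reply with the piper CLI (text via stdin).
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "reply.wav"],
    input=assistant_reply.encode("utf-8"),
    check=True,
)
```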
Coming soon.....