Large Language Model for Human-Building Interaction

Manuscript

Autonomous Building Cyber-Physical Systems Using Decentralized Autonomous Organizations, Digital Twins, and Large Language Model

Project Overview

This project aims to facilitate human-building interaction in smart buildings using open-source LLMs such as LLaMA 3. The AI assistant provides smart, personalized assistance to occupants through a web app. Users can communicate with the AI virtual assistant through text or voice input to control building facilities, adjust setpoints for specific smart systems, or turn systems on and off as needed. The assistant also provides real-time information on indoor environmental conditions by accessing live sensor readings from IoT devices. Speech-to-Text (STT) and Text-to-Speech (TTS) are powered by open-source models: Whisper for speech recognition and Piper for speech synthesis.
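The core loop described above (transcribed user request → LLM → spoken reply) can be sketched as glue code. This is a hypothetical sketch, not code from this repository: it assumes a llama.cpp HTTP server running locally on port 8080 (its `/completion` endpoint and JSON fields follow llama.cpp's server API), and a Piper CLI with a placeholder voice model path.

```python
# Hypothetical glue code for the assistant pipeline:
# Whisper transcript -> llama.cpp server -> Piper TTS.
import json
import subprocess
import urllib.request

LLAMA_URL = "http://localhost:8080/completion"  # assumed local llama.cpp server


def build_completion_request(user_text: str, n_predict: int = 128) -> bytes:
    """Wrap the transcribed user request in a llama.cpp /completion payload."""
    payload = {
        "prompt": (
            "You are a smart-building assistant.\n"
            f"User: {user_text}\nAssistant:"
        ),
        "n_predict": n_predict,
        "stop": ["User:"],
    }
    return json.dumps(payload).encode("utf-8")


def ask_llm(user_text: str) -> str:
    """Send the prompt to the llama.cpp server and return the reply text."""
    req = urllib.request.Request(
        LLAMA_URL,
        data=build_completion_request(user_text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # llama.cpp's /completion response carries the text in "content".
        return json.loads(resp.read())["content"].strip()


def speak(text: str, wav_path: str = "reply.wav") -> None:
    """Synthesize the reply with the Piper CLI (voice model is a placeholder)."""
    subprocess.run(
        ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", wav_path],
        input=text.encode("utf-8"),
        check=True,
    )
```

A real deployment would feed `ask_llm` with the text Whisper produces from microphone audio, then play back the WAV file written by `speak`.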

Video Demo

Watch the demo video
Click on the image to view the demo video.

Requirements

  • Open-source large language model (e.g., LLaMA)
  • Generative AI inference tool (e.g., llama.cpp)
  • Python 3.10
  • Raspberry Pi and IoT sensors
  • Open-source Speech-to-Text (STT) model, Whisper
  • Open-source Text-to-Speech (TTS) model, Piper
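On the Raspberry Pi side, live indoor-environment readings can be exposed to the assistant over HTTP. The following is a minimal illustrative sketch, not code from this repository: `read_sensor` returns stand-in values where a real deployment would query a sensor wired to the Pi.

```python
# Hypothetical sketch: serve an indoor-environment reading as JSON so the
# assistant can answer questions about current conditions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def read_sensor() -> dict:
    # Stand-in values; a real deployment would read e.g. a temperature/
    # humidity/CO2 sensor attached to the Raspberry Pi.
    return {"temperature_c": 22.5, "humidity_pct": 41.0, "co2_ppm": 620}


def format_reading(reading: dict) -> str:
    """Turn a raw reading into a sentence the assistant can relay."""
    return (
        f"Indoor conditions: {reading['temperature_c']:.1f} degrees C, "
        f"{reading['humidity_pct']:.0f}% relative humidity, "
        f"{reading['co2_ppm']} ppm CO2."
    )


class SensorHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(read_sensor()).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To serve readings on the Pi:
#   HTTPServer(("0.0.0.0", 8000), SensorHandler).serve_forever()
```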

Detailed setup guide

Coming soon.
