> [!CAUTION]
> AI Backend is still in development. You will find bugs and broken or unfinished features.
ai-backend is a backend for AI-powered applications. It leverages FastAPI and Ollama to provide a robust API for natural language processing tasks.
(Also available via `nix-shell`)
- Python 3.12
- pip
- git
- Clone the repository

```shell
git clone https://github.com/Dino-Kupinic/ai-backend.git
```
- [Optional] Use a Python virtual environment for a local installation

```shell
python3 -m venv venv
```

Activate the venv (example for *nix systems):

```shell
source ./venv/bin/activate
```
- Install dependencies

```shell
pip install -r requirements.txt
```
- Create a `.env` file in the root directory and copy over the fields from the `.env.example` file.
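Since `.env.example` is not reproduced here, the following is only a hypothetical sketch of what such a file might contain — the variable name is an assumption, not one of the project's actual fields:

```env
# Hypothetical example — copy the real field names from .env.example
OLLAMA_HOST=http://localhost:11434
```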
- Download Ollama for your system from here.
> [!NOTE]
> This step can be skipped if you use `nix-shell`.
> [!NOTE]
> In the future, Ollama will be downloaded from the command line automatically.
- Start Ollama and pull the model

```shell
ollama serve
ollama pull llama3
```
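Once the model is pulled, you can confirm it is available by querying Ollama's `GET /api/tags` endpoint, which lists installed models. A minimal sketch of such a check — the `has_model` helper and the abbreviated sample payload are illustrative, not part of this project:

```python
import json

# Hypothetical helper: check whether a model is already pulled, given the
# JSON body returned by Ollama's `GET /api/tags` endpoint.
def has_model(tags_json: str, model: str) -> bool:
    models = json.loads(tags_json).get("models", [])
    # Ollama lists model names with a tag suffix, e.g. "llama3:latest".
    return any(m.get("name", "").split(":")[0] == model for m in models)

# Abbreviated sample of the response shape:
sample = '{"models": [{"name": "llama3:latest", "size": 4661224676}]}'
print(has_model(sample, "llama3"))  # → True
```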
- Run the server

```shell
fastapi dev src/main.py
```
The OpenAPI documentation is available at `/docs`. It is automatically generated from the code.
// WIP
```shell
curl -X POST "http://localhost:8000/message/" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Tell me something about Vienna, Austria", "model": "llama3"}' \
  --no-buffer
```
> [!TIP]
> The `--no-buffer` flag is needed because the response is streamed.
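The same request can be made from Python. This is only a sketch under the assumptions visible in the `curl` example above — the `/message/` path and the `prompt`/`model` fields are taken from that example, not from a published schema — and it reads the response incrementally, mirroring `--no-buffer`:

```python
import json
import urllib.request

def build_message_request(prompt: str, model: str) -> bytes:
    """Build the JSON body for POST /message/ (field names assumed
    from the curl example in this README)."""
    return json.dumps({"prompt": prompt, "model": model}).encode("utf-8")

def stream_message(prompt: str, model: str = "llama3") -> None:
    req = urllib.request.Request(
        "http://localhost:8000/message/",
        data=build_message_request(prompt, model),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # Iterate over the response line by line instead of buffering it
    # whole, so streamed chunks appear as soon as they arrive.
    with urllib.request.urlopen(req) as resp:
        for line in resp:
            print(line.decode("utf-8"), end="")

# Usage (requires the server to be running):
# stream_message("Tell me something about Vienna, Austria")
```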
// WIP
To run the test suite:
- Ensure that both the AI Backend and Ollama services are running.
- Execute the following command:

```shell
pytest
```

This will run all tests in the `tests/` directory.
// WIP
This project is licensed under the MIT License - see the LICENSE file for details.
- Special thanks to the FastAPI and Ollama communities for their excellent tools and documentation.
For more information, please open an issue or contact the maintainers.