Backend infrastructure for running multi-agent systems at scale, across a network of nodes.


             █▀█                  
          ▄▄▄▀█▀            
          █▄█ █    █▀█        
       █▀█ █  █ ▄▄▄▀█▀      
    ▄▄▄▀█▀ █  █ █▄█ █ ▄▄▄       
    █▄█ █  █  █  █  █ █▄█        ███╗   ██╗ █████╗ ██████╗ ████████╗██╗  ██╗ █████╗ 
 ▄▄▄ █  █  █  █  █  █  █ ▄▄▄     ████╗  ██║██╔══██╗██╔══██╗╚══██╔══╝██║  ██║██╔══██╗
 █▄█ █  █  █  █▄█▀  █  █ █▄█     ██╔██╗ ██║███████║██████╔╝   ██║   ███████║███████║
  █  █   ▀█▀  █▀▀  ▄█  █  █      ██║╚██╗██║██╔══██║██╔═══╝    ██║   ██╔══██║██╔══██║
  █  ▀█▄  ▀█▄ █ ▄█▀▀ ▄█▀  █      ██║ ╚████║██║  ██║██║        ██║   ██║  ██║██║  ██║
   ▀█▄ ▀▀█  █ █ █ ▄██▀ ▄█▀       ╚═╝  ╚═══╝╚═╝  ╚═╝╚═╝        ╚═╝   ╚═╝  ╚═╝╚═╝  ╚═╝
     ▀█▄ █  █ █ █ █  ▄█▀                             Orchestrating the Web of Agents
        ▀█  █ █ █ █ ▌▀                                                 www.naptha.ai
          ▀▀█ █ ██▀▀                                                    


Naptha is a framework and infrastructure for developing and running multi-agent systems at scale with heterogeneous models, architectures and data. Agents and other modules can run on separate devices while still interacting over the network. Our mission is to reimagine the internet - to bring forth the Web of Agents - by enabling the next generation of AI applications and use cases.

If you find this repo useful, please don't forget to star ⭐!

Quick Start

Download the source code:

git clone https://github.com/NapthaAI/naptha-node.git
cd naptha-node

Launch the node:

bash launch.sh

By default, the node launches using Docker Compose and uses Ollama with the Nous Research Hermes 3 model.

If PRIVATE_KEY, HUB_USERNAME and HUB_PASSWORD are not set in the .env file, you will be prompted to set them. You will also be asked whether you want to set OPENAI_API_KEY and STABILITY_API_KEY.
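For reference, the credential entries in the .env file look something like this (the variable names are from this repo; the values shown are placeholders):

PRIVATE_KEY=<your-private-key>
HUB_USERNAME=<your-hub-username>
HUB_PASSWORD=<your-hub-password>

# Optional keys for hosted model providers
OPENAI_API_KEY=<your-openai-key>
STABILITY_API_KEY=<your-stability-key>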

Customizing the node

The node packages a number of services, and several options and combinations of services are available. The services you would like to run are configured in the .env file, and the launch.sh script automatically starts the services you have configured.

Node Services

  • Local Inference: Using either vLLM or Ollama. Few open source models support tool calling out of the box; the Naptha Node (soon) supports tool calling with 8 open source models, with more to come.

  • LiteLLM Proxy Server: A proxy server that provides a unified OpenAI-compatible API for multiple LLM providers and models. This allows seamless switching between models while keeping API calls consistent (see the example sketch after this list).

  • Local Server: The Naptha Node runs a local server that can be accessed by other agents in the network (via HTTP, WebSockets, or gRPC). Agents and other modules that you publish on Naptha are accessible via API.

  • Local Storage: Naptha Nodes support the deployment of Knowledge Base, Memory and Environment modules, which all require storage. With Knowledge Bases and Environments, you can build things like group chats (think WhatsApp for agents), information boards (Reddit for agents), job boards (LinkedIn for agents), social networks (Twitter for agents), and auctions (eBay for agents). You can implement different types of memory, such as cognitive and episodic memory. The state of these modules is stored in a local database (Postgres), the file system or IPFS. The Naptha Node also stores details of module runs and (soon) model inference (token usage, costs, etc.) in the local database.

  • Module Manager: Supports downloading and installation of modules (agents, tools, knowledge bases, memories, personas, agent orchestrators, environments) from GitHub, HuggingFace and IPFS.

  • Message Broker and Workers: The Naptha Node uses asynchronous processing and message queues (RabbitMQ) to pass messages between modules. Modules are executed using either Poetry or Docker.

  • (Optional) Local Hub: The Naptha Node can run a local Hub, a registry for modules (agents, tools, knowledge bases, memories, personas, agent orchestrators, and environments) and nodes, by setting LOCAL_HUB=true in the .env file. This is useful for testing locally before publishing to the main Naptha Hub. For the Hub DB, we use SurrealDB.
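As a sketch of how the LiteLLM proxy's OpenAI-compatible interface is typically called: the port and model name below are assumptions for illustration, not values confirmed by this repo (4000 is LiteLLM's default proxy port), so adjust them to your configuration.

# Hypothetical request to the LiteLLM proxy; port and model name are assumptions
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "hermes3",
        "messages": [{"role": "user", "content": "Hello from the Web of Agents!"}]
      }'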

Configuring the Node Services

Make sure the .env file has been created:

cp .env.example .env

Modify any relevant variables in the .env file:

  • LAUNCH_DOCKER: Set to true if you want to launch the node using docker compose, or false if you want to launch the node using systemd/launchd.
  • LLM_BACKEND: Set to ollama if you are on a laptop, or to vllm if you want to use a GPU machine.
  • OLLAMA_MODELS: If using Ollama, set this to the models you want to use, separated by commas. By default, the node uses the Nous Research Hermes 3 model.
  • VLLM_MODELS: If using vLLM, set this to the models you want to use, separated by commas.
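For example, a laptop setup running Ollama via Docker Compose might use entries like these (the variable names are from the list above; the model identifier is illustrative, so check .env.example for the exact default):

LAUNCH_DOCKER=true
LLM_BACKEND=ollama
# Comma-separated list of models to serve
OLLAMA_MODELS=hermes3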

For more details on node configuration for docker or systemd/launchd, see the relevant readme files for docker and systemd/launchd. For advanced configuration settings, see the Advanced Configuration guide.

Launching

Launch the node using:

bash launch.sh

For more details on verifying that the node launched successfully, checking the logs, and troubleshooting, see the relevant readme files for docker and systemd/launchd.
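If you launched with Docker Compose, the standard Compose commands are a quick way to sanity-check the node (service names depend on your configuration):

# List the services and their status
docker compose ps

# Follow the logs to watch for startup errors
docker compose logs -f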

Run AI agents on your node

To run agents, keep your node running and follow the instructions in the Naptha SDK. In the SDK repo, set NODE_URL to the URL of your local node (default: http://localhost:7001).
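For example (assuming the SDK reads configuration from a .env file; alternatively, export it as an environment variable):

# Point the SDK at your local node
NODE_URL=http://localhost:7001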

Become a contributor to the Naptha Node
