
Commit

added info that Ollama also supports embedding and reranking models
pbharti0831 committed Jan 29, 2025
1 parent 677e378 commit ee81c8b
Showing 1 changed file with 1 addition and 1 deletion.
@@ -41,7 +41,7 @@ Ollama provides a comprehensive set of libraries and tools to facilitate the dep
vLLM is an optimized inference engine designed for high-throughput token generation and efficient memory utilization, making it suitable for large-scale AI deployments. Ollama is a lightweight, intuitive framework for running open-source LLMs on local, on-premises hardware. In terms of popularity, the [vLLM](https://github.com/vllm-project/vllm) GitHub repository has 35K stars, while [Ollama](https://github.com/ollama/ollama) has 114K stars.

#### Key Features of Ollama
-- **Extensive Model Support**: Ollama supports a variety of open-source language models, including state-of-the-art models that are continuously updated.
+- **Extensive Model Support**: Ollama supports a variety of open-source large and small language models, including state-of-the-art models that are continuously updated. It also hosts embedding and re-ranking models, which are critical for RAG applications.
- **Ease of Integration**: The libraries and tools provided by Ollama are designed to integrate smoothly with existing systems, reducing the complexity of adding new models to the workflow.
- **Scalability**: Ollama's tools are built to handle different scales of deployment, from small-scale local setups to larger, more complex environments.
- **Community and Documentation**: Ollama has a strong community and extensive documentation, providing support and resources for developers to effectively use and troubleshoot the tools.
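To make the embedding support mentioned in the added line concrete, here is a minimal sketch (not part of the committed file) that queries a local Ollama server's `/api/embeddings` endpoint with Python's `requests` library; the default port 11434 and the `nomic-embed-text` model name are assumptions about the local setup.

```python
import requests

# Minimal sketch: request an embedding from a locally running Ollama server.
# Assumes Ollama is listening on its default port (11434) and that an embedding
# model such as "nomic-embed-text" (an example choice) has been pulled already.
response = requests.post(
    "http://localhost:11434/api/embeddings",
    json={
        "model": "nomic-embed-text",
        "prompt": "Ollama also serves embedding models for RAG pipelines.",
    },
    timeout=30,
)
response.raise_for_status()

embedding = response.json()["embedding"]  # a list of floats
print(len(embedding))  # dimensionality depends on the chosen model
```

The returned vector can then be stored in a vector database and used for retrieval in a RAG pipeline.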
