# LLM Mediator

A simple mediator for different LLM models. It caches the response for repeated identical inputs during debugging, saving you money.

## Features

- Cache
- GPT-3.5
- GPT-3.5-16k
- GPT-4
- GPT-4-32k
- GPT-4-vision
- DeepSeek-Gradio-API (Chinese LLM Gradio API)
- DeepSeek (Chinese LLM)

## Quick Usage

Install:

~~~shell
# Install from PyPI
pip install LLM-Mediator
# Or install llm_mediator from GitHub
pip install git+https://github.com/zeuscsc/llm_mediator.git
~~~

Usage:

~~~python
# Import paths are an assumption based on the package name;
# adjust to the actual module layout if they differ.
from llm_mediator.gpt import GPT
from llm_mediator.llm import LLM

model_name = "GPT-4-32k"
model = LLM(GPT)
model.model_class.set_model_name(model_name)
response = model.get_response(system, assistant, user)
~~~

Here `system`, `assistant`, and `user` are the input texts, and `response` is the output text. Alternatively, you can follow the OpenAI chat-completion conventions:

~~~python
generator = model.get_chat_completion(
    messages=messages,
    functions=functions,
    function_call=function_call,
    stream=True,
    temperature=0,
    completion_extractor=GPT.AutoGeneratorExtractor,
    print_chunk=False,
)
~~~
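
With `stream=True` the call returns a generator. A minimal sketch of consuming it, assuming each yielded chunk is a text fragment (an assumption, not documented behavior of `AutoGeneratorExtractor`):

~~~python
# Consume the streamed completion chunk by chunk.
# Assumes each yielded item is a text fragment; adjust if the
# extractor yields structured chunks instead.
chunks = []
for chunk in generator:
    print(chunk, end="", flush=True)  # show tokens as they arrive
    chunks.append(chunk)
full_response = "".join(chunks)
~~~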

## Set Environment Variables
Unix:

~~~shell
export OPENAI_API_KEY="your openai key"  # necessary for GPT
export TECKY_API_KEY="your tecky key"    # necessary for GPT
~~~

Windows:

~~~powershell
$ENV:OPENAI_API_KEY="your openai key"  # necessary for GPT
$ENV:TECKY_API_KEY="your tecky key"    # necessary for GPT
~~~

Python: set the keys directly in your code:

~~~python
from llm_mediator import gpt

gpt.OPENAI_API_KEY = "your openai key"  # necessary for GPT
gpt.TECKY_API_KEY = "your tecky key"    # necessary for GPT
~~~
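
Once keys are set, the caching behavior described above can be observed by repeating a call with identical inputs; the second call should be served from the cache rather than the API. A minimal sketch, assuming the `LLM`/`GPT` classes from the Usage section (import paths are a guess, not confirmed by the package docs):

~~~python
from llm_mediator.gpt import GPT
from llm_mediator.llm import LLM  # import path assumed; adjust if needed

model = LLM(GPT)
model.model_class.set_model_name("GPT-3.5")

system = "You are a helpful assistant."
assistant = ""
user = "What is a mediator pattern?"

first = model.get_response(system, assistant, user)   # hits the API
second = model.get_response(system, assistant, user)  # same input text
print(first == second)  # True when the second call is served from the cache
~~~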
