This project is a mock implementation of the OpenAI API endpoints for use during development. It is built with Node.js and Express and written in TypeScript.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
- Node.js
- npm
- Clone the repository:

  ```sh
  git clone https://github.com/seyf1elislam/fake_openai_endpoint_ts
  ```

- Navigate to the project directory:

  ```sh
  cd fake_openai_endpoint_ts
  ```

- Install the dependencies:

  ```sh
  npm install
  ```
To start the application, run the following command:

```sh
npm run serve
```

The application will start and listen on http://127.0.0.1:3000/v1.
- Node.js
- Express
- TypeScript
This application provides two main API endpoints:
This endpoint returns a list of available models. To use it, send a GET request to http://127.0.0.1:3000/v1/models.

Example:

```sh
curl http://127.0.0.1:3000/v1/models
```
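Assuming the mock mirrors the standard OpenAI list shape (an `object: "list"` wrapper around a `data` array — an assumption about this mock's output, not something confirmed above), the model IDs can be extracted from the response like this:

```python
import json

# Hypothetical response body in the standard OpenAI list shape;
# the model names here are placeholders, not this mock's actual output.
raw = '{"object": "list", "data": [{"id": "GPTforST", "object": "model"}, {"id": "gpt", "object": "model"}]}'

response = json.loads(raw)
model_ids = [m["id"] for m in response["data"]]
print(model_ids)  # → ['GPTforST', 'gpt']
```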
This endpoint is used to get completions for a given prompt. To use it, send a POST request to http://127.0.0.1:3000/v1/completions with a JSON body containing the `model` and `prompt` parameters.
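If you are not using a client library, the same request can be prepared with Python's standard library. This is a sketch: the `model` and `prompt` values are illustrative, and actually sending the request requires the mock server to be running.

```python
import json
import urllib.request

# Build the JSON body; the 'model' and 'prompt' values are just examples.
payload = {"model": "gpt", "prompt": "Say this is a test"}
body = json.dumps(payload).encode("utf-8")

# Prepare the POST request against the local mock endpoint.
req = urllib.request.Request(
    "http://127.0.0.1:3000/v1/completions",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# response = urllib.request.urlopen(req)  # uncomment with the server running
```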
```python
import openai

client = openai.OpenAI(
    api_key="...",
    base_url="http://127.0.0.1:3000/v1",
)

# Non-streaming example
completion = client.chat.completions.create(
    model="GPTforST",
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.choices[0].message.content)
```
```python
# Streaming example
stream = client.chat.completions.create(
    model="gpt",
    messages=[{"role": "user", "content": "Say this is a test"}],
    stream=True,
)
for chunk in stream:  # iterate the stream directly rather than the private _iterator attribute
    print(chunk.choices[0].delta.content or "", end="")
```
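Each streamed chunk carries only a fragment of the reply in `delta.content`; to reconstruct the full text you concatenate the fragments. The sketch below simulates that accumulation with plain strings (no server needed; the fragments are placeholders, not this mock's actual output):

```python
# Simulated delta fragments, as a chat stream might deliver them.
deltas = ["This ", "is ", "a ", "test"]

reply = ""
for fragment in deltas:
    reply += fragment or ""  # delta.content can be None on some chunks

print(reply)  # → This is a test
```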