Updates and improvements for providers and documentation. #2515

Merged · 2 commits · Dec 28, 2024
3 changes: 1 addition & 2 deletions README.md
@@ -212,7 +212,7 @@ client = Client()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello"}],
# Add any other necessary parameters
web_search = False
)
print(response.choices[0].message.content)
```
@@ -230,7 +230,6 @@ response = client.images.generate(
model="flux",
prompt="a white siamese cat",
response_format="url"
# Add any other necessary parameters
)

image_url = response.data[0].url
33 changes: 30 additions & 3 deletions docs/async_client.md
@@ -12,6 +12,7 @@ The G4F AsyncClient API is designed to be compatible with the OpenAI API, making
- [Initializing the Client](#initializing-the-client)
- [Creating Chat Completions](#creating-chat-completions)
- [Configuration](#configuration)
- [Explanation of Parameters](#explanation-of-parameters)
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
- [Streaming Completions](#streaming-completions)
@@ -80,6 +81,29 @@ client = AsyncClient(
)
```

## Explanation of Parameters
**When using G4F to create chat completions or perform related tasks, you can configure the following parameters:**
- **`model`**:
Specifies the AI model to be used for the task. Examples include `"gpt-4o"` (GPT-4 Omni) or `"gpt-4o-mini"` for a lightweight version. The choice of model determines the quality and speed of the response. Always ensure the selected model is supported by the provider.

- **`messages`**:
**A list of dictionaries representing the conversation context. Each dictionary contains two keys:**
- `role`: Defines the role of the message sender, such as `"user"` (input from the user) or `"system"` (instructions to the AI).
- `content`: The actual text of the message.
**Example:**
```python
[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What day is it today?"}
]
```

- **`web_search`**:
(Optional) A Boolean flag indicating whether to enable internet-based search for the task. If `True`, the system performs a web search via the DuckDuckGo search engine to retrieve up-to-date information. This is particularly useful for obtaining real-time or specific details not contained in the model's training data.

- **`provider`**:
Specifies the backend provider for the API. Examples include `g4f.Provider.Blackbox` or `g4f.Provider.OpenaiChat`. Each provider may support a different subset of models and features, so select one that matches your requirements.

## Usage Examples
### Text Completions
**Generate text completions using the ChatCompletions endpoint:**
@@ -97,7 +121,8 @@ async def main():
"role": "user",
"content": "Say this is a test"
}
]
],
web_search = False
)

print(response.choices[0].message.content)
@@ -139,13 +164,15 @@ import g4f
import requests
import asyncio
from g4f.client import AsyncClient
from g4f.Provider.CopilotAccount import CopilotAccount

async def main():
client = AsyncClient(
provider=g4f.Provider.CopilotAccount
provider=CopilotAccount
)

image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
# Or: image = open("docs/images/cat.jpeg", "rb")

response = await client.chat.completions.create(
model=g4f.models.default,
@@ -374,4 +401,4 @@ Remember to handle errors gracefully, implement rate limiting, and monitor your

---

[Return to Home](/)
[Return to Home](/)
32 changes: 29 additions & 3 deletions docs/client.md
@@ -8,6 +8,7 @@
- [Initializing the Client](#initializing-the-client)
- [Creating Chat Completions](#creating-chat-completions)
- [Configuration](#configuration)
- [Explanation of Parameters](#explanation-of-parameters)
- [Usage Examples](#usage-examples)
- [Text Completions](#text-completions)
- [Streaming Completions](#streaming-completions)
@@ -84,6 +85,30 @@ client = Client(
)
```

## Explanation of Parameters
**When using G4F to create chat completions or perform related tasks, you can configure the following parameters:**
- **`model`**:
Specifies the AI model to be used for the task. Examples include `"gpt-4o"` (GPT-4 Omni) or `"gpt-4o-mini"` for a lightweight version. The choice of model determines the quality and speed of the response. Always ensure the selected model is supported by the provider.

- **`messages`**:
**A list of dictionaries representing the conversation context. Each dictionary contains two keys:**
- `role`: Defines the role of the message sender, such as `"user"` (input from the user) or `"system"` (instructions to the AI).
- `content`: The actual text of the message.
**Example:**
```python
[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What day is it today?"}
]
```

- **`web_search`**:
(Optional) A Boolean flag indicating whether to enable internet-based search for the task. If `True`, the system performs a web search via the DuckDuckGo search engine to retrieve up-to-date information. This is particularly useful for obtaining real-time or specific details not contained in the model's training data.

- **`provider`**:
Specifies the backend provider for the API. Examples include `g4f.Provider.Blackbox` or `g4f.Provider.OpenaiChat`. Each provider may support a different subset of models and features, so select one that matches your requirements.


## Usage Examples
### Text Completions
**Generate text completions using the `ChatCompletions` endpoint:**
@@ -99,7 +124,8 @@ response = client.chat.completions.create(
"role": "user",
"content": "Say this is a test"
}
]
],
web_search = False
# Add any other necessary parameters
)

@@ -234,15 +260,15 @@ client = Client(
provider=GeminiPro
)

image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/cat.jpeg", stream=True).raw
image = requests.get("https://raw.githubusercontent.com/xtekky/gpt4free/refs/heads/main/docs/images/cat.jpeg", stream=True).raw
# Or: image = open("docs/images/cat.jpeg", "rb")

response = client.chat.completions.create(
model=g4f.models.default,
messages=[
{
"role": "user",
"content": "What are on this image?"
"content": "What's in this image?"
}
],
image=image
12 changes: 9 additions & 3 deletions docs/providers-and-models.md
@@ -20,12 +20,13 @@ This document provides an overview of various AI providers and models, including
### Providers Free
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[api.airforce](https://api.airforce)|`g4f.Provider.Airforce`|`phi-2, gpt-4, gpt-4o-mini, gpt-4o, gpt-4-turbo, o1-mini, openchat-3.5, deepseek-coder, hermes-2-dpo, hermes-2-pro, openhermes-2.5, lfm-40b, german-7b, llama-2-7b, llama-3.1-70b, neural-7b, zephyr-7b, evil,`|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌+✔|
|[api.airforce](https://api.airforce)|`g4f.Provider.Airforce`|`phi-2, gpt-4, gpt-4o-mini, gpt-4o, gpt-4-turbo, o1-mini, openchat-3.5, deepseek-coder, hermes-2-dpo, hermes-2-pro, openhermes-2.5, lfm-40b, german-7b, llama-2-7b, llama-3.1-8b, llama-3.1-70b, neural-7b, zephyr-7b, evil,`|`sdxl, flux-pro, flux, flux-realism, flux-anime, flux-3d, flux-disney, flux-pixel, flux-4o, any-dark, midjourney, dall-e-3`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌+✔|
|[amigochat.io](https://amigochat.io/chat/)|`g4f.Provider.AmigoChat`|✔|✔|❌|✔|![Error](https://img.shields.io/badge/RateLimit-f48d37)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox`|`blackboxai, gpt-4, gpt-4o, gemini-pro, claude-3.5-sonnet, blackboxai-pro, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b, llama-3.3-70b, mixtral-7b, deepseek-chat, dbrx-instruct, qwq-32b, hermes-2-dpo`|`flux`|`blackboxai, gpt-4o, gemini-pro, gemini-flash, llama-3.1-8b, llama-3.1-70b, llama-3.1-405b`|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.Blackbox2`|`llama-3.1-70b`|`flux`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[blackbox.ai](https://www.blackbox.ai)|`g4f.Provider.BlackboxCreateAgent`|`llama-3.1-70b`|`flux`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[chatgpt.com](https://chatgpt.com)|`g4f.Provider.ChatGpt`|✔|❌|❌|✔|![Error](https://img.shields.io/badge/HTTPError-f48d37)|❌|
|[chatgpt.es](https://chatgpt.es)|`g4f.Provider.ChatGptEs`|`gpt-4, gpt-4o, gpt-4o-mini`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[claudeson.net](https://claudeson.net)|`g4f.Provider.ClaudeSon`|`claude-3.5-sonnet`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[playground.ai.cloudflare.com](https://playground.ai.cloudflare.com)|`g4f.Provider.Cloudflare`|`llama-2-7b, llama-3-8b, llama-3.1-8b, llama-3.2-1b, qwen-1.5-7b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[copilot.microsoft.com](https://copilot.microsoft.com)|`g4f.Provider.Copilot`|`gpt-4`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[darkai.foundation](https://darkai.foundation)|`g4f.Provider.DarkAI`|`gpt-3.5-turbo, gpt-4o, llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
@@ -47,6 +48,11 @@ This document provides an overview of various AI providers and models, including
|[teach-anything.com](https://www.teach-anything.com)|`g4f.Provider.TeachAnything`|`llama-3.1-70b`|❌|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[you.com](https://you.com)|`g4f.Provider.You`|✔|✔|✔|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|

---
### Providers Free [HuggingSpace](https://hf.space)
| Website | Provider | Text Models | Image Models | Vision Models | Stream | Status | Auth |
|----------|-------------|--------------|---------------|--------|--------|------|------|
|[black-forest-labs-flux-1-dev.hf.space](https://black-forest-labs-flux-1-dev.hf.space)|`g4f.Provider.BlackForestLabsFlux1Dev`|❌|`flux-dev`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[black-forest-labs-flux-1-schnell.hf.space](https://black-forest-labs-flux-1-schnell.hf.space)|`g4f.Provider.BlackForestLabsFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|
|[voodoohop-flux-1-schnell.hf.space](https://voodoohop-flux-1-schnell.hf.space)|`g4f.Provider.VoodoohopFlux1Schnell`|❌|`flux-schnell`|❌|✔|![](https://img.shields.io/badge/Active-brightgreen)|❌|

---
### Providers Needs Auth
@@ -165,7 +171,7 @@ This document provides an overview of various AI providers and models, including
|flux-disney|Flux AI|1+ Providers|[]( )|
|flux-pixel|Flux AI|1+ Providers|[]( )|
|flux-4o|Flux AI|1+ Providers|[]( )|
|flux-schnell|Black Forest Labs|2+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|flux-schnell|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
|dall-e-3|OpenAI|5+ Providers|[openai.com](https://openai.com/index/dall-e/)|
|midjourney|Midjourney|2+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
|any-dark||2+ Providers|[]( )|
4 changes: 4 additions & 0 deletions docs/requests.md
@@ -389,3 +389,7 @@ Feel free to customize and expand upon these examples to suit your specific need

6. **Logging:**
- Implement logging to monitor the behavior of your applications, which is crucial for debugging and maintaining your systems.

---

[Return to Home](/)
14 changes: 9 additions & 5 deletions etc/examples/api.py → etc/examples/api_completions_copilot.py
100644 → 100755
@@ -6,7 +6,7 @@
conversation_id = str(uuid.uuid4())
body = {
"model": "",
"provider": "Copilot",
"provider": "Copilot",
"stream": True,
"messages": [
{"role": "user", "content": "Hello, I am Heiner. How are you?"}
@@ -22,7 +22,9 @@
if json_data.get("error"):
print(json_data)
break
print(json_data.get("choices", [{"delta": {}}])[0]["delta"].get("content", ""), end="")
content = json_data.get("choices", [{"delta": {}}])[0]["delta"].get("content", "")
if content:
print(content, end="")
except json.JSONDecodeError:
pass
print()
@@ -31,7 +33,7 @@
body = {
"model": "",
"provider": "Copilot",
"stream": True,
"stream": True,
"messages": [
{"role": "user", "content": "Tell me something about my name"}
],
@@ -46,6 +48,8 @@
if json_data.get("error"):
print(json_data)
break
print(json_data.get("choices", [{"delta": {}}])[0]["delta"].get("content", ""), end="")
content = json_data.get("choices", [{"delta": {}}])[0]["delta"].get("content", "")
if content:
print(content, end="")
except json.JSONDecodeError:
pass
pass
6 changes: 4 additions & 2 deletions etc/examples/image_api.py → etc/examples/api_generations_image.py
100644 → 100755
@@ -1,9 +1,11 @@
import requests
url = "http://localhost:1337/v1/images/generations"
body = {
"model": "dall-e",
"model": "flux",
"prompt": "hello world user",
"response_format": None,
#"response_format": "url",
#"response_format": "b64_json",
}
data = requests.post(url, json=body, stream=True).json()
print(data)
print(data)
33 changes: 33 additions & 0 deletions etc/examples/messages.py
@@ -0,0 +1,33 @@
from g4f.client import Client

class ConversationHandler:
def __init__(self, model="gpt-4"):
self.client = Client()
self.model = model
self.conversation_history = []

def add_user_message(self, content):
self.conversation_history.append({
"role": "user",
"content": content
})

def get_response(self):
response = self.client.chat.completions.create(
model=self.model,
messages=self.conversation_history
)
assistant_message = {
"role": response.choices[0].message.role,
"content": response.choices[0].message.content
}
self.conversation_history.append(assistant_message)
return assistant_message["content"]

# Usage example
conversation = ConversationHandler()
conversation.add_user_message("Hello!")
print("Assistant:", conversation.get_response())

conversation.add_user_message("How are you?")
print("Assistant:", conversation.get_response())
25 changes: 25 additions & 0 deletions etc/examples/messages_stream.py
@@ -0,0 +1,25 @@
import asyncio
from g4f.client import AsyncClient

async def main():
client = AsyncClient()

stream = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Say hello there!"}],
stream=True,
)

accumulated_text = ""
try:
async for chunk in stream:
if chunk.choices and chunk.choices[0].delta.content:
content = chunk.choices[0].delta.content
accumulated_text += content
print(content, end="", flush=True)
except Exception as e:
print(f"\nError occurred: {e}")
finally:
print("\n\nFinal accumulated text:", accumulated_text)

asyncio.run(main())
Empty file modified etc/examples/openaichat.py
100644 → 100755
Empty file.
17 changes: 17 additions & 0 deletions etc/examples/text_completions_demo_async.py
@@ -0,0 +1,17 @@
import asyncio
from g4f.client import AsyncClient

async def main():
client = AsyncClient()

response = await client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "how does a court case get to the Supreme Court?"}
]
)

print(response.choices[0].message.content)

asyncio.run(main())
13 changes: 13 additions & 0 deletions etc/examples/text_completions_demo_sync.py
@@ -0,0 +1,13 @@
from g4f.client import Client

client = Client()

response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "how does a court case get to the Supreme Court?"}
],
)

print(response.choices[0].message.content)
49 changes: 49 additions & 0 deletions etc/examples/text_completions_streaming.py
@@ -0,0 +1,49 @@
import asyncio
from g4f.client import Client, AsyncClient

question = """
Hey! How can I recursively list all files in a directory in Python?
"""

# Synchronous streaming function
def sync_stream():
client = Client()
stream = client.chat.completions.create(
model="gpt-4",
messages=[
{"role": "user", "content": question}
],
stream=True,
)

for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content or "", end="")

# Asynchronous streaming function
async def async_stream():
client = AsyncClient()
stream = client.chat.completions.create(
model="gpt-4",
messages=[
{"role": "user", "content": question}
],
stream=True,
)

async for chunk in stream:
if chunk.choices and chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")

# Main function to run both streams
def main():
print("Synchronous Stream:")
sync_stream()
print("\n\nAsynchronous Stream:")
asyncio.run(async_stream())

if __name__ == "__main__":
try:
main()
except Exception as e:
print(f"An error occurred: {str(e)}")