
Docs: updates #7

Merged
18 commits merged on Dec 18, 2024
2 changes: 1 addition & 1 deletion docs/docs/deployments/index.mdx
@@ -5,7 +5,7 @@ import DocCardList from '@theme/DocCardList';
The **Deployment Modes** refer to where the agent is running as well as its capabilities.

- `In-Kernel` - When installed in the Jupyter Kernel, the Agent can be requested directly. This is not the recommended way and should be used only for development purposes.
- `Out-Kernel Stateless` - The Agent can be requested thourh CLI for example. In a Stateless it is not possible to leverage the `Agent Memory` features, meaning that the agent is stateless and does not remember previous interactions.
- `Out-Kernel Stateless` - The Agent can be requested through the CLI, for example. In Stateless mode it is not possible to leverage the `Agent Memory` features, meaning that the agent does not remember previous interactions.
- `Out-Kernel Stateful` - A separate process that is requested via e.g. REST endpoints and is able to leverage the `Agent Memory` features (a hypothetical sketch follows below).

<DocCardList />
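
To make the modes concrete, the sketch below shows a hypothetical request to an Out-Kernel Stateful agent process over REST. The endpoint, port, and payload shape are illustrative assumptions, not the actual API.

```python
import requests

# Hypothetical call to an Out-Kernel Stateful agent exposed over REST.
# The endpoint URL and JSON payload are assumptions for illustration.
response = requests.post(
    "http://localhost:8000/api/agent/prompt",
    json={
        "input": "Add a cell that plots a sine wave",
        # A session identifier would let the agent reuse its Agent Memory
        # across requests, which the Stateless modes cannot do.
        "session_id": "notebook-1",
    },
)
print(response.json())
```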
4 changes: 3 additions & 1 deletion docs/docs/deployments/out-kernel-stateful/index.mdx
@@ -1,3 +1,5 @@
# Out Kernel Stateful

A separated process that is requested via e.g. REST endpoints, being able to leverage the `Agent Memory` features.
A separate process that is requested via e.g. REST endpoints.

In this Stateful mode, it is possible to leverage the `Agent Memory` features, so the Agent can build on previous interactions.
4 changes: 3 additions & 1 deletion docs/docs/deployments/out-kernel-stateless/index.mdx
@@ -1,3 +1,5 @@
# Out Kernel Stateless

The Agent can be requested thourh CLI for example. In a Stateless it is not possible to leverage the `Agent Memory` features, meaning that the agent is stateless and does not remember previous interactions.
The Agent can be requested through the CLI, for example.

In this Stateless mode, it is not possible to leverage the `Agent Memory` features, meaning that the Agent does not remember previous interactions.
2 changes: 2 additions & 0 deletions docs/docs/interactions/ask-mode/_category_.yaml
@@ -0,0 +1,2 @@
label: "Ask Mode"
position: 1
3 changes: 3 additions & 0 deletions docs/docs/interactions/ask-mode/index.mdx
@@ -0,0 +1,3 @@
# Ask Mode

In `Ask Mode`, the Agent is explicitly `requested` by the User.
2 changes: 0 additions & 2 deletions docs/docs/interactions/ask/_category_.yaml

This file was deleted.

3 changes: 0 additions & 3 deletions docs/docs/interactions/ask/index.mdx

This file was deleted.

2 changes: 2 additions & 0 deletions docs/docs/interactions/controllers/_category_.yaml
@@ -0,0 +1,2 @@
label: "Controllers"
position: 5
3 changes: 3 additions & 0 deletions docs/docs/interactions/controllers/index.mdx
@@ -0,0 +1,3 @@
# Controllers

Controllers.
11 changes: 9 additions & 2 deletions docs/docs/interactions/index.mdx
@@ -4,7 +4,14 @@

The **interaction modes** refer to how the agent is used.

- `Ask` - The Agent is triggered when the User requests it.
- `Listen` - The Agent "observes" the Notebook an Kernels events in the background and is triggered when a specific event occurs.
- [`Ask Mode`](/docs/interactions/ask-mode) - The Agent is explicitly `requested` by the User.
- [`Listen Mode`](/docs/interactions/listen-mode) - The Agent `observes` the Notebook and Kernel events and is requested when a specific event occurs, without User action.

To interact with the Agent, you will need to:

- Provide information via [`Inputters`](/docs/interactions/inputters).
- Retrieve the result via [`Outputters`](/docs/interactions/outputters).

[`Controllers`](/docs/interactions/controllers) are available to ease User interaction with the Agent.

<DocCardList />
2 changes: 2 additions & 0 deletions docs/docs/interactions/inputters/_category_.yaml
@@ -0,0 +1,2 @@
label: "Inputters"
position: 3
57 changes: 57 additions & 0 deletions docs/docs/interactions/inputters/index.mdx
@@ -0,0 +1,57 @@
# Inputters

Available Inputters:

- CLI.
- Notebook Cell Extension.
- Notebook Extension.

The types of input for an AI agent depend on the task it is designed for and the modalities it can process. Common types of inputs include:

1. Textual Input
- Natural Language: Plain text, questions, or commands written in human language. Example: "Write a story about a space explorer."
- Structured Data: JSON, XML, or other data formats with structured content. Example: `{"age": 30, "gender": "female", "preferences": ["sports", "music"]}`

2. Visual Input
- Images: Photographs, drawings, or other static visual data. Example: A JPEG of a sunset uploaded for analysis.
- Videos: Sequences of images, often used in tasks like action recognition or video summarization. Example: A 10-second clip of a person playing basketball.
- 3D Data: Point clouds or 3D models used in fields like robotics or virtual reality. Example: A CAD model for object recognition.

3. Auditory Input
- Audio Files: Speech, music, or other sound recordings. Example: An MP3 file of someone saying, "What's the weather today?"
- Live Audio Streams: Real-time audio data for tasks like speech-to-text or sound detection.

4. Sensor Data
- Motion Sensors: Inputs from accelerometers, gyroscopes, or other motion detectors. Example: Data from a fitness tracker measuring steps and activity.
- Environmental Sensors: Inputs like temperature, humidity, or light levels. Example: Data from IoT sensors in a smart home.

5. Numerical or Tabular Data
- Spreadsheets or CSV Files: Inputs for machine learning models used in financial, statistical, or predictive analytics. Example: Sales data for a forecasting model.

6. Multimodal Input
- Combination of Modalities: Inputs that mix text, images, audio, and other formats. Example: A video clip with subtitles analyzed for sentiment and actions.

7. Code
- Programming Code: For debugging, code generation, or analysis. Example: Python code submitted for auto-completion or error checking.

8. Commands or Control Inputs
- Interactive Commands: Inputs from a user interface, voice command, or button clicks. Example: A user saying, "Turn on the lights" to a smart assistant.

9. Contextual Inputs
- State Information: Data about the current context or environment of the AI agent. Example: Current GPS location provided to a navigation assistant.
- Previous Interactions: Memory of prior prompts or actions to enable continuity in conversation or tasks. Example: "Continue the story from where we left off."

10. Real-Time Streams
- Sensor Streams: Continuous input from a camera, microphone, or another sensor. Example: Live video feed for a surveillance AI.

11. Game or Simulation Data
- Game States: Inputs representing the current state in a game or simulation. Example: The configuration of pieces on a chessboard.

12. Logical and Mathematical Inputs
- Formulas or Equations: Mathematical or logical expressions for computation. Example: A quadratic equation to solve.

13. Feedback and Labels
- Supervisory Inputs: Labels or corrections provided by humans for training or refinement. Example: "The generated text is biased; correct it."

Each type of input often requires preprocessing or specific handling to make it compatible with the AI system. Multimodal AI systems are capable of handling and integrating multiple types of input for more complex tasks.
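
As a sketch of how such heterogeneous inputs could be normalized before reaching the Agent, the classes below model a single multimodal request. The class and field names are illustrative assumptions, not part of the Jupyter AI Agent API.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class InputPart:
    """One modality-specific piece of an input (hypothetical structure)."""
    kind: str      # e.g. "text", "image", "audio", "code", "structured"
    payload: Any   # raw text, bytes, a file path, parsed JSON, ...
    metadata: dict = field(default_factory=dict)

@dataclass
class AgentInput:
    """A single, possibly multimodal, request handed to an inputter."""
    parts: list[InputPart]
    # Contextual inputs: state information, previous interactions, ...
    context: dict = field(default_factory=dict)

# Usage: textual and structured input combined in one request.
request = AgentInput(parts=[
    InputPart(kind="text", payload="Summarize the preferences below."),
    InputPart(kind="structured", payload={"age": 30, "preferences": ["sports", "music"]}),
])
```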
2 changes: 2 additions & 0 deletions docs/docs/interactions/listen-mode/_category_.yaml
@@ -0,0 +1,2 @@
label: "Listen Mode"
position: 2
5 changes: 5 additions & 0 deletions docs/docs/interactions/listen-mode/index.mdx
@@ -0,0 +1,5 @@
# Listen Mode

In `Listen Mode`, the Agent `observes` the Notebook and Kernel events and is requested when a specific event occurs, without User action.

In this mode, the User needs to opt in via a [`Controller`](/docs/interactions/controllers).
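
A minimal sketch of the opt-in, event-driven idea; the class and event names are assumptions for illustration, not the actual extension API.

```python
# Hypothetical sketch of Listen Mode: the Agent subscribes to kernel events
# and only reacts once the User has opted in via a Controller.
class ListenModeAgent:
    def __init__(self) -> None:
        self.enabled = False  # opt-in flag toggled by a Controller

    def opt_in(self) -> None:
        self.enabled = True

    def on_kernel_event(self, event: dict) -> None:
        # React only to the events the User opted in for, e.g. errors.
        if self.enabled and event.get("type") == "execute_error":
            self.explain_error(event)

    def explain_error(self, event: dict) -> None:
        print(f"Agent requested by event: {event.get('traceback')}")

agent = ListenModeAgent()
agent.opt_in()  # performed by the Controller in the UI
agent.on_kernel_event({"type": "execute_error", "traceback": "ZeroDivisionError"})
```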
2 changes: 0 additions & 2 deletions docs/docs/interactions/listen/_category_.yaml

This file was deleted.

3 changes: 0 additions & 3 deletions docs/docs/interactions/listen/index.mdx

This file was deleted.

2 changes: 2 additions & 0 deletions docs/docs/interactions/outputters/_category_.yaml
@@ -0,0 +1,2 @@
label: "Outputters"
position: 4
21 changes: 21 additions & 0 deletions docs/docs/interactions/outputters/index.mdx
@@ -0,0 +1,21 @@
# Outputters

Available Outputters:

- CLI.
- Notebook Cell Extension.
- Notebook Extension.

The Outputters need to provide options for the user to:

- Accept the Outputs.
- Request execution of the Outputs.
- Request explanation of the Outputs.

Output is a broad term used to describe any result generated by the AI, whether it's text, an image, or another format. The result of an AI prompt is often referred to based on the type of output it produces or the context in which it is used.

- Response: A general term used when the AI provides a textual reply or answer to the input prompt.
- Completion: In contexts like text generation with models like OpenAI's GPT, it is called a completion because the AI completes the input provided.
- Generation: This term is used when the AI creates something new, like text, images, music, etc.
- Artifact: A term sometimes used in artistic or creative contexts to describe the produced work, especially in image generation.
- Result: A simple and generic term for the outcome of running a prompt.
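
A minimal sketch of the Accept / Execute / Explain contract listed above; the interface and method names are illustrative assumptions, not the project's actual API.

```python
from abc import ABC, abstractmethod

class Outputter(ABC):
    """Hypothetical contract: every Outputter offers the three User actions."""

    @abstractmethod
    def render(self, output: str) -> None:
        """Display the Agent output (response, completion, generation, ...)."""

    @abstractmethod
    def accept(self, output: str) -> None:
        """User accepts the output, e.g. keeps a generated cell."""

    @abstractmethod
    def execute(self, output: str) -> None:
        """User requests execution of the output, e.g. runs generated code."""

    @abstractmethod
    def explain(self, output: str) -> None:
        """User requests an explanation of the output from the Agent."""

class CLIOutputter(Outputter):
    """Simplest possible implementation, printing to the terminal."""
    def render(self, output: str) -> None:
        print(output)
    def accept(self, output: str) -> None:
        print("accepted")
    def execute(self, output: str) -> None:
        print("execution requested")
    def explain(self, output: str) -> None:
        print("explanation requested")

CLIOutputter().render("print('hello')")
```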
2 changes: 2 additions & 0 deletions docs/docs/models/azure/_category_.yaml
@@ -0,0 +1,2 @@
label: "Azure OpenAI"
position: 1
13 changes: 13 additions & 0 deletions docs/docs/models/azure/index.mdx
@@ -0,0 +1,13 @@
# Azure

## Azure OpenAI

Jupyter AI Agent supports models from [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service).

Read the [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai) to obtain the required credentials, and define them in the following environment variables.

```bash
export OPENAI_API_VERSION="..."
export AZURE_OPENAI_ENDPOINT="..."
export AZURE_OPENAI_API_KEY="..."
```
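
With those variables set, a LangChain Azure client (the agent code later in this PR uses LangChain's `AgentExecutor`) picks the credentials up from the environment. A minimal sketch, assuming a deployment named `gpt-4o-mini`:

```python
from langchain_openai import AzureChatOpenAI

# AzureChatOpenAI reads AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY and
# OPENAI_API_VERSION from the environment when not passed explicitly.
llm = AzureChatOpenAI(azure_deployment="gpt-4o-mini")  # your deployment name

print(llm.invoke("Say hello from Jupyter AI Agent.").content)
```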
2 changes: 2 additions & 0 deletions docs/docs/models/fin-tuned/_category_.yaml
@@ -0,0 +1,2 @@
label: "Fine-tuned Models"
position: 3
7 changes: 7 additions & 0 deletions docs/docs/models/fin-tuned/index.mdx
@@ -0,0 +1,7 @@
# Fine-tuned Models

Datalayer fine-tunes models to support specific use-cases.

- [Satellites](https://pypi.org/project/satellites)
- [Telescopes](https://pypi.org/project/telescopes)
- [Jupyter Earth](https://pypi.org/project/jupyter-earth)
4 changes: 4 additions & 0 deletions docs/docs/models/index.mdx
@@ -1,3 +1,5 @@
import DocCardList from '@theme/DocCardList';

# Models

Jupyter AI Agent currently supports models from [Azure OpenAI](https://azure.microsoft.com/en-us/products/ai-services/openai-service).
Expand All @@ -9,3 +11,5 @@ export OPENAI_API_VERSION="..."
export AZURE_OPENAI_ENDPOINT="..."
export AZURE_OPENAI_API_KEY="..."
```

<DocCardList />
2 changes: 2 additions & 0 deletions docs/docs/models/llama3/_category_.yaml
@@ -0,0 +1,2 @@
label: "Llama3"
position: 2
3 changes: 3 additions & 0 deletions docs/docs/models/llama3/index.mdx
@@ -0,0 +1,3 @@
# Llama3 Model

Llama3 Model.
12 changes: 6 additions & 6 deletions docs/docs/tools/index.mdx
@@ -2,12 +2,12 @@

The current Tools are listed here.

- **Add Code Cell**: Dynamically add new code cells to the notebook.
- **Execute Code Cell**: Run code within specific cells, providing instant results.
- **Add Markdown Cell**: Insert markdown cells to structure and document notebooks effectively.
- **Add Code Cell** - Dynamically add new code cells to the notebook.
- **Add Markdown Cell** - Insert markdown cells to structure and document notebooks effectively.
- **Execute Code Cell** - Run code within specific cells, providing instant results.

We are implementing more Tools.

- **Modify Code Cell**: Edit existing code cells to fix errors or improve code quality.
- **Add Code Cell at a specific position**: Insert code cells at a specific location in the notebook.
- **Add Markdown Cell at a specific position**: Insert markdown cells at a specific location in the notebook.
- **Modify Code Cell** - Edit existing code cells to fix errors or improve code quality.
- **Add Code Cell at a specific position** - Insert code cells at a specific location in the notebook.
- **Add Markdown Cell at a specific position** - Insert markdown cells at a specific location in the notebook.
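
As the diff to `explain_error.py` below illustrates, a Tool is a plain Python callable handed to the agent. Here is a minimal sketch of a custom Tool following that pattern; the body only prints, where the real Tools delegate to notebook and kernel helpers.

```python
def add_markdown_cell(cell_content: str) -> None:
    """Add a markdown cell with the given content to the notebook."""
    # Stand-in body for illustration; the real Tool calls a helper that
    # talks to the notebook and kernel (see explain_error.py below).
    print(f"Would insert markdown cell: {cell_content!r}")

# The list of callables is what gets handed to the agent, as in the diff.
tools = [add_markdown_cell]
```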
1 change: 1 addition & 0 deletions jupyter_ai_agent/agents/explain_error.py
@@ -32,6 +32,7 @@ def add_code_cell(cell_content: str) -> None:
"""Add a Python code cell with a content to the notebook and execute it."""
return add_code_cell_tool(notebook, kernel, cell_content)


tools = [add_code_cell]

cells_content_until_first_error, first_error = retrieve_cells_content_until_first_error(notebook)
2 changes: 1 addition & 1 deletion jupyter_ai_agent/agents/prompt.py
@@ -42,7 +42,7 @@ def add_markdown_cell(cell_content: str) -> None:
else:
SYSTEM_PROMP_FINAL = SYSTEM_PROMPT

agent = create_azure_open_ai_agent(azure_deployment_name, SYSTEM_PROMPT, tools)
agent = create_azure_open_ai_agent(azure_deployment_name, SYSTEM_PROMP_FINAL, tools)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

return list(agent_executor.stream({"input": input}))