Commit

Merge branch 'main' into dependabot/pip/pymongo-4.6.3
denniszielke authored May 17, 2024
2 parents a6df033 + 97f8b30 commit 506ac78
Showing 48 changed files with 1,850 additions and 1,004 deletions.
14 changes: 12 additions & 2 deletions .env.example
Original file line number Diff line number Diff line change
@@ -10,9 +10,19 @@ AZURE_OPENAI_COMPLETION_DEPLOYMENT_NAME = "<YOUR AZURE OPENAI COMPLETIONS DEPLOY
AZURE_OPENAI_EMBEDDING_MODEL = "<YOUR OPENAI EMBEDDING MODEL NAME - e.g. text-embedding-ada-002>"
AZURE_OPENAI_EMBEDDING_DEPLOYMENT_NAME = "<YOUR AZURE OPENAI EMBEDDINGS DEPLOYMENT NAME - e.g. text-embedding-ada-002>"

#return here at lab 03 to fill the connection string
MONGO_DB_CONNECTION_STRING = "mongodb+srv://<username>:<password>@<clustername>.mongocluster.cosmos.azure.com/?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000"
MONGO_DB_database_name = "movie_db"
MONGO_DB_collection_name = "movie_data"
MONGO_DB_cache_collection_name = "chat_cache"
MONGO_DB_semcache_collection_name = "lc_chat_cache"
MONGO_DB_chathistory_collection_name = "lc_chat_history_data"
MONGO_DB_vector_property_name = "vector"

storage_file_url = "https://cosmosdbcosmicworks.blob.core.windows.net/fabcondata/movielens_dataset.json"

#return here to fill these at lab 03 - ACS
AZURE_AI_SEARCH_SERVICE_NAME = "<YOUR AZURE AI SEARCH SERVICE NAME - e.g. ai-vectorstore-xyz>"
AZURE_AI_SEARCH_ENDPOINT = "<YOUR AZURE AI SEARCH ENDPOINT NAME - e.g. https://ai-vectorstore-xyz.search.windows.net>"
AZURE_AI_SEARCH_INDEX_NAME = "<YOUR AZURE AI SEARCH INDEX NAME - e.g. ai-search-index>"
AZURE_AI_SEARCH_API_KEY = "<YOUR AZURE AI SEARCH ADMIN API KEY - get this value from the Azure portal>"

MONGO_DB_CONNECTION_STRING = "<mongodb://account:[email protected]:10255/? - Azure Cosmos DB for MongoDB get this value from the azure portal>"
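The `MONGO_DB_CONNECTION_STRING` values above carry their options (`tls`, `authMechanism`, `retrywrites`, `maxIdleTimeMS`) as URL query parameters. A quick way to sanity-check a filled-in string before running the labs is to parse it with the standard library. This is a sketch with placeholder credentials, not a value from this repo:

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical connection string matching the .env.example placeholder format.
conn = ("mongodb+srv://user:pass@mycluster.mongocluster.cosmos.azure.com/"
        "?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000")

parsed = urlparse(conn)
# parse_qs returns lists; flatten to single values for readability.
options = {k: v[0] for k, v in parse_qs(parsed.query).items()}

print(parsed.hostname)         # the cluster host
print(options["tls"])          # should be "true" for Cosmos DB for MongoDB
print(options["retrywrites"])  # should be "false"
```

If a required option is missing or mistyped, checking the parsed dictionary like this fails fast, before `pymongo` raises a less obvious connection error.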
4 changes: 2 additions & 2 deletions .github/workflows/build-acs-lc-python-api.yml
@@ -9,10 +9,10 @@ on:

defaults:
run:
working-directory: labs/04-deploy-ai/01-backend-api/acs-lc-python-api/acs-lc-python
working-directory: labs/04-deploy-ai/01-backend-api/aais-lc-python-api/aais-lc-python

env:
IMAGE_NAME: acs-lc-python-api
IMAGE_NAME: aais-lc-python-api

jobs:

4 changes: 2 additions & 2 deletions .github/workflows/build-acs-sk-csharp-api.yml
@@ -9,10 +9,10 @@ on:

defaults:
run:
working-directory: labs/04-deploy-ai/01-backend-api/acs-sk-csharp-api/acs-sk-csharp
working-directory: labs/04-deploy-ai/01-backend-api/aais-sk-csharp-api/aais-sk-csharp

env:
IMAGE_NAME: acs-sk-csharp-api
IMAGE_NAME: aais-sk-csharp-api

jobs:

2 changes: 2 additions & 0 deletions .gitignore
@@ -179,6 +179,8 @@ DocProject/Help/html
# Click-Once directory
publish/

.mono/**

# Publish Web Output
*.[Pp]ublish.xml
*.azurePubxml
5 changes: 3 additions & 2 deletions labs/00-setup/README.md
@@ -34,7 +34,7 @@ On the **Develop** page you will see values for **Key 1**, **Key 2**, **Location

![Alt text](images/deployments.png)

You can see above that we have a *completions* model `gpt-35-turbo` with version `0613` and an *embeddings* model `text-embedding-ada-002` with version `2`. If you have both of these, then you're good to go. If not, click on the **+ Create new deployment** link and follow the steps to create two deployments. Ensure that one model deployment uses `text-embedding-ada-002` and the other uses a completions model such as `gpt-35-turbo`.
You can see above that we have a *completions* model `gpt-35-turbo` with version `1106` or newer and an *embeddings* model `text-embedding-ada-002` with version `2`. If you have both of these, then you're good to go. If not, click on the **+ Create new deployment** link and follow the steps to create two deployments. Ensure that one model deployment uses `text-embedding-ada-002` and the other uses a completions model such as `gpt-35-turbo`.

Make a note of both the **deployment name** and the **model name** for each of the two deployments.

@@ -107,8 +107,9 @@ With all of the above updates to the `.env` file made, make sure you save the fi

**NOTE**: The `.gitignore` file in this repo is configured to ignore the `.env` file, so the secrets such as the API key will not be uploaded to a public repo.

You can update the rest of the properties later in the labs.
___

## Next Section

📣 [Prompts](../01-prompts/README.md)
📣 [Prompts](../01-prompts/README.md)
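The setup README above has you fill in the `.env` file that the labs read at runtime (typically via a dotenv-style library). As a rough illustration of what that loading step does, here is a stdlib-only sketch of parsing `KEY = "value"` lines; the helper function and sample values are hypothetical, not part of the labs:

```python
# Minimal sketch of reading KEY = "value" pairs from a .env-style file.
# The labs use a library for this; shown here with the standard library only.

def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')
    return env

sample = 'AZURE_OPENAI_API_KEY = "abc123"\n# comment\nMONGO_DB_database_name = "movie_db"'
print(parse_env(sample))
```

Note that comment lines such as `#return here at lab 03 to fill the connection string` are skipped, which is why the `.env.example` can carry instructions inline.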
Original file line number Diff line number Diff line change
@@ -55,7 +55,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.11.9"
},
"orig_nbformat": 4
},
Original file line number Diff line number Diff line change
@@ -194,7 +194,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.11.8"
},
"orig_nbformat": 4
},
4 changes: 2 additions & 2 deletions labs/02-integrating-ai/02-OpenAIPackages/openai.ipynb
@@ -22,7 +22,7 @@
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
@@ -168,7 +168,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.11.9"
},
"orig_nbformat": 4
},
105 changes: 73 additions & 32 deletions labs/02-integrating-ai/03-Langchain/langchain.ipynb
@@ -26,9 +26,7 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.llms import AzureOpenAI\n",
"from langchain_openai import AzureChatOpenAI\n",
"from langchain.schema import HumanMessage"
"from langchain_openai import AzureChatOpenAI"
]
},
{
@@ -91,24 +89,29 @@
"metadata": {},
"outputs": [],
"source": [
"# Define the prompt we want the AI to respond to - the message the Human user is asking\n",
"msg = HumanMessage(content=\"Explain step by step. How old is the president of USA?\")\n",
"\n",
"# Call the API\n",
"r = llm.invoke([msg])\n",
"r = llm.invoke(\"What things could I make with a Raspberry Pi?\")\n",
"\n",
"# Print the response\n",
"print(r.content)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Compared to using the OpenAI Python library as we did in the previous lab, Langchain further simplified the process of interacting with the LLM by reducing it to a `llm.invoke` call."
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## Send a prompt to Azure OpenAI using Langchain Chaining\n",
"## Using templates and chains\n",
"\n",
"We've seen that we can use Langchain to interact with the LLM and it's a little easier to work with than the OpenAI Python library. However, that's just the start of how Langchain makes it easier to work with LLMs. Most OpenAI models are designed to be interacted with using a Chat style interface, where you provide a persona or system prompt which helps the LLM understand the context of the conversation. This will then be sent to the LLM along with the user's request.\n",
"\n",
"Now that we have seen Langchain in action, let's take a quick peek at chaining and adding variables to our prompt. To do this we will add `LLMChain` to the `llm` instance created above."
"So that you don't have to set up the persona / system prompt every time you want to interact with the LLM, Langchain provides the concept of Templates. Templates are a way to define the persona and system prompt once and then reuse them across multiple interactions."
]
},
{
@@ -117,18 +120,20 @@
"metadata": {},
"outputs": [],
"source": [
"from langchain.prompts import PromptTemplate\n",
"from langchain.chains import LLMChain"
"from langchain_core.prompts import ChatPromptTemplate\n",
"prompt = ChatPromptTemplate.from_messages([\n",
" (\"system\", \"You are a chatbot that helps people generate ideas for their next project. You can help them brainstorm ideas, come up with a plan, or even help them with their project.\"),\n",
" (\"user\", \"{input}\")\n",
"])"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"With the OpenAI API, we still had to pass the prompt in using the `Completion.create()` method. With Langchain, we can create a `PromptTemplate`. This way, we can define our prompt up front and leave placeholders for values that will be set later on. The placeholder could be values that are passed from an end user or application via an API. We don't know what they at this point.\n",
"Above we've defined a \"system\" message which will tell the LLM how we're expecting it to respond, and an `{input}` placeholder for the user's prompt.\n",
"\n",
"In the below example, the `{input}` in curly brackets is the placeholder value that will be populated later on."
"Next, we define a chain. A chain allows us to define a sequence of operations that we want to perform. In this case, we're defining a simple chain that will take the prompt we've defined above and send it to the LLM."
]
},
{
@@ -137,19 +142,14 @@
"metadata": {},
"outputs": [],
"source": [
"# Create a prompt template with variables, note the curly braces\n",
"prompt = PromptTemplate(\n",
" input_variables=[\"input\"],\n",
" template=\"What interesting things can I make with a {input}?\",\n",
")"
"chain = prompt | llm"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Next we define a chain. In this case, the chain has two components. One component is the prompt template. The other is the object that represents our AI model (`llm`)."
"Now, we can invoke the chain in a similar fashion to how we invoked the LLM earlier. This time, we're passing in the user's input as a parameter to the chain, which will replace the `{input}` placeholder in the prompt."
]
},
{
@@ -158,16 +158,16 @@
"metadata": {},
"outputs": [],
"source": [
"# Create a chain\n",
"chain = LLMChain(llm=llm, prompt=prompt)"
"chain.invoke({\"input\": \"I've just purchased a Raspberry Pi and I'm looking for a project to work on. Can you help me brainstorm some ideas?\"})"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Finally, we initiate the chain. You can see that we pass in a value for the `input` placeholder."
"The result will be an `AIMessage` object, which contains the response from the LLM.\n",
"\n",
"Let's enhance the chain further to get it to parse the output from the LLM and extract the text from the response. First, we define an output parser."
]
},
{
@@ -176,13 +176,54 @@
"metadata": {},
"outputs": [],
"source": [
"# Run the chain only specifying the input variable.\n",
"response = chain.invoke({\"input\": \"raspberry pi\"})\n",
"from langchain_core.output_parsers import StrOutputParser\n",
"\n",
"output_parser = StrOutputParser()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we redefine our chain to include the output parser. So now when we invoke the chain, it will \n",
"\n",
"# As we are using a single input variable, you could also run the string like this:\n",
"# response = chain.run(\"raspberry pi\")\n",
"- Take the prompt template and add the user's input\n",
"- Send the prompt to the LLM\n",
"- Parse the response from the LLM and extract the text"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain = prompt | llm | output_parser"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's invoke the chain again with the same prompt as before."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"chain.invoke({\"input\": \"I've just purchased a Raspberry Pi and I'm looking for a project to work on. Can you help me brainstorm some ideas?\"})"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This time, you should only get a string containing the text from the response.\n",
"\n",
"print(response['text'])"
"We can do much more powerful things with chains than simply setting up and passing prompts to the LLM and parsing the results. We can augment the prompt with external data retrieved from a database, we could add conversation history to provide context for a chatbot, or we could even chain multiple LLMs together to create a more powerful model. We'll explore some of these ideas in future labs."
]
},
{
@@ -231,7 +272,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.11.9"
},
"orig_nbformat": 4
},
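The notebook change above replaces `LLMChain` with the pipe-style composition `chain = prompt | llm | output_parser`. The pattern can be sketched in plain Python: the `|` operator glues steps together so that each step's output feeds the next step's input. This toy `Runnable` and its stand-in components are hypothetical illustrations, not LangChain's actual classes:

```python
# Toy sketch of the "pipe" composition pattern behind `prompt | llm | output_parser`.
# Not LangChain itself; just plain Python showing how `|` can build a pipeline.

class Runnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # Chain two steps: the output of self becomes the input of other.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# Stand-ins for the real components (all hypothetical):
prompt = Runnable(lambda d: f"System: be helpful\nUser: {d['input']}")
llm = Runnable(lambda p: {"content": f"ECHO: {p}"})   # fake model response
output_parser = Runnable(lambda msg: msg["content"])  # extract the text

chain = prompt | llm | output_parser
print(chain.invoke({"input": "Ideas for a Raspberry Pi?"}))
```

Dropping `output_parser` from the pipeline returns the raw message object instead of a string, which mirrors the `AIMessage` versus parsed-text distinction the notebook describes.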
28 changes: 25 additions & 3 deletions labs/02-integrating-ai/04-SemanticKernel/semantickernel.ipynb
@@ -76,15 +76,15 @@
},
"outputs": [],
"source": [
"#r \"nuget: Microsoft.SemanticKernel, 1.0.1\""
"#r \"nuget: Microsoft.SemanticKernel, 1.10.0\""
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Semantic Kernel works by creating an instance of the Kernel and then adding in various plugins to perform different functions. Those addins or functions can then be called individually or chained together to perform more complex tasks.\n",
"Semantic Kernel works by creating an instance of the Kernel and then adding in various plugins to perform different functions. Those plugins or functions can then be called individually or chained together to perform more complex tasks.\n",
"\n",
"We use the standard .NET `builder` pattern to initialise the kernel. Notice that we pass in the details of the completion model that we're going to use, the Azure OpenAI API endpoint URL and the API key."
]
@@ -112,12 +112,34 @@
"var kernel = builder.Build();"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Send a prompt to Azure OpenAI using Semantic Kernel\n",
"\n",
"Now that we've established a connection to the Azure OpenAI API, we can go ahead and send a prompt to the LLM."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"vscode": {
"languageId": "polyglot-notebook"
}
},
"outputs": [],
"source": [
"Console.WriteLine(await kernel.InvokePromptAsync(\"What things could I make with a Raspberry Pi?\"));"
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's create a Semantic Function to perform a simple request to Azure OpenAI. In this case, the function contains a *prompt template*. The template allows us to define a prompt and add placeholders for values that we will provide later. These values could come from user input, or another function, for example."
"Let's take that simple prompt forward and create a function with a prompt template to perform a simple request to Azure OpenAI. The template allows us to define a prompt and add placeholders for values that we will provide later. These values could come from user input, or another function, for example."
]
},
{
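The Semantic Kernel notebook above introduces prompt templates with placeholders that get filled from user input or other functions. The substitution idea can be sketched with a few lines of Python (SK's own template syntax uses `{{$variable}}` placeholders; this stand-in helper is hypothetical and ignores SK's other template features):

```python
import re

# Minimal sketch of Semantic Kernel-style prompt templating: replace each
# {{$name}} placeholder with the matching value from a variables dict.

def render(template: str, variables: dict) -> str:
    return re.sub(r"\{\{\$(\w+)\}\}",
                  lambda m: str(variables[m.group(1)]),
                  template)

template = "Write ideas for projects using a {{$input}}."
print(render(template, {"input": "Raspberry Pi"}))
```

The real kernel does this rendering for you when a function is invoked with arguments, then sends the rendered prompt to the model.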
2 changes: 1 addition & 1 deletion labs/03-orchestration/01-Tokens/tokens.ipynb
@@ -322,7 +322,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
"version": "3.11.9"
},
"orig_nbformat": 4
},