{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
}
},
"cells": [
{
"cell_type": "markdown",
"source": [
"# Gemini API: Self-ask prompting"
],
"metadata": {
"id": "sP8PQnz1QrcF"
}
},
{
"cell_type": "markdown",
"source": [
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/cookbook/blob/main/quickstarts/examples/prompting/Self_ask_prompting.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"/>Run in Google Colab</a>\n",
" </td>\n",
"</table>"
],
"metadata": {
"id": "bxGr_x3MRA0z"
}
},
{
"cell_type": "markdown",
"source": [
"Self-ask prompting is similar to chain-of-thought prompting, but instead of reasoning step by step within a single answer, the model poses follow-up questions to itself and answers them on the way to the final answer. Like chain of thought, it helps the model think analytically."
],
"metadata": {
"id": "ysy--KfNRrCq"
}
},
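{
"cell_type": "markdown",
"source": [
"A self-ask prompt seeds the model with at least one worked example in a fixed shape, which the example later in this notebook follows:\n",
"\n",
"```\n",
"Query: <the original question>\n",
"Are follow up questions needed?: yes.\n",
"Follow up: <sub-question>\n",
"Intermediate answer: <answer to the sub-question>\n",
"(...repeat follow ups as needed...)\n",
"Final answer: <answer to the original question>\n",
"```"
],
"metadata": {}
},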
{
"cell_type": "code",
"source": [
"!pip install -U -q google-generativeai"
],
"metadata": {
"id": "Ne-3gnXqR0hI"
},
"execution_count": 1,
"outputs": []
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"id": "EconMHePQHGw"
},
"outputs": [],
"source": [
"import google.generativeai as genai\n",
"\n",
"from IPython.display import Markdown"
]
},
{
"cell_type": "markdown",
"source": [
"## Configure your API key\n",
"\n",
"To run the following cell, your API key must be stored in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see [Authentication](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Authentication.ipynb) for an example."
],
"metadata": {
"id": "eomJzCa6lb90"
}
},
{
"cell_type": "code",
"source": [
"from google.colab import userdata\n",
"GOOGLE_API_KEY = userdata.get('GOOGLE_API_KEY')\n",
"\n",
"genai.configure(api_key=GOOGLE_API_KEY)"
],
"metadata": {
"id": "v-JZzORUpVR2"
},
"execution_count": 3,
"outputs": []
},
{
"cell_type": "markdown",
"source": [
"## Example"
],
"metadata": {
"id": "yQnqEPjephXi"
}
},
{
"cell_type": "code",
"source": [
"prompt = \"\"\"\n",
"Query: Who was the president of the United States when Mozart died?\n",
"Are follow up questions needed?: yes.\n",
"Follow up: When did Mozart die?\n",
"Intermediate answer: 1791.\n",
"Follow up: Who was the president of the United States in 1791?\n",
"Intermediate answer: George Washington.\n",
"Final answer: When Mozart died, George Washington was the president of the USA.\n",
"\n",
"Question: Where did the Emperor of Japan who ruled when Maria Skłodowska was born die?\"\"\"\n",
"model = genai.GenerativeModel(model_name='gemini-1.5-flash-latest', generation_config={\"temperature\": 0})\n",
"Markdown(model.generate_content(prompt).text)"
],
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 144
},
"id": "XEfLLHa7pjC8",
"outputId": "254b8353-c65d-446c-d560-660d68b19d44"
},
"execution_count": 4,
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": [
"<IPython.core.display.Markdown object>"
],
"text/markdown": "Let's break this down step-by-step:\n\n1. **When was Maria Skłodowska born?** We need this information to determine which Emperor of Japan was in power. \n2. **Who was the Emperor of Japan during that time period?** Once we know the year of Maria Skłodowska's birth, we can find the corresponding Emperor.\n3. **Where did that Emperor die?** This is the final piece of information we need to answer the question.\n\n**Please provide the year Maria Skłodowska was born so we can continue!** \n"
},
"metadata": {},
"execution_count": 4
}
]
},
{
"cell_type": "markdown",
"source": [
"## Additional note\n",
"Self-ask prompting works well with function calling. Each follow-up question can be passed as input to a function that, for example, searches the internet; the question and the function's answer are then added back to the prompt. On the next call, the model either issues another function call or returns the final answer. A sketch of this loop follows in the next cell."
],
"metadata": {
"id": "1RtZ1y-IpcnV"
}
}
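,
{
"cell_type": "markdown",
"source": [
"The cell below is a minimal sketch of that loop using the SDK's automatic function calling. The `lookup` function here is a hypothetical placeholder; a real implementation would call a search API."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Minimal sketch: self-ask combined with function calling.\n",
"# `lookup` is a hypothetical stand-in for a real search tool.\n",
"def lookup(question: str) -> str:\n",
"    \"\"\"Answers a single factual follow-up question.\"\"\"\n",
"    return \"PLACEHOLDER: wire up a real search API here.\"\n",
"\n",
"tool_model = genai.GenerativeModel(\n",
"    model_name='gemini-1.5-flash-latest',\n",
"    tools=[lookup])\n",
"\n",
"# With automatic function calling, the SDK executes `lookup` for each\n",
"# follow-up question and feeds its answer back to the model until the\n",
"# model returns a final answer instead of another function call.\n",
"chat = tool_model.start_chat(enable_automatic_function_calling=True)\n",
"response = chat.send_message(\n",
"    'Use the self-ask pattern, calling lookup for each follow-up question: '\n",
"    'Where did the Emperor of Japan who ruled when Maria Skłodowska was born die?')\n",
"print(response.text)"
],
"metadata": {},
"execution_count": null,
"outputs": []
}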
]
} |