Python: How the Semantic Kernel handles the system prompt under the hood #9881
Hi @lucasmsoares96, thanks for your question. You can send a system message by adding it to the ChatHistory.

Note: we recently refactored our concept samples, so the sample you were referring to may have moved.

Let's look at how the system message is handled. If I start out with this sample and ask the question "Why is the sky blue in one sentence?", you can see that when we form the messages dictionary for the OpenAI request, we have: [screenshot of the messages dictionary]

Once the response is returned and we ask another question, we see the following messages with their roles in the dictionary: [screenshot of the updated messages dictionary]

We can see that as the chat history grows, we keep that context as the user continues to add their input. This is why the kernel function's prompt is configured with both the chat_history and user_input variables. It does look like the prompt would only contain "direct text" as you mentioned, but because the ChatHistory knows what the underlying roles are, the prompt is first rendered depending on the type of prompt template (Semantic Kernel, Jinja2, or Handlebars) and then converted back into a role-to-message dictionary once we send it to the model.

Does this help answer your question?
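For reference, here is a minimal sketch of the pattern described above, assuming a recent semantic-kernel Python package; the service id, model id, plugin/function names, and messages are illustrative rather than taken from the sample:

```python
import asyncio

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.contents import ChatHistory
from semantic_kernel.functions import KernelArguments

kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4o-mini"))

# The prompt only interpolates two variables; the role information lives in
# the ChatHistory object, not in the template text itself.
chat_function = kernel.add_function(
    plugin_name="ChatBot",
    function_name="Chat",
    prompt="{{$chat_history}}{{$user_input}}",
)

chat_history = ChatHistory()
chat_history.add_system_message("You are a helpful assistant.")  # system role
chat_history.add_user_message("Hi there, who are you?")
chat_history.add_assistant_message("I am a simple chat bot.")


async def main() -> None:
    user_input = "Why is the sky blue in one sentence?"
    # The template is rendered with the history and the new input, and the
    # rendered prompt is parsed back into role-separated messages before the
    # OpenAI request is made.
    answer = await kernel.invoke(
        chat_function,
        KernelArguments(user_input=user_input, chat_history=chat_history),
    )
    chat_history.add_user_message(user_input)
    chat_history.add_assistant_message(str(answer))
    print(answer)


asyncio.run(main())
```

The system role is attached when the message is added to the ChatHistory (here via add_system_message), so it survives the round trip from template rendering back to the role-based messages list.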
Hi @lucasmsoares96, please feel free to re-open this issue if you need further help around your original question. Thank you.
Thank you very much for the response, @moonbox3. It clarified a lot, but there are still some gaps. For example, in the screenshots you provided, the function's prompt didn't appear in the message history. This makes sense since it's a simple prompt, but how does the model manage to execute a more complex prompt like the following one?

```python
# Following example demonstrates the use of the plugin within a semantic function
prompt = """
Answer the question using only the data that is provided in the data section.
Do not use any prior knowledge to answer the question.
Data: {{WebSearch.SearchAsync "What is semantic kernel?"}}
Question: What is semantic kernel?
Answer:
"""

qna = kernel.add_function(
    plugin_name="qa",
    function_name="qna",
    prompt=prompt,
    prompt_execution_settings=PromptExecutionSettings(temperature=0.2),
)
```
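If it helps to make the question concrete, a hypothetical way to execute the function defined above (assuming an async context and a chat completion service already registered on the kernel) would be:

```python
# Hypothetical invocation of the qna function defined above; the plugin call
# inside the prompt is resolved while the template is rendered.
result = await kernel.invoke(qna)
print(result)
```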
When analyzing the system prompt examples in the Semantic Kernel, some things were not clear, in particular in this sample:

python/samples/concepts/chat_completion/chat_gpt_api.py
Analyzing this code, on line 32 a prompt is defined that interpolates two variables: chat_history and user_input. This prompt is used to create a function that is added to the kernel. Then a ChatHistory is defined and some messages are added to it. Finally, on line 57 a message is sent to the OpenAI API, passing user_input and chat_history as KernelArguments, which are substituted into the prompt defined on line 32.
This way it seems that the Semantic Kernel adds the ChatHistory messages as direct text inside a single message with the user role, instead of sending a list of messages with distinct user, system and assistant roles for each message, as demonstrated in the documentation (Create chat completion, Quickstart).

Am I right? If so, how do I actually send a message with the system role to the OpenAI API? Is there any example of this? If not, how does the Semantic Kernel handle this role conversion? I didn't find anything in the documentation or examples about this.
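To make the contrast above concrete, here is a hypothetical comparison of the two request shapes being discussed; the payloads are purely illustrative and were not captured from the library:

```python
# (a) Everything flattened into a single user-role message, which is what the
#     question suspects happens after the template is rendered as plain text.
flattened = [
    {
        "role": "user",
        "content": "You are a helpful assistant.\nuser: Hi\nassistant: Hello!\nWhy is the sky blue?",
    },
]

# (b) Role-separated messages, as shown in the Chat Completions documentation.
role_separated = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hi"},
    {"role": "assistant", "content": "Hello!"},
    {"role": "user", "content": "Why is the sky blue?"},
]
```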