
Python: New Feature: Content Safety Layer for Python #9536

Open
nmoeller opened this issue Nov 5, 2024 · 6 comments
Labels
filters Anything related to filters python Pull requests for the Python Semantic Kernel

Comments

@nmoeller
Contributor

nmoeller commented Nov 5, 2024


Name : Content Safety Layer for Python

Abstract:

The idea is to have some kind of interceptor, invoked when calling model APIs, that checks for content safety, so that content safety stays consistent when switching the underlying model. Also, with embedded Content Safety containers, fully offline flows would be possible.

Why is this a good idea?

When people switch from GPT-4 to Anthropic, for example, they would lose the content safety feature of the Azure AI services. To ensure responsible AI when switching between models, we should have a layer in the kernel that keeps content safety consistent across models.

Also, when using Hugging Face models or ONNX models locally, there is no support for content safety except what is built into the model itself. To enable all kinds of models to work with content safety, a separate layer in the kernel would be beneficial.

As far as I can see, this feature is already available in the C# version of Semantic Kernel.

How can this be implemented?

The idea here would be to add a new abstract class named ContentSafetyConnectorBase, with abstract methods that force subclasses to implement them. Before and after sending data to the models, we could intercept the questions/answers in the ChatCompletionClientBase. We could also add a ContentSafetyException to the Semantic Kernel exceptions, so the user could catch the ContentSafetyException and deal with it accordingly.
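The design above could be sketched roughly as follows. All names here (ContentSafetyConnectorBase, ContentSafetyException, the method names, and the toy KeywordBlockList subclass) are hypothetical and would need to be aligned with the actual Semantic Kernel class and exception hierarchy:

```python
from abc import ABC, abstractmethod


class ContentSafetyException(Exception):
    """Hypothetical exception raised when a prompt or completion is flagged."""


class ContentSafetyConnectorBase(ABC):
    """Hypothetical base class for content safety connectors.

    Subclasses would wrap a concrete service (e.g. Azure AI Content Safety)
    and raise ContentSafetyException when content is flagged.
    """

    @abstractmethod
    def analyze_text(self, text: str) -> bool:
        """Return True if the text is safe, False otherwise."""

    def ensure_safe(self, text: str) -> str:
        """Pass the text through unchanged, or raise if it is flagged."""
        if not self.analyze_text(text):
            raise ContentSafetyException(f"Content flagged as unsafe: {text[:50]!r}")
        return text


class KeywordBlockList(ContentSafetyConnectorBase):
    """Toy connector that blocks a fixed set of keywords (for illustration only)."""

    def __init__(self, blocked: set) -> None:
        self.blocked = blocked

    def analyze_text(self, text: str) -> bool:
        return not any(word in text.lower() for word in self.blocked)


safety = KeywordBlockList({"forbidden"})
print(safety.ensure_safe("hello world"))  # safe text passes through unchanged
try:
    safety.ensure_safe("something forbidden")
except ContentSafetyException:
    print("blocked")
```

A real connector would replace `analyze_text` with a call to the safety service's client; the base class only fixes the contract that the kernel's interception points rely on.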

We could also add the Azure AI Content Safety service as a connector for samples. There is also an offline container for Content Safety, so we could have a complete offline example with SLMs and content safety.


@markwallace-microsoft markwallace-microsoft added .NET Issue or Pull requests regarding .NET code python Pull requests for the Python Semantic Kernel triage labels Nov 5, 2024
@github-actions github-actions bot changed the title New Feature: Content Safety Layer for Python Python: New Feature: Content Safety Layer for Python Nov 5, 2024
@github-actions github-actions bot changed the title Python: New Feature: Content Safety Layer for Python .Net: New Feature: Content Safety Layer for Python Nov 5, 2024
@nmoeller nmoeller changed the title .Net: New Feature: Content Safety Layer for Python Pythont: New Feature: Content Safety Layer for Python Nov 5, 2024
@nmoeller nmoeller changed the title Pythont: New Feature: Content Safety Layer for Python Python: New Feature: Content Safety Layer for Python Nov 5, 2024
@moonbox3
Contributor

moonbox3 commented Nov 5, 2024

Thanks for filing the feature request, @nmoeller. Could this be accomplished with our prompt filter? Not everyone may want the added latency of every request going through a content safety pipeline (I don't have data to back this up, but just a thought). We will discuss as a team, again thanks!

@nmoeller
Contributor Author

nmoeller commented Nov 5, 2024

@moonbox3 just to clarify, this is something that not everybody has to use. The simple idea would be to have something like kernel.add_content_safety(ContentSafetyConnectorBase()), and then the kernel would check prompts & responses automatically. If the user does not add content safety to the kernel, everything will work as usual.
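The opt-in behaviour described above could be sketched like this. ToyKernel, BlockWordConnector, and the method names are simplified stand-ins, not the real Semantic Kernel API; the point is only that with no connector registered, nothing changes:

```python
class ContentSafetyException(Exception):
    """Hypothetical exception for flagged content."""


class ToyKernel:
    """Toy stand-in for the kernel, showing the opt-in check points."""

    def __init__(self):
        self._safety = None  # no connector registered -> behaves as before

    def add_content_safety(self, connector):
        self._safety = connector

    def invoke(self, prompt: str) -> str:
        if self._safety is not None:
            self._safety.check(prompt)      # check the prompt before the call
        response = prompt.upper()           # stand-in for the actual model call
        if self._safety is not None:
            self._safety.check(response)    # check the response after the call
        return response


class BlockWordConnector:
    """Toy connector that rejects text containing one word."""

    def __init__(self, word: str):
        self.word = word

    def check(self, text: str) -> None:
        if self.word in text.lower():
            raise ContentSafetyException(text)


kernel = ToyKernel()
print(kernel.invoke("hello"))  # no connector added: works like usual

kernel.add_content_safety(BlockWordConnector("unsafe"))
try:
    kernel.invoke("this is unsafe")
except ContentSafetyException:
    print("blocked")
```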

I think something like this is already available in .NET.

https://github.com/microsoft/semantic-kernel/tree/711afeae05eeea81e404b600d30bca2bfd206d0c/dotnet/samples/Demos/ContentSafety

But I will also check whether this can be achieved via prompt filters!

@moonbox3
Contributor

moonbox3 commented Nov 5, 2024

Understood. I see that in .NET they are using the IPromptRenderFilter interface to create the dependency for something like TextModerationFilter. I also see in the getting started docs that one can download the appropriate client. This could be included in a function that implements our prompt filter and then makes the call to the content safety service.

https://learn.microsoft.com/en-us/azure/ai-services/content-safety/quickstart-text?tabs=visual-studio%2Cwindows&pivots=programming-language-python
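A minimal sketch of that filter approach might look like the following. The context object and hook signature here mimic the shape of Semantic Kernel's async prompt filters but are simplified stand-ins, and `moderate` is a toy check; a real filter would call the Azure AI Content Safety client (azure-ai-contentsafety) from the quickstart instead:

```python
import asyncio


class ContentSafetyException(Exception):
    """Hypothetical exception for flagged content."""


class PromptRenderContext:
    """Simplified stand-in for the prompt-render filter context."""

    def __init__(self, rendered_prompt: str):
        self.rendered_prompt = rendered_prompt


async def moderate(text: str) -> None:
    # Toy moderation check; a real filter would call the content safety
    # service's async client here.
    if "blocked" in text.lower():
        raise ContentSafetyException("prompt flagged by moderation")


async def content_safety_filter(context: PromptRenderContext, next_filter):
    await moderate(context.rendered_prompt)  # reject unsafe prompts early
    await next_filter(context)               # otherwise continue the pipeline


async def demo():
    async def terminal(context):  # end of the filter chain
        print("prompt sent:", context.rendered_prompt)

    await content_safety_filter(PromptRenderContext("tell me a joke"), terminal)
    try:
        await content_safety_filter(PromptRenderContext("blocked topic"), terminal)
    except ContentSafetyException:
        print("rejected")


asyncio.run(demo())
```

This keeps the safety check entirely inside the filter pipeline, so users who do not register the filter pay no extra latency, which addresses the concern above.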

@moonbox3 moonbox3 removed the triage label Nov 5, 2024
@moonbox3 moonbox3 added filters Anything related to filters and removed .NET Issue or Pull requests regarding .NET code labels Nov 5, 2024
@nmoeller
Contributor Author

nmoeller commented Dec 19, 2024

@moonbox3 is it fine if I start working on this?

My idea would be to create the connector in a new folder python/semantic-kernel/connector/content_safety.

I would implement a class for AzureAIContentSafety. Maybe also an abstract class, since AWS also has something similar available.

And to demonstrate how it works, I would add a sample in Concepts using the ContentFilterConnector as a filter.

@moonbox3
Contributor

@nmoeller do you know if something like this is already present in SK .Net? If not, we'd want to make sure we align the design for both languages before jumping into the implementation.

@moonbox3
Contributor

moonbox3 commented Dec 19, 2024

Sorry, I didn't even read my response to you above. :( It's been a while since I looked at this issue. Yeah, I think it's good to go then.
