
LLM-PIEval: A benchmark for indirect prompt injection attacks in Large Language Models

Official repository for LLM-PIEval. This release contains the full API specifications along with the black-box benchmark prompts generated for this paper.

Security

See CONTRIBUTING for more information.

License

This library is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.

Citation

If you use this benchmark or the APIs, consider citing our work:

@misc{ramakrishna2024llm,
  title={LLM-PIEval: A benchmark for indirect prompt injection attacks in large language models},
  author={Ramakrishna, Anil and Majmudar, Jimit and Gupta, Rahul and Hazarika, Devamanyu},
  year={2024},
  howpublished={AdvML-Frontiers'24: The 3rd Workshop on New Frontiers in Adversarial Machine Learning @ NeurIPS'24, Vancouver, CA},
  url={https://www.amazon.science/publications/llm-pieval-a-benchmark-for-indirect-prompt-injection-attacks-in-large-language-models},
}
