I am using a custom provider as follows:
```yaml
prompts:
  - "dummy_prompt_ignored" # Dummy prompt, not even used for anything
providers:
  - "file://my_custom_provider.py"
```
but when I change code in `my_custom_provider.py` or dependent code, the GitHub Action doesn't re-run the evals. Instead it gives this message:

> No LLM prompt or config files were modified.
I can trigger it by manually updating the config file with a dummy change. Is there a config option to have it run no matter what?
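For context, `my_custom_provider.py` is just an ordinary promptfoo Python provider exposing the standard `call_api(prompt, options, context)` hook. A simplified sketch of its shape (the body here is a placeholder, not my real implementation) looks like this:

```python
# my_custom_provider.py -- simplified sketch, not the real provider logic.
# promptfoo Python providers expose a call_api(prompt, options, context)
# function that returns a dict with an "output" key.

def call_api(prompt, options, context):
    # The real provider calls out to our own model/service here;
    # this placeholder just echoes the prompt to keep the sketch self-contained.
    result = f"echo: {prompt}"
    return {"output": result}
```

Changes to this file (or to modules it imports) are exactly the kind of edit that currently doesn't trigger a re-run.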
PS: promptfoo is an awesome tool!