diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
index a7719cdf640b92e..557ec3453fcb365 100644
--- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
+++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
@@ -60,6 +60,6 @@
 
 You need to select human feedback as an evaluator to receive its metrics. After running the evaluation, review the results on the Evaluations tab. You will be able to see the performance of the model based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up).
 
-The human feedback score is displayed as a percentage, showing how manyy of the dataset's responses were rated positively.
+The human feedback score is displayed as a percentage, showing the proportion of the dataset's responses that were rated positively.
 
 For more information on running evaluations, refer to the documentation [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/).