From 033e1b2bb14c4295458bc47010bdd323a879ace0 Mon Sep 17 00:00:00 2001
From: daisyfaithauma
Date: Thu, 10 Oct 2024 12:08:52 +0100
Subject: [PATCH] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx

Co-authored-by: Jun Lee
---
 src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
index a7719cdf640b92..557ec3453fcb36 100644
--- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
+++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
@@ -60,6 +60,6 @@
 You need to select human feedback as an evaluator to receive its metrics. After running the evaluation, review the results on the Evaluations tab. You will be able to see the performance of the model based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up).
 
-The human feedback score is displayed as a percentage, showing how manyy of the dataset's responses were rated positively.
+The human feedback score is displayed as a percentage, showing the distribution of positively rated responses from the database.
 
 For more information on running evaluations, refer to the documentation [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/).
 
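For reference, a minimal sketch of the arithmetic the changed sentence describes: the human feedback score as the share of thumbs-up ratings across all rated responses in the dataset. The `Feedback` type and `humanFeedbackScore` function are hypothetical names for illustration only and are not part of this patch or the AI Gateway API.

```ts
// Hypothetical illustration: deriving a positive-feedback percentage from
// per-response thumbs-up / thumbs-down ratings, as described in the docs text.
type Feedback = "thumbs_up" | "thumbs_down";

function humanFeedbackScore(ratings: Feedback[]): number {
  if (ratings.length === 0) return 0;
  const positive = ratings.filter((r) => r === "thumbs_up").length;
  return (positive / ratings.length) * 100;
}

// Example: 4 of 5 responses rated thumbs up -> 80% positive feedback.
console.log(
  humanFeedbackScore(["thumbs_up", "thumbs_up", "thumbs_up", "thumbs_up", "thumbs_down"]),
); // 80
```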