From 0f9d15cc363db0d6822fdfbd83c0974d950bc5ca Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Wed, 9 Oct 2024 17:01:47 +0100 Subject: [PATCH 01/14] How to add feedback documentation --- .../evaluations/add-human-feedback.mdx | 60 +++++++++++++++++++ 1 file changed, 60 insertions(+) create mode 100644 src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx new file mode 100644 index 000000000000000..8d4a3043d9e5a7e --- /dev/null +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -0,0 +1,60 @@ +--- +pcx_content_type: how-to +title: Add Human Feedback +sidebar: + order: 3 + badge: + text: Beta +--- + +Human feedback is a valuable metric to assess the performance of your AI models. By incorporating human feedback, you can gain deeper insights into how the model's responses are perceived and how well it performs from a user-centric perspective. This feedback can then be used in evaluations to calculate performance metrics, driving optimization and ultimately enhancing the reliability, accuracy, and efficiency of your AI application. + +Human feedback measures the performance of your dataset based on direct human input. The metric is calculated as the percentage of positive feedback (thumbs up) given on logs, which are annotated in the Logs tab of the Cloudflare dashboard. This feedback helps refine model performance by considering real-world evaluations of its output. + +This tutorial will guide you through the process of adding human feedback to your evaluations in AI Gateway. + +## 1. Log into the dashboard + +1. Log into the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. +2. Navigate to the AI Gateway section. + +## 2. Access the Logs tab + +1. In the dashboard, go to the Logs tab under the AI Gateway section. +2. 
The Logs tab displays all logs associated with your datasets. These logs show key information, including: + - Timestamp: When the interaction occurred. + - Status: Whether the request was successful, cached, or failed. + - Model: The model used in the request. + - Tokens: The number of tokens consumed by the response. + - Cost: The cost based on token usage. + - Duration: The time taken to complete the response. + - Feedback: Where you can provide human feedback on each log. + +## 3. Provide human feedback + +1. In the Logs tab, click on the log entry you want to review. This expands the log, allowing you to see more detailed information. +2. In the expanded log, you can view additional details such as: + - The user prompt. + - The model's response. + - HTTP response details. + - Endpoint information. +3. You will see two icons: + - Thumbs up: Indicates positive feedback. + - Thumbs down: Indicates negative feedback. +4. Click either the thumbs up or thumbs down icon based on how you rate the model's response for that particular log entry. + +## 4. Evaluate human feedback + +1. After providing feedback on your logs, it becomes part of the evaluation process. + - When you run an evaluation (as outlined in the [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) guide), the human feedback metric will be calculated based on the percentage of logs that received thumbs-up feedback. + - Note: You need to select human feedback as an evaluator to receive its metrics. +2. The human feedback score is displayed as a percentage, showing how much of the dataset's responses were rated positively by human reviewers. + +## 5. Review results + +After running the evaluation,review results on the Evaluations tab. +You will be able to see the model's performance based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). 
+ +For more information on running evaluations, refer to the official Cloudflare documentation: + +- [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) From c63bb66e37d121f9d833924dcd172cbfd2b78661 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Wed, 9 Oct 2024 17:06:26 +0100 Subject: [PATCH 02/14] grammar edit --- .../docs/ai-gateway/evaluations/add-human-feedback.mdx | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 8d4a3043d9e5a7e..8fbaa67208c185d 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -55,6 +55,4 @@ This tutorial will guide you through the process of adding human feedback to you After running the evaluation,review results on the Evaluations tab. You will be able to see the model's performance based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). -For more information on running evaluations, refer to the official Cloudflare documentation: - -- [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) +For more information on running evaluations, refer to the documentation [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/). 
From 3c6316776e6b81233614fb996ee7c32070d2a026 Mon Sep 17 00:00:00 2001 From: Kathy <153706637+kathayl@users.noreply.github.com> Date: Wed, 9 Oct 2024 09:23:04 -0700 Subject: [PATCH 03/14] Update add-human-feedback.mdx --- .../docs/ai-gateway/evaluations/add-human-feedback.mdx | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 8fbaa67208c185d..efbe4646c1e7225 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -45,14 +45,15 @@ This tutorial will guide you through the process of adding human feedback to you ## 4. Evaluate human feedback -1. After providing feedback on your logs, it becomes part of the evaluation process. +After providing feedback on your logs, it becomes part of the evaluation process. - When you run an evaluation (as outlined in the [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) guide), the human feedback metric will be calculated based on the percentage of logs that received thumbs-up feedback. - Note: You need to select human feedback as an evaluator to receive its metrics. -2. The human feedback score is displayed as a percentage, showing how much of the dataset's responses were rated positively by human reviewers. ## 5. Review results After running the evaluation,review results on the Evaluations tab. You will be able to see the model's performance based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). +The human feedback score is displayed as a percentage, showing how manyy of the dataset's responses were rated positively. + For more information on running evaluations, refer to the documentation [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/). 
From 20dd23fc802adf115cb4dedf39389ea3a32694a1 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Wed, 9 Oct 2024 17:39:06 +0100 Subject: [PATCH 04/14] Added a Note --- .../docs/ai-gateway/evaluations/add-human-feedback.mdx | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index efbe4646c1e7225..ab83af7a67bd391 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -46,8 +46,14 @@ This tutorial will guide you through the process of adding human feedback to you ## 4. Evaluate human feedback After providing feedback on your logs, it becomes part of the evaluation process. - - When you run an evaluation (as outlined in the [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) guide), the human feedback metric will be calculated based on the percentage of logs that received thumbs-up feedback. - - Note: You need to select human feedback as an evaluator to receive its metrics. + +When you run an evaluation (as outlined in the [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) guide), the human feedback metric will be calculated based on the percentage of logs that received thumbs-up feedback. + +:::note[Note] + +You need to select human feedback as an evaluator to receive its metrics. + +::: ## 5. 
Review results From 62cfbf7242532764aa4f62c8ef524c1b71a6a518 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:01:33 +0100 Subject: [PATCH 05/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index ab83af7a67bd391..5a0c7af27484a68 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -16,7 +16,7 @@ This tutorial will guide you through the process of adding human feedback to you ## 1. Log into the dashboard 1. Log into the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. -2. Navigate to the AI Gateway section. +2. Go to **AI** > **AI Gateway**. ## 2. Access the Logs tab From 0b5ff38de6744a684f28d2c857136fc5c499239e Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:01:39 +0100 Subject: [PATCH 06/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 5a0c7af27484a68..8a5e428a5ff358a 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -13,7 +13,7 @@ Human feedback measures the performance of your dataset based on direct human in This tutorial will guide you through the process of adding human feedback to your evaluations in AI Gateway. -## 1. Log into the dashboard +## 1. Log in to the dashboard 1. 
Log into the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account. 2. Go to **AI** > **AI Gateway**. From b37c865c3c5336fcf16191f499848d55be4451ee Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:01:50 +0100 Subject: [PATCH 07/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 8a5e428a5ff358a..73ff36fe3a2053c 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -32,7 +32,7 @@ This tutorial will guide you through the process of adding human feedback to you ## 3. Provide human feedback -1. In the Logs tab, click on the log entry you want to review. This expands the log, allowing you to see more detailed information. +1. Click on the log entry you want to review. This expands the log, allowing you to see more detailed information. 2. In the expanded log, you can view additional details such as: - The user prompt. - The model's response. 
From 29f4a0bb0ded1610e694b4f827cf13f30883120a Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:02:15 +0100 Subject: [PATCH 08/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 73ff36fe3a2053c..d549a31ef9da3f8 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -20,7 +20,7 @@ This tutorial will guide you through the process of adding human feedback to you ## 2. Access the Logs tab -1. In the dashboard, go to the Logs tab under the AI Gateway section. +1. Go to **Logs**. 2. The Logs tab displays all logs associated with your datasets. These logs show key information, including: - Timestamp: When the interaction occurred. - Status: Whether the request was successful, cached, or failed. From 43885f2fd8d8f6d27008f9aed2495835e0f5532f Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:03:55 +0100 Subject: [PATCH 09/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index d549a31ef9da3f8..1a745d65cd0b62d 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -35,7 +35,7 @@ This tutorial will guide you through the process of adding human feedback to you 1. Click on the log entry you want to review. 
This expands the log, allowing you to see more detailed information. 2. In the expanded log, you can view additional details such as: - The user prompt. - - The model's response. + - The model response. - HTTP response details. - Endpoint information. 3. You will see two icons: From 95f77f667e18cb41ee3369cbd3b050d93d9b8424 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:04:06 +0100 Subject: [PATCH 10/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 1a745d65cd0b62d..af253a7e8576103 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -41,7 +41,7 @@ This tutorial will guide you through the process of adding human feedback to you 3. You will see two icons: - Thumbs up: Indicates positive feedback. - Thumbs down: Indicates negative feedback. -4. Click either the thumbs up or thumbs down icon based on how you rate the model's response for that particular log entry. +4. Click either the thumbs up or thumbs down icon based on how you rate the model response for that particular log entry. ## 4. 
Evaluate human feedback From b01a23e5ba9f528df1b4bf2054957507289d28c1 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:04:14 +0100 Subject: [PATCH 11/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index af253a7e8576103..c69416e99d776c8 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -57,7 +57,7 @@ You need to select human feedback as an evaluator to receive its metrics. ## 5. Review results -After running the evaluation,review results on the Evaluations tab. +After running the evaluation, review the results on the Evaluations tab. You will be able to see the model's performance based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). The human feedback score is displayed as a percentage, showing how manyy of the dataset's responses were rated positively. 
From 5e4c7dab5af923734a2ffdceadede047a9df96a9 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:04:25 +0100 Subject: [PATCH 12/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index c69416e99d776c8..89d55ccaddba9dc 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -58,7 +58,7 @@ You need to select human feedback as an evaluator to receive its metrics. ## 5. Review results After running the evaluation, review the results on the Evaluations tab. -You will be able to see the model's performance based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). +You will be able to see the performance of the model based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). The human feedback score is displayed as a percentage, showing how manyy of the dataset's responses were rated positively. 
From fdedd8440a3a37de373c8e398c34047763b3528b Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:08:03 +0100 Subject: [PATCH 13/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index 89d55ccaddba9dc..a7719cdf640b92e 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -45,7 +45,7 @@ This tutorial will guide you through the process of adding human feedback to you ## 4. Evaluate human feedback -After providing feedback on your logs, it becomes part of the evaluation process. +After providing feedback on your logs, it becomes a part of the evaluation process. When you run an evaluation (as outlined in the [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) guide), the human feedback metric will be calculated based on the percentage of logs that received thumbs-up feedback. 
From 033e1b2bb14c4295458bc47010bdd323a879ace0 Mon Sep 17 00:00:00 2001 From: daisyfaithauma Date: Thu, 10 Oct 2024 12:08:52 +0100 Subject: [PATCH 14/14] Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx Co-authored-by: Jun Lee --- src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx index a7719cdf640b92e..557ec3453fcb365 100644 --- a/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx +++ b/src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx @@ -60,6 +60,6 @@ You need to select human feedback as an evaluator to receive its metrics. After running the evaluation, review the results on the Evaluations tab. You will be able to see the performance of the model based on cost, speed, and now human feedback, represented as the percentage of positive feedback (thumbs up). -The human feedback score is displayed as a percentage, showing how manyy of the dataset's responses were rated positively. +The human feedback score is displayed as a percentage, showing the share of positively rated responses from the dataset. For more information on running evaluations, refer to the documentation [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/).
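The patch series above defines the human feedback metric as the percentage of annotated logs that received a thumbs up. As a rough illustration of that calculation only, here is a standalone sketch; the function name and the "up"/"down" rating values are invented for this example and are not part of the patches or of any AI Gateway API:

```python
def human_feedback_score(ratings):
    """Return the percentage of thumbs-up ratings among annotated logs.

    `ratings` holds one entry per annotated log, e.g. "up" or "down".
    Logs that never received feedback are simply absent from the list.
    """
    if not ratings:
        return 0.0
    thumbs_up = sum(1 for r in ratings if r == "up")
    return 100.0 * thumbs_up / len(ratings)

# Three thumbs up out of four annotated logs.
print(human_feedback_score(["up", "up", "down", "up"]))  # 75.0
```

How unannotated logs are treated (excluded, as assumed here, or counted as non-positive) is a detail of the real evaluator; check the evaluation results in the dashboard rather than relying on this sketch.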