[AIG] How to add feedback documentation (#17438)
* How to add feedback documentation

* grammar edit

* Update add-human-feedback.mdx

* Added a Note

* Update src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx

Co-authored-by: Jun Lee <[email protected]>

---------

Co-authored-by: Kathy <[email protected]>
Co-authored-by: Jun Lee <[email protected]>
3 people authored and maheshwarip committed Dec 2, 2024
1 parent 52f9018 commit 0685c03
Showing 1 changed file with 65 additions and 0 deletions.
src/content/docs/ai-gateway/evaluations/add-human-feedback.mdx
@@ -0,0 +1,65 @@
---
pcx_content_type: how-to
title: Add Human Feedback
sidebar:
order: 3
badge:
text: Beta
---

Human feedback is a valuable metric for assessing the performance of your AI models. By incorporating human feedback, you gain deeper insight into how the model's responses are perceived and how well the model performs from a user-centric perspective. This feedback can then be used in evaluations to calculate performance metrics, driving optimization and ultimately enhancing the reliability, accuracy, and efficiency of your AI application.

Human feedback measures the performance of your dataset based on direct human input. The metric is calculated as the percentage of positive (thumbs-up) feedback on logs, which you annotate in the Logs tab of the Cloudflare dashboard. This feedback helps refine model performance by incorporating real-world evaluations of its output.
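
To make the calculation concrete, here is a minimal sketch of the math, assuming the score is the share of thumbs-up responses among all logs that received feedback. The `FeedbackLog` type and `humanFeedbackScore` function are illustrative only, not part of AI Gateway:

```ts
// Illustrative only: 1 = thumbs up, -1 = thumbs down.
type FeedbackLog = { id: string; feedback: 1 | -1 };

// Assumption: the denominator is every log that received feedback.
function humanFeedbackScore(logs: FeedbackLog[]): number {
  if (logs.length === 0) return 0;
  const positive = logs.filter((log) => log.feedback === 1).length;
  return (positive / logs.length) * 100;
}

// Example: 3 thumbs up out of 4 annotated logs yields a score of 75%.
console.log(
  humanFeedbackScore([
    { id: "a", feedback: 1 },
    { id: "b", feedback: 1 },
    { id: "c", feedback: -1 },
    { id: "d", feedback: 1 },
  ]),
); // 75
```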

This tutorial will guide you through the process of adding human feedback to your evaluations in AI Gateway.

## 1. Log in to the dashboard

1. Log in to the [Cloudflare dashboard](https://dash.cloudflare.com/) and select your account.
2. Go to **AI** > **AI Gateway**.

## 2. Access the Logs tab

1. Go to **Logs**.
2. The Logs tab displays all logs associated with your datasets. These logs show key information, including:
- Timestamp: When the interaction occurred.
- Status: Whether the request was successful, cached, or failed.
- Model: The model used in the request.
- Tokens: The number of tokens consumed by the response.
- Cost: The cost based on token usage.
- Duration: The time taken to complete the response.
- Feedback: Where you can provide human feedback on each log.
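
As a rough mental model of a log row, the hypothetical TypeScript shape below mirrors the fields listed above. The names and types are assumptions for illustration and do not reflect the AI Gateway API's actual schema:

```ts
// Hypothetical shape of a log row; field names and types are assumptions.
interface GatewayLogRow {
  timestamp: string; // When the interaction occurred (for example, ISO 8601)
  status: "success" | "cached" | "error"; // Outcome of the request
  model: string; // Model used in the request
  tokens: number; // Tokens consumed by the response
  cost: number; // Cost based on token usage
  durationMs: number; // Time taken to complete the response
  feedback?: 1 | -1; // Human feedback: thumbs up (1) or thumbs down (-1)
}

// Example row, with illustrative values only.
const example: GatewayLogRow = {
  timestamp: "2024-12-02T12:00:00Z",
  status: "success",
  model: "example-model",
  tokens: 128,
  cost: 0.0004,
  durationMs: 850,
  feedback: 1,
};
```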

## 3. Provide human feedback

1. Click on the log entry you want to review. This expands the log, allowing you to see more detailed information.
2. In the expanded log, you can view additional details such as:
- The user prompt.
- The model response.
- HTTP response details.
- Endpoint information.
3. You will see two icons:
- Thumbs up: Indicates positive feedback.
- Thumbs down: Indicates negative feedback.
4. Click either the thumbs up or thumbs down icon based on how you rate the model response for that particular log entry.
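
If you prefer to record feedback programmatically rather than through the dashboard, the sketch below shows one way it might look. This is an assumption, not a documented workflow: the endpoint path, `PATCH` method, and `{ feedback }` payload are guesses at the Logs API's shape, and `ACCOUNT_ID`, `GATEWAY_ID`, `LOG_ID`, and `API_TOKEN` are placeholders you would supply. Check the AI Gateway API reference before relying on it.

```ts
// Hedged sketch: the endpoint path and payload shape are assumptions,
// not confirmed API details. Verify against the AI Gateway API reference.
const ACCOUNT_ID = "your-account-id"; // placeholder
const GATEWAY_ID = "your-gateway-id"; // placeholder
const LOG_ID = "your-log-id"; // placeholder log identifier
const API_TOKEN = "your-api-token"; // placeholder

async function submitFeedback(feedback: 1 | -1): Promise<void> {
  const url = `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/ai-gateway/gateways/${GATEWAY_ID}/logs/${LOG_ID}`;
  const response = await fetch(url, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ feedback }), // 1 = thumbs up, -1 = thumbs down
  });
  if (!response.ok) {
    throw new Error(`Feedback update failed: ${response.status}`);
  }
}

submitFeedback(1).catch(console.error); // record a thumbs up for this log
```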

## 4. Evaluate human feedback

Once you have provided feedback on your logs, that feedback becomes part of the evaluation process.

When you run an evaluation (as outlined in the [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/) guide), the human feedback metric will be calculated based on the percentage of logs that received thumbs-up feedback.

:::note[Note]

You must select human feedback as an evaluator for its metrics to appear in your evaluation results.

:::

## 5. Review results

After running the evaluation, review the results on the Evaluations tab.
You can see the model's performance based on cost, speed, and now human feedback, represented as the percentage of positive (thumbs-up) feedback.

The human feedback score is displayed as a percentage, showing the proportion of positively rated responses in the dataset.

For more information on running evaluations, refer to [Set Up Evaluations](/ai-gateway/evaluations/set-up-evaluations/).
