Add more metrics for evaluating test runs #230

Open
shivankacker opened this issue Jul 26, 2023 · 1 comment

@shivankacker
Member

The following metrics for comparing responses would be best:

  1. GloVe embeddings + cosine similarity (see the sketch after this list)
  2. BERTScore: computes token similarity using contextual embeddings
  3. GloVe + Word2Vec + BiLSTM: word embeddings are first built with GloVe and Word2Vec, two BiLSTM networks then produce sentence embeddings separately, and the results are passed through a classifier
  4. Word Mover's Distance
  5. Pretrained sentence encoders, such as the Google Universal Sentence Encoder
  6. Siamese Manhattan LSTM
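
A minimal sketch of metric 1, assuming gensim's downloadable `glove-wiki-gigaword-100` vectors and plain whitespace tokenisation (both are illustrative choices, not decisions for this issue):

```python
# Rough sketch of metric 1: average GloVe word vectors, then cosine similarity.
# Assumes gensim with its "glove-wiki-gigaword-100" model (downloaded on first use);
# tokenisation is plain whitespace splitting for brevity.
import numpy as np
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # KeyedVectors

def sentence_vector(text: str) -> np.ndarray:
    """Average the GloVe vectors of the in-vocabulary tokens."""
    tokens = [t for t in text.lower().split() if t in glove]
    if not tokens:
        return np.zeros(glove.vector_size)
    return np.mean([glove[t] for t in tokens], axis=0)

def glove_cosine(a: str, b: str) -> float:
    """Cosine similarity of the two averaged sentence vectors (1.0 = same direction)."""
    u, v = sentence_vector(a), sentence_vector(b)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom else 0.0

print(glove_cosine(
    "The patient should rest and drink fluids.",
    "Rest and plenty of fluids are recommended for the patient.",
))
```

For metric 2, the `bert-score` package exposes a similar one-call interface: `bert_score.score(candidates, references, lang="en")` returns precision, recall, and F1 tensors.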

cc. @bodhish

@bodhish
Member

bodhish commented Jul 26, 2023

We will implement just one or two from the list; let's do some research on this before going ahead.
