Inference of finetuned model using LoRA in Hugging Face format #442
Comments
Are you using the 7B-parameter model? That was the one I tested my conversion script on.
Yes, I used 7B. How did you create the inference pipeline? Let me test it with my model.
I added another commit to my PR, which should help streamline the conversion process. I used the following generation config:
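A minimal sketch of such a generation config in transformers follows; every value here is an illustrative assumption, not the actual setting used in the thread:

```python
from transformers import GenerationConfig

# Illustrative values only, not the commenter's actual settings.
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=256,
)
```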
I would recommend trying to sample with minimal/default parameters at first, though, before running a more intricate sampling algorithm like beam search or typical sampling.
If it never generates the end-of-sequence token when you call generate, this is likely an issue with the weights that your fine-tuning process has produced. But it may help to have the model in a Hugging Face format so you can experiment with different sampling approaches, look at some of the lower-likelihood logits at the steps where the end token should be generated to see if they make sense, etc.
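As a hedged sketch of that kind of logit inspection with transformers (assuming `model`, `tokenizer`, and tokenized `inputs` already exist):

```python
import torch

# Assumes `model`, `tokenizer`, and `inputs` (from tokenizer(prompt, return_tensors="pt"))
# already exist. Inspect the top-5 candidates at each generation step to see
# how close the end token comes to being chosen.
out = model.generate(
    **inputs,
    max_new_tokens=32,
    return_dict_in_generate=True,
    output_scores=True,
)
for step, scores in enumerate(out.scores):
    top = torch.topk(scores[0], k=5)
    candidates = [(tokenizer.decode([i]), v)
                  for i, v in zip(top.indices.tolist(), top.values.tolist())]
    print(step, candidates)
```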
Thank you @wjurayj, I really appreciate your help.
Maybe I should mention that I don't use the LLaMA tokenizer; I used my own, which has a 64K vocab size, so I changed the generated config file. I also changed the ids for the pad and eos tokens: my eos token is 0 and my pad token is 2, while the generated config shows them as 2 and 0 respectively.
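A sketch of that kind of override on an already-loaded transformers model, using the ids described above (eos = 0, pad = 2):

```python
# Assuming an already-loaded transformers model; mirror the custom tokenizer's
# ids (eos = 0, pad = 2) instead of the converted defaults (2 and 0).
model.config.eos_token_id = 0
model.config.pad_token_id = 2
model.generation_config.eos_token_id = 0
model.generation_config.pad_token_id = 2
```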
I fixed the issue! The problem was caused by the context. :) The context/instruction I provided was not exactly the one I used in training (there was a difference in the number of spaces!).
Hello,
I used this script to merge the LoRA weights into the base model. Then I used this script to convert my model to Hugging Face format.
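For reference, a minimal sketch of what such a merge looks like with peft; the paths are placeholders, and the scripts referenced above may differ in detail:

```python
from peft import PeftModel
from transformers import LlamaForCausalLM

# Paths are placeholders; the actual conversion scripts may differ.
base = LlamaForCausalLM.from_pretrained("path/to/base-model")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged-hf-model")
```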
But when I run inference on the model in Hugging Face, it never outputs the end token; it behaves like a pretrained model rather than a finetuned one.
Here is my inference pipeline:
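A minimal sketch of a typical Hugging Face inference loop of this kind; the model path and prompt template are assumptions, not the exact code in question:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model path and prompt template are assumptions for illustration.
tokenizer = AutoTokenizer.from_pretrained("path/to/merged-hf-model")
model = AutoModelForCausalLM.from_pretrained("path/to/merged-hf-model").eval()

# The prompt must match the training template exactly, whitespace included
# (whitespace differences here can change behavior, as noted above).
prompt = "### Instruction:\nSummarize the text below.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```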
I'm not sure if the inference pipeline matches the one in this repository.
The reason I want to run inference on my model there is that I'm facing an issue with the generate script, and I want to use beam search.
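For reference, beam search through transformers' generate only needs `num_beams` set above 1, e.g. (reusing the sketch above):

```python
# Reusing `model`, `tokenizer`, and `inputs` from the sketch above;
# num_beams > 1 switches generate from greedy decoding to beam search.
output = model.generate(
    **inputs,
    max_new_tokens=256,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```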
I appreciate your help.