Commit

solve bug
BobaZooba committed Nov 14, 2023
1 parent 220ebe5 commit 3e07291
Showing 5 changed files with 42 additions and 6 deletions.
15 changes: 14 additions & 1 deletion GUIDE.md
@@ -305,7 +305,7 @@ from xllm.datasets import BaseDataset

In each new dataset, we must implement two methods:

- get_data
- get_data (classmethod)
- get_sample

Let's start implementing a new dataset
@@ -320,6 +320,7 @@ from xllm.types import RawSample

class MyDataset(BaseDataset):

@classmethod
def get_data(cls, config: Config) -> Optional[Tuple[List[RawSample], Optional[List[RawSample]]]]:
...

@@ -332,6 +333,8 @@ class MyDataset(BaseDataset):
The inputs of this method are `cls` from `MyDataset` and a `Config`. By default, the `Config` class from `xllm` is
used: `from xllm import Config`

It's a `classmethod`, so you can use it without creating an instance of the class.

The `get_data` method should return a tuple of two elements: **training data** and **eval data**. These data should be
lists of type `RawSample`. `RawSample`, in turn, is simply an alias for the following
type: `Dict[str, Union[str, int, float, List[str]]]`. In other words, each sample in the data should be a dictionary,
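
To make this concrete, here is a minimal sketch of such a dataset. The `"text"` field name, the hard-coded in-memory samples, and the use of `self.data` inside `get_sample` are illustrative assumptions, not part of the library's API; your data schema and the `BaseDataset` internals may differ.

```python
from typing import List, Optional, Tuple

from xllm import Config
from xllm.datasets import BaseDataset
from xllm.types import RawSample


class MyDataset(BaseDataset):

    @classmethod
    def get_data(cls, config: Config) -> Optional[Tuple[List[RawSample], Optional[List[RawSample]]]]:
        # Hypothetical in-memory samples; in practice you would download or read your data here
        train_data: List[RawSample] = [
            {"text": "Hello! How can I help you?"},
            {"text": "Tell me about the xllm library."},
        ]
        eval_data: Optional[List[RawSample]] = None  # the eval part is optional
        return train_data, eval_data

    def get_sample(self, index: int) -> RawSample:
        # Assumes the collected samples end up on the instance as self.data
        return self.data[index]
```

Because `get_data` is a `classmethod`, it can be called without creating an instance, for example: `train_data, eval_data = MyDataset.get_data(config=Config())`.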
@@ -838,3 +841,13 @@ details, you can refer to this source: https://github.com/Dao-AILab/flash-attent
**Q:** Can I use `bitsandbytes int4`, `bitsandbytes int8` and `gptq model` at once?
**A:** You can use either `bitsandbytes int4` or `bitsandbytes int8`, but not both. Training through a GPTQ model is not
recommended, but it is supported; your model must already be converted to the GPTQ format.

## Tale Quest

`Tale Quest` is my personal project built using `xllm` and `Shurale`. It's an interactive text-based game
in `Telegram` with dynamic AI characters, offering infinite scenarios.

You will embark on exciting journeys and complete fascinating quests. Chat
with `George Orwell`, `Tech Entrepreneur`, `Young Wizard`, `Noir Detective`, `Femme Fatale`, and many more.

Try it now: [https://t.me/talequestbot](https://t.me/TaleQuestBot?start=Z2g)
2 changes: 1 addition & 1 deletion README.md
@@ -632,7 +632,7 @@ in `Telegram` with dynamic AI characters, offering infinite scenarios
You will get into exciting journeys and complete fascinating quests. Chat
with `George Orwell`, `Tech Entrepreneur`, `Young Wizard`, `Noir Detective`, `Femme Fatale` and many more

Try it now: [https://t.me/talequestbot](https://t.me/PapayaAIBot?start=Z2g)
Try it now: [https://t.me/talequestbot](https://t.me/TaleQuestBot?start=Z2g)

### Please support me here

27 changes: 25 additions & 2 deletions examples/README.md
@@ -1,5 +1,28 @@
# Examples of using XLLM

## Demo project

## Minimal example

`minimal_using`

## Minimal example using CLI

`minimal_using_cli`

## Notebooks

| Name | Comment | Link |
|-------------------------------------------|------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| X—LLM Prototyping | In this notebook you will learn the basics of the library | [![xllm_prototyping](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1zsNmJFns1PKZy5VE5p5nsQL-mZF7SwHf?usp=sharing) |
| Llama2 & Mistral AI efficient fine-tuning | 7B model training in colab using QLoRA, bnb int4, gradient checkpointing and X—LLM | [![Llama2MistalAI](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CNNB_HPhQ8g7piosdehqWlgA30xoLauP?usp=sharing) |

## Useful materials

- [X—LLM Repo](https://github.com/BobaZooba/xllm): main repo of the `xllm` library
- [Quickstart](https://github.com/KompleteAI/xllm/tree/docs-v1#quickstart-): basics of `xllm`
- [Examples](https://github.com/BobaZooba/xllm/examples): minimal examples of using `xllm`
- [Guide](https://github.com/BobaZooba/xllm/blob/main/GUIDE.md): here, we go into detail about everything the library can
do
- [Demo project](https://github.com/BobaZooba/xllm-demo): here's a minimal step-by-step example of how to use X—LLM and fit it
into your own project
- [WeatherGPT](https://github.com/BobaZooba/wgpt): this repository features an example of how to utilize the xllm library. Included is a solution for a common type of assessment given to LLM engineers, who typically earn between $120,000 and $140,000 annually
- [Shurale](https://github.com/BobaZooba/shurale): project with a fine-tuned 7B Mistral model
@@ -1843,7 +1843,7 @@
"You will get into exciting journeys and complete fascinating quests. Chat\n",
"with `George Orwell`, `Tech Entrepreneur`, `Young Wizard`, `Noir Detective`, `Femme Fatale` and many more\n",
"\n",
"Try it now: [https://t.me/talequestbot](https://t.me/PapayaAIBot?start=Z2g)"
"Try it now: [https://t.me/talequestbot](https://t.me/TaleQuestBot?start=Z2g)"
],
"metadata": {
"id": "Oz4LrVcZlE6P"
2 changes: 1 addition & 1 deletion examples/notebooks/🦖_X—LLM_Prototyping.ipynb
@@ -2711,7 +2711,7 @@
"You will get into exciting journeys and complete fascinating quests. Chat\n",
"with `George Orwell`, `Tech Entrepreneur`, `Young Wizard`, `Noir Detective`, `Femme Fatale` and many more\n",
"\n",
"Try it now: [https://t.me/talequestbot](https://t.me/PapayaAIBot?start=Z2g)"
"Try it now: [https://t.me/talequestbot](https://t.me/TaleQuestBot?start=Z2g)"
],
"metadata": {
"id": "5wJJrKnglAkK"
