ner_classification_custom_large.log
Some weights of the model checkpoint at /cache/nikolal/xlmrl_sl-bcms_exp/checkpoint-42000 were not used when initializing XLMRobertaForTokenClassification: ['lm_head.bias', 'lm_head.layer_norm.weight', 'lm_head.dense.weight', 'lm_head.layer_norm.bias', 'lm_head.dense.bias']
- This IS expected if you are initializing XLMRobertaForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing XLMRobertaForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of XLMRobertaForTokenClassification were not initialized from the model checkpoint at /cache/nikolal/xlmrl_sl-bcms_exp/checkpoint-42000 and are newly initialized: ['classifier.bias', 'classifier.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
INFO:simpletransformers.ner.ner_model: Converting to features started.
['B-per', 'O', 'B-deriv-per', 'B-misc', 'I-misc', 'I-per', 'B-org', 'I-org', 'B-loc', 'I-loc', 'B-*', 'I-*']
(73943, 3) (9122, 3) (9206, 3)
sentence_id words labels
0 0 @vukomand B-per
1 0 Gospođo O
2 0 Dijana B-per
3 0 koje O
4 0 lekove O
Training of pre-trained model started. Current model: /cache/nikolal/xlmrl_sl-bcms_exp/checkpoint-42000
INFO:simpletransformers.ner.ner_model: Continuing training from checkpoint, will skip to saved global_step
INFO:simpletransformers.ner.ner_model: Continuing training from epoch 241
INFO:simpletransformers.ner.ner_model: Continuing training from global step 42000
INFO:simpletransformers.ner.ner_model: Will skip the first 66 steps in the current epoch
INFO:simpletransformers.ner.ner_model: Training of xlmroberta model complete. Saved to models/.
INFO:simpletransformers.ner.ner_model: Converting to features started.
Training of pre-trained model completed.
Model saved in models/
Training started. Current model: xlmrl_sl-bcms-42
INFO:simpletransformers.ner.ner_model: Starting fine-tuning.
/home/tajak/NER-recognition/ner/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of `lr_scheduler.step()` before `optimizer.step()`. "
INFO:simpletransformers.ner.ner_model: Training of xlmroberta model complete. Saved to models/.
INFO:simpletransformers.ner.ner_model: Converting to features started.
Training completed.
It took 10.54 minutes for 73943 instances.
INFO:simpletransformers.ner.ner_model:{'eval_loss': 0.11088847609848547, 'precision': 0.8703170028818443, 'recall': 0.8342541436464088, 'f1_score': 0.8519040902679831}
INFO:simpletransformers.ner.ner_model: Converting to features started.
Evaluation completed.
It took 0.25 minutes for 9122 instances.
Macro f1: 0.805, Micro f1: 0.987
Accuracy: 0.987
Run 1 finished.
Training started. Current model: xlmrl_sl-bcms-42
INFO:simpletransformers.ner.ner_model: Starting fine-tuning.
INFO:simpletransformers.ner.ner_model: Training of xlmroberta model complete. Saved to models/.
INFO:simpletransformers.ner.ner_model: Converting to features started.
Training completed.
It took 10.5 minutes for 73943 instances.
INFO:simpletransformers.ner.ner_model:{'eval_loss': 0.1311925091069106, 'precision': 0.8731988472622478, 'recall': 0.8370165745856354, 'f1_score': 0.8547249647390692}
Evaluation completed.
It took 0.28 minutes for 9122 instances.
Macro f1: 0.793, Micro f1: 0.987
Accuracy: 0.987
Run 2 finished.
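A note on the metrics above: in both runs the micro F1 (0.987) exactly equals the accuracy, while the macro F1 (0.805 / 0.793) is noticeably lower. This is expected for single-label token classification: when every token receives exactly one gold label and one predicted label, pooled (micro) precision, recall, and F1 all reduce to plain accuracy, whereas macro F1 averages per-class F1 and so is pulled down by rare entity classes. A minimal stdlib-only sketch of this, using made-up toy tag sequences (not taken from the log):

```python
from collections import Counter

def micro_macro_f1(y_true, y_pred):
    """Compute micro- and macro-averaged F1 for single-label predictions."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct label: true positive for that class
        else:
            fp[p] += 1          # predicted class gains a false positive
            fn[t] += 1          # gold class gains a false negative
    # Micro: pool counts over all classes before computing F1.
    total_tp = sum(tp.values())
    micro = 2 * total_tp / (2 * total_tp + sum(fp.values()) + sum(fn.values()))
    # Macro: average per-class F1, so rare classes weigh as much as 'O'.
    per_class = [
        2 * tp[l] / (2 * tp[l] + fp[l] + fn[l]) if (tp[l] + fp[l] + fn[l]) else 0.0
        for l in labels
    ]
    macro = sum(per_class) / len(labels)
    return micro, macro

# Toy tag sequences (hypothetical, for illustration only).
gold = ["O", "O", "O", "O", "B-per", "B-loc"]
pred = ["O", "O", "O", "O", "B-per", "O"]
micro, macro = micro_macro_f1(gold, pred)
accuracy = sum(t == p for t, p in zip(gold, pred)) / len(gold)
```

With one error out of six tokens, micro F1 and accuracy are both 5/6, while the single missed `B-loc` entity drags macro F1 far lower, mirroring the macro/micro gap in the runs above.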