-
Hi @Naren83, the ids are mapped differently. First, you will need to find the correct mapping between "your dataset" -> "COCO's id". Let's say you mapped your class "chair" (your id 1) to COCO's id 56: a predicted line such as 56 0.87618 0.45 0.6125 0.365625 0.58125
will have to be changed to 1 0.87618 0.45 0.6125 0.365625 0.58125 before running the evaluation.
Another alternative would be to run YOLOv5 in inference mode changing the ids in your yaml, so that they match your 0, 1 and 2 classes. Note that this may not be a fair comparison, as you may be comparing your model against another (more robust) one that was trained on all COCO classes and, thus, learned to distinguish many other object classes. Hope that helps.
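In case a concrete example helps, here is a minimal sketch of the first suggestion: rewriting the class ids in the prediction files before running the evaluation. The folder names and the mapping dict are only illustrative assumptions (56 is "chair" in the 80-class COCO list used by YOLOv5, which would correspond to id 1 in the dataset described below); extend the dict to cover every class you need.

```python
import os

# Hypothetical mapping from COCO class ids (as predicted by yolov5l) to the
# custom dataset ids (0 = door, 1 = chair, 2 = table). 56 is "chair" in the
# 80-class COCO list; add the remaining classes you need. Predictions whose
# class has no entry here (objects absent from the custom dataset) are dropped.
COCO_TO_CUSTOM = {56: 1}

PRED_DIR = "detections_yolov5l"          # assumed folder with one .txt per image
OUT_DIR = "detections_yolov5l_remapped"  # remapped prediction files go here
os.makedirs(OUT_DIR, exist_ok=True)

for fname in os.listdir(PRED_DIR):
    if not fname.endswith(".txt"):
        continue
    remapped = []
    with open(os.path.join(PRED_DIR, fname)) as f:
        for line in f:
            parts = line.split()  # "class conf x y w h"
            if not parts:
                continue
            coco_id = int(parts[0])
            if coco_id in COCO_TO_CUSTOM:
                parts[0] = str(COCO_TO_CUSTOM[coco_id])
                remapped.append(" ".join(parts))
    with open(os.path.join(OUT_DIR, fname), "w") as f:
        f.write("\n".join(remapped) + ("\n" if remapped else ""))
```

After this step, the ground truth files and the remapped prediction files share the same id space, so both can be fed to the evaluation repo as usual.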
-
Hello @rafaelpadilla,
My ground truth annotations are in YOLO format, e.g. 0 0.4359375 0.52109375 0.3375 0.95390625, with class IDs 0, 1 and 2, where 0 is door, 1 is chair and 2 is table.
I have trained a YOLOv5 model, got the predicted annotations, and used your repo to compute the evaluation metrics; I got the metrics for all three classes and the mAP.
However, I want to compare my model with state-of-the-art models, so I need to calculate the evaluation metrics for them as well. For example, to compare against yolov5l, I took the yolov5l weights, ran detections on my test images and got all of their annotations.
The problem is that yolov5l is trained on the COCO dataset, so the class IDs in its predicted annotations are different from the class IDs of my ground truth annotations for the test images, and therefore I couldn't compute the evaluation metrics with your repo.
For example, the ground truth annotations for a test image are:
0 0.3515625 0.49609375 0.246875 0.49765625
0 0.45 0.65078125 0.36875 0.69765625
and the predicted annotations for the same image using the yolov5l weights are:
56 0.87618 0.45 0.6125 0.365625 0.58125
56 0.907485 0.346875 0.465625 0.253125 0.428125
Can you please help me solve this?