This repository has been archived by the owner on Jul 22, 2024. It is now read-only.
I have some images that contain a mixture of seen and unseen object classes. I have my own custom object detection model based on YOLOv5, and it outputs bounding boxes, class labels, and confidence scores. Is it possible to feed these YOLOv5 results into Oscar+ and use only the text-generation part of Oscar+ to produce a caption for the image? I can provide the original image together with the bounding boxes, but I do not want Oscar+ to do the object detection itself, since my own model handles some unseen objects.
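For context, Oscar+ (VinVL) does not consume raw boxes directly: its captioning head takes pre-extracted region features plus a string of object tags. Below is a minimal sketch of the conversion step, assuming YOLOv5-style detections as a list of dicts; the function name `yolo_to_oscar_inputs` is hypothetical. It covers only the object-tag string and the 6 box-geometry dimensions that VinVL appends to each region feature — the 2048-d appearance vector per region, which Oscar+ normally gets from its own Faster R-CNN feature extractor, would still have to be supplied separately.

```python
import numpy as np

def yolo_to_oscar_inputs(detections, img_w, img_h):
    """Turn YOLOv5-style detections into (a) an object-tag string and
    (b) normalized box-geometry features (x1, y1, x2, y2, w, h ratios).

    NOTE: this is an illustrative sketch, not the official Oscar+ input
    pipeline; the appearance features per region are not produced here.
    """
    # Object tags: space-separated class labels, as Oscar+ expects.
    tags = " ".join(d["label"] for d in detections)

    # Geometry: coordinates normalized by image size, plus relative
    # width and height (the 6 extra dims VinVL concatenates).
    geom = []
    for d in detections:
        x1, y1, x2, y2 = d["box"]
        geom.append([
            x1 / img_w, y1 / img_h,
            x2 / img_w, y2 / img_h,
            (x2 - x1) / img_w, (y2 - y1) / img_h,
        ])
    return tags, np.asarray(geom, dtype=np.float32)

# Example with two hypothetical detections:
dets = [
    {"box": (10, 20, 110, 220), "label": "dog", "conf": 0.91},
    {"box": (50, 60, 150, 160), "label": "frisbee", "conf": 0.78},
]
tags, geom = yolo_to_oscar_inputs(dets, img_w=640, img_h=480)
```

A caveat worth checking before going this route: the captioning model was trained on features from its own detector, so swapping in geometry from a different detector without matching appearance features will likely degrade caption quality.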