Face Parameters
The Face Detection and Recognition module comes with a drag-and-drop people registration tool for predefined people recognition. To run face recognition, person detection must already be running.
If using Docker, first add all the face photos you are using to the open_ptrack_config/detection/conf folder. (You can add these to the folder on your host machine, not the Docker container, as the two directories are linked.) We recommend at least 15 photos of each face you would like to track, taken from a variety of angles. Organize these photos in folders named after each person for easy access later (i.e. a folder named conf/Blair contains 15 photos of Blair, conf/Sam contains 15 photos of Sam).
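For example, the folders could be prepared on the host machine like this (the photo source paths and the names Blair and Sam are only placeholders for this sketch; substitute your own people and file locations):
mkdir -p open_ptrack_config/detection/conf/Blair
mkdir -p open_ptrack_config/detection/conf/Sam
cp ~/photos/blair/*.jpg open_ptrack_config/detection/conf/Blair/
cp ~/photos/sam/*.jpg open_ptrack_config/detection/conf/Sam/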
Inside of the Docker container, with the master PC already running, use the following command:
rosrun recognition drag_and_drop.py
This will open a pop-up window. While this window has a load option, we have found it is easier to use an existing Linux file manager, thunar, to load our images. From another terminal inside the Docker container, run the following to install and launch thunar:
apt-get update
apt-get install thunar
thunar
The final command will launch a file explorer inside the Docker container. Navigate to the detection/conf folder where your images are. Inside each name folder, select the first image and then Shift+click the final image to select all images, then drag them into the drag-and-drop window. You will have to type the person's name for every photo. To avoid spelling mistakes, we recommend using all lower-case letters and copy-pasting the name into the field for each photo.
N.B. You cannot drag a directory into the drag-and-drop window. Instead, select all the relevant images from inside the directory and drag them over.
Once you've named all photos for a single person, select the Save button in the window. This will save your work so that on subsequent face tracking runs, you can simply load this bulk file rather than renaming each image again.
Repeat this process for each person that will be tracked. Once all images have been loaded, launch the face recognition node from another terminal inside the Docker container:
rosrun recognition face_recognition_node
Then on the drag-and-drop window, select Send. For each sensor, launch the face detection and feature extraction node:
roslaunch recognition face_detection_and_feature_extraction.launch sensor_name:="<sensor>"
For a single camera, sensor_name is the type of imager + _head (e.g. kinect2_head, zed_head, etc.).
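For example, with a single Kinect v2 (sensor name kinect2_head, as above), the launch command would be:
roslaunch recognition face_detection_and_feature_extraction.launch sensor_name:="kinect2_head"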
To launch a visualizer:
rosrun recognition recognition_visualization_node.py
Markers will turn green when a face has been recognized.
The following topics are available:
/face_recognition/people_tracks
/face_recognition/people_names
people_tracks contains tracked people. The data are the same as /tracker/people_tracks, except that people IDs are replaced according to face recognition results. If a person ID is larger than 10000, it means that his/her face has not been recognized by the system yet.
people_names is the result of predefined people recognition. It contains associations between people IDs and predefined people names.
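As a quick sanity check, you can inspect these topics from another terminal inside the Docker container using the standard ROS command-line tools (the exact message types depend on your OpenPTrack build, so check them with rostopic type first):
rostopic type /face_recognition/people_tracks
rostopic echo /face_recognition/people_tracks
rostopic echo /face_recognition/people_names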
Next, see Multi Sensor Face Detection and Recognition for guidelines on running face detection and recognition in a multi-sensor configuration.