Tracking position and pose of a moving drone using L515 #13635
Replies: 12 comments 11 replies
-
Hi @ar-ramchandra, thanks very much for your questions! I would advise using RealSense 400 Series cameras instead of L515, as the L515 model is now retired and is no longer supported in recent versions of the RealSense SDK software. The link below is an example of RealSense cameras being used with drones.

https://www.intelrealsense.com/everdrone/

It would not be necessary to sync the shutters of the multiple cameras in this particular application.

A common way to track an object is to attach AprilTag markers to it; the guide below provides an introduction to AprilTag detection.

https://docs.wpilib.org/en/stable/docs/software/vision-processing/apriltag/apriltag-intro.html

Alternatively, there are some non-Intel open source AprilTag detection projects that may be able to work with D455 / D456, such as apriltag_ros.
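If you go the AprilTag route, a minimal detection sketch is below. It assumes the third-party pupil-apriltags Python package (`pip install pupil-apriltags`) and a printed tag of known size; the 0.16 m tag size and the stream settings are illustrative placeholders, not values from this thread.

```python
import cv2
import numpy as np
import pyrealsense2 as rs
from pupil_apriltags import Detector  # third-party package: pip install pupil-apriltags

# Stream color images from the camera
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Color intrinsics let the detector estimate each tag's 3D pose
intr = profile.get_stream(rs.stream.color).as_video_stream_profile().get_intrinsics()
camera_params = (intr.fx, intr.fy, intr.ppx, intr.ppy)

detector = Detector(families="tag36h11")
TAG_SIZE_M = 0.16  # illustrative: side length of the printed tag, in meters

try:
    frames = pipeline.wait_for_frames()
    color = np.asanyarray(frames.get_color_frame().get_data())
    gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
    for tag in detector.detect(gray, estimate_tag_pose=True,
                               camera_params=camera_params, tag_size=TAG_SIZE_M):
        # pose_t is the tag's translation from the camera, in meters
        print(tag.tag_id, tag.pose_t.ravel())
finally:
    pipeline.stop()
```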
-
Although L515 is retired, it is still available from the official RealSense store on a 'while stocks last' basis. The most recent version of the RealSense SDK that is recommended for use with L515 is 2.50.0. However, L515 is not suited to outdoor use.

https://store.intelrealsense.com/buy-intel-realsense-lidar-camera-l515.html

I am not aware of a method for obtaining the position of an object observed by the camera using intrinsics and extrinsics. A way to get the 3D pose of an object without using a T265 is to use OpenCV's solvePnP algorithm, described at #10233 (comment)

If you have an Nvidia Jetson computing board then object pose can also be obtained with a RealSense camera using ROS and Docker.

https://github.com/pauloabelha/Deep_Object_Pose

A RealSense model that may fit on a small drone is D405, as it is lightweight, small and square. Its depth sensing works best at close range (up to 0.5 meters), though it can depth-sense up to a couple of meters from the camera.

https://store.intelrealsense.com/buy-intel-realsense-depth-camera-d405.html
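As a rough illustration of the solvePnP approach: given the known 3D layout of some points on the object and their detected pixel locations, OpenCV recovers the object's rotation and translation relative to the camera. All of the numbers below (marker geometry, pixel detections, intrinsics) are made-up placeholders.

```python
import cv2
import numpy as np

# 3D coordinates of four known points on the object, in the object's own
# frame (e.g. the corners of a 10 cm square marker), in meters
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float32)

# Where those same points were detected in the color image, in pixels
image_points = np.array([[322, 240],
                         [402, 238],
                         [405, 320],
                         [318, 322]], dtype=np.float32)

# Camera intrinsics (fx, fy, cx, cy); read the real values from your camera
camera_matrix = np.array([[615.0,   0.0, 320.0],
                          [  0.0, 615.0, 240.0],
                          [  0.0,   0.0,   1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    print("rotation (Rodrigues vector):", rvec.ravel())
    print("translation (m):", tvec.ravel())
```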
-
You can transform multiple coordinate sets to have a single common 3D position and rotation using the rs2_transform_point_to_point() instruction, as described at the C++ link below.

https://dev.intelrealsense.com/docs/api-how-to#get-and-apply-depth-to-color-extrinsics

An example of using this technique is 'stitching' together multiple 3D pointclouds from different views to create a combined 360 degree image. The section of official documentation linked below also has information about rs2_transform_point_to_point().

https://dev.intelrealsense.com/docs/projection-in-intel-realsense-sdk-20#extrinsic-camera-parameters
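A minimal Python sketch of the same idea, assuming a camera with both depth and color streams; the 3D point here is a placeholder rather than a real measurement:

```python
import pyrealsense2 as rs

# Start the camera with its default streams
pipeline = rs.pipeline()
profile = pipeline.start()

# Extrinsics (rotation + translation) from the depth sensor to the color sensor
depth_stream = profile.get_stream(rs.stream.depth)
color_stream = profile.get_stream(rs.stream.color)
depth_to_color = depth_stream.get_extrinsics_to(color_stream)

# Map a 3D point expressed in the depth sensor's frame into the color frame
point_in_depth = [0.1, 0.0, 2.0]  # x, y, z in meters (placeholder values)
point_in_color = rs.rs2_transform_point_to_point(depth_to_color, point_in_depth)
print(point_in_color)

pipeline.stop()
```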
-
You are very welcome. I'm pleased that I could be of help! The extrinsics of a particular camera can be retrieved from it with the get_extrinsics_to() instruction, which is demonstrated in the 'get-and-apply-depth-to-color-extrinsics' link above. In RealSense, extrinsics describe the real-world distance and rotation angle between two particular sensors on the camera (such as between depth and RGB). The intrinsic and extrinsic values are unique to each individual camera due to the manufacturing process at the factory. You can also retrieve a full listing of a camera's intrinsic and extrinsic values using the rs-enumerate-devices tool's calibration information mode by launching it with the command below.
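The calibration information mode is typically invoked with the -c flag (this is the standard librealsense tool flag; verify against your SDK version):

```
rs-enumerate-devices -c
```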
RealSense 400 Series cameras (except for the D405 model) can also operate in low-light and dark / night conditions thanks to the infrared emitter component they are equipped with, which casts an infrared light source that can illuminate the camera's view of the observed scene whilst being invisible to the human eye. An example of this is using it in a medical ward to observe humans at night with a clear, bright image whilst not casting any visible light that would disturb the patients in the beds.

If your preference is to use L515 then there is no obvious reason why they could not be used to observe drones indoors in the same way that a 400 Series camera can. However, if the views of the multiple L515s are overlapping then you may need to use the L515's specific form of multiple-camera hardware sync in order to negate possible interference caused by the overlap.

https://dev.intelrealsense.com/docs/lidar-camera-l515-multi-camera-setup

The 400 Series models do not experience interference from one another when their views overlap. You should also bear in mind whether you are likely to need additional L515 cameras in future, in case they become unavailable later.
-
L515 has a range of 9 meters, but its best accuracy is up to around 3 meters, with accuracy declining beyond that distance. The 400 Series models have a 10 meter range. Within that range, the D45x class of cameras (D455, D456, D457) have the best accuracy at long distance, having the same accuracy at 6 meters from the camera that other models have at only 3 meters. D45x type cameras can depth-sense as far as 20 meters, though accuracy will decline beyond the 6 meter point.

400 Series cameras do not have problems with depth-sensing in the presence of LED light. The L515 guide at the link below demonstrates that this camera can be used in the presence of LED light, though you may need to set the 'Low Ambient Light' camera configuration preset, otherwise performance could be reduced.

https://www.intelrealsense.com/optimizing-the-lidar-camera-l515-range/
-
I do not have an exact comparison of L515 vs 400 Series accuracy. However, one of the notable features of L515 is that it can accurately analyze 23 million depth points per second.

https://www.intelrealsense.com/lidar-camera-l515/

D45x type cameras have an accuracy of '<2% at 4 m' and D43x have an accuracy of '<2% at 2 m', reflecting how D45x cameras have better accuracy over long distance. However, these figures do not convert well to working out accuracy at a specific distance (1 m, 2 m, 3 m, etc). The best way to think about it is 'D45x has half the error at a particular distance that D43x has'.

There is a math formula at the link below for calculating error over distance (called RMS error) if you are interested, though it only applies to the 400 Series models. You can also obtain an automated calculation of RMS error by using the SDK's Depth Quality Tool so that you do not have to do the calculations yourself.

Regarding L515 and reflective objects, it can have problems with those. Using the 'Short Range' preset to reduce the camera's Laser Power and Receiver Gain values can help to reduce the negative effects of reflections. Having said that, the 400 Series also has difficulty analyzing highly reflective surfaces for depth information. I suspect though that small reflective balls like that would not have a large impact on the depth image. At worst, there might be small black holes on the depth image that correspond to the position and size of the balls.
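For reference, the formula published in Intel's tuning documentation for the 400 Series expresses theoretical depth RMS error as distance² × subpixel / (focal length × baseline). A minimal sketch follows; the focal length, baseline and subpixel values below are illustrative assumptions, so substitute your own camera's numbers:

```python
def depth_rms_error_mm(distance_mm, focal_length_px, baseline_mm, subpixel=0.08):
    """Theoretical stereo depth RMS error (Intel D400 tuning guide formula)."""
    return (distance_mm ** 2) * subpixel / (focal_length_px * baseline_mm)

# Illustrative numbers only: ~95 mm baseline and ~640 px focal length
for d_mm in (1000, 2000, 4000, 6000):
    err = depth_rms_error_mm(d_mm, focal_length_px=640, baseline_mm=95)
    print(f"{d_mm / 1000:.0f} m -> ~{err:.1f} mm RMS error")
```

Note how the error grows with the square of the distance, which is why the D45x models' wider stereo baseline pays off at long range.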
-
Reducing the parameters should not have a negative effect. Often, using the Short Range preset provides a noticeable improvement when there are problems in an image under the camera's default preset of Max Range. AprilTag markers can work well with L515, like in the Intel video below where AprilTags are used to calibrate nine L515 cameras together.
-
L515 does not have a calibration tool. Intel's original press release for the L515's launch stated that the camera retains its depth accuracy throughout its lifespan without the need for calibration.
-
There does not need to be an object in front of the camera to get the extrinsics.
-
You do not need anything in front of the camera to obtain the extrinsics when using a single camera. When obtaining the extrinsics of multiple cameras though, the cameras should be placed around the edges of a flat chessboard image to enable the positions of the cameras relative to each other to be calculated. An example of doing this is Intel's Python box_dimensioner_multicam box volume measurement example program. When the program is run, it automatically calculates the camera extrinsics relative to each other, and only after that asks for an object to be placed on the chessboard in view of the cameras.
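A rough sketch of the per-camera step that this kind of calibration performs: detect the chessboard in each camera's image and solve for that camera's pose relative to the board, which then serves as the shared world frame. The board geometry and helper function below are illustrative, not code taken from box_dimensioner_multicam itself.

```python
import cv2
import numpy as np

COLS, ROWS = 9, 6   # inner-corner grid of the chessboard (illustrative)
SQUARE_M = 0.025    # side length of one square, in meters (illustrative)

# 3D positions of the inner corners in the board's own frame (z = 0 plane)
board_points = np.zeros((ROWS * COLS, 3), np.float32)
board_points[:, :2] = np.mgrid[0:COLS, 0:ROWS].T.reshape(-1, 2) * SQUARE_M

def camera_pose_from_board(gray, camera_matrix, dist_coeffs):
    """Return (rvec, tvec) of the board in this camera's frame, or None."""
    found, corners = cv2.findChessboardCorners(gray, (COLS, ROWS))
    if not found:
        return None
    ok, rvec, tvec = cv2.solvePnP(board_points, corners, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

Running this for each camera against the same board gives every camera's pose in one common frame, after which points from any camera can be transformed into that frame.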
-
box_dimensioner_multicam is particularly sensitive to distance from the board, and has an ideal camera placement of around 0.5 meters from the 6x9 inch board. If you wanted to place the cameras high up, you might have to increase the size of the chessboard so that the squares are much larger and the cameras can still see them from high up. box_dimensioner_multicam can be customized for larger boards by increasing the values of the board height, width and square size parameters in its script, as sketched below.
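A minimal sketch of the kind of change involved; the parameter names below follow my recollection of the example script and may differ between librealsense versions, so verify against your copy:

```python
# Calibration parameters near the top of box_dimensioner_multicam.py
# (names may vary by SDK version):
chessboard_width = 6     # inner corners along one edge of the board
chessboard_height = 9    # inner corners along the other edge
square_size = 0.0253     # side length of one square, in meters

# For cameras mounted high up, print a larger board and scale square_size
# to match, e.g. 10 cm squares:
# square_size = 0.10
```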
-
Your plan sounds reasonable and achievable. Whilst L515 can depth-sense up to 9 m, accuracy starts to reduce after 3 to 4 meters. You could likely obtain a good depth measurement at 6 meters, though I don't know what the level of accuracy would be at that distance. Most RS2 functions that are supported for the 400 Series cameras are also supported for L515, including rs2_deproject_pixel_to_point() and rs2_transform_point_to_point().
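A minimal sketch of using rs2_deproject_pixel_to_point() to turn a detected drone pixel into an (x, y, z) position in meters; the pixel coordinates are placeholders for wherever your detector finds the drone:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()

# Intrinsics of the depth stream (focal lengths, principal point, distortion)
intrin = depth_frame.profile.as_video_stream_profile().get_intrinsics()

u, v = 320, 240                           # hypothetical pixel where a drone was detected
depth_m = depth_frame.get_distance(u, v)  # depth at that pixel, in meters
x, y, z = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_m)
print(f"drone position relative to camera: x={x:.2f} y={y:.2f} z={z:.2f} m")

pipeline.stop()
```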
-
Hi, I'm going to purchase 4 RealSense cameras to track the position of drones flying around in the view of these cameras.
My idea is to get the 3D position (x, y, z) in meters of multiple drones by using these cameras in real time.
Here are some questions to help me understand the amount of work required for me to have a working system.
Any advice / guidance is deeply appreciated!