hri_body_detect

skeleton detection

Overview

⚠️ Some of the links have yet to be updated and may still point to the original ROS page. Once all the involved components are officially documented for ROS 2 as well, we will update this document.

hri_body_detect is a ROS4HRI-compatible 2D and 3D body pose estimation node.

It is built on top of Google Mediapipe 3D body pose estimation.

The node provides 2D and 3D pose estimation for the humans detected in the scene, with a solution that is robust to self-occlusions.

This node performs the body-pose detection pipeline, publishing information following the ROS4HRI naming convention: the body IDs (on the /humans/bodies/tracked topic), the bodies' bounding boxes, and the joint state of each body's skeleton.

To estimate the body position, the node does not need an RGB-D camera; an RGB camera is enough. However, using an RGB-D camera provides a more accurate depth estimation.

Important: to estimate the body depth without a depth sensor, a calibrated RGB camera is required. You can follow this tutorial to properly calibrate your camera.
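To give an intuition of why camera calibration matters here, depth from a single RGB image can be recovered with the pinhole camera model when the real-world size of an observed feature is known. The sketch below is purely illustrative (it is not the node's actual code, and the focal length and face width values are hypothetical):

```python
def estimate_depth(focal_length_px, real_width_m, observed_width_px):
    """Pinhole camera model: Z = f * X / x, where f is the focal length
    in pixels (from camera calibration), X the real-world width of the
    feature, and x its apparent width in the image."""
    return focal_length_px * real_width_m / observed_width_px

# e.g. a ~15 cm wide face appearing 100 px wide, with a 600 px focal length
depth_m = estimate_depth(600.0, 0.15, 100.0)  # -> 0.9 m
```

Since the focal length comes from the camera calibration, an uncalibrated camera would make this depth estimate arbitrarily wrong, which is why calibration is required when `use_depth` is disabled.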

Launch

The launch file hri_body_detect.launch.py is intended to be used in PAL robots following PAPS-007. For general usage, use hri_body_detect_with_args.launch.py:

ros2 launch hri_body_detect hri_body_detect_with_args.launch.py <parameters>

ROS API

Parameters

Node parameters:

  • use_depth (default: False): whether or not to rely on depth images for estimating body movement in the scene. When this is False, the node estimates the body position in the scene by solving a P6P problem for the face and approximating the body position from it, using pinhole camera model geometry.
  • stickman_debug (default: False): whether or not to publish frames representing the body skeleton directly from the raw results of the Mediapipe 3D body pose estimation. These debug frames are not oriented to align with the body links (i.e., only the 3D location of each frame is meaningful).
  • detection_conf_thresh (default: 0.5): threshold applied to the Mediapipe pose detection. Higher thresholds lead to fewer detected bodies, but also fewer false positives.
  • use_cmc (default: False): whether or not to enable camera motion compensation in the tracker. It compensates for the movement of the camera with respect to the world during tracking, but it is CPU intensive as it computes the optical flow.
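The node parameters above can also be set at startup through a standard ROS 2 parameters file. A minimal sketch, showing the defaults (the node name `hri_body_detect` is an assumption and should be checked against the running node):

```yaml
hri_body_detect:
  ros__parameters:
    use_depth: false
    stickman_debug: false
    detection_conf_thresh: 0.5
    use_cmc: false
```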

hri_body_detect_with_args.launch parameters:

  • use_depth (default: False): equivalent to use_depth node parameter.
  • stickman_debug (default: False): equivalent to stickman_debug node parameter.
  • detection_conf_thresh (default: 0.5): equivalent to detection_conf_thresh node parameter.
  • use_cmc (default: False): equivalent to use_cmc node parameter.
  • rgb_camera (default: ): RGB camera topics namespace.
  • rgb_camera_topic (default: $(arg rgb_camera)/image_raw): RGB camera raw image topic.
  • rgb_camera_info (default: $(arg rgb_camera)/camera_info): RGB camera info topic.
  • depth_camera (default: ): depth camera topics namespace.
  • depth_camera_topic (default: $(arg depth_camera)/image_rect_raw): depth camera rectified raw image topic.
  • depth_camera_info (default: $(arg depth_camera)/camera_info): depth camera info topic.
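Putting the arguments together, a typical invocation might look like the following (the /camera/color namespace is an example, not a default):

```shell
ros2 launch hri_body_detect hri_body_detect_with_args.launch.py \
    rgb_camera:=/camera/color \
    use_depth:=False \
    detection_conf_thresh:=0.6
```

With rgb_camera set, the image and camera-info topics default to /camera/color/image_raw and /camera/color/camera_info, so they usually do not need to be passed explicitly.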

Topics

hri_body_detect follows the ROS4HRI conventions (REP-155). In particular, refer to the REP to know the list and position of the 2D/3D skeleton points published by the node.
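As a quick illustration of the REP-155 naming scheme, per-body topics are namespaced under /humans/bodies/ by body ID. The sketch below only demonstrates the naming pattern; the specific subtopic names (roi, skeleton2d) are taken from REP-155 and should be verified against the REP:

```python
def body_topics(body_id):
    """Build the REP-155 per-body topic names for a given body ID."""
    base = f"/humans/bodies/{body_id}"
    return {
        "roi": f"{base}/roi",
        "skeleton2d": f"{base}/skeleton2d",
    }

# Body IDs are short identifiers published on /humans/bodies/tracked
topics = body_topics("b4f3a")
# topics["roi"] -> "/humans/bodies/b4f3a/roi"
```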

Subscribed topics

Published topics

Visualization

It is possible to visualize the results of the body pose estimation in RViz using the hri_rviz Skeleton plugin. A visualization example is reported in the image above.
