ARAÚJO, A. C.; http://lattes.cnpq.br/3459625680216881; ARAÚJO, Arthur Cruz de.
Abstract:
The popularity of multi-robot systems has propelled research on collaborative robot-robot
interaction: recognizing other robots' actions is very useful for cooperation and for assisting
single robots in stable pick-and-place tasks. Substantial work has been done on human hand
tracking and recognition for interaction. Analogously, these factors motivated the development
of a vision system whose prime goal is to provide a robot with awareness of other
robots' states, without requiring knowledge of, for example, their joints' dynamics. A
3D model-based approach is proposed, with the main goals of detecting a manipulator's gripper in
a scene, tracking it as it moves, and continuously determining its state (opened or closed).
The system mainly comprises a registration pipeline, combining the Sample
Consensus Initial Alignment (SAC-IA) and Iterative Closest Point (ICP) algorithms;
a particle filter used for tracking the gripper; and a state classifier based on a measure of
similarity between point clouds. It runs under the Robot Operating System (ROS) and
combines point cloud processing methods with the aforementioned algorithms, using
cloud models of the gripper to identify its state, i.e., whether it is opened or closed, as well as
whether it is grasping an object. The system was implemented and evaluated on a test platform
composed of a 6-DOF Kinova Jaco2 robotic arm with a three-fingered gripper and a
Microsoft Kinect RGB-D (red, green and blue with per-pixel depth information) camera.
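The ICP stage of the registration pipeline can be illustrated with a minimal point-to-point sketch in NumPy. The thesis relies on the standard SAC-IA and ICP algorithms; the function names and the brute-force nearest-neighbour search below are simplifying assumptions for a small, self-contained example, not the system's actual implementation.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/Procrustes via SVD); src and dst are (N, 3) arrays of
    corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(src, dst, iters=20):
    """Naive point-to-point ICP: match each source point to its nearest
    target point, solve for the rigid transform, apply it, and repeat.
    Returns the accumulated (R, t) aligning src onto dst."""
    cur = src.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matches)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In the full pipeline, SAC-IA would first supply a coarse global alignment, since plain ICP like the above only converges from a reasonably close initial pose.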
In general, the experimental results were satisfactory: tracking performed well
for well-behaved trajectories, and the detection of both the gripper's state, which
showed robustness to self-occlusions, and the presence of an object was successful.
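The model-based state classification can be sketched in the same spirit: after registering the observed cloud, compare it against the "opened" and "closed" reference models with a point-cloud similarity measure and pick the closest one. The mean nearest-neighbour distance used below is an illustrative choice of measure, not necessarily the exact one used by the system.

```python
import numpy as np

def cloud_distance(observed, model):
    """Mean nearest-neighbour distance from each observed point to the
    model cloud -- a simple point-cloud (dis)similarity measure."""
    d2 = ((observed[:, None, :] - model[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1)).mean()

def classify_state(observed, models):
    """Return the label (e.g. 'opened' / 'closed') of the reference
    model most similar to the registered observation."""
    return min(models, key=lambda name: cloud_distance(observed, models[name]))
```

With only two reference models, as in the evaluated system, the classifier reduces to a single distance comparison; adding models (e.g. a "grasping" configuration) extends it naturally.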
One of the observed limitations was the tracker's low sensitivity to rotations of the gripper.
The use of only two reference models can also be seen as a limitation. Provided the system's
assumptions and limitations are respected, the detection of an object, as well as of its
grasping, performs well. Beyond the discussion of results, demonstration videos are available
online for better understanding. Future work might explore generalizing the detector to
similar manipulators, as well as contemplate the insertion of this system in a collaborative
scenario.