A Summary of ORK Website Tutorials

ORK Tutorial Contents

1 ORK: Install
2 ORK: Quick Guide
3 ORK: Tutorials
  3.1 The object recognition database (DB)
    3.1.1 Introduction
    3.1.2 Preparing the object mesh
    3.1.3 Creating an object in the DB
    3.1.4 Manually adding a mesh to an object
    3.1.5 Visualizing objects
    3.1.6 Deleting objects
  3.2 Object recognition with Tabletop
    3.2.1 Setting up the working environment
    3.2.2 Finding planes
    3.2.3 Finding objects
    3.2.4 FAQ
4 ORK: Usage
  4.1 ORK Infrastructure
    4.1.1 Database
    4.1.2 Configuration files
  4.2 Data Capture
    4.2.1 3D camera
    4.2.2 Setup
    4.2.3 Capture
    4.2.4 Upload
    4.2.5 API
  4.3 Training
    4.3.1 Configuration file
    4.3.2 Use
    4.3.3 Command line interface
  4.4 Detection
    4.4.1 Use
    4.4.2 Command line interface
    4.4.3 Configuration file
5 ORK: ROS integration
  5.1 ROS messages
  5.2 The object information service
  5.3 The actionlib service
  5.4 Publishers and subscribers
  5.5 RViz plugins
6 ORK: Recognition Pipelines
  6.1 LINE-MOD
  6.2 tabletop
  6.3 TOD
  6.4 transparent objects
7 ORK: Tools: Reconstruction


1 ORK: Install

1) Install OpenNI

sudo apt-get install openni2-utils

2) SensorKinect:

ÃüÁ$git clone https://github.com/avin2/SensorKinect.git Èç¹ûûÓа²×°git£¬Ôòsudo apt-get installÖ®¡«

The clone takes a while. When it finishes, a SensorKinect folder appears in the current path. cd into SensorKinect/Platform/Linux/CreateRedist and run $ ./RedistMaker; a Redist folder then appears in the parent Linux directory. Guides found online say to enter that directory and run $ ./install.sh, but in practice you have to go one level deeper to find install.sh. Running it there seems to require root permissions, and even $ sudo su did not help me. What finally worked: inside Redist there is a Final folder containing the archive Sensor-Bin-Linux-x86-v5.0.5.1.tar.bz2. I simply copied it out, extracted it, entered the extracted folder (?/SensorKinect/Platform/Linux/CreateRedist/Sensor-Bin-Linux-x86-v5.0.5.1/), and ran $ ./install.sh there, which worked. One more note: if running $ ./install.sh during any of these steps gives a "command not found" style error, right-click install.sh -> Properties -> Permissions and check "Allow executing file as program"; that fixes it.

Alternatively, locate install.sh in the Redist folder and run sudo ./install.sh.

3) Install the USB library (libusb):

1) Run: sudo apt-get install libusb-1.0-0-dev
2) Run: sudo apt-get install freeglut3-dev


2 ORK: Quick Guide

1. Visualize the data in the database

rosrun object_recognition_core push.sh

2¡¢ÉèÖû·¾³

Terminal 1:

roscore

Terminal 2:

roslaunch openni_launch openni.launch

3. Capture the object

(1) Preview

rosrun object_recognition_capture capture --seg_z_min 0.01 -o silk.bag --preview

(2) Actual capture

rosrun object_recognition_capture capture --seg_z_min 0.01 -o silk.bag

(3) Upload the data to the DB

rosrun object_recognition_capture upload -i silk.bag -n 'Silk' milk soy silk --commit

4. Train the object

(1) Generate the mesh

rosrun object_recognition_reconstruction mesh_object --all --visualize --commit

(2) View it at

http://localhost:5984/or_web_ui/_design/viewer/meshes.html

(3) Train

rosrun object_recognition_core training \
    -c `rospack find object_recognition_tod`/conf/training.ork \
    --visualize

5. Detect objects

rosrun object_recognition_core detection \
    -c `rospack find object_recognition_tod`/conf/detection.ros.ork \
    --visualize


3 ORK: Tutorials

3.1 The object recognition database (DB)

ÔÚORKÖУ¬ËùÓеĶ«Î÷¶¼´æ´¢ÔÚÊý¾Ý¿âÀ¶ÔÏó¡¢Ä£ÐÍ¡¢ÑµÁ·Êý¾Ý¡£±¾½Ì³Ì½²Êö¹ØÓÚDB»ù´¡ÄÚÈÝ£¬ÒÔÏ£º

(1) adding an object's mesh to the DB; (2) manually adding an object to the DB; (3) visualizing the data in the ORK database.

3.1.1 Introduction

Make sure you have followed the steps of the core database instructions, especially the 3D visualization of the DB. The example we will use is a can of Coke, since it is fairly universal: for real-life experiments, get the iconic red can, whose appearance should not vary much.

3.1.2 Preparing the object mesh

An object's mesh is essential for detecting that object in ORK; the supported formats are .stl/.obj. You can prepare the mesh by following ORK's capture process (well explained in the Quick Guide). Alternatively, prepare the mesh with any mesh-generating software, or use a free mesh from the web.

Once you have a mesh, and before uploading it to the DB, make sure its scale is correct and pay attention to its origin. In the Blender screenshot, you can see that the origin of the can mesh differs from that of the bottle mesh.

In ORK, the object pose returned by ORK is the pose of the origin of the object's mesh.
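Before uploading, a quick sanity check of the mesh's scale and origin can be scripted by parsing its vertices and inspecting the bounding box. This is a hedged sketch using a minimal hand-rolled .obj vertex parser (a real mesh may need a full loader); the can-sized sample mesh below is synthetic.

```python
def obj_bounds(obj_text):
    """Parse 'v x y z' lines from a Wavefront .obj and return
    (min_corner, max_corner) of the vertex bounding box, in mesh units."""
    verts = []
    for line in obj_text.splitlines():
        parts = line.split()
        if len(parts) >= 4 and parts[0] == "v":
            verts.append(tuple(float(c) for c in parts[1:4]))
    if not verts:
        raise ValueError("no vertices found")
    lo = tuple(min(v[i] for v in verts) for i in range(3))
    hi = tuple(max(v[i] for v in verts) for i in range(3))
    return lo, hi

# Synthetic example: a can-sized box (12 cm tall) with its origin at the base.
CAN = "\n".join("v %s %s %s" % (x, y, z)
                for x in (-0.033, 0.033)
                for y in (-0.033, 0.033)
                for z in (0.0, 0.12))
lo, hi = obj_bounds(CAN)
print(hi[2] - lo[2])  # height in meters -> 0.12
```

If the height comes out as 120 rather than 0.12, the mesh is in millimeters and needs rescaling; likewise, if `lo`/`hi` put the origin far from the object, the pose ORK reports will be offset by exactly that amount.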

3.1.3 DBÖд´½¨¶ÔÏó

ORK is first and foremost about recognizing objects, so objects need to be stored in the DB. Pipelines such as ORK 3D capture can create them, but you can also manage them with the scripts provided by the core.

rosrun object_recognition_core object_add.py -n coke -d "A universal can of coke" --commit

You can then check the object in the database by visiting the following URL:

http://localhost:5984/_utils/database.html?object_recognition/_design/objects/_view/by_object_name


µã»÷¶ÔÏ󣬿ÉÒÔ¿´µ½¶ÔÏóµÄÐÅÏ¢£¬ÓÈÆäÊǶÔÏóµÄid£º

3.1.4 Manually adding a mesh to an object

First, check the object's id using the DB interface (the objects view): every element in the DB has its own hash as a unique identifier (which prevents confusion if you give different objects the same name), and this is how you should refer to objects.

Upload the mesh:

rosrun object_recognition_core mesh_add.py YOUR_OBJECT_ID `rospack find object_recognition_tutorials`/data/coke.obj --commit

3.1.5 Visualizing objects

To visualize the objects stored in the DB, go to the visualization URL:

http://localhost:5984/or_web_ui/_design/viewer/meshes.html

You should be able to see the following:


3.1.6 Deleting objects

To delete an object (this can also delete all the other elements associated with it in the database, such as models/training data):

rosrun object_recognition_core object_delete.py OBJECT_ID

3.2 Object recognition with Tabletop

Tabletop is a simple pipeline for object recognition that only needs the mesh of an object for training/detection. This section covers:

(1) using the tabletop pipeline to find planes;

(2) using the tabletop pipeline to find objects of specific types; (3) using the ORK RViz plugin.

3.2.1 ÉèÖù¤×÷»·¾³

1) Hardware

A 3D camera (e.g., a Kinect or Xtion), a computer that can run ROS, some fairly flat surfaces (e.g., a table, a wall, the floor), and a few cans of Coke.

2) Software

The computer needs ORK installed. rqt_reconfigure and RViz are needed to configure the 3D camera and to visualize the planes and objects. To install these tools, run:

sudo apt-get install ros-<distro>-rviz ros-<distro>-rqt-reconfigure ros-<distro>-openni*

3) Configure the 3D camera and RViz

In separate terminals, run:

roslaunch openni2_launch openni2.launch
rosrun rviz rviz

Set the fixed frame (top left of the RViz window) to /camera_depth_optical_frame. Add a PointCloud2 display and set its topic to /camera/depth/points. Changing the background to light gray improves visibility. What you see is the unregistered point cloud in the frame of the depth camera; it does not match the image of the RGB camera. Now let's look at a registered point cloud, which is aligned with the RGB data. Open the dynamic reconfigure interface:


rosrun rqt_reconfigure rqt_reconfigure

Select /camera/driver from the drop-down menu and enable the depth_registration checkbox. Go back to RViz, switch the PointCloud2 topic to /camera/depth_registered/points, and set the color transformer to RGB8; you will see a colored 3D point cloud on screen. For details, see http://wiki.ros.org/openni2_launch.

3.2.2 Finding planes

To find planes with ORK Tabletop, run the following command:

rosrun object_recognition_core detection -c `rospack find object_recognition_tabletop`/conf/detection.table.ros.ork

Then go to the RViz window and add an ORKTable display. If your camera is facing a plane, you will see the planes detected by ORK Tabletop.

3.2.3 Finding objects

If you followed the install guide (http://wg-perception.github.io/object_recognition_core/install.html#install), you know that ORK uses CouchDB to manage its object database. For tabletop to detect objects, we need to populate that database with 3D models of the objects.

µ±ÄãÊ×ÏȰ²×°ORKµÄʱºò£¬Êý¾Ý¿âÊǿյġ£ÐÒÔ˵ÄÊÇork½Ì³Ì×Ô´øÒ»¸ö¿ÉÀÖ¹Þ×ÓµÄ3DÄ£ÐÍ£¬ËùÒÔÏÂÔØ½Ì³ÌΪ£º

git clone https://github.com/wg-perception/ork_tutorials

Then upload it to the ORK database:

rosrun object_recognition_core object_add.py -n "coke" -d "A universal can of coke" --commit
rosrun object_recognition_core mesh_add.py YOUR_OBJECT_ID `rospack find object_recognition_tutorials`/data/coke.obj --commit

If you completed the upload steps, then when you open http://localhost:5984/or_web_ui/_design/viewer/objects.html you will see the coke object listed in the database. With everything set up, let's see how ork_tabletop detects the can of Coke. In a terminal, run:

rosrun object_recognition_core detection -c `rospack find object_recognition_tabletop`/conf/detection.object.ros.ork

Go back to RViz and add an OrkObject display. Now, if a can of Coke is placed on a detected plane, ork_tabletop will see it, and the nice RViz interface will show the following:


Note: in the image, only the can of Coke is visible, because OrkTable is not selected in the RViz interface. Unless you have deliberately unchecked that box, this should not happen in your RViz.

3.2.4 FAQ


4 ORK: Usage

4.1 ORK Infrastructure

object_recognition_core provides the infrastructure for easy development and use of object recognition pipelines. When using/developing a new method you have created or found, you usually end up re-implementing the following:

- store/query/use training data
- store/query/use models
- train objects

- process incoming data (cameras, ROS rosbags, ROS topics)
- export data
- integrate with ROS

- run several pipelines simultaneously to compare them

By providing base classes and a flexible infrastructure, ORK implements all of the features listed above, and more.

4.1.1 Database

1) Implementation

The database backend is selected through a parameter dictionary:

CouchDB: {'type': 'CouchDB', 'root': 'http://localhost:5984', 'collection': 'object_recognition'}
Filesystem: {'path': '/tmp', 'type': 'filesystem', 'collection': 'object_recognition'}
Empty (only for testing): {'type': 'empty'}

2) CouchDB

- Install:

sudo apt-get install couchdb

- Test:

curl -X GET http://localhost:5984

which returns:

%{\3£©ÅäÖà ? Ð޸ĵØÖ·

[httpd]
port = 5984
bind_address = 0.0.0.0

- Restart:

sudo service couchdb restart

4) Web UI

- Make sure couchapp is installed: sudo pip install -U couchapp
- Push the DB visualization tool:


rosrun object_recognition_core push.sh

- Browse the URL:

http://localhost:5984/or_web_ui/_design/viewer/index.html

5) Libraries

The object recognition tools use libCURL or python-couchdb to manipulate the database. The CouchDB admin interface is at http://localhost:5984/_utils.
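These libraries talk to CouchDB over plain HTTP, so the health check from the curl test is easy to script. A hedged sketch using only the standard library: the actual HTTP call is left commented out so the snippet does not need a running server, and the sample body below has the typical shape of CouchDB's greeting (the version string will differ per install).

```python
import json
# With a live server one would fetch the body like this:
# from urllib.request import urlopen
# body = urlopen("http://localhost:5984").read().decode()

def couchdb_alive(body):
    """Return the server version if the body looks like CouchDB's greeting."""
    info = json.loads(body)
    if info.get("couchdb") != "Welcome":
        raise RuntimeError("unexpected response: %r" % info)
    return info.get("version", "unknown")

body = '{"couchdb": "Welcome", "version": "1.6.1"}'  # sample greeting
print(couchdb_alive(body))  # 1.6.1
```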

4.1.2 Configuration files

Two steps need to be configured in ORK: Training and Detection. These steps are very flexible: they can be defined with several kinds of inputs/outputs or any pipeline, and they are configured through the parameters passed in a configuration file. The configuration file defines one or more ecto cells that are connected and executed in a pipeline. A cell is defined as follows:

cell_name:
    type: class_of_the_ecto_cell
    module: Python_module_where_the_class_is
    inputs: ['other_cell_name_1', 'other_cell_name_2'] (Optional)
    outputs: ['other_cell_name_3', 'other_cell_name_4'] (Optional)
    parameters: (Optional)
        any_valid_JSON

A pipeline could then be:

cell1:
    type: class1
    module: module1
    outputs: [cell2]
    parameters:
        parameter1: value1
cell2:
    type: class2
    module: module2
    inputs: [cell1] (Optional: actually does not need to be, as it is defined for cell1)

The second cell can also have parameters. Once those relations are defined, the cells get properly initialized, connected, and executed together. This may seem like sparse information, but it really is that simple; the easiest way to learn is to look at the configuration files of the different pipelines.
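The wiring described above, cells linked by inputs/outputs and then executed in order, can be sketched without ecto. In this hedged illustration the cells are plain Python dicts mirroring the configuration fields, and a topological sort derives one valid execution order; this is an assumption about how such a graph is commonly scheduled, not ORK's actual scheduler.

```python
def execution_order(cells):
    """Topologically sort cells by their 'outputs'/'inputs' links."""
    # Build predecessor sets from both directions of declaration.
    preds = {name: set(spec.get("inputs", [])) for name, spec in cells.items()}
    for name, spec in cells.items():
        for out in spec.get("outputs", []):
            preds[out].add(name)
    order = []
    while preds:
        # Cells whose predecessors have all been scheduled are ready.
        ready = sorted(n for n, p in preds.items() if not p)
        if not ready:
            raise ValueError("cycle in cell graph")
        for n in ready:
            order.append(n)
            del preds[n]
        for p in preds.values():
            p.difference_update(ready)
    return order

# The two-cell pipeline from the configuration example above.
cells = {
    "cell1": {"type": "class1", "module": "module1",
              "outputs": ["cell2"], "parameters": {"parameter1": "value1"}},
    "cell2": {"type": "class2", "module": "module2"},
}
print(execution_order(cells))  # ['cell1', 'cell2']
```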

4.2 Data Capture

object_recognition_capture handles data capture.

Data captured for object recognition needs the following:


- knowing the pose of the camera in a stable world frame;
- knowing the pose of the object in the captured images/scene;
- obtaining views that sufficiently cover the surface of the object.

The third point depends on what the captured data is used for (e.g., computing the mesh of a 3D model) and therefore on how many views are needed. The first point is currently achieved with a distinctive fiducial pattern (a dot pattern or a custom pattern), described in the Setup section. The second point is solved by segmenting out whatever stands above the plane. Capture uses the database infrastructure provided by object_recognition_core and object_recognition_capture to store the captured data.
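Segmenting out whatever stands above the plane can be illustrated with plain geometry. This is a hedged sketch, not ORK's actual code: it applies the same kind of crop the capture tool exposes as --seg_z_min, --seg_radius_crop, and --seg_z_crop, to points already expressed in the fiducial's coordinate frame (z pointing up out of the plane); the default thresholds mirror the capture help text.

```python
import math

def segment_object(points, seg_z_min=0.0075, seg_radius_crop=0.2, seg_z_crop=0.5):
    """Keep points above the plane and inside a cylinder around the origin.
    `points` are (x, y, z) tuples in the fiducial frame, in meters."""
    kept = []
    for x, y, z in points:
        if seg_z_min < z < seg_z_crop and math.hypot(x, y) < seg_radius_crop:
            kept.append((x, y, z))
    return kept

scene = [
    (0.02, 0.01, 0.05),   # on the object: above the plane, near the center
    (0.02, 0.01, 0.001),  # the plane itself: below seg_z_min
    (0.50, 0.00, 0.05),   # clutter outside the radius crop
]
print(segment_object(scene))  # [(0.02, 0.01, 0.05)]
```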

4.2.1 3D Camera

Kinect

4.2.2 Setup

Capture is view-based and requires a strict fiducial relative to the object being observed. This makes it possible to estimate poses at relatively accurate angles, keep a consistent object coordinate frame, and perform simple object/background segmentation. The setup assumes you have an RGB-depth device such as a Kinect. There are two methods for object capture, with roughly comparable quality of results: a dot pattern and a generic pattern (useful if you cannot print the dot pattern).

1) Dot pattern

See the attachment capture_board_big_5x3.svg.png:

http://wg-perception.github.io/capture/_downloads/capture_board_big_5x3.svg http://wg-perception.github.io/capture/_downloads/capture_board_big_5x3.svg.pdf


The dot pattern carries two sets of fiducial markers, black dots on white and white dots on black, so that two fiducials can be detected in the scene and pose estimation remains possible even when part of the pattern is occluded. Get a full-size printout with the fiducial markers and place it on a plane.

Note: the physical scale of the pattern does not actually matter, because it is detected in 3D, at the appropriate scale, using the Kinect plane detector.

2) ORB template

If you do not want to print the dot pattern, you can use any surface with distinctive planar features instead. What this involves is capturing a canonical view of the surface, so that an object coordinate system can later be established and segmentation performed.

Ê×ÏÈ»ñȡһ¸öcapture¹¤×÷¿Õ¼äµÄORBÄ£°å¡£ËüÓ¦À´×ÔÕýǰ·½Æ½ÃæÊÓ½Ç,ͼÏñµÄÖÐÐÄÓ¦¸ÃÓÉÆ½ÃæÌî³äµÄ¡£°´¡°s¡±À´±£´æÍ¼Ïñ¡£½á¹û½«±»·ÅÖÃÔÚ¸ø¶¨µÄĿ¼ÖÐ,ÀýÈçmy_textured_plane¡£°´¡°q¡±Í˳öÄ£°å²¶»ñ³ÌÐò¡£

rosrun object_recognition_capture orb_template -o my_textured_plane

Try out tracking to see whether you got a good template; press 'q' to quit:

rosrun object_recognition_capture orb_track --track_directory my_textured_plane

4.2.3 Capture

Capture is the entry point to the object capture system. The capture program estimates the pose of each view, as well as a depth-based mask. This produces a ROS bag of data with the following topics:

types:   geometry_msgs/PoseStamped [d3812c3cbc69362b77dc0b19b345f8f5]
         sensor_msgs/CameraInfo    [c9a58c1b0b154e0e6da7578cb991d214]
         sensor_msgs/Image         [060021388200f6f0f447d0fcd9c64743]
topics:  /camera/depth/camera_info   72 msgs : sensor_msgs/CameraInfo
         /camera/depth/image         72 msgs : sensor_msgs/Image
         /camera/mask                72 msgs : sensor_msgs/Image
         /camera/pose                72 msgs : geometry_msgs/PoseStamped
         /camera/rgb/camera_info     72 msgs : sensor_msgs/CameraInfo
         /camera/rgb/image_color     72 msgs : sensor_msgs/Image

To use capture, place the object at the center of the fiducial board and keep its position consistent throughout the capture session. Slowly turn the board; the program will capture views that are evenly distributed over the range of view poses.

Run the capture program in preview mode and make sure that the pose is being displayed and that the object mask is not empty. The mask covers everything inside a cylinder centered on the object's pose, whose dimensions are specified on the command line. This is the so-called object cluster, which is used for training.

If you are using the dot pattern, drop the -i option below:

rosrun object_recognition_capture capture -i my_textured_plane --seg_z_min 0.01 -o silk.bag --preview

You should see a pop-up image similar to the following:

The sequence of view samples is captured using a mask obtained from the inverse dot-pattern fiducial.

Once you are satisfied with the preview, let it run for real. The following will capture a bag of 60 views, roughly evenly distributed over the view sphere. The mask and pose are only refreshed when a new view is captured. The program ends once 360 degrees of views have been captured; press 'q' to quit early.


rosrun object_recognition_capture capture -i my_textured_plane --seg_z_min 0.01 -o silk.bag

Help:

usage: capture [-h] [-o BAG_FILE] [-a RADIANS] [-n NVIEWS] [--preview]
               [-i,--input INPUT] [-m,--matches] [--fps FPS] [--res RES]
               [--seg_radius_crop SEG_RADIUS_CROP] [--seg_z_crop SEG_Z_CROP]
               [--seg_z_min SEG_Z_MIN] [--niter ITERATIONS] [--shell] [--gui]
               [--logfile LOGFILE] [--graphviz] [--dotfile DOTFILE] [--stats]

Captures data appropriate for training object recognition pipelines. Assumes
that there is a known fiducial in the scene, and captures views of the object
sparsely, depending on the angle_thresh setting.

optional arguments:
  -h, --help            show this help message and exit
  -o BAG_FILE, --output BAG_FILE
                        A bagfile to write to.
  -a RADIANS, --angle_thresh RADIANS
                        The delta angular threshold in pose. Frames will not
                        be recorded unless they are not closer to any other
                        pose by this amount. default(0.174532925199)
  -n NVIEWS, --nviews NVIEWS
                        Number of desired views. default(36)
  --preview             Preview the pose estimator.
  -i,--input INPUT      The directory of the template to use. If empty, it
                        uses the opposite dot pattern.
  -m,--matches          Visualize the matches.

camera:
  --fps FPS             The temporal resolution of the captured data
  --res RES             The image resolution of the captured data.

seg options:
  --seg_radius_crop SEG_RADIUS_CROP
                        The amount to keep in the x direction (meters)
                        relative to the coordinate frame defined by the pose.
                        (default: 0.20000000298)
  --seg_z_crop SEG_Z_CROP
                        The amount to keep in the z direction (meters)
                        relative to the coordinate frame defined by the pose.
                        (default: 0.5)
  --seg_z_min SEG_Z_MIN
                        The amount to crop above the plane, in meters.
                        (default: 0.00749999983236)
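The -a/--angle_thresh option records a frame only when its pose is at least the threshold away from every pose already recorded. As a hedged illustration (not the actual capture code), the same idea in one dimension, rotation of the board about z, shows why views end up roughly evenly spread; the default of 0.174532925199 rad is about 10 degrees.

```python
import math

def sparse_views(angles, angle_thresh=0.174532925199):
    """Keep a turntable angle only if it is at least angle_thresh away
    (modulo 2*pi) from every angle already kept."""
    kept = []
    for a in angles:
        def dist(b):
            d = abs(a - b) % (2 * math.pi)
            return min(d, 2 * math.pi - d)
        if all(dist(b) >= angle_thresh for b in kept):
            kept.append(a)
    return kept

# A slow, continuous turn sampled every degree keeps about one view per 10 deg.
turn = [math.radians(d) for d in range(360)]
print(len(sparse_views(turn)))  # 36
```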

4.2.3.1 Multiple capture sessions

´Ó²»Í¬Êӽǵĵã»ñȡһ¸ö¶ÔÏóµÄ¶à¸ö°ü£¬ÔÚÉÏ´«Ö®Ç°ÐèÒªÁ¬½Ó¡£

Usage: concat.py OUTPUT INPUT1 [INPUT2 ...]

4.2.4 Upload

Once a bag of views has been captured, it can be uploaded to the database. The upload adds the bag containing all views, plus some meta-information about the object. It is assumed that each bag contains a single object with a known coordinate frame.

1) Use

% rosrun object_recognition_capture upload -a 'Ethan Rublee' -e 'erublee@willowgarage.com' -i silk.bag \
    -n 'silk' -d 'A carton of Silk brand soy milk.' --commit milk, soy, kitchen, tod
Uploaded session with id: 4ad9f2d3db57bbd414e5e987773490a0

Without '--commit', the script runs without committing anything to any database. After uploading, inspect the DB at:

http://localhost:5984/_utils/database.html?object_recognition/_design/objects/_view/by_object_name

2) Command line interface

usage: upload [-h] [-i BAG_FILE] [-n OBJECT_NAME] [-d DESCRIPTION]
              [-a AUTHOR_NAME] [-e EMAIL_ADDRESS] [--visualize]
              [--db_type DB_TYPE] [--db_root DB_ROOT_URL]
              [--db_collection DB_COLLECTION] [--commit] [--niter ITERATIONS]
              [--shell] [--gui] [--logfile LOGFILE] [--graphviz]
              [--dotfile DOTFILE] [--stats]
              TAGS [TAGS ...]

Uploads a bag, with an object description, to the db.

positional arguments:
  TAGS                  Tags to add to object description.

optional arguments:
  -h, --help            show this help message and exit
  -i BAG_FILE, --input BAG_FILE
                        A bag file to upload.
  -n OBJECT_NAME, --object_name OBJECT_NAME
  -d DESCRIPTION, --description DESCRIPTION
  -a AUTHOR_NAME, --author AUTHOR_NAME
  -e EMAIL_ADDRESS, --email EMAIL_ADDRESS
  --visualize           Turn on visualization

Database Parameters:
  --db_type DB_TYPE     The type of database used: one of [CouchDB].
                        Default: CouchDB
  --db_root DB_ROOT_URL
                        The database root URL to connect to.
                        Default: http://localhost:5984
  --db_collection DB_COLLECTION
                        The database collection to connect to.
                        Default: object_recognition
  --commit              Commit the data to the database

3) Willow users

Some previously captured bags exist internally; they are backed up as follows:

rsync -vPa /wg/wgss0_shelf1/object_recognition_capture ./

4.2.5 API

For the API of the capture cells, see:

http://wg-perception.github.io/capture/reference.html

4.3 Training

Once the observation data is stored in the database, models can be built from it.

4.3.1 Configuration file

Training, like recognition, requires a configuration file, but it usually contains only one cell, which defines a pipeline that reads data from the database and computes a model. Several models can then be trained simultaneously by executing several pipelines.

4.3.2 Use

The training script can be executed as follows:

rosrun object_recognition_core training \
    -c `rospack find object_recognition_tod`/conf/training.ork \
    --visualize

You can use any configuration file; some are provided in object_recognition_server/conf.

Ò»¸öµäÐ͵ÄÃüÁîÐлỰÏñÕâÑù:

% apps/training -c config_training.txt --commit
Prepare G2O: processing image 65/65
Performing full BA:
iteration= 0 chi2= 168324165740673896546304.000000 time= 39.2803 cumTime= 39.2803 lambda= 154861.907021 edges= 64563 schur= 1
Persisted

4.3.3 Command line interface

usage: training [-h] [-c CONFIG_FILE] [--visualize] [--commit]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config_file CONFIG_FILE
                        Config file
  --visualize           If set and the pipeline supports it, it will display
                        some windows with temporary results
  --commit              Commit the data to the database.

4.4 Detection

4.4.1 Use

Continuous detection:

rosrun object_recognition_core detection -c `rospack find object_recognition_tod`/conf/detection.ros.ork

The actionlib server (which retrieves the identified objects in the current snapshot):

rosrun object_recognition_ros server -c `rospack find object_recognition_tod`/conf/detection.ros.ork

Querying the detection server with a client:

rosrun object_recognition_ros client

Using roslaunch:

roslaunch object_recognition_ros server.robot.launch

A typical command line session:

% apps/detection -c `rospack find object_recognition_tod`/conf/detection.ros.ork
[ INFO] [1317692023.717854617]: Initialized ros. node_name: /ecto_node_1317692023710501315
Threadpool executing [unlimited] ticks in 5 threads.
[ INFO] [1317692024.254588151]: Subscribed to topic:/camera/rgb/camera_info with queue size of 0
[ INFO] [1317692024.255467268]: Subscribed to topic:/camera/depth_registered/camera_info with queue size of 0
[ INFO] [1317692024.256186358]: Subscribed to topic:/camera/depth_registered/image with queue size of 0
[ INFO] [1317692024.256863212]: Subscribed to topic:/camera/rgb/image_color with queue size of 0
model_id: e2449bdc43fd6d9dd646fcbcd012daaa
span: 0.433393 meters
1
***Starting object: 0
* starting RANSAC
 added : 1
 added : 0
* n inliers: 1824
[-0.056509789, 0.99800211, 0.028263446;
 0.94346958, 0.062639669, -0.32548648;
 -0.32660651, 0.0082725696, -0.94512439]
[-0.32655218; 0.03684178; 0.85040951]
********************* found 1 poses
[ INFO] [1317692117.187226953]: publishing to topic:/object_ids
[ INFO] [1317692117.188155476]: publishing to topic:/poses

4.4.2 Command Line Interface

usage: detection [-h] [-c CONFIG_FILE] [--visualize] [--niter ITERATIONS]
                 [--shell] [--gui] [--logfile LOGFILE] [--graphviz]
                 [--dotfile DOTFILE] [--stats]

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config_file CONFIG_FILE
                        Config file
  --visualize           If set and the pipeline supports it, it will display
                        some windows with temporary results

Ecto runtime parameters:
  --niter ITERATIONS    Run the graph for niter iterations. 0 means run until
                        stopped by a cell or external forces. (default: 0)
  --shell               Bring up an ipython prompt, and execute
                        asynchronously. (default: False)
  --gui                 Bring up a gui to help execute the plasm.
  --logfile LOGFILE     Log to the given file, use tail -f LOGFILE to see the
                        live output. May be useful in combination with --shell
  --graphviz            Show the graphviz of the plasm. (default: False)
  --dotfile DOTFILE     Output a graph in dot format to the given file. If no
                        file is given, no output will be generated.
                        (default: )
  --stats               Show the runtime statistics of the plasm.

4.4.3 Configuration file

Ö÷ÒªÅäÖãºsource¡¢sink¡¢pipeline

http://wg-perception.github.io/object_recognition_core/detection/detection.html#configuration-file


5 ORK: ROS integration

This package defines several ROS interfaces for ORK. Since ORK itself is agnostic of ROS, several ROS-specific input/output cells are provided, as well as RViz plugins.

5.1 ROS messages

1) Object definition

OjjectId.msgÖн«Ò»¸ö¶ÔÏóΨһµÄ¶¨Òå³ÉÒ»ÖÖÀàÐͺÍÊý¾Ý¿â£º ################################################## OBJECT ID ######################################################### # Contains information about the type of a found object. Those two sets of parameters together uniquely define an # object # The key of the found object: the unique identifier in the given db string key # The db parameters stored as a JSON/compressed YAML string. An object id does not make sense without the corresponding # database. E.g., in object_recognition, it can look like: \'root':'http://localhost'}\# There is no conventional format for those parameters and it's nice to keep that flexibility. # The object_recognition_core as a generic DB type that can read those fields # Current examples: # For CouchDB: # type: 'CouchDB' # root: 'http://localhost:5984' # collection: 'object_recognition' # For SQL household database: # type: 'SqlHousehold' # host: 'wgs36' # port: 5432 # user: 'willow' # password: 'willow' # name: 'household_objects' # module: 'tabletop' string db ¸ü¶àµÄÐÅÏ¢±»ORK´æ´¢ÔÚÊý¾Ý¿âÖУ¬ÕâЩÐÅÏ¢¿ÉÒÔÔÚObjectInformation.msgÖмìË÷£º ############################################## VISUALIZATION INFO ###################################################### ################### THIS INFO SHOULD BE OBTAINED INDEPENDENTLY FROM THE CORE, LIKE IN AN RVIZ PLUGIN ################### # The human readable name of the object string name # The full mesh of the object: this can be useful for display purposes, augmented reality ... but it can be big # Make sure the type is MESH shape_msgs/Mesh ground_truth_mesh # Sometimes, you only have a cloud in the DB 20 / 36

# Make sure the type is POINTS sensor_msgs/PointCloud2 ground_truth_point_cloud 2£©Ê¶±ð¶ÔÏó

µ±¶ÔÏó±»Ê¶±ð³öµÄʱºò£¬±»Ê¶±ð¶ÔÏóÊý×é·¢²¼ÔÚRecognizedObjectArray.msgÖУº ##################################################### HEADER ########################################################### Header header # This message type describes a potential scene configuration: a set of objects that can explain the scene object_recognition_msgs/RecognizedObject[] objects ##################################################### SEARCH ########################################################### # The co-occurrence matrix between the recognized objects float32[] cooccurrence Æä°üº¬Á˺ü¸¸öRecognizedObject.msg£º ##################################################### HEADER ########################################################### # The header frame corresponds to the pose frame, NOT the point_cloud frame. Header header ################################################## OBJECT INFO ######################################################### # Contains information about the type and the position of a found object # Some of those fields might not be filled because the used techniques do not fill them or because the user does not # request them # The type of the found object object_recognition_msgs/ObjectType type #confidence: how sure you are it is that object and not another one. # It is between 0 and 1 and the closer to one it is the better float32 confidence ################################################ OBJECT CLUSTERS ####################################################### # Sometimes you can extract the 3d points that belong to the object, in the frames of the original sensors # (it is an array as you might have several sensors) sensor_msgs/PointCloud2[] point_clouds # Sometimes, you can only provide a bounding box/shape, even in 3d # This is in the pose frame shape_msgs/Mesh bounding_mesh # Sometimes, you only have 2d input so you can't really give a pose, you just get a contour, or a box # The last point will be linked to the first one automatically 21 / 36

geometry_msgs/Point[] bounding_contours #################################################### POSE INFO ######################################################### # This is the result that everybody expects : the pose in some frame given with the input. The units are radian/meters # as usual geometry_msgs/PoseWithCovarianceStamped pose 3£©×À×Ó

×À×Ó¶ÔÓÚʶ±ð¶ÔÏóÊǷdz£ÓÐÓõģ¬Tabletop̽²âÆ÷½øÐз¢ÏÖʱ£¬ÆäÓÐÁ½¸öÏûÏ¢£º # Informs that a planar table has been detected at a given location Header header # The pose gives you the transform that take you to the coordinate system # of the table, with the origin somewhere in the table plane and the # z axis normal to the plane geometry_msgs/Pose pose # There is no guarantee that the table does NOT extend further than the # convex hull; this is just as far as we've observed it. # The origin of the table coordinate system is inside the convex hull # Set of points forming the convex hull of the table geometry_msgs/Point[] convex_hull Header header # Just an array of tables object_recognition_msgs/Table[] tables 5.2 ¶ÔÏóÐÅÏ¢·þÎñ

There is a service to get information about an object: query it, and it will retrieve whatever is known about the object. It is defined as follows:

# Retrieve extra data from the DB for a given object

# The type of the object to retrieve info from
object_recognition_msgs/ObjectType type
---
# Extra object info
object_recognition_msgs/ObjectInformation information

5.3 The actionlib service

The actionlib service helps retrieve the recognized objects in the current snapshot. It is defined as follows:

# Optional ROI to use for the object detection
bool use_roi
float32[] filter_limits
---
# Send the found objects, see the msg files for docs
object_recognition_msgs/RecognizedObjectArray recognized_objects
---
# no feedback

To run it, start it as follows:

rosrun object_recognition_server server -c whatever_config_file.ork

You can also test it with the client:

rosrun object_recognition_server client

which should display the received message.

To test it, you can even run the server with a test pipeline:

rosrun object_recognition_server server -c `rospack find object_recognition_ros`/conf/detection.test.ros.ork

5.4 Publishers and subscribers

Õâ¸ö°üÌṩÁ˶ÔÓÚROSÀ´Ëµ±È½ÏÌØ±ðµÄ·¢²¼ºÍ¶©ÔÄ¡£ 1£©×ÊÔ´ BagReader: type: BagReader module: object_recognition_ros.io.source.bag_reader parameters: # The bag file name. bag: data.bag # If the cropper cell is enabled crop_enabled: True # The ROS topic for the depth camera info. depth_camera_info: /camera/depth_registered/camera_info # The ROS topic for the depth image. depth_image_topic: /camera/depth_registered/image_raw # The ROS topic for the RGB camera info. rgb_camera_info: /camera/rgb/camera_info # The ROS topic for the RGB image. rgb_image_topic: /camera/rgb/image_color # The maximum x value (in the camera reference frame) x_max: 3.40282346639e+38 # The minimum x value (in the camera reference frame) x_min: -3.40282346639e+38 # The maximum y value (in the camera reference frame) y_max: 3.40282346639e+38 # The minimum y value (in the camera reference frame) y_min: -3.40282346639e+38 # The maximum z value (in the camera reference frame) z_max: 3.40282346639e+38 # The minimum z value (in the camera reference frame) z_min: -3.40282346639e+38 RosKinect: type: RosKinect module: object_recognition_ros.io.source.ros_kinect parameters: # If the cropper cell is enabled crop_enabled: True 23 / 36

    # The ROS topic for the depth camera info.
    depth_camera_info: /camera/depth_registered/camera_info
    # The ROS topic for the depth image.
    depth_image_topic: /camera/depth_registered/image_raw
    # The ROS topic for the RGB camera info.
    rgb_camera_info: /camera/rgb/camera_info
    # The ROS topic for the RGB image.
    rgb_image_topic: /camera/rgb/image_color
    # The maximum x value (in the camera reference frame)
    x_max: 3.40282346639e+38
    # The minimum x value (in the camera reference frame)
    x_min: -3.40282346639e+38
    # The maximum y value (in the camera reference frame)
    y_max: 3.40282346639e+38
    # The minimum y value (in the camera reference frame)
    y_min: -3.40282346639e+38
    # The maximum z value (in the camera reference frame)
    z_max: 3.40282346639e+38
    # The minimum z value (in the camera reference frame)
    z_min: -3.40282346639e+38

2) Sinks

Publisher:
  type: Publisher
  module: object_recognition_ros.io.sink.publisher
  parameters:
    # The DB parameters
    db_params:
    # Determines if the topics will be latched.
    latched: True
    # The ROS topic to use for the marker array.
    markers_topic: markers
    # The ROS topic to use for the object meta info string
    object_ids_topic: object_ids
    # The ROS topic to use for the pose array.
    pose_topic: poses
    # Sets whether the point cloud clusters have to be published or not
    publish_clusters: True
    # The ROS topic to use for the recognized object
    recognized_object_array_topic: recognized_object_array

5.5 RViz plugins

RViz is the ROS visualization tool; two plugins support introspection of ORK objects.

1) Table plugin

This plugin displays a Table.msg, as produced for example by the tabletop pipeline. It shows the convex hull of the table surface, a bounding box, and a vector normal to the table top.


2) Object plugin

The object plugin displays a RecognizedObject.msg, the default output of ORK.


6 ORK: Recognition Pipelines

Several object recognition pipelines can run within the framework:

Technique           | 2D and/or 3D | Types of object                                                        | Limitations
LINE-MOD            | 2D and/or 3D | rigid, Lambertian                                                      | does not work with partial occlusions; scales linearly with the number of objects
tabletop            | 3D           | rigid, Lambertian, rotationally symmetric; also finds planar surfaces  | the object is assumed to be on a table with no 3D rotation
TOD                 | 2D and 3D    | rigid, Lambertian, textured                                            |
transparent objects | 2D and 3D    | rigid and transparent                                                  | training has to be done on a painted version of the object

6.1 LINE-MOD

object_recognition_linemod: object recognition using LINE-MOD.

¸ÃͨµÀÔËÐÐLINE-MOD£¬Á˽â¸ü¶àµÄÐÅÏ¢²é¿´ÍøÒ³http://ar.in.tum.de/Main/StefanHinterstoisser¡£ËüÊÇÓÃÓÚʶ±ðÒ»°ã¸ÕÐÔÎïÌåµÄ×îºÃ·½·¨Ö®Ò»£¬ÆäʹÓ÷dz£¿ìËÙµÄÄ£ÐÍÆ¥ÅäÀ´´¦Àí¡£Õâ¸ö°üÖÐËùʹÓõİ汾ºÍ³õʼµÄÂÛÎÄ£¨OpenCV£©ÊÇÒ»ÑùµÄ£¬µ«ÈÃÆäÕý³£¹¤×÷µÄ¼¼ÇÉÔÚǰ/ºóµÄ´¦Àí²½ÖèÖС£

1) Pre-processing step

ʹÓÃÒ»¸ö¡°ÊÓͼ×Ô¶¯Éú³ÉÆ÷¡±À´Éú³ÉËùÓеÄÄ£ÐÍ¡£Æ½ÃæÄÚÏà»ú¶àÑù»¯µÄÐýת×Å£¬´¦Àí²»Í¬µÄ³ß¶ÈºÍ¹Û²ìµã¡£´øÓÐÉî¶ÈºÍÑÚĤµÄ³ÉǧÉÏÍòµÄͼÏñ±»Éú³É£¬²¢Ìṩ¸øOpenCVѵÁ·Æ÷¡£

2) Processing step

The OpenCV detector is then simply called (the source documentation does not elaborate further).

3) Post-processing step

An ICP step is performed, similar to the one in the ACCV paper, possibly:

Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration
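That formulation reduces each point-to-plane ICP iteration to a 6x6 linear solve. Below is a self-contained sketch in plain Python on synthetic data (not ORK's implementation); the unknown x = (wx, wy, wz, tx, ty, tz) is the linearized rotation and the translation:

```python
# One linearized point-to-plane ICP step (Low's formulation, toy version).
# Minimizes sum(((R s + t - d) . n)^2) with R ~ I + skew(w); each
# correspondence (s, d, n) contributes a row [s x n, n] and a residual
# (d - s) . n to a 6x6 normal-equation system, solved by Gaussian
# elimination with partial pivoting.
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def solve_point_to_plane(src, dst, normals):
    A = [[0.0] * 6 for _ in range(6)]
    b = [0.0] * 6
    for s, d, n in zip(src, dst, normals):
        row = list(cross(s, n)) + list(n)
        r = dot((d[0] - s[0], d[1] - s[1], d[2] - s[2]), n)
        for i in range(6):
            b[i] += row[i] * r
            for j in range(6):
                A[i][j] += row[i] * row[j]
    M = [A[i] + [b[i]] for i in range(6)]
    for col in range(6):
        piv = max(range(col, 6), key=lambda r_: abs(M[r_][col]))
        M[col], M[piv] = M[piv], M[col]
        for r_ in range(col + 1, 6):
            f = M[r_][col] / M[col][col]
            for c in range(col, 7):
                M[r_][c] -= f * M[col][c]
    x = [0.0] * 6
    for i in range(5, -1, -1):
        x[i] = (M[i][6] - sum(M[i][j] * x[j] for j in range(i + 1, 6))) / M[i][i]
    return x
```

Running this once on correspondences from three orthogonal planes recovers a small rigid motion exactly, which is why the real post-processing only needs a handful of iterations.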

6.2 tabletop

Tabletop is a port of the method in http://www.ros.org/wiki/tabletop_object_detector, originally developed by Marius Muja, the author of FLANN.

This object detection method has two parts: a table finder and an object recognizer. The recognizer can only recognize rotationally symmetric objects. The sensor data used by tabletop consists of point clouds from a narrow stereo camera or a Kinect. Tabletop performs the following steps:

- Segmentation: the table is detected by finding the dominant support plane in the point cloud, based on an analysis of 3D normals; points above the table are assumed to belong to graspable objects. A clustering algorithm identifies individual objects, treating each cluster of points as a single object.

? ʶ±ð£º¶Ôÿһ¸ö´Ø£¬ÓÃÒ»¸ö¼òµ¥µÄµü´úÄâºÏ¼¼Êõ£¨ÀàËÆICP£©À´²é¿´ÆäÓëÄ£ÐÍÊý¾Ý

¿âÖеÄÿ¸ömesh¶ÔÓ¦µÄÔõôÑù¡£Èç¹ûÕÒµ½ÁËÒ»¸öºÜºÃµÄÆ¥Å䣬ģÐ͵ÄÊý¾Ý¿âidºÍ´ØÍ¬Ê±·µ»Ø£»×¢Ò⵱ǰµÄÆ¥Åä·½·¨ÊÇÔÚ2DÖвÙ×÷µÄ£¬ÒòΪ¼±ÓÚÎÒÃǵļÙÉ裨¶ÔÏó¾²Ö¹´¹Ö±ÔÚÒÑÖª×À×ӵıíÃæ£©ÆäËû4¸öά¶ÈÊǹ̶¨µÄ¡£ 1£©×À×Ó̽²âÆ÷

The table detector finds planes in the scene and segments out the objects on top of them. It is useful on its own if you need the clusters as a pre-processing step.

Start the pipeline with the following command:

rosrun object_recognition_ros server -c `rospack find object_recognition_tabletop`/conf/detection.table.ork

ÔÚROSģʽÖз¢±íÁ˺ü¸¸öÖ÷Ì⣺

? ÓÃÓÚÆ½ÃæÉϴصÄMarkerArray.msg£¨/tabletop/clusters£© ? ÓÃÓÚ²»Í¬×À×Ó£¨RViz ork²å¼þÓÃÓÚ¶ÔÆä¿ÉÊÓ»¯£©µÄTable.msg 2£©¶ÔÏó̽²âÆ÷

¹ÜµÀʶ±ð¶ÔÏó¹¦ÄܵIJ¿·ÖÈçÏ£º´ÓÉÏÒ»½×¶ÎÖжøÀ´µÄ¶Ô´Ø½øÐзָÔÚÊý¾Ý¿âÖÐÕÒµ½¿ÉÄܵĺòÑ¡Õߣ¬È»ºóÔÚ×îÖÕµÄICP²½ÖèÖУ¬ÊÔͼ½«ÆämeshÆ¥Åäµ½±»¹Û²ìµÄ´ØÖС£

Ö´ÐÐÕâЩÈÎÎñtabletopÒÀÀµÈçϼÙÉ裻

- Objects rest on a table, which is the dominant support plane in the scene;

- The minimum distance between two objects exceeds a given threshold (3 cm in the demo);

? ´ËÍ⣬¶ÔÏóʶ±ð½ö½öÖ»ÄÜ´¦ÀíÁ½¸ö×ÔÓɶȣºÑØ×ÅxºÍyÖᣨzÖá¼Ù¶¨Ïà¶ÔÓÚ×À×ÓÖ¸

Ïò¡°ÏòÉÏ¡±£©×ª»¯¡£Òò´Ë£¬ÎªÁËʶ±ðÒ»¸ö¶ÔÏó£º ? Æä±ØÐëÊÇÐýת¶Ô³ÆµÄ£»

- it must be in a known orientation, e.g. a cup or a bowl sitting upright on the table.

The output of the component includes the table pose, the recognized point clusters with their corresponding database object ids, and the fitted poses that align the database objects with the point clusters. Tabletop uses a CouchDB database that holds the details of the objects to recognize: the object id (generated automatically when the object is added to the database), the object name, the author, tags, and the object's 3D mesh. The 3D mesh is what tabletop uses in the recognition step. Setting up a CouchDB instance and managing objects are explained in the DB tutorial above.

3) Managing tabletop's input parameters

Tabletop takes a number of parameters, such as the input image topics, the database details, and the detection thresholds. All of these are defined in a configuration file passed to the tabletop command with the -c argument. The configuration file must be in YAML format. An example configuration file that makes tabletop process the images published by the openni_launch package is as follows:

source1:
  type: RosKinect
  module: 'object_recognition_ros.io'

  parameters:
    rgb_frame_id: camera_rgb_optical_frame
    rgb_image_topic: /camera/rgb/image_rect_color
    rgb_camera_info: /camera/rgb/camera_info
    depth_image_topic: /camera/depth_registered/image_raw
    depth_camera_info: /camera/depth_registered/camera_info

sink1:
  type: TablePublisher
  module: 'object_recognition_tabletop'
  inputs: [source1]

sink2:
  type: Publisher
  module: 'object_recognition_ros.io'
  inputs: [source1]

pipeline1:
  type: TabletopTableDetector
  module: 'object_recognition_tabletop'
  inputs: [source1]
  outputs: [sink1]
  parameters:
    table_detector:
      min_table_size: 4000
      plane_threshold: 0.01

pipeline2:
  type: TabletopObjectDetector
  module: 'object_recognition_tabletop'
  inputs: [source1, pipeline1]
  outputs: [sink2]
  parameters:
    object_ids: 'all'
    tabletop_object_ids: 'REDUCED_MODEL_SET'
    threshold: 0.85
    db:
      type: CouchDB
      root: http://localhost:5984
      collection: object_recognition

"source1" defines the image topics that tabletop reads for the detection step: basically, tabletop needs a depth image topic, an RGB image topic, and the corresponding camera info as input. "sink1" and "sink2" define how tabletop's output can be processed further.


In this example, the sinks publish tabletop's detection results. "pipeline1" detects the planar surfaces; "pipeline2" detects the objects on the main support plane.

You can modify these parameters, for example the input image topics, the detection thresholds, the CouchDB URI, and so on.

4) Example

ÕâÀïÊdz¡¾°Ê²Ã´Ñù×Ó

½Ó×ÅͨµÀÕÒµ½Æ½ÃæºÍÆäÉÏÃæµÄ´Ø

It then recognizes the clusters as objects from the database:


6.3 TOD

object_recognition_tod: textured object detection (TOD).

1) Training

ÔÚÅäÖÃÎļþÖÐÐèÒªÖ¸¶¨ËùʹÓõŦÄÜ/ÃèÊö·ûÒÔ¼°ËÑË÷²ÎÊý¡£DB²ÎÊýÊDZê×¼µÄObjectDbParameters ²ÎÊý¡£Ò»¸öµäÐ͵ÄÅäÖÃÎļþÏñÒÔÏÂËùʾ£º

# info about the db
pipeline1:
  type: TodTrainer
  module: object_recognition_tod
  submethod:
    descriptor:
      type: 'ORB'
  parameters:
    feature:
      type: ORB
      module: ecto_opencv.features2d
      n_features: 1000
      n_levels: 3
      scale_factor: 1.2
    descriptor:
      type: ORB
      module: ecto_opencv.features2d
    search:
      key_size: 24
      multi_probe_level: 2
      n_tables: 8
      radius: 55
      ratio: 0.8
      type: 'LSH'
    db:
      type: 'CouchDB'
      root: 'http://localhost:5984'
      collection: 'object_recognition'
    # The list of object_ids to analyze

    object_ids: \

At training time, object features and descriptors are extracted from different viewpoints. For each viewpoint, the 3D positions are also stored if depth was captured (this is the only supported method, and it is strongly recommended). You can also view the point cloud of features by launching the apps/feature_viewer application:

$ /home/vrabaud/workspace/recognition_kitchen_groovy/src/object_recognition_tod/doc/source/../../apps/feature_viewer --help
usage: feature_viewer [-h] [--db_type DB_TYPE] [--db_root DB_ROOT_URL]
                      [--db_collection DB_COLLECTION] [--commit]
                      [--niter ITERATIONS] [--shell] [--gui]
                      [--logfile LOGFILE] [--graphviz] [--dotfile DOTFILE]
                      [--stats]
                      object_id

positional arguments:
  object_id             The id of the object for which the TOD model will be
                        displayed.

optional arguments:
  -h, --help            show this help message and exit

Database Parameters:
  --db_type DB_TYPE     The type of database used: one of [CouchDB].
                        Default: CouchDB
  --db_root DB_ROOT_URL
                        The database root URL to connect to.
                        Default: http://localhost:5984
  --db_collection DB_COLLECTION
                        The database root URL to connect to.
                        Default: object_recognition
  --commit              Commit the data to the database.

Ecto runtime parameters:
  --niter ITERATIONS    Run the graph for niter iterations. 0 means run until
                        stopped by a cell or external forces. (default: 0)
  --shell               Bring up an ipython prompt, and execute
                        asynchronously. (default: False)
  --gui                 Bring up a gui to help execute the plasm.
  --logfile LOGFILE     Log to the given file, use tail -f LOGFILE to see the
                        live output. May be useful in combination with --shell
  --graphviz            Show the graphviz of the plasm. (default: False)
  --dotfile DOTFILE     Output a graph in dot format to the given file. If no
                        file is given, no output will be generated.
                        (default: )
  --stats               Show the runtime statistics of the plasm.

2) Detection

A typical configuration file looks like the following:

source1:
  type: 'OpenNI'
  module: 'object_recognition_core.io.source'
  parameters:
    image_mode: 'SXGA_RES'
    depth_mode: 'VGA_RES'

    image_fps: 'FPS_15'
    depth_fps: 'FPS_30'

#Use this instead to receive images via ROS
#source1:
#  type: ros_kinect
#  rgb_frame_id: '/camera_rgb_optical_frame'

pipeline1:
  type: 'TodDetector'
  module: 'object_recognition_tod'
  subtype:
    type: 'ORB'
  inputs: [source1]
  parameters:
    object_ids: \
    feature:
      type: ORB
      module: ecto_opencv.features2d
      n_features: 5000
      n_levels: 3
      scale_factor: 1.2
    descriptor:
      type: ORB
      module: ecto_opencv.features2d
    search:
      type: LSH
      module: ecto_opencv.features2d
      key_size: 16
      multi_probe_level: 1
      n_tables: 10
      radius: 35
      ratio: 0.8
    n_ransac_iterations: 2500
    min_inliers: 8
    sensor_error: 0.01
    db:
      type: CouchDB
      root: http://localhost:5984
      collection: object_recognition

At detection time, the features/descriptors of the current image are computed and compared to the database. A nearest-neighbor search then checks whether the neighbors of those descriptors form a similar 3D configuration. If the input data is 3D, this is a 3D-to-3D comparison; if the input data is 2D, it becomes a PnP-type problem. So basically, you can only get an object pose from RGBD input.
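The `ratio: 0.8` search parameter above governs the nearest-neighbor matching of the binary ORB descriptors. A toy pure-Python illustration of that ratio test over Hamming distances (the real pipeline uses LSH tables for speed; the function names here are made up):

```python
# Toy ratio-test matching of binary descriptors (NOT the actual LSH code).
def hamming(a, b):
    """Hamming distance between two descriptors stored as integers."""
    return bin(a ^ b).count("1")

def match(query_descs, db_descs, ratio=0.8):
    """Return (query_idx, db_idx) pairs whose best match is clearly better
    than the second best (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query_descs):
        dists = sorted((hamming(q, d), di) for di, d in enumerate(db_descs))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches
```

Only the matches surviving this test are then fed to the geometric (3D-to-3D or PnP) verification with `n_ransac_iterations` and `min_inliers`.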

6.4 transparent objects

object_recognition_transparent_objects: recognition of transparent objects.

Given an existing point-cloud model of an object, the transparent objects pipeline can detect transparent objects and estimate their poses. The pipeline is fully integrated into ORK, so the usual object recognition training and detection can be used for subsequent grasping. See the ROS Quick Guide or the object recognition documentation for details on how to use it.

1) Example

Äã¿ÉÒÔ²»Ê¹ÓöÔÏóʶ±ðÀ´ÔËÐÐÒ»¸ö¿ìËÙʵÀý£¬¿´¿´Ëã·¨ÈçºÎ¹¤×÷ÒÔ¼°ÈçºÎʹÓÃËüµÄ¡£ËùÓбØÐèµÄÊý¾ÝºÍÔ´´úÂëλÓÚsampleÎļþ¼Ð¡£Sample±àÒëµ½binÎļþÀÄã¿ÉÒÔͨ¹ýsample

32 / 36

Ŀ¼·¾¶×÷ΪÃüÁîÐвÎÊýÔËÐУ¬ÀýÈ磺

./bin/sample ../or-transparent-objects/sample/

After a moment you will see the input data,

and then the result of the algorithm, which looks similar to this:


2) References

[1] Ilya Lysenkov, Victor Eruhimov, and Gary Bradski, "Recognition and Pose Estimation of Rigid Transparent Objects with a Kinect Sensor," Robotics: Science and Systems Conference (RSS), 2013.

[2] Ilya Lysenkov and Vincent Rabaud, "Pose Estimation of Rigid Transparent Objects in Transparent Clutter," IEEE International Conference on Robotics and Automation (ICRA), 2013.


7 ORK Tools: Reconstruction

object_recognition_reconstruction: 3D object reconstruction.

ReconstructionÌṩÁËÒ»¸öʵÓóÌÐòÀ´´´½¨Ò»¸ö¶ÔÏóµÄ3DÖØ½¨¡£Æä¸ù¾ÝcaptureÖÐÀ´µÄÊý¾ÝÀ´´´½¨Ò»¸ö½üËÆµÄÌØÕ÷²»Ã÷ÏÔµÄmesh¡£Ëã·¨Êǽö½ö´Ó¶à¸öÊÓ½ÇÀ´ºÏ²¢ÁËÉî¶ÈͼÏñ£¬²¢ÉÔ΢µÄ¶ÔÆä½øÐй⻬´¦Àí¡£Ã»×öʲôÇÉÃîµÄÊÂÇé¡£

1) Installing dependencies

Ê×ÏȰ²×°meshlab£¬ÆäÓÃÓÚ½«Éî¶ÈµØÍ¼¾ÛºÏµ½mesh¡£ sudo apt-get install meshlab 2£©ÃüÁîÐÐ ´ÓÃüÁîÐÐÖУ¬ÄãÏëÖ´ÐÐÏÂÁÐÃüÁîÀ´¼ÆËãËùÓеÄmesh²¢Ìá½»¸ø±¾µØµÄÊý¾Ý¿â£º rosrun object_recognition_reconstruction mesh_object --all --visualize --commit »òÕßʹÓÃÊʵ±µÄÃüÁîÐвÎÊýÀ´µ÷ÓÅ£º $ /home/vrabaud/workspace/recognition_kitchen_groovy/src/object_recognition_reconstruction/doc/source/../../apps/mesh_object --help usage: mesh_object [-h] [-s SESSION_ID] [--all] [--visualize] [--db_type DB_TYPE] [--db_root DB_ROOT_URL] [--db_collection DB_COLLECTION] [--commit] [--niter ITERATIONS] [--shell] [--gui] [--logfile LOGFILE] [--graphviz] [--dotfile DOTFILE] [--stats] Computes a surface mesh of an object in the database optional arguments: -h, --help show this help message and exit -s SESSION_ID, --session_id SESSION_ID The session id to reconstruct. --all Compute meshes for all possible sessions. --visualize Turn on visualization Database Parameters: --db_type DB_TYPE The type of database used: one of [CouchDB]. Default: CouchDB --db_root DB_ROOT_URL The database root URL to connect to. Default: http://localhost:5984 --db_collection DB_COLLECTION The database root URL to connect to. Default: object_recognition --commit Commit the data to the database.

3) Web interface

ÖØ½¨Ò²ÌṩÁËÒ»¸öWeb½Ó¿Ú£¬¿ÉÒÔ¿ÉÊÓ»¯²»Í¬µÄmesh¡£ÔÚËù½¨µÄÎļþ¼ÐÖÐÔËÐУº make or_web_ui È»ºóÔÚhttp://localhost:5984/or_web_ui/_design/viewer/index.html¿ÉÊÓ»¯mesh¡£ 4£©¼¼ÇÉ

ÔÚmeshlabÀォ²ÊÉ«µãÔÆ´´½¨Ò»¸öÌØÕ÷ÏÔÖøµÄmesh


http://www.youtube.com/watch?v=JzmODsVQV7w (the link may no longer be available)
