CN102436261B - Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED) - Google Patents

Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED)

Info

Publication number
CN102436261B
CN102436261B CN201110399896.1A CN201110399896A CN102436261B
Authority
CN
China
Prior art keywords
robot
led
docking
camera
target interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201110399896.1A
Other languages
Chinese (zh)
Other versions
CN102436261A (en)
Inventor
魏洪兴
陈友东
李宁
郑随兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
AOBO (Jiangsu) Robot Co., Ltd.
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN201110399896.1A priority Critical patent/CN102436261B/en
Publication of CN102436261A publication Critical patent/CN102436261A/en
Application granted granted Critical
Publication of CN102436261B publication Critical patent/CN102436261B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Manipulator (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a docking positioning and navigation strategy for robots based on a single camera and light-emitting diodes (LEDs). The strategy is implemented with a single complementary metal oxide semiconductor (CMOS) camera and groups of integrated red-green-blue (RGB) LEDs mounted on each robot. The camera is fixed on the robot's active docking face and observes the imaging positions and colors of the LEDs of a target robot; a group of LEDs is mounted on every active and passive docking face of the robot. From the imaging geometry of the LEDs in the camera, the relative pose between the camera's active docking face and the LED-bearing docking face is calculated for relative positioning, and the camera distinguishes the different LED color combinations on the currently observed face to identify that face's state for navigation. The imaging state of the LEDs thus provides rich reference information, enabling relative positioning, target searching and fast navigation during robot docking and effectively improving the docking efficiency of swarm robots.

Description

Robot docking positioning and navigation method based on a monocular camera and LEDs
Technical field
The invention belongs to the field of swarm robotics and, more particularly, relates to positioning and navigation strategies for the self-assembly and docking of swarm robots.
Background art
With technological development and social progress, the functional requirements placed on robots keep increasing. In unstructured, complex environments, robots often need good environmental adaptability and flexibility; the self-assembly and self-reconfiguration capabilities of swarm robots can satisfy this requirement and are key topics in swarm-robot research. At present, the self-assembly and self-reconfiguration capabilities of the swarm robots developed by universities at home and abroad remain limited, especially with respect to the difficult problem of positioning and navigation during self-assembly. Existing schemes rely mainly on local, limited sensing and guidance such as infrared sensors, touch switches, mating mechanical profiles and magnets. The positioning and navigation information they provide is insufficient, effective and practical schemes are few, and the efficiency of swarm-robot self-assembly is therefore limited.
Summary of the invention
Existing technical schemes based on infrared sensors, touch switches, mating mechanical profiles or magnets provide insufficient docking guidance information and have low efficiency. To overcome these deficiencies of the prior art, the present invention proposes a docking positioning and navigation scheme based on a monocular camera and LEDs that provides comparatively rich positioning and navigation information and guides robots to assemble and dock quickly. The invention departs from the conventional approaches: a camera observes the colors and positions of RGB LEDs, providing rich and comprehensive information for positioning and navigation, and thus a more effective docking positioning and navigation strategy.
The docking positioning and navigation strategy is realized as follows:
Step 1: robot A rotates in place and moves randomly until it finds the red top LED of robot B;
Step 2: robot A adjusts its own attitude according to the observed image position of the red top LED, so that the red top LED is imaged at the horizontal center of the image in robot A's camera;
Step 3: compute the horizontal distances from the images of robot B's two (left and right) bottom LEDs to the image of the top LED;
Step 4: robot A adjusts its own pose according to the LED imaging-distance relation of robot B obtained in step 3;
The LED imaging-distance relation of robot B is the ratio, denoted ratio, of the horizontal separations, on the camera's imaging plane, between the image points of the bottom LEDs and the image point of the top LED; the pose adjustment is determined by the value of ratio:
If ratio > 1, robot A rotates clockwise by an angle and then translates to the left until the red top LED is again imaged at the horizontal center of the image, then returns to step 3;
If ratio < 1, robot A rotates counterclockwise by an angle and then translates to the right until the red top LED is again imaged at the horizontal center of the image, then returns to step 3;
If ratio = 1, robot A is aligned with the currently observed face of robot B; robot A then observes the color combination of the three LEDs on that face of robot B to decide the next action, navigation or docking, and proceeds to step 5;
Step 5: observe the color combination of the three LEDs on the currently observed face of robot B and derive navigation information from the observed LED color combination; if the observed face is the target interface, go to step 7, otherwise go to step 6;
Step 6: according to the navigation information from step 5, robot A navigates and moves to the next interface, then returns to step 1;
Step 7: carry out the docking operation.
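The following Python sketch (illustrative only, not part of the method as filed) shows one way the loop of steps 1 to 7 could be organized. All helper names on the robot object (find_red_top_led, center_top_led, measure_ratio, read_led_colors and the motion commands) are hypothetical placeholders for vision and drive routines the text does not specify, and the proportional choice of the fine-adjustment angle follows the later remark that the rotation should grow as ratio departs from 1.

# Hypothetical sketch of the docking loop of steps 1-7; the helper methods
# stand in for vision/drive routines not specified in the patent text.
RATIO_TOL = 0.02   # assumed tolerance for treating ratio as equal to 1
K_ANGLE = 30.0     # assumed gain: degrees of turn per unit of |ratio - 1|

def docking_loop(robot):
    while True:
        # Step 1: rotate in place / roam until a red top LED is found.
        while not robot.find_red_top_led():
            robot.rotate_in_place()
            robot.move_randomly()
        # Step 2: bring the red top LED to the horizontal center of the image.
        robot.center_top_led()
        # Steps 3-4: measure ratio = L1'/L2' and adjust pose until it is ~1.
        ratio = robot.measure_ratio()
        while abs(ratio - 1.0) > RATIO_TOL:
            angle = K_ANGLE * abs(ratio - 1.0)   # larger deviation, larger turn
            if ratio > 1.0:
                robot.rotate(-angle)             # clockwise
                robot.translate_left_until_centered()
            else:
                robot.rotate(+angle)             # counterclockwise
                robot.translate_right_until_centered()
            ratio = robot.measure_ratio()
        # Step 5: read the LED color combination of the now-aligned face.
        colors = robot.read_led_colors()         # e.g. ("red", "red", "red")
        if colors == ("red", "red", "red"):      # target interface found
            robot.dock()                         # Step 7
            return
        # Step 6: follow the navigation hint, then search again from step 1.
        robot.navigate_to_next_interface(colors)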
Robot A and robot B each have at least two side faces: one is the active docking face and the rest are passive docking faces. Each active and passive docking face carries one group of LEDs, two analog infrared transceiver sensors and a pair of docking slots; the active docking face additionally carries a pair of docking buckles and the camera. A group of LEDs comprises two bottom LEDs and one top LED. Viewed from directly in front of the active docking face, the camera sits exactly at the geometric center of the triangle formed by the top LED and the bottom LEDs. As for the LED positions, the two bottom LEDs are mounted on the active or passive docking face itself, while the top LED is mounted recessed toward the inside of the robot body, higher than the two bottom LEDs and not coplanar with the active or passive docking face.
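Purely as an illustration of the per-face hardware listed above (and not a structure defined in the patent), a small Python model of one docking face might look as follows; the field names are assumptions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DockingFace:
    # One side face of a robot, as described above.
    is_active: bool                                           # active or passive docking face
    led_colors: Tuple[str, str, str] = ("red", "red", "red")  # (top, bottom-left, bottom-right)
    ir_sensor_count: int = 2                                  # two analog IR transceiver sensors
    has_docking_slots: bool = True                            # a pair of docking slots on every face
    has_docking_buckles: bool = False                         # only the active face carries buckles
    has_camera: bool = False                                  # only the active face carries the camera

def make_robot(num_faces: int = 4) -> List[DockingFace]:
    # One active docking face plus passive faces, mirroring the description.
    faces = [DockingFace(is_active=True, has_docking_buckles=True, has_camera=True)]
    faces += [DockingFace(is_active=False) for _ in range(num_faces - 1)]
    return faces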
The hardware of the docking positioning and navigation strategy of the invention consists mainly of a camera part and an integrated RGB tri-color LED part; docking navigation and relative positioning are achieved by extracting the LED imaging information observed in the camera.
The camera part is built around a camera whose core is a CMOS image sensor chip. The camera is fixed on the robot's active docking face and outputs 240 x 320 pixel image data with VGA timing in RGB565 format.
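For readers unfamiliar with the RGB565 format mentioned here, the snippet below shows the usual way to unpack a 16-bit RGB565 word into 8-bit R, G and B components for color identification; the bit order delivered by the camera is not specified in the patent and is assumed here to place red in the high bits.

def rgb565_to_rgb888(pixel: int):
    # Unpack a 16-bit RGB565 value (red assumed in the high 5 bits) to 8-bit R, G, B.
    r5 = (pixel >> 11) & 0x1F
    g6 = (pixel >> 5) & 0x3F
    b5 = pixel & 0x1F
    # Scale the 5/6-bit channels to 8 bits; bit replication preserves the full range.
    r8 = (r5 << 3) | (r5 >> 2)
    g8 = (g6 << 2) | (g6 >> 4)
    b8 = (b5 << 3) | (b5 >> 2)
    return r8, g8, b8

# Example: a saturated red pixel in RGB565 is 0xF800.
assert rgb565_to_rgb888(0xF800) == (255, 0, 0)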
The LED part uses integrated RGB tri-color LEDs; every three LEDs form one group arranged in a specific pattern on each side face of the robot, and different color combinations express different navigation information. Thanks to this specific arrangement, the image positions in the camera of the three LEDs on a single face and the relative pose between the robots obey a well-defined functional relationship, so accurate relative positioning can be achieved.
The docking positioning strategy computes the distance relations of the LED images in the camera: the ratio of the horizontal distances from the images of the two bottom LEDs on one face to the image of the top LED, combined with the relevant geometric relations, yields the relative pose between the two robots, and the pose is adjusted continuously according to the current estimate until the robots are mutually aligned.
The docking navigation strategy is realized by observing, with the camera, the colors of the LED images on the currently observed interface. Through the LED color combination set on it, each interface guides the robot to navigate to the target interface as quickly as possible and complete the docking. With three LEDs per face there are 3³ = 27 possible color combinations, which is fully sufficient for expressing the required information.
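To make the 3³ = 27 figure concrete, the short sketch below simply enumerates the full state space of one face's three RGB LEDs; the integer codes are an arbitrary illustration, not an assignment made by the patent.

from itertools import product

COLORS = ("red", "green", "blue")

# Every ordered (top, bottom-left, bottom-right) color triple gets a code 0..26.
STATE_CODES = {combo: code for code, combo in enumerate(product(COLORS, repeat=3))}

assert len(STATE_CODES) == 27                 # 3^3 combinations per face
print(STATE_CODES[("red", "red", "red")])     # e.g. code 0 could mark the target interface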
The hardware of the positioning and navigation strategy also includes two analog infrared transceiver sensors on each side face of the robot. They are mainly used, once two interfaces are so close to each other that the LEDs have moved out of the camera's field of view, to assist relative positioning between robots and obstacle avoidance.
The advantages of the invention are:
(1) Relative positioning between docking robots is achieved mainly with one monocular camera cooperating with LEDs; the design is simple and the cost is low. The camera outputs data in RGB565 format, which contains rich color-space information and can be converted into various color spaces as needed, making color identification convenient.
(2) LED color combinations represent the state of the robot; with 3³ = 27 combinations per face, rich information is available for robot navigation.
(3) The three LEDs on each interface adopt a special arrangement, so the robot needs no redundant repeated movements: a single observation of the LED image positions in the camera determines the relative pose between the two robots and the relative direction of the target interface.
(4) The strategy is simple in design; through the special arrangement and color combinations of the camera and LEDs, positioning and navigation are achieved and the efficiency of robot docking and assembly is effectively improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of the robot structure used by the robot docking positioning and navigation strategy based on a monocular camera and LEDs according to the invention;
Fig. 2 is a schematic diagram of the camera arrangement on the active docking face;
Fig. 3 is a diagram analyzing the geometric principle of robot positioning;
Fig. 4 is a flow chart of the robot docking positioning and navigation method based on a monocular camera and LEDs;
In the figures:
1. active docking face; 2. passive docking face; 3. camera; 4. top LED; 5. bottom LED; 6. analog infrared transceiver sensor; 7. docking slot; 8. docking buckle; 9. CMOS imaging sensor; 10. camera lens.
Detailed description of the embodiments
The invention is described in further detail below with reference to the accompanying drawings.
Referring to Figs. 1 and 2, the positioning and navigation strategy of the invention realizes positioning and navigation between robots by observing LEDs with a CMOS monocular camera. Fig. 1 shows the basic positioning and navigation setup between two robots. Each robot has at least two side faces: one is the active docking face 1 and the rest are passive docking faces 2. Each active docking face 1 and passive docking face 2 carries one group of LEDs (comprising two bottom LEDs 5 and one top LED 4), two analog infrared transceiver sensors 6 and a pair of docking slots 7; the active docking face 1 additionally carries a pair of docking buckles 8 and a CMOS monocular camera 3 (hereinafter simply the camera). The CMOS monocular camera 3 on robot A's active docking face 1 observes the colors of the LED group on robot B and, combined with the information from the two analog infrared transceiver sensors 6 on robot A's own active docking face 1, enables robot A to navigate quickly, position its active docking face 1 and dock with robot B; docking multiple robots in this way forms a robot swarm. The active or passive docking face to be docked with is called the target interface; the target interface carries an LED combination signal different from that of the other active or passive docking faces 2.
In the invention, docking means that the active docking face 1 of each robot finds the target interface and connects to it, thereby connecting the two robots. The camera 3 on the active docking face 1 observes the LED colors and image positions on the target interface, and the positioning and navigation decisions are taken accordingly. The hardware requirements for realizing positioning and navigation are fairly simple: the relevant equipment on each robot comprises mainly one camera 3, two analog infrared transceiver sensors 6, one top LED 4 and two bottom LEDs 5, the LEDs being RGB tri-color LEDs. With this basic hardware configuration and the strategy algorithm of the invention, effective docking positioning and navigation functions are provided for the robot self-assembly process.
Referring to Figs. 2 and 3, the installation and relative positions of the camera 3 and the LEDs used by the positioning and navigation strategy of the invention are specially designed. The position of the camera 3 is shown in Fig. 2: viewed from directly in front of the active docking face 1, the camera 3 sits exactly at the geometric center of the triangle formed by the top LED 4 and the bottom LEDs 5. The LED positions are shown in Figs. 1 and 2: the two bottom LEDs 5 are mounted with a certain spacing on the active docking face 1 or passive docking face 2 of the robot, while the top LED 4 is mounted recessed toward the inside of the robot body, higher than the two bottom LEDs 5 and not coplanar with the active docking face 1 or passive docking face 2. This arrangement guarantees that the LED imaging relations observed by the camera 3 differ for different relative poses and have a well-defined functional correspondence with them.
Referring to Fig. 3, the positioning-principle diagram of the invention, this well-defined functional correspondence is determined by the structural parameters of the robot itself and the relevant geometric optics. In the figure, the angle α, L and f are installation and structural parameters of the robot: α is half the apex angle of the isosceles triangle formed by the top-view projections of the bottom LEDs and the top LED; L is the distance between the top-view projection of the top LED and that of a bottom LED (the two bottom LEDs are equidistant from the top LED, so the three LEDs form an isosceles triangle); f is the perpendicular distance from the lens 10 of the camera 3 to the plane of the CMOS imaging sensor 9 of the camera 3. The angle β and the distance d are the relative-pose parameters of the two robots to be docked (robot A and robot B in the invention): β is the angle between the two robots (0 degrees when the active docking face squarely faces the target interface), and d is the distance between the active docking face and the target interface, here taken as the distance between the active docking face and the top LED of the target interface. L1 and L2 are the real horizontal distances, in the direction of the camera imaging plane, from the two bottom LEDs 5 to the top LED 4; L1' and L2' are the horizontal separations, in the camera imaging plane, between the image points of the two bottom LEDs 5 and the image point of the top LED 4. Using the camera image, measuring L1' and L2' and forming their ratio ratio = L1'/L2' yields the relative-pose parameters between the two robots, namely the angle β and the distance d. From the geometric relations in Fig. 3:
L1=L·sin(α-β) (1)
L2=L·sin(α+β) (2)
ratio = L1'/L2' = L1/L2 = sin(α-β)/sin(α+β) (3)
d = L·cos(α-β) + ((L1+L1')/L1')·f (4)
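A minimal numeric sketch of equations (1) to (4) follows, assuming the structural parameters α, L and f are known from the robot design and that L1' and L2' have been measured in the image (the numbers at the end are made-up example values). Inverting equation (3) for β is not spelled out in the text but follows directly from it: ratio = sin(α-β)/sin(α+β) implies tan β = tan α · (1-ratio)/(1+ratio).

import math

def relative_pose(L1p: float, L2p: float, alpha_deg: float, L: float, f: float):
    # Compute (beta, d) from the image measurements L1', L2' using equations (1)-(4).
    # All lengths (L1p, L2p, L, f) are assumed to be expressed in the same unit, e.g. mm.
    alpha = math.radians(alpha_deg)
    ratio = L1p / L2p                                        # left part of equation (3)
    # From equation (3): tan(beta) = tan(alpha) * (1 - ratio) / (1 + ratio)
    beta = math.atan(math.tan(alpha) * (1.0 - ratio) / (1.0 + ratio))
    L1 = L * math.sin(alpha - beta)                          # equation (1)
    d = L * math.cos(alpha - beta) + (L1 + L1p) / L1p * f    # equation (4)
    return math.degrees(beta), d

# Made-up example: alpha = 60 deg, L = 40 mm, f = 3.6 mm, L1' = 1.1 mm, L2' = 1.3 mm.
beta_deg, d_mm = relative_pose(1.1, 1.3, 60.0, 40.0, 3.6)
print(f"beta = {beta_deg:.1f} deg, d = {d_mm:.1f} mm")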
As shown in Fig. 4, in the robot docking positioning and navigation strategy based on a monocular camera and LEDs of the invention, the search for the target interface and the navigation proceed as follows, illustrated here by the positioning and navigation between robot A and robot B:
Step 1: robot A rotates in place and moves randomly until it finds a red top LED 4;
Step 2: robot A adjusts its own attitude according to the observed image position of the red top LED 4, so that the red top LED 4 is imaged at the horizontal center of the image in the camera 3; camera 3 here is the camera on robot A's active docking face.
Step 3: compute the horizontal distances from the images of the two (left and right) bottom LEDs to the image of the top LED;
Step 4: robot A adjusts its own pose according to the observed LED imaging-distance relation (i.e., the horizontal-distance relation from step 3);
The LED imaging-distance relation is the ratio, denoted ratio, of the horizontal separations, on the imaging plane of the camera 3, between the image points of the bottom LEDs 5 and the image point of the top LED 4; the pose-adjustment strategy is determined by the value of ratio:
If ratio > 1, robot A rotates clockwise by an angle and then translates to the left until the red top LED 4 is again imaged at the horizontal center of the image, then returns to step 3.
If ratio < 1, robot A rotates counterclockwise by an angle and then translates to the right until the red top LED 4 is again imaged at the horizontal center of the image, then returns to step 3.
If ratio = 1, robot A is aligned with the currently observed face of robot B; robot A can then observe the color combination of the three LEDs on that face to decide the next action, navigation or docking, and proceeds to step 5.
The magnitude of robot A's clockwise or counterclockwise rotation is related to ratio: the robot chooses the fine-adjustment angle according to the current attitude deviation, and the further ratio departs from 1, the larger the corresponding rotation angle.
Step 5: observe the color combination of the three LEDs on the currently observed face and derive navigation information from the observed LED color combination; if the observed face is the target interface, go to step 7, otherwise go to step 6;
The color combination of the three LEDs has a complete state space of 3³ = 27 combinations; the combinations used in this embodiment are the following:
All three LEDs bright red: robot B is a robot to be docked with and the observed face is the target interface; robot A should dock with this target interface.
Top LED red, bottom LEDs green: robot B is a robot to be docked with, but the observed face is not the target interface; the target interface is to the right of the observed face (left and right are relative to robot A's viewpoint), and robot A should navigate and move to the right.
Top LED red, bottom LEDs blue: robot B is a robot to be docked with, but the observed face is not the target interface; the target interface is to the left of the observed face (left and right are relative to robot A's viewpoint), and robot A should navigate and move to the left.
All three LEDs bright blue: robot B is not a robot to be docked with; it is in the roaming state and has not yet found a robot to dock with.
All three LEDs bright green: robot B is not a robot to be docked with; it is in the navigation state, has found a robot to dock with, but has not yet found the target interface.
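A minimal Python sketch of the step-5 decoding, mapping the five color combinations of this embodiment to a navigation decision; the action labels are illustrative placeholders, not names used in the patent.

def decode_leds(top: str, bottom_left: str, bottom_right: str) -> str:
    # Map the (top, bottom-left, bottom-right) colors of the observed face to an action.
    combo = (top, bottom_left, bottom_right)
    if combo == ("red", "red", "red"):
        return "dock"               # target interface: proceed to step 7
    if combo == ("red", "green", "green"):
        return "move_right"         # target interface lies to the right of this face
    if combo == ("red", "blue", "blue"):
        return "move_left"          # target interface lies to the left of this face
    if combo == ("blue", "blue", "blue"):
        return "keep_searching"     # robot B is roaming, not a docking target
    if combo == ("green", "green", "green"):
        return "keep_searching"     # robot B is itself navigating, not a docking target
    return "unknown"                # any of the remaining 27 - 5 = 22 combinations

print(decode_leds("red", "green", "green"))   # -> "move_right"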
Step 6: according to the navigation information from step 5, robot A navigates and moves to the next interface, then returns to step 1;
Step 7: carry out the docking operation.
The docking operation starts once robot A is aligned with the target interface. Robot A keeps moving forward while continuously adjusting to stay squarely aligned. When robot A's active docking face and robot B's target interface get very close, the LEDs move out of the field of view of the camera 3; robot A then uses the ranging of its own analog infrared transceiver sensors 6 to keep the final stage of the approach squarely aligned, until the contact switch on the active docking face touches the target interface and triggers the docking-buckle action, which clamps the target interface in the docking slots and completes the docking.
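The final approach just described could be sketched as follows; the sensor and actuator calls (ir_left, ir_right, contact_switch_pressed, drive, close_buckles) are hypothetical names for hardware interfaces that the patent leaves unspecified.

def final_approach(robot, speed: float = 0.05, k_turn: float = 2.0):
    # Once the LEDs leave the camera's field of view, steer on the two analog
    # IR transceiver sensors of the active docking face and stop on contact.
    while not robot.contact_switch_pressed():
        left = robot.ir_left()                  # IR range reading, left side of the face
        right = robot.ir_right()                # IR range reading, right side of the face
        turn = k_turn * (left - right)          # equal ranges mean a square, head-on approach
        robot.drive(forward=speed, turn=turn)
    robot.drive(forward=0.0, turn=0.0)
    robot.close_buckles()                       # clamp the target interface in the docking slots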
In the embodiment above, robot A searching for the robot to be docked with, robot B, is taken as an example. In practice, signal combinations for navigation and positioning can be assigned arbitrarily from the 27 LED combination states. The navigation movements of robot A and robot B are carried out by their own drive units; the drive unit can be any implementation known in the art, the invention only requiring that the robot move according to the signal instructions.
The docking positioning and navigation strategy of the invention is based mainly on a camera and LEDs, and in particular on a specially designed positional arrangement of the camera and the LEDs, which enables a fast and accurate monocular-camera positioning method. It provides a fast and effective positioning and navigation strategy for swarm-robot self-assembly and thereby improves docking and assembly efficiency. The invention is applicable to positioning and navigation during the docking and assembly of single-module ground robots.
The above is only one preferred embodiment of the strategy of the invention; variations such as changes in the number and arrangement parameters of the LEDs, and other non-essential changes, still achieve the relevant design goals, and the scope of protection of the invention is not limited to this embodiment. Any variation or substitution that a person skilled in the art could readily conceive within the technical scope disclosed by the invention shall fall within the scope of protection of the invention.

Claims (5)

1. A robot docking positioning and navigation method based on a monocular camera and LEDs, characterized in that it comprises the following steps:
Step 1: robot A rotates in place and moves randomly until it finds the red top LED of robot B;
Step 2: robot A adjusts its own attitude according to the observed image position of the red top LED, so that the red top LED is imaged at the horizontal center of the image in robot A's camera;
Step 3: compute the horizontal distances from the images of robot B's two (left and right) bottom LEDs to the image of the top LED;
Step 4: robot A adjusts its own pose according to the LED imaging-distance relation of robot B obtained in step 3;
wherein the LED imaging-distance relation of robot B is the ratio, denoted ratio, of the horizontal separations, on the imaging plane of the camera, between the image points of the bottom LEDs and the image point of the top LED, and the pose adjustment is determined by the value of ratio:
if ratio > 1, robot A rotates clockwise by an angle and then translates to the left until the red top LED is again imaged at the horizontal center of the image, then returns to step 3;
if ratio < 1, robot A rotates counterclockwise by an angle and then translates to the right until the red top LED is again imaged at the horizontal center of the image, then returns to step 3;
if ratio = 1, robot A is aligned with the currently observed face of robot B; robot A then observes the color combination of the three LEDs on that face of robot B to decide the next action, navigation or docking, and proceeds to step 5;
Step 5: observe the color combination of the three LEDs on the currently observed face of robot B and derive navigation information from the observed LED color combination; if the observed face is the target interface, go to step 7, otherwise go to step 6;
Step 6: according to the navigation information from step 5, robot A navigates and moves to the next interface, then returns to step 1;
Step 7: carry out the docking operation;
wherein robot A and robot B each have at least two side faces, one of which is the active docking face while the rest are passive docking faces; each active and passive docking face carries one group of LEDs, two analog infrared transceiver sensors and a pair of docking slots, and the active docking face additionally carries a pair of docking buckles and a camera; one group of LEDs comprises two bottom LEDs and one top LED; viewed from directly in front of the active docking face, the camera sits exactly at the geometric center of the triangle formed by the top LED and the bottom LEDs; as for the LED positions, the two bottom LEDs are mounted on the active or passive docking face of the robot, while the top LED is mounted recessed toward the inside of the robot body, higher than the two bottom LEDs and not coplanar with the active or passive docking face.
2. The robot docking positioning and navigation method based on a monocular camera and LEDs according to claim 1, characterized in that the color combination of the three LEDs in step 5 has a complete state space of 3³ = 27 combinations.
3. The robot docking positioning and navigation method based on a monocular camera and LEDs according to claim 1, characterized in that the color combination of the three LEDs in step 5 specifically comprises the following cases:
all three LEDs bright red: robot B is a robot to be docked with and the observed face is the target interface; robot A should dock with this target interface;
top LED red, bottom LEDs green: robot B is a robot to be docked with, but the observed face is not the target interface; the target interface is to the right of the observed face, and robot A should navigate and move to the right;
top LED red, bottom LEDs blue: robot B is a robot to be docked with, but the observed face is not the target interface; the target interface is to the left of the observed face, and robot A should navigate and move to the left;
all three LEDs bright blue: robot B is not a robot to be docked with; it is in the roaming state and has not yet found a robot to dock with;
all three LEDs bright green: robot B is not a robot to be docked with; it is in the navigation state, has found a robot to dock with, but has not yet found the target interface.
4. The robot docking positioning and navigation method based on a monocular camera and LEDs according to claim 1, characterized in that the docking operation in step 7 starts once robot A is aligned with the target interface of robot B; robot A keeps moving forward while continuously adjusting to stay squarely aligned; when robot A's active docking face and robot B's target interface come close to each other, robot B's LEDs move out of the camera's field of view, and robot A then uses the ranging of its own analog infrared transceiver sensors to keep the final stage of the approach squarely aligned, until the contact switch on the active docking face touches the target interface and the docking is completed.
5. The robot docking positioning and navigation method based on a monocular camera and LEDs according to claim 1, characterized in that the LEDs are RGB tri-color LEDs.
CN201110399896.1A 2011-12-05 2011-12-05 Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED) Active CN102436261B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110399896.1A CN102436261B (en) 2011-12-05 2011-12-05 Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110399896.1A CN102436261B (en) 2011-12-05 2011-12-05 Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED)

Publications (2)

Publication Number Publication Date
CN102436261A CN102436261A (en) 2012-05-02
CN102436261B true CN102436261B (en) 2014-04-30

Family

ID=45984363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110399896.1A Active CN102436261B (en) 2011-12-05 2011-12-05 Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED)

Country Status (1)

Country Link
CN (1) CN102436261B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455038B (en) * 2012-06-01 2016-12-14 联想(北京)有限公司 A kind of electronic equipment and the method for adjustment direction
CN103389087B (en) * 2013-08-07 2016-06-15 上海海事大学 A kind of wheeled robot pose calculation method
CN103528583A (en) * 2013-10-24 2014-01-22 北京理工大学 Embedded robot locating device
CN106933225B (en) * 2013-11-04 2020-05-12 原相科技股份有限公司 Automatic following system
CN104898656A (en) * 2014-03-06 2015-09-09 西北农林科技大学 Farmland multiple robot following land cultivation system based on stereo visual sense visual sense and method for the same
CN105573316B (en) * 2015-12-01 2019-05-03 武汉科技大学 A kind of mobile Group Robots of autonomous formation
CN107168520B (en) * 2017-04-07 2020-12-18 北京小鸟看看科技有限公司 Monocular camera-based tracking method, VR (virtual reality) equipment and VR head-mounted equipment
WO2019041155A1 (en) * 2017-08-30 2019-03-07 Qualcomm Incorporated Robust navigation of a robotic vehicle
CN108460804A (en) * 2018-03-20 2018-08-28 重庆大学 A kind of Three Degree Of Freedom position and posture detection method of transhipment docking mechanism and transhipment docking mechanism based on machine vision
CN110842908A (en) * 2018-08-21 2020-02-28 广州弘度信息科技有限公司 Robot and auxiliary positioning method thereof
CN110333716A (en) * 2019-04-30 2019-10-15 深圳市商汤科技有限公司 A kind of motion control method, device and system
CN110065071B (en) * 2019-05-11 2021-11-09 西安电子科技大学 Group self-assembly robot configuration method based on three-element configuration description
CN110686650B (en) * 2019-10-29 2020-09-08 北京航空航天大学 Monocular vision pose measuring method based on point characteristics
CN111331606A (en) * 2020-03-27 2020-06-26 河北师范大学 Mobile splicing control method and system for mobile multiple robots
CN112327017B (en) * 2020-11-06 2022-02-11 广东电网有限责任公司电力科学研究院 Switching device and system of distribution automation equipment module

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002073170A (en) * 2000-08-25 2002-03-12 Matsushita Electric Ind Co Ltd Movable working robot
JP2005115623A (en) * 2003-10-07 2005-04-28 Fuji Heavy Ind Ltd Navigation system using image recognition
CN101121266A (en) * 2007-09-14 2008-02-13 哈尔滨工业大学 Miniature self-correcting reconfiguration device
CN101168371A (en) * 2007-11-16 2008-04-30 哈尔滨工业大学 Pedrail type self-reconstruction mini robot
CN102116625A (en) * 2009-12-31 2011-07-06 武汉大学 GIS (geographic information system)-GPS (global position system) navigation method of inspection robot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101364524B1 (en) * 2007-08-13 2014-02-19 삼성전자주식회사 Method and apparatus for searching a target location

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002073170A (en) * 2000-08-25 2002-03-12 Matsushita Electric Ind Co Ltd Movable working robot
JP2005115623A (en) * 2003-10-07 2005-04-28 Fuji Heavy Ind Ltd Navigation system using image recognition
CN101121266A (en) * 2007-09-14 2008-02-13 哈尔滨工业大学 Miniature self-correcting reconfiguration device
CN101168371A (en) * 2007-11-16 2008-04-30 哈尔滨工业大学 Pedrail type self-reconstruction mini robot
CN102116625A (en) * 2009-12-31 2011-07-06 武汉大学 GIS (geographic information system)-GPS (global position system) navigation method of inspection robot

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Research on space docking of reconfigurable planetary exploration robots; Zhang Liping et al.; China Mechanical Engineering; 2005-04-30; Vol. 16, No. 8; full text *
Research on the space docking process based on a two-dimensional position-sensitive detector; Zhang Liping et al.; Journal of Xi'an Jiaotong University; 2007-09-30; Vol. 41, No. 9; full text *

Also Published As

Publication number Publication date
CN102436261A (en) 2012-05-02

Similar Documents

Publication Publication Date Title
CN102436261B (en) Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED)
CN102922521B (en) A kind of mechanical arm system based on stereoscopic vision servo and real-time calibration method thereof
US7800645B2 (en) Image display method and image display apparatus
CN110446159A (en) A kind of system and method for interior unmanned plane accurate positioning and independent navigation
CN102735217B (en) Indoor robot vision autonomous positioning method
CN103076005B (en) Optical imaging method integrating three-dimensional mapping and broad width imaging
CN105335733A (en) Autonomous landing visual positioning method and system for unmanned aerial vehicle
CN105278454A (en) Robot hand-eye positioning algorithm based on mechanical arm visual positioning system
US20230011911A1 (en) Primary-secondary type infrastructure disease detection and repair system and method
CN103970134A (en) Multi-mobile-robot system collaborative experimental platform and visual segmentation and positioning method thereof
Holz et al. Towards multimodal omnidirectional obstacle detection for autonomous unmanned aerial vehicles
CN102081296A (en) Device and method for quickly positioning compound-eye vision imitated moving target and synchronously acquiring panoramagram
CN106370160A (en) Robot indoor positioning system and method
CN101762277B (en) Six-degree of freedom position and attitude determination method based on landmark navigation
CN109387194A (en) A kind of method for positioning mobile robot and positioning system
Liu et al. A high-accuracy pose measurement system for robotic automated assembly in large-scale space
CN109407115A (en) A kind of road surface extraction system and its extracting method based on laser radar
Matsuoka et al. Measurement of large-scale solar power plant by using images acquired by non-metric digital camera on board UAV
CN114923477A (en) Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology
CN105243653A (en) Fast mosaic technology of remote sensing image of unmanned aerial vehicle on the basis of dynamic matching
Park et al. Automated collaboration framework of UAV and UGV for 3D visualization of construction sites
Tanaka et al. Autonomous drone guidance and landing system using ar/high-accuracy hybrid markers
CN105959529B (en) It is a kind of single as method for self-locating and system based on panorama camera
Jingjing et al. Research on autonomous positioning method of UAV based on binocular vision
Zhou et al. Multi-robot real-time cooperative localization based on high-speed feature detection and two-stage filtering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20170110

Address after: 213161 room B-301, Changzhou Science & Education Center, Changzhou science and technology center, No. 18, Wu Zhong Road, Wujin District, Jiangsu, China

Patentee after: Ao Bo (Changzhou) Automation Technology Co. Ltd.

Address before: 100191 Haidian District, Xueyuan Road, No. 37,

Patentee before: Beijing Univ. of Aeronautics & Astronautics

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 213161 room B-301, Changzhou Science & Education Center, Changzhou science and technology center, No. 18, Wu Zhong Road, Wujin District, Jiangsu, China

Patentee after: AOBO (Jiangsu) Robot Co., Ltd.

Address before: 213161 room B-301, Changzhou Science & Education Center, Changzhou science and technology center, No. 18, Wu Zhong Road, Wujin District, Jiangsu, China

Patentee before: Ao Bo (Changzhou) Automation Technology Co. Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Butt joint positioning and navigation strategy for robot based on single camera and light-emitting diode (LED)

Effective date of registration: 20180614

Granted publication date: 20140430

Pledgee: Beijing Jingxi Xinrong Cci Capital Ltd

Pledgor: AOBO (Jiangsu) Robot Co., Ltd.

Registration number: 2018990000454

PC01 Cancellation of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20200421

Granted publication date: 20140430

Pledgee: Beijing Jingxi Xinrong Cci Capital Ltd

Pledgor: AOBO (JIANGSU) ROBOT Co.,Ltd.

Registration number: 2018990000454