CN107972027A - Robot localization method and device, and robot - Google Patents
Robot localization method and device, and robot
- Publication number
- CN107972027A (application CN201610942285.XA)
- Authority
- CN
- China
- Prior art keywords
- robot
- preset area
- coordinate
- preset
- control instruction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a robot localization method and device, and a robot. The method includes: obtaining a first position of a robot in one of multiple first preset areas, where each first preset area contains at least one robot; obtaining the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of the multiple first preset areas; and, according to the relative position and the first position, mapping the first position of the robot in the first preset area to a second position of the robot in the second preset area. The invention solves the prior-art technical problem of inaccurate indoor multi-robot localization.
Description
Technical field
The present invention relates to the field of robotics, and in particular to a robot localization method and device, and a robot.
Background art
With the rapid development of robot technology under the support of national policy, many places have established robot science and technology museums to show the public the current state of robot technology and its future applications. Multiple robots are placed in such a museum, and through human-robot interaction visitors can experience the convenience and pleasure brought by the intelligence and technical level of robots.
Besides the abilities a robot shows while interacting with people, such as speech recognition, face recognition, motion control and cooperation, it has another very important but less visible ability: localization and navigation. If a robot cannot be accurately located, a correct path cannot be planned for it, and robots may collide with each other or with surrounding objects.
Existing localization technologies are not suitable for indoor multi-robot localization. For example, GPS is generally applicable to outdoor positioning; indoors its signal is weak and its accuracy is poor. Positioning methods such as RFID, Bluetooth, ZigBee and Wi-Fi fingerprinting cover only a small range and also have poor accuracy. Inertial sensing is not constrained by external conditions and can guarantee high accuracy over a short time, but it drifts over time and its cumulative error keeps growing. Laser radar and ultra-wideband positioning are more accurate, but their cost is high, which makes large-scale deployment inconvenient.
In view of the above problems, no effective solution has yet been proposed.
Summary of the invention
Embodiments of the present invention provide a robot localization method and device, and a robot, so as at least to solve the prior-art technical problem of inaccurate indoor multi-robot localization.
According to one aspect of the embodiments of the present invention, a robot localization method is provided, including: obtaining a first position of a robot in one of multiple first preset areas, where each first preset area contains at least one robot; obtaining the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of the multiple first preset areas; and, according to the relative position and the first position, mapping the first position of the robot in the first preset area to a second position of the robot in the second preset area.
Further, the first position is the coordinate (x1, y1) of the robot in a two-dimensional coordinate system whose origin is a first reference point, the first reference point being any point in the first preset area; the relative position is the coordinate (x2, y2) of the first reference point in a two-dimensional coordinate system whose origin is a second reference point, the second reference point being any point in the second preset area. Mapping the first position of the robot in the first preset area to the second position of the robot in the second preset area according to the relative position and the first position includes: adding x1 to x2 to obtain x0; adding y1 to y2 to obtain y0; and taking the position indicated by the coordinate (x0, y0) as the second position of the robot in the second preset area.
Further, obtaining the first position of the robot in the first preset area includes: obtaining an image of the first preset area; identifying the robot in the image; mapping the position of the robot in the image to the position of the robot in the first preset area; and taking the position of the robot in the first preset area as the first position.
Further, mapping the position of the robot in the image to the position of the robot in the first preset area, and taking the position of the robot in the first preset area as the first position, includes: obtaining the coordinate of the robot in the image; obtaining the ratio of the size of the image to the size of the first preset area; scaling the coordinate of the robot in the image by this ratio to obtain the coordinate of the robot in the first preset area; and taking the position indicated by the coordinate of the robot in the first preset area as the first position.
Further, each robot in the first preset area has a different visual marker, and identifying the robot in the image includes: identifying, by a visual marker, the robot corresponding to that visual marker.
Further, the visual marker is the color, the shape, or a combination of the color and the shape of some component of the robot.
Further, the localization method also includes: calculating position information of the robot according to the second position; generating a control instruction according to the position information, the control instruction being used to instruct the robot to move according to movement information in the control instruction; and sending the control instruction to the robot.
Further, the position information of the robot includes distance information between the robot and the boundary lines of the first preset area, and calculating the position information of the robot according to the second position includes: calculating the distance d(i) between the second position and boundary line Li of the first preset area, where i takes the integers 1 to N in turn and N is the number of boundary lines of the first preset area; and filtering out, from the N distances d(1) to d(N), the distances whose value is less than or equal to a preset distance d0, to obtain target distances. Generating the control instruction according to the position information includes: taking the boundary line corresponding to a target distance as a target boundary line; and generating a control instruction according to the target boundary line, the control instruction being used to control the robot to move in a direction away from the target boundary line.
Further, before the movement information of the robot is calculated according to the second position, the method also includes: judging whether the robot is a first-class robot or a second-class robot. If it is judged that the robot is a first-class robot, the control instruction allows the robot to move within one first preset area; if it is judged that the robot is a second-class robot, the control instruction allows the robot to move within multiple first preset areas.
Further, judging whether the robot is a first-class robot or a second-class robot includes: identifying the visual marker of the robot; and judging whether the visual marker is a first-class marker or a second-class marker. If the visual marker is a first-class marker, the robot is determined to be a first-class robot, where a first-class robot is only allowed to move within one first preset area; if the visual marker is a second-class marker, the robot is determined to be a second-class robot, where a second-class robot is allowed to move within multiple first preset areas.
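The class check described above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the patent disclosure: the marker values and the function name are assumptions; the patent only specifies that a first-class marker confines a robot to one first preset area while a second-class marker permits movement across first preset areas.

```python
# Illustrative marker sets; the actual markers would be whatever colors or
# shapes the deployment associates with each robot class.
FIRST_CLASS_MARKERS = {"red_cap", "blue_cap"}
SECOND_CLASS_MARKERS = {"green_scarf"}

def allowed_scope(visual_marker):
    """Return the movement scope implied by a robot's visual marker."""
    if visual_marker in FIRST_CLASS_MARKERS:
        return "one first preset area"
    if visual_marker in SECOND_CLASS_MARKERS:
        return "multiple first preset areas"
    raise ValueError("unknown visual marker: %r" % visual_marker)

print(allowed_scope("red_cap"))      # -> one first preset area
print(allowed_scope("green_scarf"))  # -> multiple first preset areas
```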
According to another aspect of the embodiments of the present invention, a robot positioning device is provided, including: a first obtaining unit for obtaining a first position of a robot in one of multiple first preset areas, where each first preset area contains at least one robot; a second obtaining unit for obtaining the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of the multiple first preset areas; and a mapping unit for mapping, according to the relative position and the first position, the first position of the robot in the first preset area to a second position of the robot in the second preset area.
Further, the first position is the coordinate (x1, y1) of the robot in a two-dimensional coordinate system whose origin is a first reference point, the first reference point being any point in the first preset area; the relative position is the coordinate (x2, y2) of the first reference point in a two-dimensional coordinate system whose origin is a second reference point, the second reference point being any point in the second preset area. The mapping unit includes: a first calculation subunit for adding x1 to x2 to obtain x0; a second calculation subunit for adding y1 to y2 to obtain y0; and a first determination subunit for taking the position indicated by the coordinate (x0, y0) as the second position of the robot in the second preset area.
Further, the first obtaining unit includes: an obtaining subunit for obtaining an image of the first preset area; a first identification subunit for identifying the robot in the image; and a mapping subunit for mapping the position of the robot in the image to the position of the robot in the first preset area, and taking the position of the robot in the first preset area as the first position.
Further, the mapping subunit includes: a first obtaining module for obtaining the coordinate of the robot in the image; a second obtaining module for obtaining the ratio of the size of the image to the size of the first preset area; and a scaling module for scaling the coordinate of the robot in the image by this ratio to obtain the coordinate of the robot in the first preset area, and taking the position indicated by that coordinate as the first position.
Further, each robot in the first preset area has a different visual marker, and the first identification subunit includes: an identification module for identifying, by a visual marker, the robot corresponding to that visual marker.
Further, the visual marker is the color, the shape, or a combination of the color and the shape of some component of the robot.
Further, the positioning device also includes: a calculation unit for calculating position information of the robot according to the second position; a generation unit for generating a control instruction according to the position information, the control instruction being used to instruct the robot to move according to movement information in the control instruction; and a sending unit for sending the control instruction to the robot.
Further, the position information of the robot includes distance information between the robot and the boundary lines of the first preset area, and the calculation unit includes: a third calculation subunit for calculating the distance d(i) between the second position and boundary line Li of the first preset area, where i takes the integers 1 to N in turn and N is the number of boundary lines of the first preset area; and a filtering subunit for filtering out, from the N distances d(1) to d(N), the distances whose value is less than or equal to a preset distance d0, to obtain target distances. The generation unit includes: a second determination subunit for taking the boundary line corresponding to a target distance as a target boundary line; and a generation subunit for generating a control instruction according to the target boundary line, the control instruction being used to control the robot to move in a direction away from the target boundary line.
Further, the positioning device also includes: a judgment unit for judging, before the calculation unit calculates the movement information of the robot according to the second position, whether the robot is a first-class robot or a second-class robot. If it is judged that the robot is a first-class robot, the control instruction allows the robot to move within one first preset area; if it is judged that the robot is a second-class robot, the control instruction allows the robot to move within multiple first preset areas.
Further, the judgment unit includes: a second identification subunit for identifying the visual marker of the robot; a judgment subunit for judging whether the visual marker is a first-class marker or a second-class marker; a third determination subunit for determining, if the visual marker is a first-class marker, that the robot is a first-class robot, where a first-class robot is only allowed to move within one first preset area; and a fourth determination subunit for determining, if the visual marker is a second-class marker, that the robot is a second-class robot, where a second-class robot is allowed to move within multiple first preset areas.
According to another aspect of the embodiments of the present invention, a robot is provided, including the robot positioning device described above.
In the embodiments of the present invention, the set of multiple smaller areas constitutes one larger area, and the relative position of each smaller area with respect to the larger area is fixed and unique. The position of a robot in a smaller area and the relative position of that smaller area with respect to the larger area are obtained, and according to these two the position of the robot in the smaller area is mapped to its position in the larger area. This achieves the technical effect of accurate indoor multi-robot localization, and thereby solves the prior-art technical problem of inaccurate indoor multi-robot localization.
Brief description of the drawings
The drawings described herein are provided for a further understanding of the present invention and form a part of the present invention. The schematic embodiments of the present invention and their description are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a flow chart of a robot localization method according to an embodiment of the present invention;
Fig. 2-1 is a schematic diagram of a first preset area according to an embodiment of the present invention;
Fig. 2-2 is a schematic diagram of the relative positions of the first preset areas with respect to the second preset area according to an embodiment of the present invention;
Fig. 2-3 is a schematic diagram of the second positions of robots in the second preset area according to an embodiment of the present invention;
Fig. 3-1 is a schematic diagram of the positions of robots in a first preset area according to an embodiment of the present invention;
Fig. 3-2 is a schematic diagram of the position of a first reference point D1 in a two-dimensional coordinate system whose origin is a second reference point D2, according to an embodiment of the present invention;
Fig. 3-3 is a schematic diagram of the positions of robots in the two-dimensional coordinate system whose origin is the second reference point D2, according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of indoor multi-robot localization according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a robot positioning device according to an embodiment of the present invention.
Detailed description of the embodiments
In order to make those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second" and the like in the description, the claims and the above drawings are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present invention described herein can be implemented in orders other than those illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.
According to the embodiments of the present invention, an embodiment of a robot localization method is provided. It should be noted that the steps illustrated in the flow chart of the drawings may be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flow chart, in some cases the steps may be executed in an order different from the one shown or described here.
Fig. 1 is a flow chart of a robot localization method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S102: obtain a first position of a robot in one of multiple first preset areas, where each first preset area contains at least one robot.
Step S104: obtain the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of the multiple first preset areas.
Step S106: according to the relative position and the first position, map the first position of the robot in the first preset area to a second position of the robot in the second preset area.
The robot localization method provided by the embodiment of the present invention does not limit the locomotion mode of the robot; for example, the robot may walk on two legs or move on wheels (omnidirectional or differential mobile platforms).
The second preset area contains multiple first preset areas. The relative position of each first preset area with respect to the second preset area is fixed and unique. Each first preset area contains one or more robots. To locate a robot, two pieces of position information need to be obtained: one is the position of the robot in the first preset area (the first position), and the other is the relative position of the first preset area with respect to the second preset area. From these two pieces of position information, the position of the robot in the second preset area (the second position) can be obtained.
For example, suppose the second preset area contains nine first preset areas, namely first preset areas Z1, Z2, Z3, ..., Z9. Fig. 2-1 shows first preset area Z6; as can be seen from Fig. 2-1, there are two robots in Z6, robot A and robot B. Fig. 2-2 shows the relative positions of the nine first preset areas with respect to the second preset area, where the set of the nine first preset areas constitutes the second preset area. From the relative position of Z6 with respect to the second preset area and the position of robot A in Z6 (its first position), the position of robot A in the second preset area (its second position) can be obtained. Likewise, from the relative position of Z6 with respect to the second preset area and the position of robot B in Z6 (its first position), the position of robot B in the second preset area (its second position) can be obtained. Fig. 2-3 shows the second positions of robot A and robot B in the second preset area.
In the embodiments of the present invention, the set of multiple smaller areas constitutes one larger area, and the relative position of each smaller area with respect to the larger area is fixed and unique. The position of a robot in a smaller area and the relative position of that smaller area with respect to the larger area are obtained, and according to these two the position of the robot in the smaller area is mapped to its position in the larger area. This achieves the technical effect of accurate indoor multi-robot localization and solves the prior-art technical problem of inaccurate indoor multi-robot localization.
Position information can be represented by coordinates. Optionally, the first position is the coordinate (x1, y1) of the robot in a two-dimensional coordinate system whose origin is a first reference point, the first reference point being any point in the first preset area; the relative position is the coordinate (x2, y2) of the first reference point in a two-dimensional coordinate system whose origin is a second reference point, the second reference point being any point in the second preset area. The specific process of mapping the first position of the robot in the first preset area to the second position of the robot in the second preset area according to the relative position and the first position is as follows: add x1 to x2 to obtain x0; add y1 to y2 to obtain y0; and take the position indicated by the coordinate (x0, y0) as the second position of the robot in the second preset area.
The first reference point is any point in the first preset area; the second reference point is any point in the second preset area. In the two-dimensional coordinate system whose origin is the first reference point, the coordinate of some robot R1 is (x1, y1). In the two-dimensional coordinate system whose origin is the second reference point, the coordinate of the first reference point is (x2, y2). Let x0 = x1 + x2 and y0 = y1 + y2; the coordinate (x0, y0) is then the coordinate of robot R1 in the two-dimensional coordinate system whose origin is the second reference point, that is, the position indicated by (x0, y0) is taken as the position of robot R1 in the second preset area (the second position).
For example, in Fig. 3-1, Fig. 3-2 and Fig. 3-3, D1 denotes the first reference point and D2 denotes the second reference point. In the coordinate system shown in Fig. 3-1, the coordinate of robot A is (0.5, 0.8), i.e. x1 = 0.5, y1 = 0.8. In the coordinate system shown in Fig. 3-2, with the second reference point D2 as the origin, the coordinate of the first reference point D1 is (2, 1), i.e. x2 = 2, y2 = 1. Then x0 = x1 + x2 = 0.5 + 2 = 2.5 and y0 = y1 + y2 = 0.8 + 1 = 1.8. The coordinate (2.5, 1.8) is the coordinate of robot A in the two-dimensional coordinate system whose origin is D2; that is, the position indicated by (2.5, 1.8) is taken as the position of robot A in the second preset area (its second position). Similarly, in the coordinate system shown in Fig. 3-1 the coordinate of robot B is (0.9, 0.3), i.e. x1 = 0.9, y1 = 0.3; with x2 = 2, y2 = 1, x0 = x1 + x2 = 0.9 + 2 = 2.9 and y0 = y1 + y2 = 0.3 + 1 = 1.3. The coordinate (2.9, 1.3) is the coordinate of robot B in the two-dimensional coordinate system whose origin is D2; that is, the position indicated by (2.9, 1.3) is taken as the position of robot B in the second preset area (its second position).
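The coordinate mapping described above can be sketched as a few lines of Python. This is an illustrative sketch only, not part of the patent disclosure: the function and variable names are assumptions, and it presumes the axes of every first preset area are parallel to those of the second preset area, so the mapping is a pure translation by the area's offset.

```python
def map_to_second_area(first_position, area_offset):
    """Map a robot's coordinate in a first preset area to the second preset area."""
    x1, y1 = first_position   # (x1, y1): robot in the first preset area
    x2, y2 = area_offset      # (x2, y2): first reference point seen from the second reference point
    return (x1 + x2, y1 + y2) # (x0, y0): robot in the second preset area

# The worked example from Figs. 3-1 to 3-3: D1 is at (2, 1) seen from D2.
offset_z6 = (2, 1)
print(map_to_second_area((0.5, 0.8), offset_z6))  # robot A -> (2.5, 1.8)
print(map_to_second_area((0.9, 0.3), offset_z6))  # robot B -> (2.9, 1.3)
```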
Optionally, each robot in the first preset area has a different visual marker. An image of the first preset area is obtained; the robot corresponding to a visual marker is identified in the image by that marker; the position of the robot in the image is mapped to the position of the robot in the first preset area; and the position of the robot in the first preset area is taken as the first position.
The visual marker is the color, the shape, or a combination of the color and the shape of some component of the robot.
Multiple cameras are used, each covering one first preset area. There are multiple robots in a first preset area, and each robot has a visual marker associated with its ID.
A camera captures an image of the first preset area, and the position of a visual marker is identified in the image, from which the position of the robot associated with that marker is obtained. It should be noted that what is obtained is the position of the robot in the image, not the position of the robot in the first preset area.
To obtain the position of the robot in the first preset area (the first position), the position of the robot in the image must be mapped to the position of the robot in the first preset area, and that position taken as the first position. The specific mapping process is as follows: obtain the coordinate of the robot in the image; obtain the ratio of the size of the image to the size of the first preset area; scale the coordinate of the robot in the image by this ratio to obtain the coordinate of the robot in the first preset area; and take the position indicated by that coordinate as the first position. That is, scaling the robot's coordinate in the image according to the ratio of the image size to the size of the first preset area yields the position of the robot in the first preset area (the first position).
For example, the image of the first preset area captured by the camera is 10 cm × 10 cm, and the coordinate of robot R1 in the image is (8 cm, 9 cm). The first preset area is 2 m × 2 m. Since the side length of the first preset area is 20 times the side length of the image, multiplying the coordinate of robot R1 in the image by 20 gives the coordinate of the robot in the first preset area, (1.6 m, 1.8 m); the position indicated by the coordinate (1.6 m, 1.8 m) is the position of the robot in the first preset area (the first position).
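The image-to-area scaling step can be sketched as follows. This is an illustrative sketch only, not part of the patent disclosure: the function name is an assumption, and it works in whatever length unit the caller supplies (the example below uses centimeters so the numbers match the 10 cm image and 2 m = 200 cm area above).

```python
def image_to_area(image_coord, image_size, area_size):
    """Scale a coordinate measured in the camera image to the first preset area."""
    sx = area_size[0] / image_size[0]  # horizontal scale factor (20 in the example)
    sy = area_size[1] / image_size[1]  # vertical scale factor (20 in the example)
    return (image_coord[0] * sx, image_coord[1] * sy)

# Robot R1 at (8 cm, 9 cm) in a 10 cm x 10 cm image of a 200 cm x 200 cm area:
print(image_to_area((8, 9), (10, 10), (200, 200)))  # -> (160.0, 180.0), i.e. (1.6 m, 1.8 m)
```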
The embodiment of the present invention includes multiple local sub-module master control systems and one server, where each local sub-module master control system is responsible for one first preset area. The image collected by a camera can be processed in the local sub-module master control system; after the second coordinate of a robot in the second preset area is obtained, that coordinate is uploaded to the server through a communication module (such as a Wi-Fi module). This approach places low demands on the server and is unlikely to cause delay. Alternatively, the images collected by the cameras can be compressed, uploaded to the server and processed centrally.
For example, multiple cameras are used, each covering one first preset area, and there are multiple robots in a first preset area. Each robot has a color patch associated with its ID; this color patch can be an accessory of the robot, such as a cap or a scarf, or a color in the robot's appearance design. The camera captures images of the first preset area and tracks the specific color patches in those images in real time, thereby reflecting the positions of the different robots in the images.
Optionally, after the first position of the robot in the first preset area is mapped to the second position of the robot in the second preset area, position information of the robot is calculated according to the second position; a control instruction is generated according to the position information, the control instruction being used to instruct the robot to move according to the movement information carried in it; and the control instruction is sent to the robot.
After the server obtains the second positions of the robots in the second preset area, it can calculate the movement information of the robots in real time through comparison algorithms; the movement information of a robot includes its path, direction, number of steps, and so on.
Control instruction is generated according to movable information.Control instruction is transferred to the motion control of robot by communication module
Module, the motion-control module of robot control the operation of robot according to control instruction, for example, controlling robot to which side
To movement, the how many steps of movement etc., pass through the movable information of the position computer device people according to robot in the second predeterminable area
And robot motion is controlled, enable to robot not collide with each other, robot will not collide other articles or people.
Optionally, the position information of the robot includes distance information between the robot and the boundary lines of the first preset area. Calculating the position information of the robot from the second position includes: calculating the distance d(i) between the second position and boundary line Li of the first preset area, where i takes the integers 1 to N in turn and N is the number of boundary lines of the first preset area; and screening out, from the N distances d(1) to d(N), the distances whose values are less than or equal to a preset distance d0, to obtain target distances. Generating the control instruction from the position information includes: taking the boundary line corresponding to a target distance as a target boundary line; and generating a control instruction from the target boundary line, the control instruction being used to control the robot to move in a direction away from the target boundary line.
Some robots (for example, robots of a first type) are only allowed to move within one first preset area and are not allowed to move from that first preset area into another first preset area. It is therefore important to know the distance between the robot and the boundary lines of the first preset area in time: when the robot is close to a boundary of the first preset area, its movement should be controlled promptly to keep it away from that boundary.
The first preset area has N boundary lines, namely boundary lines L1, L2, ..., LN. The distances between the robot and these N boundary lines are calculated respectively, giving distances d(1) to d(N).
A preset distance d0 can be set in advance. If the distance between the robot and a boundary line is less than the preset distance d0, the robot is too close to that boundary, and its direction of motion needs to be adjusted to prevent it from running outside the first preset area.
The distances whose values are less than or equal to the preset distance d0 are screened out of the N distances d(1) to d(N) to obtain the target distances. The boundary line corresponding to a target distance is taken as a target boundary line. A control instruction is generated from the target boundary line and is used to control the robot to move in a direction away from the target boundary line.
By calculating the distance between the robot and each boundary line of the first preset area where it is located, and adjusting the running direction of the robot whenever it comes close to a boundary, the robot can be precisely confined to one first preset area, and collisions with robots in other first preset areas can be avoided.
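For a rectangular first preset area, the boundary-distance screening above reduces to four axis-aligned distances. The sketch below assumes such a rectangle; the names and data layout are illustrative, not the patent's:

```python
def boundary_check(pos, area, d0):
    """Distances from a position to the 4 boundary lines of a
    rectangular preset area, and the lines closer than d0.

    pos:  (x, y) second position of the robot
    area: (xmin, ymin, xmax, ymax) rectangle of the first preset area
    d0:   preset distance threshold
    Returns (distances, target_lines); target_lines names the target
    boundary lines the robot should be steered away from.
    """
    x, y = pos
    xmin, ymin, xmax, ymax = area
    d = {"left": x - xmin, "right": xmax - x,        # d(1), d(2)
         "bottom": y - ymin, "top": ymax - y}        # d(3), d(4)
    targets = [name for name, dist in d.items() if dist <= d0]
    return d, targets

# Robot 0.3 m from the left edge of a 4 m x 2 m area, d0 = 0.5 m.
dists, targets = boundary_check((0.3, 1.0), (0.0, 0.0, 4.0, 2.0), 0.5)
print(targets)  # only the left boundary is within d0
```

A non-empty `targets` list would trigger a control instruction steering the robot in the opposite direction (here, toward positive x).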
Optionally, before the movement information of the robot is calculated from the second position, it is judged whether the robot is a first-type robot or a second-type robot. If the robot is judged to be a first-type robot, the control instruction allows the robot to move within one first preset area; if the robot is judged to be a second-type robot, the control instruction allows the robot to move within multiple first preset areas.
Judging whether the robot is a first-type robot or a second-type robot includes: identifying the visual identifier of the robot; and judging whether the visual identifier is a first-type identifier or a second-type identifier. If the visual identifier is a first-type identifier, the robot is determined to be a first-type robot, where a first-type robot is only allowed to move within one first preset area; if the visual identifier is a second-type identifier, the robot is determined to be a second-type robot, where a second-type robot is allowed to move within multiple first preset areas.
Each robot carries an accessory, such as a scarf, a hat, or gloves, and the type of the robot can be recognized from the color of the accessory. For example, if the color of the accessory is a single solid color, the accessory is a first-type identifier, the robot corresponding to a first-type identifier is a first-type robot, and a first-type robot is only allowed to move within one first preset area. If the accessory is multicolored, it is a second-type identifier, the robot corresponding to a second-type identifier is a second-type robot, and a second-type robot is allowed to move within multiple first preset areas.
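The solid-color versus multicolored rule can be sketched as a one-line classifier. The input representation (a set of colors detected on the accessory) is an assumption for illustration:

```python
def classify_robot(accessory_colors):
    """Classify a robot by the visual identifier on its accessory.

    accessory_colors: set of distinct colors detected on the accessory.
    One solid color  -> first-type robot (confined to one first preset area).
    Several colors   -> second-type robot (may cross between first preset areas).
    """
    return "first-type" if len(accessory_colors) == 1 else "second-type"

print(classify_robot({"red"}))            # first-type
print(classify_robot({"red", "yellow"}))  # second-type
```

The classification result then decides which movement constraint the generated control instructions enforce.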
An embodiment of the present invention provides two different methods of controlling the range of activity of the robots. Method one: all robots (first-type robots) are only allowed to move within one first preset area. Method two: most robots (first-type robots) are only allowed to move within one first preset area, while a small number of robots (second-type robots) are allowed to move within multiple first preset areas.
Each of the two methods has its own advantages, which are now described in detail.
In method one, the ranges of movement of the robots in different first preset areas do not intersect. For example, suppose the second preset area contains 3 first preset areas in total, namely first preset area Z1, first preset area Z2, and first preset area Z3. There are 2 robots in first preset area Z1; these 2 robots are only allowed to move within first preset area Z1, and not within first preset area Z2 or first preset area Z3. There are 3 robots in first preset area Z2; these 3 robots are only allowed to move within first preset area Z2, and not within first preset area Z1 or first preset area Z3. There are 7 robots in first preset area Z3; these 7 robots are only allowed to move within first preset area Z3, and not within first preset area Z1 or first preset area Z2. The advantage of this method is that, because the range of activity of each robot is confined to the coverage of a single camera, there are fewer robots within each camera's coverage and therefore fewer visual identifiers. When the visual identifiers are recognized during image processing, this small number of identifiers greatly reduces the difficulty of image processing. Moreover, the same visual identifier can be reused across the coverage of different cameras: identical visual identifiers cannot appear within the same first preset area, but may appear in different first preset areas.
In method two, most robots are only allowed to move within one first preset area, while a small number of robots are allowed to move within multiple first preset areas. These few robots can be given a special visual identifier (for example, a special color); robots carrying the special visual identifier are allowed to move within multiple first preset areas and can cross from the coverage of one camera into the coverage of another, while the remaining robots are only allowed to move within one first preset area. The difficulty of image processing in method two is somewhat higher than in method one, but does not increase by much, while the flexibility of the robots increases greatly. Because some robots can move over a large range, when the robots are on exhibition, this method arouses greater interest among the audience and raises the audience's goodwill.
The robot localization method provided by the embodiments of the present invention has good positioning accuracy: the error can be controlled within 30 cm, far better than the positioning accuracy of methods such as Wi-Fi, Bluetooth, ZigBee, and RFID. Moreover, because multiple cameras are used, the positioning range is expanded, and the cost is far lower than that of ultra-wideband or lidar positioning.
An embodiment of the present invention provides a robot positioning system. The system includes a camera module 40, a master control system module 42, a communication module 44, a server module 46, and a motion control module 48. The camera module 40 includes multiple cameras. As shown in Fig. 4, each camera covers one range and is installed at the top of a museum; on the museum floor, each robot has a certain range of activity. Each camera uploads the captured image of its first preset area to the master control system module 42, which calculates the position of the robot in the second preset area (the second position) from the position of the robot in the first preset area and the relative position of the first preset area with respect to the second preset area. The master control system module 42 transmits the position of the robot in the second preset area (the second position) to the server module 46 through the communication module 44. The server module 46 plans the motion path of the robot in real time from the position of the robot in the second preset area (the second position), generates a control signal, and transmits the control signal through the communication module 44 to the motion control module 48 of the robot. The motion control module 48 of the robot controls, according to the control signal, in which direction and how far the robot moves, thereby preventing the robots from colliding with each other or bumping into other objects or people.
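The data flow through these modules can be sketched as two steps: the master-control mapping from the first position to the second position (the coordinate addition described earlier), followed by a server-side planner emitting a control instruction. The planner below is a deliberately trivial one-axis stand-in; all names and the instruction format are illustrative assumptions:

```python
def local_to_global(first_pos, area_origin):
    """Master control step: map the first position (coordinates within
    the camera's first preset area) to the second position by adding the
    area's relative position (x0 = x1 + x2, y0 = y1 + y2)."""
    x1, y1 = first_pos
    x2, y2 = area_origin
    return (x1 + x2, y1 + y2)

def plan_step(second_pos, goal, step=0.5):
    """Server step: emit a control instruction (direction and number of
    steps) that moves the robot along its larger offset toward a goal."""
    x, y = second_pos
    gx, gy = goal
    if abs(gx - x) >= abs(gy - y):
        return {"direction": "east" if gx > x else "west",
                "steps": round(abs(gx - x) / step)}
    return {"direction": "north" if gy > y else "south",
            "steps": round(abs(gy - y) / step)}

second = local_to_global((1.6, 1.8), (10.0, 0.0))
print(second)
print(plan_step(second, (14.6, 1.8)))
```

In the described system, the result of `plan_step` would be serialized and sent over the communication module to the robot's motion control module.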
In the embodiments of the present invention, infrared sensors or sonar can also be added to make the positioning of the robot more accurate.
According to an embodiment of the present invention, a robot positioning device is further provided. The robot positioning device can perform the robot localization method described above, and the robot localization method described above can also be implemented by the robot positioning device.
Fig. 5 is a schematic diagram of a robot positioning device according to an embodiment of the present invention. As shown in Fig. 5, the device includes a first acquisition unit 10, a second acquisition unit 20, and a mapping unit 30.
The first acquisition unit 10 is configured to acquire the first position of a robot in a first preset area, where the first preset area includes at least one robot.
The second acquisition unit 20 is configured to acquire the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of multiple first preset areas.
The mapping unit 30 is configured to map, according to the relative position and the first position, the first position of the robot in the first preset area to the second position of the robot in the second preset area.
Optionally, the first position is the coordinate (x1, y1) of the robot in a two-dimensional coordinate system whose origin is a first reference point, the first reference point being any point in the first preset area; the relative position is the coordinate (x2, y2) of the first reference point in a two-dimensional coordinate system whose origin is a second reference point, the second reference point being any point in the second preset area. The mapping unit includes: a first calculation subunit, configured to add x1 and x2 to obtain x0; a second calculation subunit, configured to add y1 and y2 to obtain y0; and a first determination subunit, configured to take the position indicated by the coordinate (x0, y0) as the second position of the robot in the second preset area.
Optionally, the first acquisition unit includes: an acquisition subunit, configured to acquire an image of the first preset area; a first identification subunit, configured to identify the robot in the image; and a mapping subunit, configured to map the position of the robot in the image to the position of the robot in the first preset area, and to take the position of the robot in the first preset area as the first position.
Optionally, the first identification subunit includes: a first acquisition module, configured to acquire the coordinate of the robot in the image; a second acquisition module, configured to acquire the ratio between the sizes of the image and the first preset area; and a scaling module, configured to scale the coordinate of the robot in the image by the ratio to obtain the coordinate of the robot in the first preset area, and to take the position indicated by the coordinate of the robot in the first preset area as the first position.
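The scaling module's ratio step amounts to rescaling a pixel coordinate by the ratio between the image size and the physical size of the first preset area. A minimal sketch, assuming an undistorted top-down camera view (the function name and argument layout are illustrative):

```python
def image_to_area(pixel_coord, image_size, area_size):
    """Scale a pixel coordinate in the camera image up to a physical
    coordinate in the first preset area, using the per-axis ratio
    between the image size and the area size."""
    px, py = pixel_coord
    iw, ih = image_size
    aw, ah = area_size
    return (px * aw / iw, py * ah / ih)

# A robot seen at pixel (320, 240) in a 640x480 image of a 4 m x 3 m area.
print(image_to_area((320, 240), (640, 480), (4.0, 3.0)))  # -> (2.0, 1.5)
```

A real deployment would also correct for lens distortion and camera tilt before applying this ratio; the patent's infrared/sonar refinement would further tighten the result.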
Optionally, each robot in the first preset area has a different visual identifier, and the first identification subunit includes: an identification module, configured to identify, by the visual identifier, the robot corresponding to that visual identifier.
Optionally, the visual identifier is the color, the shape, or the combination of color and shape of some component of the robot.
Optionally, the positioning device further includes: a calculation unit, configured to calculate the position information of the robot from the second position; a generation unit, configured to generate a control instruction from the position information, the control instruction being used to instruct the robot to move according to the movement information in the control instruction; and a sending unit, configured to send the control instruction to the robot.
Optionally, the position information of the robot includes distance information between the robot and the boundary lines of the first preset area, and the calculation unit includes: a third calculation subunit, configured to calculate the distance d(i) between the second position and boundary line Li of the first preset area, where i takes the integers 1 to N in turn and N is the number of boundary lines of the first preset area; and a screening subunit, configured to screen out, from the N distances d(1) to d(N), the distances whose values are less than or equal to the preset distance d0, to obtain target distances. The generation unit includes: a second determination subunit, configured to take the boundary line corresponding to a target distance as a target boundary line; and a generation subunit, configured to generate a control instruction from the target boundary line, the control instruction being used to control the robot to move in a direction away from the target boundary line.
Optionally, the positioning device further includes: a judgment unit, configured to judge, before the calculation unit calculates the movement information of the robot from the second position, whether the robot is a first-type robot or a second-type robot. If the robot is judged to be a first-type robot, the control instruction allows the robot to move within one first preset area; if the robot is judged to be a second-type robot, the control instruction allows the robot to move within multiple first preset areas.
Optionally, the judgment unit includes: a second identification subunit, configured to identify the visual identifier of the robot; a judgment subunit, configured to judge whether the visual identifier is a first-type identifier or a second-type identifier; a third determination subunit, configured to determine, if the visual identifier is a first-type identifier, that the robot is a first-type robot, where a first-type robot is only allowed to move within one first preset area; and a fourth determination subunit, configured to determine, if the visual identifier is a second-type identifier, that the robot is a second-type robot, where a second-type robot is allowed to move within multiple first preset areas.
According to another aspect of the embodiments of the present invention, a robot is provided, including the robot positioning device described above.
In the above embodiments of the present invention, the description of each embodiment has its own emphasis. For a part that is not described in detail in a certain embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided by the present invention, it should be understood that the disclosed technical content may be implemented in other ways. The device embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and there may be other ways of dividing in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or of other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above descriptions are merely preferred embodiments of the present invention. It should be noted that, for those of ordinary skill in the art, various improvements and modifications may be made without departing from the principle of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (21)
1. A robot localization method, characterized by comprising:
acquiring the first position of a robot in multiple first preset areas, where each first preset area includes at least one robot;
acquiring the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of the multiple first preset areas;
and mapping, according to the relative position and the first position, the first position of the robot in the first preset area to the second position of the robot in the second preset area.
2. The localization method according to claim 1, characterized in that the first position is the coordinate (x1, y1) of the robot in a two-dimensional coordinate system whose origin is a first reference point, the first reference point being any point in the first preset area; the relative position is the coordinate (x2, y2) of the first reference point in a two-dimensional coordinate system whose origin is a second reference point, the second reference point being any point in the second preset area; and mapping, according to the relative position and the first position, the first position of the robot in the first preset area to the second position of the robot in the second preset area comprises:
adding x1 and x2 to obtain x0;
adding y1 and y2 to obtain y0;
and taking the position indicated by the coordinate (x0, y0) as the second position of the robot in the second preset area.
3. The localization method according to claim 1, characterized in that acquiring the first position of the robot in the first preset area comprises:
acquiring an image of the first preset area;
identifying the robot in the image;
and mapping the position of the robot in the image to the position of the robot in the first preset area, and taking the position of the robot in the first preset area as the first position.
4. The localization method according to claim 3, characterized in that mapping the position of the robot in the image to the position of the robot in the first preset area, and taking the position of the robot in the first preset area as the first position, comprises:
acquiring the coordinate of the robot in the image;
acquiring the ratio between the sizes of the image and the first preset area;
and scaling the coordinate of the robot in the image by the ratio to obtain the coordinate of the robot in the first preset area, and taking the position indicated by the coordinate of the robot in the first preset area as the first position.
5. The localization method according to claim 3, characterized in that each robot in the first preset area has a different visual identifier, and identifying the robot in the image comprises:
identifying, by the visual identifier, the robot corresponding to the visual identifier.
6. The localization method according to claim 5, characterized in that the visual identifier is the color, the shape, or the combination of color and shape of some component of the robot.
7. The localization method according to claim 1, characterized in that the localization method further comprises:
calculating the position information of the robot from the second position;
generating a control instruction from the position information, the control instruction being used to instruct the robot to move according to the movement information in the control instruction;
and sending the control instruction to the robot.
8. The localization method according to claim 7, characterized in that the position information of the robot includes distance information between the robot and the boundary lines of the first preset area, and calculating the position information of the robot from the second position comprises:
calculating the distance d(i) between the second position and boundary line Li of the first preset area, where i takes the integers 1 to N in turn, and N is the number of boundary lines of the first preset area;
and screening out, from the N distances d(1) to d(N), the distances whose values are less than or equal to a preset distance d0, to obtain target distances;
and generating the control instruction from the position information comprises:
taking the boundary line corresponding to a target distance as a target boundary line;
and generating a control instruction from the target boundary line, the control instruction being used to control the robot to move in a direction away from the target boundary line.
9. The localization method according to claim 7, characterized in that, before the movement information of the robot is calculated from the second position, the method further comprises:
judging whether the robot is a first-type robot or a second-type robot;
if the robot is judged to be a first-type robot, the control instruction allowing the robot to move within one of the first preset areas;
and if the robot is judged to be a second-type robot, the control instruction allowing the robot to move within multiple first preset areas.
10. The localization method according to claim 9, characterized in that judging whether the robot is a first-type robot or a second-type robot comprises:
identifying the visual identifier of the robot;
judging whether the visual identifier is a first-type identifier or a second-type identifier;
if the visual identifier is a first-type identifier, determining that the robot is a first-type robot, where a first-type robot is only allowed to move within one of the first preset areas;
and if the visual identifier is a second-type identifier, determining that the robot is a second-type robot, where a second-type robot is allowed to move within multiple first preset areas.
11. A robot positioning device, characterized by comprising:
a first acquisition unit, configured to acquire the first position of a robot in multiple first preset areas, where each first preset area includes at least one robot;
a second acquisition unit, configured to acquire the relative position of the first preset area with respect to a second preset area, where the second preset area is the set of the multiple first preset areas;
and a mapping unit, configured to map, according to the relative position and the first position, the first position of the robot in the first preset area to the second position of the robot in the second preset area.
12. The positioning device according to claim 11, characterized in that the first position is the coordinate (x1, y1) of the robot in a two-dimensional coordinate system whose origin is a first reference point, the first reference point being any point in the first preset area; the relative position is the coordinate (x2, y2) of the first reference point in a two-dimensional coordinate system whose origin is a second reference point, the second reference point being any point in the second preset area; and the mapping unit comprises:
a first calculation subunit, configured to add x1 and x2 to obtain x0;
a second calculation subunit, configured to add y1 and y2 to obtain y0;
and a first determination subunit, configured to take the position indicated by the coordinate (x0, y0) as the second position of the robot in the second preset area.
13. The positioning device according to claim 11, characterized in that the first acquisition unit comprises:
an acquisition subunit, configured to acquire an image of the first preset area;
a first identification subunit, configured to identify the robot in the image;
and a mapping subunit, configured to map the position of the robot in the image to the position of the robot in the first preset area, and to take the position of the robot in the first preset area as the first position.
14. The positioning device according to claim 13, characterized in that the first identification subunit comprises:
a first acquisition module, configured to acquire the coordinate of the robot in the image;
a second acquisition module, configured to acquire the ratio between the sizes of the image and the first preset area;
and a scaling module, configured to scale the coordinate of the robot in the image by the ratio to obtain the coordinate of the robot in the first preset area, and to take the position indicated by the coordinate of the robot in the first preset area as the first position.
15. The positioning device according to claim 13, characterized in that each robot in the first preset area has a different visual identifier, and the first identification subunit comprises:
an identification module, configured to identify, by the visual identifier, the robot corresponding to the visual identifier.
16. The positioning device according to claim 15, characterized in that the visual identifier is the color, the shape, or the combination of color and shape of some component of the robot.
17. The positioning device according to claim 11, characterized in that the positioning device further comprises:
a calculation unit, configured to calculate the position information of the robot from the second position;
a generation unit, configured to generate a control instruction from the position information, the control instruction being used to instruct the robot to move according to the movement information in the control instruction;
and a sending unit, configured to send the control instruction to the robot.
18. The positioning device according to claim 17, characterized in that the position information of the robot includes distance information between the robot and the boundary lines of the first preset area, and the calculation unit comprises:
a third calculation subunit, configured to calculate the distance d(i) between the second position and boundary line Li of the first preset area, where i takes the integers 1 to N in turn, and N is the number of boundary lines of the first preset area;
and a screening subunit, configured to screen out, from the N distances d(1) to d(N), the distances whose values are less than or equal to a preset distance d0, to obtain target distances;
and the generation unit comprises:
a second determination subunit, configured to take the boundary line corresponding to a target distance as a target boundary line;
and a generation subunit, configured to generate a control instruction from the target boundary line, the control instruction being used to control the robot to move in a direction away from the target boundary line.
19. The positioning device according to claim 17, characterized in that the positioning device further comprises:
a judgment unit, configured to judge, before the calculation unit calculates the movement information of the robot from the second position, whether the robot is a first-type robot or a second-type robot;
if the robot is judged to be a first-type robot, the control instruction allowing the robot to move within one of the first preset areas;
and if the robot is judged to be a second-type robot, the control instruction allowing the robot to move within multiple first preset areas.
20. The positioning device according to claim 19, characterized in that the judgment unit comprises:
a second identification subunit, configured to identify the visual identifier of the robot;
a judgment subunit, configured to judge whether the visual identifier is a first-type identifier or a second-type identifier;
a third determination subunit, configured to determine, if the visual identifier is a first-type identifier, that the robot is a first-type robot, where a first-type robot is only allowed to move within one of the first preset areas;
and a fourth determination subunit, configured to determine, if the visual identifier is a second-type identifier, that the robot is a second-type robot, where a second-type robot is allowed to move within multiple first preset areas.
21. A robot, characterized by comprising the robot positioning device according to any one of claims 11 to 20.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610942285.XA CN107972027B (en) | 2016-10-25 | 2016-10-25 | Robot positioning method and device and robot |
PCT/CN2017/092032 WO2018076777A1 (en) | 2016-10-25 | 2017-07-06 | Robot positioning method and device, and robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610942285.XA CN107972027B (en) | 2016-10-25 | 2016-10-25 | Robot positioning method and device and robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107972027A true CN107972027A (en) | 2018-05-01 |
CN107972027B CN107972027B (en) | 2020-11-27 |
Family
ID=62005141
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610942285.XA Active CN107972027B (en) | 2016-10-25 | 2016-10-25 | Robot positioning method and device and robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107972027B (en) |
WO (1) | WO2018076777A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110281232A (en) * | 2019-05-10 | 2019-09-27 | 广州明珞汽车装备有限公司 | Method, system, device and the storage medium of the whole robot location of Fast Circle |
CN112077841A (en) * | 2020-08-10 | 2020-12-15 | 北京大学 | Multi-joint linkage method and system for manipulator precision of elevator robot arm |
CN112223281A (en) * | 2020-09-27 | 2021-01-15 | 深圳市优必选科技股份有限公司 | Robot and positioning method and device thereof |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110297493B (en) * | 2019-07-11 | 2022-04-12 | 磐典商务服务(上海)有限公司 | Tracking and positioning method and device for electronic tag based on robot |
CN118024242A (en) * | 2020-12-31 | 2024-05-14 | 北京极智嘉科技股份有限公司 | Robot and positioning method thereof |
CN112843717B (en) * | 2021-03-12 | 2024-02-13 | 网易(杭州)网络有限公司 | Resource allocation method and device, storage medium and computer equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000055657A (en) * | 1998-08-05 | 2000-02-25 | Clarion Co Ltd | Position-measuring device |
CN101509781A (en) * | 2009-03-20 | 2009-08-19 | 同济大学 | Walking robot positioning system based on monocular cam |
CN104776832A (en) * | 2015-04-16 | 2015-07-15 | 浪潮软件集团有限公司 | Method, set top box and system for positioning objects in space |
CN105300375A (en) * | 2015-09-29 | 2016-02-03 | 塔米智能科技(北京)有限公司 | Robot indoor positioning and navigation method based on single vision |
CN105554472A (en) * | 2016-01-29 | 2016-05-04 | 西安电子科技大学 | Video monitoring system covering environment and method for positioning robots by same |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1752263B1 (en) * | 2005-08-10 | 2008-07-16 | Honda Research Institute Europe GmbH | Increasing robustness of online calibration through motion detection |
KR100843085B1 (en) * | 2006-06-20 | 2008-07-02 | 삼성전자주식회사 | Method of building gridmap in mobile robot and method of cell decomposition using it |
EP2460629B1 (en) * | 2009-07-28 | 2022-06-29 | Yujin Robot Co., Ltd. | Control method for localization and navigation of mobile robot and mobile robot using same |
CN103640018B (en) * | 2013-12-13 | 2014-09-03 | 江苏久祥汽车电器集团有限公司 | SURF (speeded up robust feature) algorithm based localization method |
US9342888B2 (en) * | 2014-02-08 | 2016-05-17 | Honda Motor Co., Ltd. | System and method for mapping, localization and pose correction of a vehicle based on images |
- 2016-10-25: CN CN201610942285.XA patent/CN107972027B/en, status: Active
- 2017-07-06: WO PCT/CN2017/092032 patent/WO2018076777A1/en, status: Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2018076777A1 (en) | 2018-05-03 |
CN107972027B (en) | 2020-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107972027A (en) | Robot positioning method and device, and robot | |
Wan et al. | Teaching robots to do object assembly using multi-modal 3d vision | |
CN106647766A (en) | Robot cruise method and system based on complex environment UWB-vision interaction | |
Schmidt et al. | Depth camera based collision avoidance via active robot control | |
CN108829137A (en) | Obstacle avoidance method and device for robot target tracking | |
CN109313417A (en) | Help robot localization | |
CN107662195A (en) | Master-slave heterogeneous teleoperation control system and control method for a manipulator with telepresence | |
CN106197452A (en) | Visual image processing device and system | |
CN106162144A (en) | Visual image processing device, system and intelligent machine for night vision | |
CN106548675A (en) | Virtual military training method and device | |
CN110260866A (en) | Robot localization and obstacle avoidance method based on a vision sensor | |
EP3578321A1 (en) | Method for use with a machine for generating an augmented reality display environment | |
CN114347033A (en) | Robot article grabbing method and device, robot and storage medium | |
CN109443345A (en) | Localization method and system for monitoring navigation | |
CN110009689A (en) | Fast image data set construction method for collaborative robot pose estimation | |
CN109462739A (en) | Power plant equipment O&M method and system | |
KR20190063967A (en) | Method and apparatus for measuring position using stereo camera and 3D barcode | |
CN106127119B (en) | Joint probabilistic data association method based on color image and depth image multiple features | |
CN114299039A (en) | Robot and collision detection device and method thereof | |
Wong et al. | Visual gaze analysis of robotic pedestrians moving in urban space | |
Scheuermann et al. | Mobile augmented reality based annotation system: A cyber-physical human system | |
CN116817891A (en) | Real-time multi-mode sensing high-precision map construction method | |
Ninomiya et al. | Automatic calibration of industrial robot and 3D sensors using real-time simulator | |
CN113838203B (en) | Navigation system based on three-dimensional point cloud map and two-dimensional grid map and application method | |
CN109443346A (en) | Monitor navigation methods and systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||