CN110928312B - Robot position determination method, non-volatile computer-readable storage medium, and robot - Google Patents


Info

Publication number
CN110928312B
CN110928312B (application CN201911296211.3A)
Authority
CN
China
Prior art keywords
point cloud
lines
coordinate system
robot
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911296211.3A
Other languages
Chinese (zh)
Other versions
CN110928312A (en)
Inventor
闫瑞君
刘敦浩
林李泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Silver Star Intelligent Group Co Ltd
Original Assignee
Shenzhen Silver Star Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Silver Star Intelligent Technology Co Ltd filed Critical Shenzhen Silver Star Intelligent Technology Co Ltd
Priority to CN201911296211.3A priority Critical patent/CN110928312B/en
Publication of CN110928312A publication Critical patent/CN110928312A/en
Application granted granted Critical
Publication of CN110928312B publication Critical patent/CN110928312B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0259 Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0261 Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means using magnetic plots

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention relates to the technical field of robots, and in particular to a robot position determining method, a non-volatile computer-readable storage medium, and a robot. The method comprises the following steps: acquiring a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space, wherein the first point cloud and the second point cloud are two discontinuous frames of point clouds; transforming the second point cloud into a rotating coordinate system according to the first yaw angle and the second yaw angle to obtain a transformed point cloud, wherein the rotating coordinate system is parallel to the first coordinate system of the first point cloud; determining at least two pairs of matching lines; and determining the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position. Even if the robot is suddenly moved a large distance to an unknown position, the method can determine the position information of the unknown position without reconstructing the map, and this position information can be used as the initial value of the relocation operation, which effectively reduces the search space, reduces the amount of calculation, and greatly improves the relocation efficiency.

Description

Robot position determination method, non-volatile computer-readable storage medium, and robot
Technical Field
The invention relates to the technical field of robots, in particular to a robot position determining method, a nonvolatile computer readable storage medium and a robot.
Background
With the development of machine vision technology, robots can build a map of a physical space by themselves and navigate according to the built map to complete specific business functions.
During navigation, however, the robot is sometimes suddenly moved a large distance to an unfamiliar position, so that effective navigation can no longer be achieved.
To restore effective navigation, the robot conventionally needs to be relocated. However, the relocation search interval is usually the map of the entire physical space, so the robot must process a large amount of data; relocation therefore takes a long time, and the specific business function cannot be completed efficiently.
Disclosure of Invention
It is an object of embodiments of the present invention to provide a robot position determining method, a non-volatile computer-readable storage medium, and a robot that allow the robot to be relocated efficiently.
In a first aspect, an embodiment of the present invention provides a robot position determining method, including:
acquiring a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space, wherein the first point cloud and the second point cloud are two discontinuous frames of point clouds;
according to the first yaw angle and the second yaw angle, transforming the second point cloud to a rotating coordinate system to obtain a transformed point cloud, wherein the rotating coordinate system is parallel to a first coordinate system of the first point cloud;
determining at least two pairs of matching lines, wherein one line of each pair of matching lines is obtained from the first point cloud, and the other line of each pair of matching lines is obtained from the transformed point cloud;
and determining the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position.
In a second aspect, an embodiment of the present invention provides a robot position determining apparatus, including:
a data acquisition module, which is used for acquiring a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space, wherein the first point cloud and the second point cloud are two discontinuous frames of point clouds;
the coordinate conversion module is used for converting the second point cloud into a rotating coordinate system according to the first yaw angle and the second yaw angle to obtain a converted point cloud, and the rotating coordinate system is parallel to the first coordinate system of the first point cloud;
a line determining module for determining at least two pairs of matching lines, one line of each pair of matching lines being obtained from the first point cloud, and the other line being obtained from the transformed point cloud;
and the position determining module is used for determining the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position.
In a third aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, where the non-transitory computer-readable storage medium stores computer-executable instructions for causing a robot to perform any one of the robot position determining methods.
In a fourth aspect, an embodiment of the present invention provides a robot, including:
the image acquisition equipment is used for acquiring point cloud data of a preset space;
the angle detection equipment is used for acquiring a yaw angle of the robot;
the at least one processor is electrically connected with the image acquisition equipment and the angle detection equipment respectively; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the robot position determination methods.
In a fifth aspect, embodiments of the present invention provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a robot, cause the robot to perform any one of the robot position determination methods.
Compared with the prior art, in the robot position determining method provided by the embodiments of the invention, first, a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space are obtained, the first point cloud and the second point cloud being two discontinuous frames of point clouds. Secondly, the second point cloud is transformed into a rotating coordinate system according to the first yaw angle and the second yaw angle to obtain a transformed point cloud, the rotating coordinate system being parallel to the first coordinate system of the first point cloud. Then, at least two pairs of matching lines are determined, one line of each pair being obtained from the first point cloud and the other from the transformed point cloud. Finally, the position information of the unknown position is determined according to the at least two pairs of matching lines and the position information of the known position. Therefore, even if the robot is suddenly moved a large distance to the unknown position, the method can determine the position information of the unknown position without reconstructing the map, and this position information can be used as the initial value of the relocation operation, which effectively reduces the search space, further reduces the amount of calculation, and greatly improves the relocation efficiency.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements and which are not drawn to scale unless otherwise specified.
Fig. 1 is a schematic circuit structure diagram of a robot according to an embodiment of the present invention;
fig. 2a is a schematic flow chart of a robot position determining method according to an embodiment of the present invention;
fig. 2b and fig. 2c are schematic diagrams illustrating the transformation of the coordinate systems of the robot and the lidar according to the embodiment of the present invention;
FIG. 2d is a schematic diagram of two pairs of matching lines relative to the robot according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of S50 in FIG. 2;
FIG. 4 is a schematic flow chart of S51 in FIG. 3;
FIG. 5a is a schematic flow chart of S512 in FIG. 4;
FIG. 5b is a schematic diagram illustrating an intersection of a first candidate line and a second candidate line in the first point cloud according to an embodiment of the present invention;
FIG. 6 is a schematic flow chart of S52 in FIG. 3;
fig. 7a is a schematic structural diagram of a robot position determining apparatus according to an embodiment of the present invention;
FIG. 7b is a schematic diagram of the line determination module of FIG. 7 a;
fig. 8 is a schematic circuit structure diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if not conflicted, the various features of the embodiments of the invention may be combined with each other within the scope of protection of the invention. Additionally, while functional block divisions are performed in apparatus schematics, with logical sequences shown in flowcharts, in some cases, steps shown or described may be performed in sequences other than block divisions in apparatus or flowcharts. The terms "first", "second", "third", and the like used in the present invention do not limit data and execution order, but distinguish the same items or similar items having substantially the same function and action.
The robot of the embodiment of the present invention may be configured in any suitable shape to realize specific business function operation, for example, the robot of the embodiment of the present invention may be a sweeping robot, a cleaning robot, a pet robot, a carrying robot, a nursing robot, and the like.
Referring to fig. 1, a robot 100 includes a control unit 11, a wireless communication unit 12, a driving unit 13, an audio unit 14, and a sensor unit 15.
The control unit 11 is the control core of the robot 100 and coordinates the operation of the respective units. The control unit 11 may be a general-purpose processor (e.g., a central processing unit, CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA, CPLD, etc.), a single-chip microcomputer, an ARM (Acorn RISC Machine) processor or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components. Also, the control unit 11 may be any conventional processor, controller, microcontroller, or state machine. The control unit 11 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The wireless communication unit 12 is used for wireless communication with the user terminal, and the wireless communication unit 12 is electrically connected with the control unit 11. The user transmits a control instruction to the robot 100 through the user terminal, the wireless communication unit 12 receives the control instruction and transmits the control instruction to the control unit 11, and the control unit 11 controls the robot 100 according to the control instruction.
The wireless communication unit 12 includes one or more of a combination of a broadcast receiving module, a mobile communication module, a wireless internet module, a short-range communication module, and a location information module. Wherein the broadcast receiving module receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel. The broadcast receiving module may receive a digital broadcast signal using a digital broadcasting system such as terrestrial digital multimedia broadcasting (DMB-T), satellite digital multimedia broadcasting (DMB-S), media forward link only (MediaFLO), digital video broadcasting-handheld (DVB-H), or terrestrial integrated services digital broadcasting (ISDB-T).
The mobile communication module transmits or may receive a wireless signal to or from at least one of a base station, an external terminal, and a server on a mobile communication network. Here, the wireless signal may include a voice call signal, a video call signal, or various forms of data according to the reception and transmission of the character/multimedia message.
The wireless internet module refers to a module for wireless internet connection and may be built into or external to the terminal. Wireless internet technologies such as wireless LAN (WLAN, Wi-Fi), wireless broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), or High Speed Downlink Packet Access (HSDPA) may be used.
The short-range communication module refers to a module for performing short-range communication. Short range communication technologies such as Bluetooth (Bluetooth), Radio Frequency Identification (RFID), infrared data association (IrDA), Ultra Wideband (UWB), or ZigBee may be used.
The location information module is a module for obtaining a location of the mobile terminal, such as a Global Positioning System (GPS) module.
The driving unit 13 is configured to drive the robot 100 to walk. The control unit 11 constructs a map of a preset space from the sensor data collected by the sensor unit 15 in combination with a map construction algorithm, and then controls the driving unit 13 to drive the robot 100 according to navigation instructions to perform specific business functions, such as cleaning the ground, sweeping the floor, or carrying goods. The map construction algorithm may be SLAM (Simultaneous Localization and Mapping) or the like.
In some embodiments, the driving unit 13 may be composed of a roller and a driving module, the driving module includes a motor and a driving shaft, the motor is connected to the driving shaft, the driving shaft is connected to the roller, and the control unit 11 may control the operation of the motor, so that the roller is driven to rotate by the driving shaft.
The audio unit 14 is used for collecting sounds around the robot 100, or pushing sounds, and the audio unit 14 is electrically connected with the control unit 11.
In some embodiments, the audio unit 14 may be an electroacoustic transducer such as a speaker, a loudspeaker, a microphone, etc., wherein the number of speakers or loudspeakers may be one or more, the number of microphones may be multiple, and multiple microphones may form a microphone array so as to effectively collect sound. The microphone may be of an electric type (moving coil type, ribbon type), a capacitive type (direct current polarization type), a piezoelectric type (crystal type, ceramic type), an electromagnetic type, a carbon particle type, a semiconductor type, or the like, or any combination thereof. In some embodiments, the microphone may be a microelectromechanical systems (MEMS) microphone.
The sensor unit 15 is used to detect a preset space where the robot 100 is located, and obtain sensor data. The preset space is an environment where the robot is located, in some embodiments, the preset space may be a physical space composed of a plurality of walls or wall boards, and in the preset space, a user may place a table, a television, a trash can, and other household articles. When the robot 100 uses the sensor unit 15 to detect the preset space, on one hand, the sensor unit 15 may collect an image in the preset space, wherein the image may be obtained by the robot 100 at different positions or different angles, and the image may be obtained by the camera module shooting the preset space, and may also be obtained by the laser radar scanning the preset space. On the other hand, the sensor unit 15 may also acquire some motion parameters of the robot 100, such as acceleration, yaw angle, and the like.
In some embodiments, the sensor unit 15 includes an angle detection device and an image acquisition device.
The angle detection device may be a motion sensor for acquiring motion parameters of the robot 100; the motion sensor may include, for example, an inertial measurement unit (IMU), a gyroscope, a magnetometer, an accelerometer, a speedometer, or the like.
The image acquisition equipment comprises a camera module and a laser radar, wherein the camera module is used for shooting a preset space to obtain a picture image of the preset space. For the same object in the preset space, when the robot shoots the preset space at different angles, the object in one frame of image is rotated by the preset angle and is approximately overlapped with the same object in another frame of image.
The camera module is fixedly mounted on the robot; the robot is configured with a robot coordinate system and also corresponds to a world coordinate system. The camera module is configured with a camera coordinate system, an image coordinate system, and a pixel coordinate system; in some embodiments, the robot coordinate system may or may not coincide with the camera coordinate system. When the robot coordinate system and the camera coordinate system do not coincide, physical points are converted between the robot coordinate system and the camera coordinate system through a transformation matrix between the two.
The camera module comprises one or more optical sensors and a lens, wherein the one or more optical sensors are arranged on an imaging surface of the lens, and when an object is shot, a generated optical image is projected onto the optical sensors through the lens. The optical sensor includes a Charge-coupled Device (CCD), a Complementary Metal Oxide Semiconductor (CMOS), and the CMOS sensor may be a backside illuminated CMOS sensor or a stacked CMOS sensor.
In some embodiments, the camera module further integrates an ISP (Image Signal Processor) for processing output data of the optical sensor, such as processing for AEC (automatic exposure control), AGC (automatic gain control), AWB (automatic white balance), color correction, and the like.
The laser radar is used for scanning the preset space to obtain a point cloud image of the preset space. Similarly, for the same object in the preset space, when the robot shoots the preset space at different angles, the object in one frame of image is rotated by the preset angle and then is approximately overlapped with the same object in another frame of image.
The lidar is fixedly mounted on the robot and configured with a lidar coordinate system, which in some embodiments may or may not coincide with the robot coordinate system. When the robot coordinate system and the lidar coordinate system do not coincide, physical points are converted between the robot coordinate system and the lidar coordinate system through a transformation matrix between the two.
Laser radars include any type of laser source capable of projecting a laser spot, including solid state lasers, gas lasers, liquid lasers, semiconductor lasers, free electron lasers, and the like.
In some embodiments, the predetermined space is configured with a world coordinate system, the world coordinate system employing a right-hand rule. The origin of the robot coordinate system is at the center of the robot, the x-axis direction is right in front of the robot, the y-axis direction is on the left side of the robot, the z-axis is upward, and a right-hand rule is adopted. The coordinate system of the lidar employs a left-hand rule.
In this embodiment, the 2D Pose of the robot is represented as Pose (x, y, th), where x is the x-axis coordinate of the robot in the world coordinate system, y is the y-axis coordinate of the robot in the world coordinate system, and th is the orientation of the robot in the world coordinate system.
In the present embodiment, the coordinates of the center of the lidar in the robot coordinate system are lidar (x, y), where the mounting position of the lidar is known.
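For illustration only (not part of the original disclosure), the following Python sketch shows one way the 2D pose Pose(x, y, th) and the lidar mounting coordinates lidar(x, y) described above could be represented; the names Pose2D and lidar_offset and the numeric values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Pose2D:
    x: float   # x coordinate in the world coordinate system
    y: float   # y coordinate in the world coordinate system
    th: float  # orientation of the robot in the world coordinate system (radians)

# Mounting position of the lidar center in the robot coordinate system, known by design.
# The numbers are placeholders, not values from the patent.
lidar_offset = (0.10, 0.00)  # lidar.x, lidar.y in meters
```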
Generally, after the robot is powered on and a map of the preset space has been built, the robot can determine its position in the preset space as long as it walks along a normal walking path. However, when the robot is suddenly moved a large distance to an unknown position, the pose change of the robot between two frames is large, for example when the sweeping robot is carried to another position or instantaneously rotated by a large angle, so that even frame-to-frame pose matching algorithms such as ICP, PL-ICP, and NICP fail.
Moreover, due to mounting constraints, the robot's lidar loses laser data points at some angles, so the environment information scanned by the robot at different angles cannot be completely matched because part of the environment data is missing, and other laser inter-frame matching algorithms that depend on more complete environment data also fail. As mentioned above, the conventional approach then requires relocating the robot, which involves a large amount of calculation and is inefficient.
Based on the above, the embodiment of the invention provides a robot position determining method. Referring to fig. 2a, the robot position determining method S20 includes:
s30, acquiring a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space, wherein the first point cloud and the second point cloud are two discontinuous frames of point clouds;
in this embodiment, the lidar is fixedly mounted to the robot. During detection, the laser radar detects a preset space, so that each frame of point cloud is obtained. The first point cloud is obtained by detecting a preset space by the robot at a known position, and the first yaw angle is the yaw angle of the robot at the known position. The second point cloud is obtained by detecting a preset space at an unknown position by the robot, and the second yaw angle is the yaw angle of the robot at the unknown position.
In this embodiment, the first point cloud and the second point cloud are two discontinuous frames of point clouds in the frame sequence. That is, when the sensor unit obtains the first point cloud at the known position and the second point cloud at the unknown position, and the distance between the known position and the unknown position is greater than a preset distance threshold (e.g., 10 cm), the first point cloud and the second point cloud are two discontinuous frames in the frame sequence; for example, the robot is picked up and carried 20 cm from the known position to the unknown position. In this case, several frames of invalid point clouds lie between the first point cloud obtained at the known position and the second point cloud obtained at the unknown position, that is, the first point cloud and the second point cloud are discontinuous.
In the present embodiment, the position information of the known position is known. Generally, assuming that the robot is kidnapped (picked up and carried away) at time t1, the position before time t1 can be obtained from the map constructed in advance after the robot was powered on.
After the robot is carried away from the known position, either rotated by a large angle or moved over a large distance, it cannot obtain the position information of its current position from the pre-constructed map, that is, it cannot obtain the position information of the unknown position; it can, however, still acquire the second point cloud at the unknown position.
In this embodiment, when the robot obtains the first point cloud and the second point cloud, the robot can respectively convert the first point cloud and the second point cloud into a two-dimensional point cloud for performing a line matching process. The first point cloud corresponds to a first coordinate system, the second point cloud corresponds to a second coordinate system, and the first coordinate system and the second coordinate system are obtained by converting the coordinate system of a laser radar installed on the robot at different angles.
Converting the first point cloud into a two-dimensional point cloud under a first coordinate system as follows:
[Equation image in the original: conversion of the first point cloud into a two-dimensional point cloud in the first coordinate system]
converting the second point cloud into a two-dimensional point cloud under a second coordinate system as follows:
tx=r*cos(i*π/180)
ty=r*sin(i*π/180)
where r is the range measured at the current laser angle and i is the index of the i-th laser beam.
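For illustration only (not from the patent), a Python sketch of the conversion just described, which turns the range r of the i-th laser beam into a two-dimensional point using tx = r*cos(i*π/180), ty = r*sin(i*π/180); it assumes one beam per degree and that invalid returns are skipped, and the function name is an assumption.

```python
import math

def scan_to_points(ranges):
    """Convert one frame of laser ranges into two-dimensional points.

    Beam i is assumed to be emitted at i degrees, matching
    tx = r*cos(i*pi/180), ty = r*sin(i*pi/180) in the description.
    The laser index i is kept with each point for later index matching.
    """
    points = []
    for i, r in enumerate(ranges):
        if r <= 0:  # no valid return at this angle (e.g. lost data points)
            continue
        angle = math.radians(i)
        points.append((r * math.cos(angle), r * math.sin(angle), i))
    return points
```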
And S40, transforming the second point cloud to a rotating coordinate system according to the first yaw angle and the second yaw angle to obtain a transformed point cloud, wherein the rotating coordinate system is parallel to the first coordinate system of the first point cloud.
As previously mentioned, the lidar is configured with a coordinate system that does not rotate relative to the robot while the robot is travelling; however, as the robot moves forward, backward, or turns left or right, the lidar coordinate system moves with it, so the coordinate system differs at different positions and angles. For the subsequent line matching, the second point cloud at the unknown position therefore needs to be transformed into a rotating coordinate system, where the rotating coordinate system is parallel to the first coordinate system of the first point cloud.
For example, please refer to fig. 2b and fig. 2c, wherein in fig. 2c, x0-y0 is a world coordinate system, x1-y1 is a robot coordinate system of the first point cloud, x2-y2 is a first coordinate system of the laser radar in the first point cloud, x3-y3 is a second coordinate system of the laser radar in the second point cloud, x4-y4 is a rotation coordinate system of the transformed point cloud, x5-y5 is a robot coordinate system of the second point cloud, and x6-y6 is a rotated robot coordinate system. The process of obtaining the transformed point cloud is as follows:
the first step is as follows: the robot calculates a yaw angle difference between the second yaw angle and the first yaw angle as follows:
dth=imu_target.yaw-imu_ref.yaw
wherein imu _ target.yaw is the second yaw angle, imu _ ref.yaw is the first yaw angle, and dth is the difference.
The second step: the robot calculates the radar coordinates of the lidar in the rotating coordinate system before rotation, according to the two-dimensional coordinate rotation formula combined with the yaw angle difference, as follows:
cx=cos(dth)*lidar.x-sin(dth)*lidar.y+lidar.x
cy=sin(dth)*lidar.x+cos(dth)*lidar.y+lidar.y
the third step: and calculating coordinates of the second point cloud from rotation to a rotation coordinate system according to a two-dimensional coordinate rotation formula by combining the yaw angle difference and the radar coordinate to obtain a transformation point cloud (x2, y2, id2) as follows:
[Equation image in the original: rotation of the second point cloud into the rotating coordinate system]
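For illustration only, a Python sketch of the three steps above. The formulas for dth, cx and cy follow the text; the rotation of the individual points, whose formula appears only as an equation image in the original, is written here as a standard two-dimensional rotation about the computed lidar center and should be read as an assumption rather than the patented formula.

```python
import math

def transform_to_rotating_frame(points2, imu_ref_yaw, imu_target_yaw, lidar_x, lidar_y):
    """Transform the second point cloud into a frame parallel to the first coordinate system.

    points2: list of (tx, ty, laser_index) from the second point cloud.
    Returns the transformed point cloud (x2, y2, id2).
    """
    # First step: yaw angle difference between the unknown and the known position.
    dth = imu_target_yaw - imu_ref_yaw

    # Second step: radar coordinates of the lidar in the rotating coordinate system
    # before rotation (formulas for cx, cy taken from the description).
    cx = math.cos(dth) * lidar_x - math.sin(dth) * lidar_y + lidar_x
    cy = math.sin(dth) * lidar_x + math.cos(dth) * lidar_y + lidar_y

    # Third step (assumed form): rotate each point by -dth and shift by the radar
    # coordinates so the resulting frame is parallel to the first coordinate system.
    transformed = []
    for tx, ty, idx in points2:
        x2 = math.cos(dth) * tx + math.sin(dth) * ty + cx
        y2 = -math.sin(dth) * tx + math.cos(dth) * ty + cy
        transformed.append((x2, y2, idx))
    return transformed
```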
and S50, determining at least two pairs of matching lines, wherein one line of each pair of matching lines is obtained by the first point cloud, and the other line of each pair of matching lines is obtained by the point cloud conversion.
In the present embodiment, a line may be of any shape, such as a straight line or an arc.
Generally, when the robot carrying the lidar is moved, the lines representing the contour of a given object in the second point cloud, and hence in the transformed point cloud, are correspondingly rotated; the lines representing that object's contour in the first point cloud and in the transformed point cloud then satisfy a laser index matching condition, so the position information of the unknown position can be derived in reverse by using the coordinate system rotation transformation principle. In this process, a matching line is a line in the first point cloud that satisfies the laser index matching condition with a line in the transformed point cloud and, likewise, a line in the transformed point cloud that satisfies the laser index matching condition with a line in the first point cloud. For the same object, when the first point cloud and the transformed point cloud contain mutually matching lines, the first line in the first point cloud and the matching second line in the transformed point cloud form a pair of matching lines.
It will be appreciated that there are generally a variety of objects in the pre-defined space, which may have a plurality of pairs of matching lines. It will also be appreciated that there may be multiple pairs of matching lines for the same object, for example, a rectangular parallelepiped wall, with four lines per face, and thus there may be four pairs of matching lines in both the first point cloud and the transformed point cloud.
In this embodiment, the robot may process the first point cloud and the second point cloud by using a preset algorithm, extract a plurality of lines from the first point cloud and the second point cloud, and then match any line of the first point cloud with any line of the second point cloud to obtain at least two pairs of matching lines.
And S60, determining the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position.
For example, the robot calculates a distance increment after the robot is moved at a known position according to at least two pairs of matching lines, and determines the position information of the unknown position according to the distance increment and the position information of the known position. Since the position information of the known position is known, after the distance increment Δ L is obtained, the position information of the known position and the distance increment Δ L can be superimposed to obtain the position information of the unknown position.
Generally, the known position can be regarded as a coordinate (x, y, θ) in the world coordinate system. Referring to fig. 2d, when the robot has moved in the X-axis direction and calculates the distance increment from the at least two pairs of matching lines, it first obtains a first angle ref α of one line corrLine[0][0] of the first pair of matching lines corrLine[0] relative to the X axis of the first coordinate system, a second angle ref β of one line corrLine[1][0] of the second pair of matching lines corrLine[1] relative to the X axis of the first coordinate system, a third angle tar α of the other line corrLine[0][1] of the first pair corrLine[0] relative to the X axis of the rotating coordinate system, and a fourth angle tar β of the other line corrLine[1][1] of the second pair corrLine[1] relative to the X axis of the rotating coordinate system.
corrLine [ i ] represents the ith pair of matching lines, corrLine [ i ] [0] represents the line in the first point cloud of the ith pair of matching lines, corrLine [ i ] [1] represents the line in the transformed point cloud of the ith pair of matching lines.
Secondly, the robot calculates a first distance d1 from the origin of the first coordinate system to the line corrLine[0][0] of the first pair corrLine[0] and a second distance d2 from the origin of the first coordinate system to the line corrLine[1][0] of the second pair corrLine[1], and calculates a third distance d3 from the origin of the rotating coordinate system to the other line corrLine[0][1] of the first pair corrLine[0] and a fourth distance d4 from the origin of the rotating coordinate system to the other line corrLine[1][1] of the second pair corrLine[1].
Again, the robot calculates a first projected distance refx on the X-axis of the first coordinate system according to the first distance d1, the second distance d2, the first angle ref α, and the second angle ref β, as follows:
refx=d1*sin refα+d2*sin refβ
again, the robot calculates a second projected distance tarx on the X-axis of the rotating coordinate system according to the third distance d3, the fourth distance d4, the third angle tar α, and the fourth angle tar β, as follows:
tarx=d3*sin tarα+d4*sin tarβ
finally, the robot subtracts the first projection distance refx from the second projection distance tarx to obtain a first distance increment dx, as follows:
dx=tarx-refx
similarly, when the robot moves in the Y-axis direction, first, the robot calculates a third projected distance refy on the Y-axis of the first coordinate system according to the first distance d1, the second distance d2, the first angle ref α, and the second angle ref β, as follows:
refy=d1*cos refα-d2*cos refβ
next, the robot calculates a fourth projected distance tary on the Y axis of the rotating coordinate system based on the third distance d3, the fourth distance d4, the third angle tar α, and the fourth angle tar β, as follows:
tary=d3*cos tarα-d4*cos tarβ
Finally, the robot subtracts the third projection distance refy from the fourth projection distance tary to obtain a second distance increment dy, as follows:
dy=tary-refy
in a similar manner, the robot can also calculate the angular increment of the robot as follows:
dth=corrLine[0][1].a-corrLine[0][0].a
After the distance increments along the X and Y axes and the angle increment have been calculated, the robot determines the position information of the unknown position according to the distance increments and the position information of the known position. First, the robot calculates the yaw angle difference between the second yaw angle and the first yaw angle.
Secondly, the robot determines the position information of the unknown position according to a two-dimensional coordinate rotation formula by combining the first distance increment, the second distance increment, the yaw angle difference value and the position information of the known position, as follows:
[Equation image in the original: the position information of the unknown position computed from the known position, the distance increments and the yaw angle difference according to the two-dimensional coordinate rotation formula]
At this point, the robot has calculated the position information of the unknown position.
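For illustration only, a Python sketch that strings the projection and increment formulas above together. The final composition of the known pose with the increments, whose formula is only an equation image in the original, is written as a standard two-dimensional rotation of (dx, dy) by the known heading and is an assumption.

```python
import math

def unknown_position(known_pose, ref_a, ref_b, tar_a, tar_b, d1, d2, d3, d4, dth_line):
    """Estimate the pose of the unknown position from two pairs of matching lines.

    known_pose: (x, y, th) of the known position in the world coordinate system.
    ref_a, ref_b: angles of corrLine[0][0] and corrLine[1][0] to the X axis of the first frame.
    tar_a, tar_b: angles of corrLine[0][1] and corrLine[1][1] to the X axis of the rotating frame.
    d1..d4: origin-to-line distances as defined in the description.
    dth_line: angle increment corrLine[0][1].a - corrLine[0][0].a.
    """
    refx = d1 * math.sin(ref_a) + d2 * math.sin(ref_b)   # first projected distance
    tarx = d3 * math.sin(tar_a) + d4 * math.sin(tar_b)   # second projected distance
    dx = tarx - refx                                      # first distance increment

    refy = d1 * math.cos(ref_a) - d2 * math.cos(ref_b)   # third projected distance
    tary = d3 * math.cos(tar_a) - d4 * math.cos(tar_b)   # fourth projected distance
    dy = tary - refy                                      # second distance increment

    x, y, th = known_pose
    # Assumed composition: rotate the increments into the world frame and add them.
    new_x = x + math.cos(th) * dx - math.sin(th) * dy
    new_y = y + math.sin(th) * dx + math.cos(th) * dy
    new_th = th + dth_line
    return new_x, new_y, new_th
```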
In this embodiment, even if the robot is suddenly moved a large distance to an unknown position, the method can determine the position information of the unknown position without reconstructing the map, and this position information can be used as the initial value of the relocation operation, which effectively reduces the search space, further reduces the amount of calculation, and greatly improves the relocation efficiency.
In some embodiments, when the robot determines the at least two pairs of matching lines, it extracts lines in the first coordinate system and in the rotating coordinate system respectively, and then matches the extracted lines. Referring to fig. 3, S50 includes:
s51, extracting a target line of each frame of point cloud according to each frame of point cloud under each coordinate system;
and S52, matching at least two pairs of matching lines according to the target line of each frame of point cloud.
In this embodiment, the robot not only extracts target lines from the first point cloud in the first coordinate system; once the second point cloud has been converted into the rotating coordinate system to obtain the transformed point cloud, the robot also extracts target lines from the transformed point cloud in the rotating coordinate system. A target line is a line that meets the preset line requirement.
In the process of extracting the target line from each frame of point cloud, the robot selects the target line formed by the corresponding laser points in each frame of point cloud by using a preset line extraction algorithm, and eliminates the laser points which do not meet the preset line requirement.
Generally, a plurality of objects are present in the preset space; for example, two tables L1 and L2 are placed one after another in a room and aligned along a straight line, so that the contours of the two tables lie on the same straight line. Since the straight lines of the first point cloud and the straight lines of the transformed point cloud need to be matched in the subsequent line matching, it may happen that the straight line L1' and the straight line L2' of the first point cloud are treated as a single straight line, or that the straight line L1" and the straight line L2" of the transformed point cloud are treated as a single straight line, causing a deviation in the subsequent calculation of the unknown position. Thus, in some embodiments, as shown in fig. 4, S51 includes:
S511, determining, for each frame of point cloud in its coordinate system, the longest candidate lines formed by continuous laser points;
and S512, merging the longest candidate lines that meet a preset merging condition to obtain the target lines of each frame of point cloud.
In some embodiments, when determining the longest candidate lines in each frame of point cloud, the robot first randomly selects two laser points in each frame of point cloud in its coordinate system to construct a random straight line; for example, in the first point cloud in the first coordinate system the robot randomly selects two laser points to construct a random straight line, and in the transformed point cloud in the rotating coordinate system it likewise randomly selects two laser points to construct a random straight line, e.g., the robot computes the straight line passing through points P1 and P2 in each coordinate system and its line parameters.
Secondly, the robot screens the random straight lines that meet a preset screening condition to obtain preliminary straight lines; at least two laser points lie on each preliminary straight line, and any two or more of these laser points can form a sub-straight line.
For example, when the robot extracts lines from each frame of point cloud, it first performs initialization, that is, it sets the external parameters of the RANSAC algorithm, including the maximum number of straight lines and the inlier prior probability.
Then, the robot obtains the current iteration count currLineIter; if the current iteration count currLineIter is less than or equal to a preset maximum iteration count lineMaxIter, the distances from all laser points in the frame of point cloud to the random straight line are obtained.
The robot counts the total number of inliers, that is, laser points whose distance is less than a preset distance threshold dmin, and calculates the ideal number of iterations N from the total number of inliers, the total number C of all laser points, and the preset inlier prior probability, as follows:
N = log(1 - probability) / log(1 - (1 - e)^2)
e = inliers / C
where e is the total number of inliers divided by the total number C of all laser points.
If the current iteration count is greater than or equal to the ideal number of iterations, the random straight line is selected as a preliminary straight line.
If the current iteration count is smaller than the ideal number of iterations, the current iteration count is incremented by a preset value (currLineIter + 1) and the procedure returns to the step of obtaining the current iteration count.
At this point, the robot extracts respective preliminary straight lines from the first point cloud and the transformed point cloud.
Finally, the robot traverses the preliminary straight lines for target sub-straight lines whose number of continuous laser points is greater than a preset laser point threshold MinSubLinePointNum, and takes these target sub-straight lines as the longest candidate lines. For example, the robot counts the inliers on each preliminary straight line; if the count is less than or equal to a preset number threshold, the preliminary straight line is discarded, and if the count is greater than the preset number threshold, the robot traverses that preliminary straight line for target sub-straight lines whose continuous laser points exceed the preset laser point threshold. In some embodiments, two adjacent laser points can be considered continuous when their laser index difference is less than or equal to 4.
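For illustration only, a compact Python sketch of the RANSAC-style extraction of the longest candidate line described above; the parameter names dmin, probability, line_max_iter and min_sub_line_points mirror the description, while the loop structure and the continuity gap of 4 are assumptions based on it.

```python
import math
import random

def extract_longest_candidate(points, dmin=0.02, probability=0.99,
                              line_max_iter=200, min_sub_line_points=8):
    """points: list of (x, y, laser_index). Returns the inliers forming one longest
    candidate line, or [] if no sufficiently long line is found."""
    best_inliers = []
    ideal_iters = line_max_iter
    it = 0
    while it <= line_max_iter and it <= ideal_iters:
        # Randomly pick two laser points and build a random straight line ax + by + c = 0.
        (x1, y1, _), (x2, y2, _) = random.sample(points, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = math.hypot(a, b) or 1.0
        # Inliers: laser points whose distance to the line is below the threshold dmin.
        inliers = [p for p in points if abs(a * p[0] + b * p[1] + c) / norm < dmin]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
        # Ideal number of iterations N, following the formula in the description.
        e = len(inliers) / len(points)
        denom = math.log(max(1e-9, 1.0 - (1.0 - e) ** 2))
        ideal_iters = math.log(1.0 - probability) / denom if denom != 0 else line_max_iter
        it += 1
    # Keep only the longest run of points with nearly consecutive laser indices (gap <= 4).
    best_inliers.sort(key=lambda p: p[2])
    runs, run = [], []
    for p in best_inliers:
        if run and p[2] - run[-1][2] > 4:
            runs.append(run)
            run = []
        run.append(p)
    if run:
        runs.append(run)
    longest = max(runs, key=len) if runs else []
    return longest if len(longest) > min_sub_line_points else []
```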
Therefore, in the above manner, the longest candidate lines can be found, which prepares for accurately calculating the position information of the unknown position later.
In some embodiments, in order to find the target sub-straight lines more accurately, the robot may re-fit an accurate straight line from the inliers of each preliminary straight line and then traverse the accurate straight line for target sub-straight lines whose number of continuous laser points is greater than the preset laser point threshold.
After extracting the target line corresponding to each frame of point cloud, in order to calculate the unknown position more efficiently, in some embodiments, the robot may merge nearly parallel lines in each frame of point cloud, for example, please refer to fig. 5a, S512 includes:
s5121, calculating an included angle between any two longest candidate lines, a first vertical distance from an origin of a corresponding coordinate system to one longest candidate line and a second vertical distance from the origin to the other longest candidate line;
s5122, when the included angle is smaller than a preset angle threshold value and the difference value between the second vertical distance and the first vertical distance is smaller than a preset difference value, combining the two longest candidate lines into a target line corresponding to each frame of point cloud.
Taking the longest candidate line merged in the first point cloud as an example, in the embodiment, please refer to fig. 5b, the robot calculates an included angle dth between the first longest candidate line L1 and the second longest candidate line L2, and a first vertical distance dL1 from the origin of the first coordinate system to the first longest candidate line L1 and a second vertical distance dL2 from the origin of the first coordinate system to the second longest candidate line L2.
When the included angle dth is greater than the preset angle threshold CorrLineAngleThresh, the first candidate line L1 and the second candidate line L2 are not merged.
When the difference Δ d between the second vertical distance dL2 and the first vertical distance dL1 is greater than or equal to the preset difference CorrLineDistanceThresh, the first longest candidate line L1 and the second longest candidate line L2 are not merged.
When the included angle dth is smaller than the preset angle threshold CorrLineAngleThresh and the difference Δd between the second vertical distance dL2 and the first vertical distance dL1 is smaller than the preset difference CorrLineDistanceThresh, the two longest candidate lines are merged into a target line of the corresponding frame of point cloud. After the target lines are obtained by merging, the robot readjusts each target line according to the laser points it contains, and also sorts all the target lines in each frame of point cloud according to the laser index values of their laser points; the sorting may be ascending or descending.
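For illustration only, a minimal Python sketch of the merging test above, assuming each line is summarized by its angle a to the X axis and its perpendicular distance d from the origin; the threshold arguments correspond to CorrLineAngleThresh and CorrLineDistanceThresh.

```python
def should_merge(line_a, line_b, angle_thresh, dist_thresh):
    """line_a, line_b: (a, d) with a = angle of the line to the X axis and
    d = perpendicular distance from the coordinate-system origin to the line.
    Returns True when the two longest candidate lines should be merged."""
    dth = abs(line_a[0] - line_b[0])   # included angle between the two lines
    dd = abs(line_b[1] - line_a[1])    # difference of the two perpendicular distances
    return dth < angle_thresh and dd < dist_thresh
```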
After the target lines are obtained by merging, the robot matches at least two pairs of matching lines according to the target lines of each frame of point cloud. In some embodiments, referring to fig. 6, S52 includes:
s521, performing difference operation on the angle of any target line in the transformed point cloud relative to the X axis of the rotating coordinate system and the angle of any target line in the first point cloud relative to the X axis of the first coordinate system to obtain an angle difference value;
and S522, matching at least two pairs of matching lines according to the angle difference.
For example, if the first point cloud has target lines [B1, B2, B3, …, Bn] and the transformed point cloud has target lines [C1, C2, C3, …, Cn], the robot selects the angle of any target line from the first point cloud or the transformed point cloud and subtracts the angle of any target line of the other point cloud to obtain an angle difference. For example, the robot calculates the difference between the angle of target line C1 and the angle of each of the target lines [B1, B2, B3, …, Bn], obtaining an angle difference for each comparison. Assuming that the angle difference between target line C1 and target line B1 is greater than the preset threshold while the angle difference between target line C1 and target line B2 is less than the preset threshold, the robot combines target line C1 and target line B2 into a pair of matching lines.
To further obtain more accurate pairs of matching lines, in some embodiments each target line in the first point cloud whose angle difference is less than or equal to a preset angle difference is taken as a first target line, and each target line in the transformed point cloud whose angle difference is less than or equal to the preset angle difference is taken as a second target line.
The robot can also calculate the degree of overlap between the laser index coverage of a first target line and the laser index coverage of a second target line; if the degree of overlap is greater than or equal to a preset overlap threshold, the first target line and the second target line are selected as a pair of optimal lines. This continues until at least two pairs of optimal lines are matched, and at least two pairs of matching lines are then screened out of the at least two pairs of optimal lines.
For example, assume that the laser index of the second target line C1 of the transformed point cloud is as follows:
{2,3,5,7,8,10,44,45,46,47,49,51}
the laser index coverage of the second target line C1 is 2-10 and 44-51.
Assume that the laser index of the first target line B2 of the first point cloud is as follows:
{1,2,3,5,7,8,40,41,42,44,46,47}
the laser index coverage of the first target line B2 is 1-8 and 40-47.
The overlap ranges of the second target line C1 and the first target line B2 are 2-8 and 44-47, and the degree of overlap is (8-2+1) + (47-44+1) = 11.
Since the degree of overlap 11 is greater than the preset overlap threshold 10, the robot selects the first target line B2 and the second target line C1 as a pair of optimal lines.
And repeating the steps until at least two pairs of optimal lines are matched.
In some embodiments, when there are a plurality of first target lines matching the second target line, the first target line and the second target line with the largest degree of overlap are selected as a pair of optimal lines.
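For illustration only, a Python sketch of the overlap computation illustrated above; it groups each target line's laser indices into coverage ranges and sums the lengths of the intersecting ranges. The grouping gap of 4 reuses the continuity rule mentioned earlier and is an assumption; on the C1 and B2 indices above it reproduces the overlap degree of 11.

```python
def coverage_ranges(indices, gap=4):
    """Group sorted laser indices into coverage ranges,
    e.g. {2,3,5,7,8,10,44,...,51} -> [(2, 10), (44, 51)]."""
    indices = sorted(indices)
    ranges, start, prev = [], indices[0], indices[0]
    for i in indices[1:]:
        if i - prev > gap:
            ranges.append((start, prev))
            start = i
        prev = i
    ranges.append((start, prev))
    return ranges

def overlap_degree(idx_first, idx_second):
    """Degree of overlap between the laser index coverages of two target lines."""
    total = 0
    for a0, a1 in coverage_ranges(idx_first):
        for b0, b1 in coverage_ranges(idx_second):
            lo, hi = max(a0, b0), min(a1, b1)
            if hi >= lo:
                total += hi - lo + 1
    return total

# Example from the description: C1 = {2,3,5,7,8,10,44,45,46,47,49,51},
# B2 = {1,2,3,5,7,8,40,41,42,44,46,47}; overlap_degree(B2, C1) == 11.
```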
In some embodiments, the robot may treat each pair of best lines as a pair of matching lines. In some embodiments, the robot may further obtain a more accurate matching line.
For example, when the robot selects at least two pairs of matching lines from the at least two pairs of best lines, first, the robot calculates a matching score for each pair of best lines, where any suitable weighting algorithm may be used, for example:
score(corrLine[i]) = wsum*pSum + wadiff*pADiff + wddiff*pDDiff + wpdiff*pPDiff
wsum + wadiff + wddiff + wpdiff = 1
[Equation images in the original: definitions of pSum, pADiff, pDDiff and pPDiff in terms of the parameters n, a and d of corrLine[i][0] and corrLine[i][1]]
corrLine[i] denotes the i-th pair of matching lines; wsum, wadiff, wddiff and wpdiff are weight coefficients; pSum is the point cloud count score of the matching lines; pADiff is the parallelism score of the two lines; pDDiff is the proximity score of the distance between the two lines; pPDiff is the point cloud difference score of the two lines; corrLine[i][0] denotes the line of the i-th pair of matching lines in the first point cloud; corrLine[i][1] denotes the line of the i-th pair of matching lines in the transformed point cloud; n denotes the number of laser points in a line; a denotes the angle of a line; and d denotes the distance from the origin to a line.
Secondly, the robot arranges each pair of the best lines in descending order according to the matching scores of each pair of the best lines.
Thirdly, the robot selects at least two pairs of best lines that are ranked highest, within a preset ranking, as the at least two pairs of matching lines. For example, the robot selects the two pairs of best lines with the highest scores as the matching lines; if the selected matching lines are parallel to each other, or the included angle between them is smaller than a certain angle threshold, the robot takes the next-highest-scoring pair in turn; if the included angles between all candidate matching lines are smaller than the angle threshold, this indicates that the scene is a corridor-like scene.
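A minimal Python sketch of this score-and-rank selection follows. The weight values and the concrete formulas used for pSum, pADiff, pDDiff and pPDiff are placeholders invented for illustration (the original defines them in formulas reproduced as images), so only the overall structure — weighted sum, descending sort, top-ranked selection — follows the text.

```python
import math
from dataclasses import dataclass


@dataclass
class Line:
    n: int      # number of laser points on the line
    a: float    # angle of the line (rad)
    d: float    # distance from the origin to the line


def match_score(pair, weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted matching score of one pair of best lines; the four component
    scores below are illustrative placeholders, not the patent's formulas."""
    wsum, wadiff, wddiff, wpdiff = weights                      # must sum to 1
    l0, l1 = pair                                               # line in first point cloud, line in transformed point cloud
    p_sum = (l0.n + l1.n) / 1000.0                              # pSum: rewards well-supported lines
    p_adiff = 1.0 - abs(l0.a - l1.a) / math.pi                  # pADiff: parallelism of the two lines
    p_ddiff = 1.0 - abs(l0.d - l1.d) / max(l0.d, l1.d, 1e-6)    # pDDiff: distance proximity
    p_pdiff = 1.0 - abs(l0.n - l1.n) / max(l0.n, l1.n, 1)       # pPDiff: point-count similarity
    return wsum * p_sum + wadiff * p_adiff + wddiff * p_ddiff + wpdiff * p_pdiff


def select_matching_lines(best_pairs, top_k=2):
    """Sort the pairs of best lines by matching score in descending order and
    keep the top-ranked pairs as the matching lines."""
    return sorted(best_pairs, key=match_score, reverse=True)[:top_k]
```

The corridor check described above (skipping a candidate whose lines are nearly parallel to an already selected pair) could be added inside select_matching_lines before a pair is accepted.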
It should be noted that, in the foregoing embodiments, there is not necessarily a fixed order between the foregoing steps, and those skilled in the art can understand from the description of the embodiments of the present invention that, in different embodiments, the foregoing steps may have different execution orders, that is, they may be executed in parallel, executed interchangeably, and so on.
As another aspect of the embodiments of the present invention, an embodiment of the present invention provides a robot position determining apparatus. The robot position determining apparatus may be a software module comprising a plurality of instructions stored in a memory of an electronic device; the processor may access the memory and call the instructions for execution, so as to complete the robot position determining method described in each of the above embodiments.
In some embodiments, the robot position determining apparatus may also be built from hardware devices, for example, from one or more chips that work in coordination with each other to complete the robot position determining method described in each of the above embodiments. For another example, the robot position determining apparatus may also be constructed from various types of logic devices, such as a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a single-chip microcomputer, an ARM (Acorn RISC Machine) processor or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of these components.
Referring to fig. 7a, the robot position determining apparatus 700 includes a data acquiring module 71, a coordinate transforming module 72, a line determining module 73, and a position determining module 74.
The data acquisition module 71 is configured to acquire a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space, where the first point cloud and the second point cloud are two discontinuous frames of point clouds;
the coordinate conversion module 72 is configured to convert the second point cloud into a rotating coordinate system according to the first yaw angle and the second yaw angle to obtain a converted point cloud, where the rotating coordinate system is parallel to the first coordinate system of the first point cloud;
the line determining module 73 is configured to determine at least two pairs of matching lines, where one line of each pair of matching lines is obtained from the first point cloud, and the other line of each pair of matching lines is obtained from the transformed point cloud;
the position determining module 74 is configured to determine the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position.
In this embodiment, even if the robot is instantaneously moved over a large range to an unknown position, the apparatus can determine the position information of the unknown position without reconstructing a map, thereby effectively reducing the search space, reducing the amount of calculation, and greatly improving the relocation efficiency.
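To make the data flow between the four modules concrete, here is a minimal Python sketch; the class name, method names and the callable interfaces are illustrative assumptions, not taken from the patent.

```python
class RobotPositionDeterminer:
    """Wires together the four modules described above (71-74); each module is
    passed in as a callable so the sketch stays implementation-agnostic."""

    def __init__(self, acquire_data, transform_coordinates,
                 determine_lines, determine_position):
        self.acquire_data = acquire_data                      # data acquisition module 71
        self.transform_coordinates = transform_coordinates    # coordinate conversion module 72
        self.determine_lines = determine_lines                # line determining module 73
        self.determine_position = determine_position          # position determining module 74

    def relocate(self, known_position):
        # first point cloud / yaw at the known position, second point cloud / yaw at the unknown position
        first_cloud, first_yaw, second_cloud, second_yaw = self.acquire_data()
        transformed_cloud = self.transform_coordinates(second_cloud, first_yaw, second_yaw)
        matching_lines = self.determine_lines(first_cloud, transformed_cloud)
        return self.determine_position(matching_lines, known_position)
```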
In some embodiments, referring to fig. 7b, the line determining module 73 includes a line extracting unit 731 and a line matching unit 732.
The line extraction unit 731 is configured to extract a target line of each frame of point cloud according to each frame of point cloud in each coordinate system;
the line matching unit 732 is configured to match at least two pairs of matching lines according to the target line of each frame of point cloud.
In some embodiments, the line extraction unit 731 is specifically configured to determine a continuous and longest candidate line of the laser point according to each frame of point cloud under each coordinate system; and merging the longest candidate lines meeting the preset merging conditions to obtain the target line of each frame of point cloud.
In some embodiments, the line is a straight line, and the line extraction unit 731 is specifically configured to randomly select two laser points to construct a random straight line for each frame of point cloud under each coordinate system; screening random straight lines meeting preset screening conditions to obtain primary straight lines; and traversing a target sub-straight line with continuous laser points and the number of the continuous laser points being larger than a preset laser point threshold from the initial straight line, and taking the target sub-straight line as the longest candidate line.
In some embodiments, the line extraction unit 731 is specifically configured to obtain a current iteration count, and if the current iteration count is less than or equal to a preset maximum iteration count, obtain distances between all laser points in each frame of point cloud and the random straight line; counting the total number of inner points of the laser points with the distance smaller than a preset distance threshold, and calculating the ideal iteration times according to the total number of the inner points, the total number of all the laser points and the preset inner point prior probability; if the current iteration times are larger than or equal to the ideal iteration times, selecting the random straight line as a primary straight line; and if the current iteration times are smaller than the ideal iteration times, assigning the current iteration times and a preset numerical value to obtain the assigned current iteration times, and returning to obtain the current iteration times.
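The iteration control just described resembles adaptive RANSAC. Below is a minimal Python sketch under that reading; the distance threshold, the maximum iteration count and the ideal-iteration formula (the standard RANSAC estimate for a two-point sample, with the preset interior-point prior probability read as the desired success probability) are assumptions, since the text only names the quantities involved.

```python
import math
import random


def extract_preliminary_line(points, max_iters=200, dist_thresh=0.03, prior_prob=0.99):
    """Search for a preliminary straight line with RANSAC-style iteration control.
    points: list of (x, y) laser points of one frame of point cloud."""
    n_points = len(points)
    current_iter = 0
    best_line, best_inliers = None, 0
    while current_iter <= max_iters:
        # randomly select two laser points and construct a random straight line a*x + b*y + c = 0
        (x1, y1), (x2, y2) = random.sample(points, 2)
        a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2
        norm = math.hypot(a, b)
        if norm == 0.0:                       # the two samples coincide; try again
            current_iter += 1
            continue
        # total number of interior points: laser points closer to the line than the threshold
        inliers = sum(1 for x, y in points if abs(a * x + b * y + c) / norm < dist_thresh)
        if inliers > best_inliers:
            best_line, best_inliers = (a, b, c), inliers
        # ideal iteration count from the interior-point ratio (sample size 2), standard RANSAC estimate
        w = max(inliers / n_points, 1e-6)
        ideal_iters = math.log(1.0 - prior_prob) / math.log(max(1.0 - w * w, 1e-12))
        if current_iter >= ideal_iters:
            return best_line                   # accept as a preliminary straight line
        current_iter += 1                      # advance the iteration count and loop again
    return best_line
```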
In some embodiments, the line extraction unit 731 is specifically configured to calculate the number of interior points on each of the preliminary straight lines; if the number is less than or equal to a preset number threshold, discarding the preliminary straight line; and traversing a target sub-straight line with continuous laser points and the number of the continuous laser points being larger than the preset laser point threshold value on the primary straight line if the number is larger than the preset number threshold value.
In some embodiments, the line extraction unit 731 is specifically configured to re-fit an accurate straight line according to each internal point on the preliminary straight line; and traversing target sub-straight lines with the number of continuous laser points larger than a preset laser point threshold from the accurate straight lines.
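A short sketch of the refit step, assuming a total-least-squares fit over the interior points (the text does not specify the fitting method); it returns the (angle, distance-from-origin) line parameters used elsewhere in the description.

```python
import math


def refit_accurate_line(interior_points):
    """Refit an accurate straight line from the interior points of a preliminary
    line and return (line angle, perpendicular distance from the origin)."""
    n = len(interior_points)
    cx = sum(x for x, _ in interior_points) / n
    cy = sum(y for _, y in interior_points) / n
    sxx = sum((x - cx) ** 2 for x, _ in interior_points)
    syy = sum((y - cy) ** 2 for _, y in interior_points)
    sxy = sum((x - cx) * (y - cy) for x, y in interior_points)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)    # direction of the fitted line
    nx, ny = -math.sin(theta), math.cos(theta)        # unit normal of the line
    d = abs(cx * nx + cy * ny)                        # distance from the origin to the line
    return theta, d
```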
In some embodiments, the line extraction unit 731 is specifically configured to calculate an included angle between any two longest candidate lines, a first vertical distance from an origin of a corresponding coordinate system to one of the longest candidate lines, and a second vertical distance from the origin to the other of the longest candidate lines; and when the included angle is smaller than a preset angle threshold value and the difference value between the second vertical distance and the first vertical distance is smaller than a preset difference value, combining the two longest candidate lines into a target line corresponding to each frame of point cloud.
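A minimal Python sketch of the merging condition just described; the angle and distance thresholds are illustrative values, not taken from the patent.

```python
import math


def merge_longest_candidates(candidates, angle_thresh=math.radians(5.0), dist_thresh=0.05):
    """Merge longest candidate lines whose included angle and whose difference in
    perpendicular distance from the origin are both below the preset thresholds.
    Each candidate is (angle, distance_from_origin, laser_points)."""
    target_lines = []
    for angle, dist, pts in candidates:
        for i, (t_angle, t_dist, t_pts) in enumerate(target_lines):
            if abs(angle - t_angle) < angle_thresh and abs(dist - t_dist) < dist_thresh:
                target_lines[i] = (t_angle, t_dist, t_pts + list(pts))  # combine into one target line
                break
        else:
            target_lines.append((angle, dist, list(pts)))
    return target_lines
```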
In some embodiments, the line matching unit 732 is specifically configured to perform a difference operation on an angle of any target line in the transformed point cloud with respect to the X axis of the rotating coordinate system and an angle of any target line in the first point cloud with respect to the X axis of the first coordinate system to obtain an angle difference; and matching at least two pairs of matching lines according to the angle difference.
In some embodiments, the angle difference is smaller than or equal to a preset angle difference and each target line in the first point cloud is a first target line, the angle difference is smaller than or equal to a preset angle difference and each target line in the transformed point cloud is a second target line, and the line matching unit 732 is specifically configured to calculate the degree of overlap between the laser index coverage of the first target line and the laser index coverage of the second target line; if the overlapping degree is greater than or equal to a preset overlapping threshold value, selecting the first target line and the second target line as a pair of optimal lines until at least two pairs of optimal lines are matched; and screening out at least two pairs of matching lines from the at least two pairs of best lines.
In some embodiments, the line matching unit 732 is specifically configured to calculate a matching score for each pair of the best lines; arranging each pair of the optimal lines in a descending order according to the matching scores of each pair of the optimal lines; and selecting at least two pairs of optimal lines which are ranked in the front and are positioned within the preset ranking as at least two pairs of matched lines.
When there are a plurality of first target lines matched with the second target line, in some embodiments, the line matching unit 732 is specifically configured to select the first target line and the second target line with the largest degree of overlap as a pair of optimal lines.
The line is a straight line, and in some embodiments, the line matching unit 732 is specifically configured to calculate, according to the at least two pairs of matching lines, a distance increment after the robot is moved at the known position; and determining the position information of the unknown position according to the distance increment and the position information of the known position.
In some embodiments, the distance increment includes a first distance increment in the X-axis direction, and the line matching unit 732 is specifically configured to obtain a first angle of one of the pair of matching lines with respect to the X-axis of the first coordinate system, a second angle of one of the other pair of matching lines with respect to the X-axis of the first coordinate system, a third angle of the other of the pair of matching lines with respect to the X-axis of the rotating coordinate system, and a fourth angle of the other of the pair of matching lines with respect to the X-axis of the rotating coordinate system; calculating a first distance from the origin of the first coordinate system to one of the pair of matching lines and a second distance from the origin of the first coordinate system to one of the other pair of matching lines, and a third distance from the origin of the rotating coordinate system to the other of the pair of matching lines and a fourth distance from the origin of the rotating coordinate system to the other of the other pair of matching lines; calculating a first projection distance on an X axis of the first coordinate system according to the first distance, the second distance, the first angle and the second angle; calculating a second projection distance on the X axis of the rotating coordinate system according to the third distance, the fourth distance, the third angle and the fourth angle; and subtracting the first projection distance from the second projection distance to obtain a first distance increment.
In some embodiments, the distance increment includes a second distance increment in the Y-axis direction, and the line matching unit 732 is specifically configured to calculate a third projection distance on the Y-axis of the first coordinate system according to the first distance, the second distance, the first angle, and the second angle; calculating a fourth projection distance on the Y axis of the rotating coordinate system according to the third distance, the fourth distance, the third angle and the fourth angle; and subtracting the third projection distance from the fourth projection distance to obtain a second distance increment.
In some embodiments, the line matching unit 732 is specifically configured to calculate a yaw angle difference between the second yaw angle and the first yaw angle; and determining the position information of the unknown position according to a two-dimensional coordinate rotation formula by combining the first distance increment, the second distance increment, the yaw angle difference value and the position information of the known position.
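To make the projection-distance construction concrete, the following Python sketch computes the two distance increments from two pairs of matching lines, each line represented by (angle, distance-from-origin). Reading the "projection distances" as the coordinates of the intersection point of the two lines in each coordinate system is an interpretation of this sketch; the text only states which quantities each projection is computed from. The resulting increments are then combined with the yaw angle difference and the known position through the two-dimensional rotation formula, as described above.

```python
import math


def line_intersection(a1, d1, a2, d2):
    """Point whose perpendicular distances to two lines (given by line angle and
    distance from the origin) equal d1 and d2; undefined for parallel lines."""
    n1 = (-math.sin(a1), math.cos(a1))        # unit normal of line 1
    n2 = (-math.sin(a2), math.cos(a2))        # unit normal of line 2
    det = n1[0] * n2[1] - n1[1] * n2[0]
    x = (d1 * n2[1] - d2 * n1[1]) / det
    y = (n1[0] * d2 - n2[0] * d1) / det
    return x, y


def distance_increments(first_pair, second_pair):
    """Each pair is ((angle, dist) in the first coordinate system,
    (angle, dist) in the rotating coordinate system) for one pair of matching lines."""
    (a1, d1), (a3, d3) = first_pair
    (a2, d2), (a4, d4) = second_pair
    x_first, y_first = line_intersection(a1, d1, a2, d2)   # projections in the first coordinate system
    x_rot, y_rot = line_intersection(a3, d3, a4, d4)       # projections in the rotating coordinate system
    return x_rot - x_first, y_rot - y_first                # first and second distance increments
```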
In some embodiments, each of the point clouds is collected by a lidar mounted on the robot, the first point cloud corresponds to a first coordinate system, the second point cloud corresponds to a second coordinate system, and the rotating coordinate system is obtained by rotating the second coordinate system and is parallel to the first coordinate system.
In some embodiments, coordinate transformation module 72 is specifically configured to calculate a yaw angle difference between the second yaw angle and the first yaw angle; calculating the radar coordinate of the laser radar in the rotating coordinate system before rotation according to a two-dimensional coordinate rotating formula and by combining the yaw angle difference; and calculating the coordinate of the second point cloud rotated to the rotating coordinate system according to a two-dimensional coordinate rotating formula and by combining the yaw angle difference and the radar coordinate to obtain a transformed point cloud.
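A minimal Python sketch of this coordinate conversion under one concrete reading: the second point cloud is rotated about the lidar position by the yaw angle difference so that the rotating coordinate system becomes parallel to the first coordinate system. The rotation pivot and the sign convention are assumptions; the text only states that a two-dimensional rotation formula with the yaw angle difference is applied.

```python
import math


def transform_second_cloud(second_cloud, first_yaw, second_yaw, radar_xy=(0.0, 0.0)):
    """Rotate the second point cloud into the rotating coordinate system so that
    its axes are parallel to the first coordinate system.
    second_cloud: list of (x, y) points; radar_xy: lidar position used as pivot."""
    yaw_diff = second_yaw - first_yaw
    c, s = math.cos(-yaw_diff), math.sin(-yaw_diff)
    rx, ry = radar_xy
    transformed = []
    for x, y in second_cloud:
        tx, ty = x - rx, y - ry                       # move the pivot to the origin
        transformed.append((rx + c * tx - s * ty,     # standard 2D rotation formula
                            ry + s * tx + c * ty))
    return transformed
```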
The robot position determining apparatus may perform the robot position determining method provided by the embodiment of the present invention, and has functional modules and beneficial effects corresponding to the performing method. For technical details that are not described in detail in the embodiment of the robot position determining apparatus, reference may be made to the robot position determining method provided in the embodiment of the present invention.
Fig. 8 is a schematic circuit structure diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 8, the electronic device includes one or more processors 81 and a memory 82. In fig. 8, one processor 81 is taken as an example.
The processor 81 and the memory 82 may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
The memory 82, which is a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as program instructions/modules corresponding to the robot position determination method in the embodiments of the present invention. The processor 81 executes various functional applications and data processing of the robot position determining apparatus by running the nonvolatile software program, instructions and modules stored in the memory 82, that is, the functions of the robot position determining method provided by the above-mentioned method embodiment and the various modules or units of the above-mentioned apparatus embodiment are realized.
The memory 82 may include high speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory 82 may optionally include memory located remotely from the processor 81, which may be connected to the processor 81 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The program instructions/modules are stored in the memory 82 and, when executed by the one or more processors 81, perform the robot position determination method in any of the method embodiments described above.
Embodiments of the present invention also provide a non-transitory computer storage medium storing computer-executable instructions, which are executed by one or more processors, such as a processor 81 in fig. 8, and enable the one or more processors to execute the robot position determining method in any of the above method embodiments.
Embodiments of the present invention also provide a computer program product, which includes a computer program stored on a non-volatile computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are executed by an electronic device, the electronic device is caused to execute any one of the robot position determination methods.
In this embodiment, even if the robot is instantaneously moved over a large range to an unknown position, the method can determine the position information of the unknown position without reconstructing a map, thereby effectively reducing the search space, reducing the amount of calculation, and greatly improving the repositioning efficiency.
The above-described embodiments of the apparatus or device are merely illustrative, wherein the unit modules described as separate parts may or may not be physically separate, and the parts displayed as module units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a general hardware platform, and certainly can also be implemented by hardware. Based on such understanding, the above technical solutions, in essence or in the part contributing to the related art, may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, or optical disk, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method according to the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; within the idea of the invention, also technical features in the above embodiments or in different embodiments may be combined, steps may be implemented in any order, and there are many other variations of the different aspects of the invention as described above, which are not provided in detail for the sake of brevity; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (20)

1. A method for determining a position of a robot, comprising:
acquiring a first point cloud and a first yaw angle of the robot at a known position and a second point cloud and a second yaw angle of the robot at an unknown position in a preset space, wherein the first point cloud and the second point cloud are two discontinuous frames of point clouds;
according to the first yaw angle and the second yaw angle, transforming the second point cloud to a rotating coordinate system to obtain a transformed point cloud, wherein the rotating coordinate system is parallel to a first coordinate system of the first point cloud;
determining at least two pairs of matching lines, wherein one line of each pair of matching lines is obtained by the first point cloud, and the other line of each pair of matching lines is obtained by the transformation point cloud;
and determining the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position.
2. The method of claim 1, wherein the determining at least two pairs of matching lines comprises:
extracting a target line of each frame of point cloud according to each frame of point cloud under each coordinate system;
and matching at least two pairs of matching lines according to the target line of each frame of point cloud.
3. The method of claim 2, wherein extracting the target line of each frame of point cloud according to each frame of point cloud under each coordinate system comprises:
determining continuous and longest candidate lines of the laser points according to each frame of point cloud under each coordinate system;
and merging the longest candidate lines meeting the preset merging conditions to obtain the target line of each frame of point cloud.
4. The method of claim 3, wherein the line is a straight line, and determining the laser point continuous and longest candidate line according to each frame of point cloud under each coordinate system comprises:
randomly selecting two laser points to construct a random straight line in each frame of point cloud under each coordinate system;
screening random straight lines meeting preset screening conditions to obtain primary straight lines;
and traversing a target sub-straight line with continuous laser points and the number of the continuous laser points being larger than a preset laser point threshold from the initial straight line, and taking the target sub-straight line as the longest candidate line.
5. The method of claim 4, wherein the step of screening the random straight lines satisfying the preset screening condition to obtain the preliminary straight line comprises:
acquiring the current iteration times, and if the current iteration times are less than or equal to the preset maximum iteration times, calculating the distance between all laser points in each frame of point cloud and the random straight line;
counting the total number of inner points of the laser points with the distance smaller than a preset distance threshold, and calculating the ideal iteration times according to the total number of the inner points, the total number of all the laser points and the preset inner point prior probability;
if the current iteration times are larger than or equal to the ideal iteration times, selecting the random straight line as a primary straight line;
and if the current iteration times are smaller than the ideal iteration times, assigning the current iteration times and a preset numerical value to obtain the assigned current iteration times, and returning to the step of obtaining the current iteration times.
6. The method of claim 4, wherein traversing from the preliminary straight line a target sub-straight line having a number of consecutive laser points greater than a predetermined laser point threshold comprises:
calculating the number of inner points on each preliminary straight line;
if the number is less than or equal to a preset number threshold, discarding the preliminary straight line;
and traversing a target sub-straight line with continuous laser points and the number of the continuous laser points being larger than the preset laser point threshold value on the primary straight line if the number is larger than the preset number threshold value.
7. The method according to claim 6, wherein traversing the preliminary straight line to obtain a target sub-straight line with a number of consecutive laser points greater than a preset laser point threshold, comprises:
fitting an accurate straight line again according to each inner point on the primary straight line;
and traversing target sub-straight lines with the number of continuous laser points larger than a preset laser point threshold from the accurate straight lines.
8. The method according to any one of claims 3 to 7, wherein the lines are straight lines, and merging the longest candidate line satisfying a preset merging condition to obtain a target line corresponding to each frame of point cloud comprises:
calculating an included angle between any two longest candidate lines, a first vertical distance from an origin of a corresponding coordinate system to one of the longest candidate lines, and a second vertical distance from the origin to the other longest candidate line;
and when the included angle is smaller than a preset angle threshold value and the difference value between the second vertical distance and the first vertical distance is smaller than a preset difference value, combining the two longest candidate lines into a target line corresponding to each frame of point cloud.
9. The method according to any one of claims 2 to 7, wherein the line is a straight line, and the matching of at least two pairs of matching lines according to the target line corresponding to each frame of point cloud comprises:
performing difference operation on the angle of any target line in the transformed point cloud relative to the X axis of the rotating coordinate system and the angle of any target line in the first point cloud relative to the X axis of the first coordinate system to obtain an angle difference value;
and matching at least two pairs of matching lines according to the angle difference.
10. The method of claim 9, wherein each target line in the first point cloud whose angle difference is smaller than or equal to a preset angle difference is a first target line, each target line in the transformed point cloud whose angle difference is smaller than or equal to the preset angle difference is a second target line, and matching at least two pairs of matching lines according to the angle difference comprises:
calculating the degree of overlap of the laser index coverage range of the first target line and the laser index coverage range of the second target line;
if the overlapping degree is greater than or equal to a preset overlapping threshold value, selecting the first target line and the second target line as a pair of optimal lines until at least two pairs of optimal lines are matched;
and screening out at least two pairs of matching lines from the at least two pairs of best lines.
11. The method of claim 10, wherein said screening out at least two pairs of matching lines from said at least two pairs of best lines comprises:
calculating a matching score of each pair of the best lines;
arranging each pair of the optimal lines in a descending order according to the matching scores of each pair of the optimal lines;
and selecting at least two pairs of optimal lines which are ranked in the front and are positioned within the preset ranking as at least two pairs of matched lines.
12. The method of claim 10, wherein when there are a plurality of first target lines matching the second target line, selecting the first target line and the second target line as a pair of best lines comprises:
and selecting the first target line and the second target line with the maximum overlapping degree as a pair of optimal lines.
13. The method according to any one of claims 1 to 7, wherein the line is a straight line, and the locating the position information of the unknown position according to the at least two pairs of matching lines and the position information of the known position comprises:
calculating the distance increment of the robot after the robot is moved at the known position according to the at least two pairs of matching lines;
and positioning the position information of the unknown position according to the distance increment and the position information of the known position.
14. The method of claim 13, wherein the distance increment comprises a first distance increment in an X-axis direction, and wherein calculating the distance increment after the robot movement from the at least two pairs of matched lines comprises:
acquiring a first angle of one of the pair of matching lines relative to the X axis of the first coordinate system, a second angle of one of the other pair of matching lines relative to the X axis of the first coordinate system, a third angle of the other of the pair of matching lines relative to the X axis of the rotating coordinate system, and a fourth angle of the other of the pair of matching lines relative to the X axis of the rotating coordinate system;
calculating a first distance from the origin of the first coordinate system to one of the pair of matching lines and a second distance from the origin of the first coordinate system to one of the other pair of matching lines, and a third distance from the origin of the rotating coordinate system to the other of the pair of matching lines and a fourth distance from the origin of the rotating coordinate system to the other of the other pair of matching lines;
calculating a first projection distance on an X axis of the first coordinate system according to the first distance, the second distance, the first angle and the second angle;
calculating a second projection distance on the X axis of the rotating coordinate system according to the third distance, the fourth distance, the third angle and the fourth angle;
and subtracting the first projection distance from the second projection distance to obtain a first distance increment.
15. The method of claim 14, wherein the distance increment comprises a second distance increment in the Y-axis direction, and wherein calculating the distance increment after the robot movement from the at least two pairs of matched lines further comprises:
calculating a third projection distance on the Y axis of the first coordinate system according to the first distance, the second distance, the first angle and the second angle;
calculating a fourth projection distance on the Y axis of the rotating coordinate system according to the third distance, the fourth distance, the third angle and the fourth angle;
and subtracting the third projection distance from the fourth projection distance to obtain a second distance increment.
16. The method of claim 15, wherein locating the position information for the unknown location based on the distance increments and the position information for the known location comprises:
calculating a yaw angle difference between the second yaw angle and the first yaw angle;
and positioning the position information of the unknown position according to a two-dimensional coordinate rotation formula by combining the first distance increment, the second distance increment, the yaw angle difference and the position information of the known position.
17. The method according to any one of claims 1 to 7, wherein each of the point clouds is acquired by a lidar mounted on the robot, the first point cloud corresponds to a first coordinate system, the second point cloud corresponds to a second coordinate system, and the rotating coordinate system is obtained by rotating the second coordinate system and is parallel to the first coordinate system.
18. The method of claim 17, wherein transforming the second point cloud to a rotating coordinate system according to the first yaw angle and the second yaw angle, resulting in a transformed point cloud comprises:
calculating a yaw angle difference between the second yaw angle and the first yaw angle;
calculating the radar coordinate of the laser radar in the rotating coordinate system before rotation according to a two-dimensional coordinate rotating formula and by combining the yaw angle difference;
and calculating the coordinate of the second point cloud rotated to the rotating coordinate system according to a two-dimensional coordinate rotating formula and by combining the yaw angle difference and the radar coordinate to obtain a transformed point cloud.
19. A non-transitory computer-readable storage medium storing computer-executable instructions for causing a robot to perform the robot position determining method according to any one of claims 1 to 18.
20. A robot, comprising:
the image acquisition equipment is used for acquiring point cloud data of a preset space;
the angle detection equipment is used for acquiring a yaw angle of the robot;
the at least one processor is electrically connected with the image acquisition equipment and the angle detection equipment respectively; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a robot position determination method according to any one of claims 1 to 18.
CN201911296211.3A 2019-12-16 2019-12-16 Robot position determination method, non-volatile computer-readable storage medium, and robot Active CN110928312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911296211.3A CN110928312B (en) 2019-12-16 2019-12-16 Robot position determination method, non-volatile computer-readable storage medium, and robot

Publications (2)

Publication Number Publication Date
CN110928312A CN110928312A (en) 2020-03-27
CN110928312B true CN110928312B (en) 2021-06-29

Family

ID=69863799

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911296211.3A Active CN110928312B (en) 2019-12-16 2019-12-16 Robot position determination method, non-volatile computer-readable storage medium, and robot

Country Status (1)

Country Link
CN (1) CN110928312B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111546348A (en) * 2020-06-10 2020-08-18 上海有个机器人有限公司 Robot position calibration method and position calibration system
CN114001706B (en) * 2021-12-29 2022-04-29 阿里巴巴达摩院(杭州)科技有限公司 Course angle estimation method and device, electronic equipment and storage medium
CN114660583A (en) * 2022-02-17 2022-06-24 深圳市杉川机器人有限公司 Robot and repositioning method, device and medium thereof
CN116906277A (en) * 2023-06-20 2023-10-20 北京图知天下科技有限责任公司 Fan yaw variation determining method and device, electronic equipment and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106959691A (en) * 2017-03-24 2017-07-18 联想(北京)有限公司 Mobile electronic equipment and immediately positioning and map constructing method
CN107037806A (en) * 2016-02-04 2017-08-11 科沃斯机器人股份有限公司 Self-movement robot re-positioning method and the self-movement robot using this method
US9868212B1 (en) * 2016-02-18 2018-01-16 X Development Llc Methods and apparatus for determining the pose of an object based on point cloud data
CN107765694A (en) * 2017-11-06 2018-03-06 深圳市杉川机器人有限公司 A kind of method for relocating, device and computer read/write memory medium
CN108052101A (en) * 2017-12-06 2018-05-18 北京奇虎科技有限公司 The method for relocating and device of robot
CN108053446A (en) * 2017-12-11 2018-05-18 北京奇虎科技有限公司 Localization method, device and electronic equipment based on cloud
CN108931983A (en) * 2018-09-07 2018-12-04 深圳市银星智能科技股份有限公司 Map constructing method and its robot
CN109186608A (en) * 2018-09-27 2019-01-11 大连理工大学 A kind of rarefaction three-dimensional point cloud towards reorientation ground drawing generating method
CN110574071A (en) * 2017-01-27 2019-12-13 Ucl商业有限公司 Device, method and system for aligning 3D data sets
CN110561423A (en) * 2019-08-16 2019-12-13 深圳优地科技有限公司 pose transformation method, robot and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10162362B2 (en) * 2016-08-29 2018-12-25 PerceptIn, Inc. Fault tolerance to provide robust tracking for autonomous positional awareness


Also Published As

Publication number Publication date
CN110928312A (en) 2020-03-27

Similar Documents

Publication Publication Date Title
CN110928312B (en) Robot position determination method, non-volatile computer-readable storage medium, and robot
JP6885485B2 (en) Systems and methods for capturing still and / or moving scenes using multiple camera networks
CN110738143B (en) Positioning method and device, equipment and storage medium
CN110869974B (en) Point cloud processing method, equipment and storage medium
CN112050810B (en) Indoor positioning navigation method and system based on computer vision
WO2021016854A1 (en) Calibration method and device, movable platform, and storage medium
WO2021134809A1 (en) Distance measurement module, robot, distance measurement method and nonvolatile readable storage medium
TW201740160A (en) Laser scanning system, laser scanning method, movable laser scanning system and program
EP3716210B1 (en) Three-dimensional point group data generation method, position estimation method, three-dimensional point group data generation device, and position estimation device
CN110675457A (en) Positioning method and device, equipment and storage medium
US10210615B2 (en) System and method for extrinsic camera parameters calibration by use of a three dimensional (3D) calibration object
WO2015010579A1 (en) Target searching method, device, apparatus and system
CN108459597A (en) A kind of mobile electronic device and method for handling the task of mission area
EP4094226A1 (en) Calibration of cameras on unmanned aerial vehicles using human joints
CN110850973B (en) Audio device control method, audio device and storage medium
JP2020017173A (en) Moving entity
CN115810025A (en) Indoor pedestrian positioning method and system based on UWB and vision
CN114199235B (en) Positioning system and positioning method based on sector depth camera
CN113556680A (en) Fingerprint data processing method, medium and mobile robot
CN113536820B (en) Position identification method and device and electronic equipment
WO2023160301A1 (en) Object information determination method, mobile robot system, and electronic device
CN113326836B (en) License plate recognition method, license plate recognition device, server and storage medium
Kulkarni et al. Approximate initialization of camera sensor networks
KR20170077370A (en) Device and method for object recognition, and system for object recognition using the same
CN114782496A (en) Object tracking method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Yinxing Intelligent Group Co.,Ltd.

Address before: 518000 building A1, Yinxing hi tech Industrial Park, Guanlan street, Longhua District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.