CN116468876A - Three-dimensional reconstruction method and device for obstacle, mobile robot and medium - Google Patents


Info

Publication number
CN116468876A
Authority
CN
China
Prior art keywords
point cloud
dimensional
coordinate system
obstacle
dimensional point
Prior art date
Legal status
Pending
Application number
CN202310477426.5A
Other languages
Chinese (zh)
Inventor
蔡为燕
罗晗
Current Assignee
Beijing Stone Innovation Technology Co ltd
Original Assignee
Beijing Stone Innovation Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Stone Innovation Technology Co ltd filed Critical Beijing Stone Innovation Technology Co ltd
Priority to CN202310477426.5A priority Critical patent/CN116468876A/en
Publication of CN116468876A publication Critical patent/CN116468876A/en


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tesselation
    • G06T 7/00 — Image analysis
    • G06T 7/10 — Segmentation; Edge detection
    • G06T 7/136 — Segmentation; Edge detection involving thresholding
    • G06T 7/40 — Analysis of texture
    • G06T 7/41 — Analysis of texture based on statistical description of texture
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10028 — Range image; Depth image; 3D point clouds

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a three-dimensional reconstruction method and device for an obstacle, a mobile robot, and a medium. The method comprises: acquiring multiple frames of initial three-dimensional point clouds in the camera coordinate systems at different positions while the mobile robot executes a planar task; mapping each frame of initial three-dimensional point cloud to an environment three-dimensional point cloud in the world coordinate system, and fusing the obtained multi-frame environment three-dimensional point clouds to obtain an observed three-dimensional point cloud of a target obstacle; and sequentially segmenting and clustering the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle. The three-dimensional reconstruction method for an obstacle of the present application is low-cost, lightweight, computationally inexpensive, real-time, and easy to deploy. It is mainly suited to detecting small obstacles while the mobile robot executes a planar task, and can accurately acquire the shape of a small obstacle, thereby helping the mobile robot execute the corresponding task; it can also be used to assist in identifying small obstacles.

Description

Three-dimensional reconstruction method and device for obstacle, mobile robot and medium
Technical Field
The application relates to the technical field of obstacle detection, in particular to a three-dimensional reconstruction method and device of an obstacle, a mobile robot and a medium.
Background
As mobile robots become more intelligent, higher demands are placed on their environment-perception capabilities. When a mobile robot executes a task, it is expected not only to avoid obstacles in the surrounding environment but also to reconstruct their surface profiles in three dimensions, so that shape information about the obstacles can be obtained and they can be avoided accurately and smoothly.
Traditional three-dimensional reconstruction requires mobile equipment such as a handheld laser scanner to perform a 360-degree surrounding scan of the object to be measured, and yields a reconstruction result only after complex computation and a long wait; even then, the reconstruction effect is often unsatisfactory.
Disclosure of Invention
In view of this, embodiments of the present application provide a three-dimensional reconstruction method and device for an obstacle, a mobile robot, and a medium, so as to provide a three-dimensional reconstruction method for obstacles that is low-cost, lightweight, computationally inexpensive, real-time, and easy to deploy.
In a first aspect, an embodiment of the present application provides a three-dimensional reconstruction method of an obstacle, including:
acquiring multi-frame initial three-dimensional point clouds under camera coordinate systems at different positions when the mobile robot executes a plane task;
mapping the initial three-dimensional point cloud of each frame into an environment three-dimensional point cloud under a world coordinate system, and fusing the obtained multi-frame environment three-dimensional point cloud to obtain an observation three-dimensional point cloud of a target obstacle;
and sequentially segmenting and clustering the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle.
Optionally, in the above method, the mobile robot is equipped with a line laser module, and the acquiring of multi-frame initial three-dimensional point clouds in the camera coordinate systems at different positions while the mobile robot executes a planar task includes:
acquiring an original point cloud of the center of a laser line under a pixel coordinate system when the mobile robot executes a plane task according to a preset time period through light rays emitted by the laser module;
based on the mapping from the camera coordinate system to the pixel coordinate system, establishing a first constraint according to the original point cloud of the laser line center;
establishing a second constraint based on a line laser light plane equation under the camera coordinate system;
and determining a frame of initial three-dimensional point cloud under a camera coordinate system by combining the first constraint and the second constraint.
Optionally, in the above method, mapping the initial three-dimensional point cloud of each frame to an environmental three-dimensional point cloud in a world coordinate system includes:
determining a first transformation matrix of the camera coordinate system to the robot coordinate system;
reading the pose of the mobile robot in the world coordinate system, and determining from it a first coordinate of a designated axis of the robot coordinate system in the world coordinate system;
determining a second transformation matrix from the robot coordinate system to the world coordinate system according to the first coordinate and the designated axis unit coordinate of the world coordinate system;
and determining the environment three-dimensional point cloud of the initial three-dimensional point cloud of each frame under the world coordinate system according to the initial three-dimensional point cloud of each frame, the first transformation matrix and the second transformation matrix.
Optionally, in the above method, the sequentially segmenting and clustering the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle includes:
filtering the observed three-dimensional point cloud based on ground point cloud filtering conditions to obtain a first intermediate point cloud;
filtering the first intermediate point cloud based on the wall point cloud filtering condition to obtain a second intermediate point cloud;
clustering the second intermediate point cloud to obtain at least one point cloud set;
and determining whether the point cloud set is the three-dimensional reconstruction surface profile of the target obstacle according to a preset obstacle volume threshold value.
Optionally, in the above method, the filtering the observed three-dimensional point cloud based on the ground point cloud filtering condition to obtain a first intermediate point cloud includes:
determining whether the height value of each point cloud in the observed three-dimensional point cloud is larger than a preset ground height threshold value, if so, attributing the corresponding point cloud to the first intermediate point cloud;
the filtering of the first intermediate point cloud based on the wall point cloud filtering condition to obtain a second intermediate point cloud includes:
acquiring the wall surface distance acquired by radar laser loaded on the mobile robot;
and determining whether the distance value of each point cloud in the first intermediate point cloud is smaller than the wall surface distance, and if so, attributing the corresponding point cloud to the second intermediate point cloud.
Optionally, in the above method, the determining whether the point cloud set is a three-dimensional reconstructed surface contour of the target obstacle according to a preset obstacle volume threshold includes:
determining whether the volume of a point cloud set is smaller than the obstacle volume threshold, and if so, determining the point cloud set as a target obstacle, the point cloud set being the three-dimensional reconstruction surface profile of that target obstacle.
Optionally, the method further comprises:
extracting surface structural features of the target obstacle according to the three-dimensional reconstruction surface profile of the target obstacle, wherein the surface structural features at least comprise: curvature, corner points;
acquiring image information and texture information acquired by an image sensor mounted on the mobile robot;
the type of the target obstacle is determined in combination with the surface structure features, the image information, and the texture information.
In a second aspect, embodiments of the present application further provide a three-dimensional reconstruction apparatus of an obstacle, the apparatus including:
an acquisition unit, configured to acquire multi-frame initial three-dimensional point clouds in the camera coordinate systems at different positions while the mobile robot executes a planar task;
the mapping unit is used for mapping the initial three-dimensional point cloud of each frame into an environment three-dimensional point cloud under a world coordinate system, and fusing the obtained multi-frame environment three-dimensional point cloud to obtain an observation three-dimensional point cloud of the target obstacle;
and a reconstruction unit, configured to sequentially segment and cluster the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle.
In a third aspect, an embodiment of the present application further provides a mobile robot, including: a processor; and a memory for storing executable instructions of the processor, wherein the processor is configured to perform the three-dimensional reconstruction method of any of the obstacles described above via the executable instructions.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the three-dimensional reconstruction method of an obstacle according to any one of the above.
The method adopted by the embodiment of the application has the following beneficial effects:
When the mobile robot executes a planar task, multiple frames of initial three-dimensional point clouds in the camera coordinate systems at different positions are obtained; each frame of initial three-dimensional point cloud is then mapped to an environment three-dimensional point cloud in the world coordinate system, and the obtained multi-frame environment three-dimensional point clouds are fused to obtain an observed three-dimensional point cloud of the obstacle surface and of the local environment in which the obstacle is located; finally, the observed three-dimensional point cloud is sequentially segmented and filtered, and the retained point clouds are clustered to obtain the three-dimensional reconstruction surface profile of the target obstacle. The three-dimensional reconstruction method for an obstacle of the present application is low-cost, lightweight, computationally inexpensive, real-time, and easy to deploy. It is mainly suited to detecting small obstacles while the mobile robot executes a planar task, and can accurately acquire the shape of a small obstacle, thereby helping the mobile robot execute the corresponding task; it can also be used to assist in identifying small obstacles.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
FIG. 1 illustrates a flow diagram of a method of three-dimensional reconstruction of an obstacle according to some embodiments of the present application;
FIG. 2-a shows a schematic diagram of the mounting of a line laser module on a mobile robot according to one embodiment of the present application;
FIG. 2-b shows a schematic diagram of the mounting of a line laser module on a mobile robot according to one embodiment of the present application;
FIG. 3 shows a schematic diagram of a world coordinate system agreed upon in the arrangement of the line laser modules of FIG. 2-a or FIG. 2-b;
FIG. 4 shows a schematic diagram of a mobile robot during operation according to one embodiment of the present application;
FIG. 5 shows a flow diagram of a method of three-dimensional reconstruction of an obstacle according to another embodiment of the present application;
FIG. 6 illustrates a schematic structural diagram of a three-dimensional reconstruction device of an obstacle according to one embodiment of the present application;
fig. 7 is a schematic structural diagram of a mobile robot according to an embodiment of the present application.
Detailed Description
To make the purposes, technical solutions, and advantages of the present application clearer, the technical solutions of the present application will be described clearly and completely below with reference to specific embodiments of the present application and the corresponding drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the present disclosure without inventive effort fall within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
In the related art, a mobile robot performing a planar task is expected to perform three-dimensional reconstruction on the surface profile of obstacles, so as to obtain shape information of the obstacles and accurately and smoothly avoid the obstacles.
Traditional three-dimensional reconstruction requires mobile equipment such as a handheld laser scanner to perform a 360-degree surrounding scan of the object to be measured, and yields a reconstruction result only after complex computation and a long wait; even then, the reconstruction effect is often unsatisfactory.
To this end, the present application proposes a three-dimensional reconstruction method for an obstacle. FIG. 1 shows a flow chart of the method according to some embodiments of the present application; as can be seen from FIG. 1, the method includes at least steps S110 to S130:
Step S110: and acquiring multi-frame initial three-dimensional point clouds under camera coordinate systems at different positions when the mobile robot executes a plane task.
The present application is mainly applicable to robots that move and perform tasks on a plane, in particular to robots whose motion is planar, such as a sweeping robot.
When the mobile robot executes a planar task, for example when a sweeping robot moves across the floor while cleaning, obstacles on the floor need to be detected so that they can be avoided; the present application is mainly applicable to scenes in which obstacles of relatively small volume on the floor are detected. An object such as a small packing box or a small stool is defined here as a small obstacle. The mobile robot moves around such an obstacle during its motion and acquires its three-dimensional point cloud at different positions, denoted here as the initial three-dimensional point cloud; if the point cloud is not already in the camera coordinate system, it can be converted into it. The initial three-dimensional point cloud can be acquired with various forms of laser radar, camera, or the like; specifically, the laser radar or camera can be mounted at a designated position on the mobile robot in a specific pose, so that the initial three-dimensional point cloud around the mobile robot is acquired while it executes a planar task.
In some embodiments of the present application, a line laser sensor may be selected to acquire the initial three-dimensional point cloud, from the standpoint of both cost and effectiveness. The three-dimensional point cloud of the obstacle surface in the camera coordinate system can be recovered by combining the line-laser stripe pixels on the line laser image, the pre-calibrated camera intrinsic and extrinsic parameters, and the light-plane equation of the line laser in the camera coordinate system. Specifically, a line laser module is mounted on the mobile robot. The module can be mounted in various ways: a single horizontally projected line laser, two vertically or cross projected line lasers, or other placements. Compared with a single horizontally projected line laser, data acquired with two projected line lasers are more accurate, so the acquisition of the initial three-dimensional point cloud is described below taking two projected line lasers as an optional arrangement. FIG. 2-a and FIG. 2-b each show a schematic diagram of a mounting manner of the line laser module on the mobile robot according to an embodiment of the present application; as can be seen from them, the mounting manners include, but are not limited to, a front cross combination of two vertical line lasers (FIG. 2-a) and a cross combination of a horizontal and a vertical line laser (FIG. 2-b). The manners shown in FIG. 2-a and FIG. 2-b are only exemplary and do not limit the application; other mounting manners of the line laser module may be adopted and are not repeated here.
The following description takes the arrangements of the line laser modules shown in FIG. 2-a and FIG. 2-b as examples; the calculation is the same for both.
First, describing the coordinate system convention of the present application, fig. 3 shows a schematic diagram of a world coordinate system convention in the arrangement of the line laser module of fig. 2-a or fig. 2-b.
When the line laser emitted by the line laser module is projected onto a working plane (usually the ground), the initial three-dimensional point cloud of the working-plane environment in the camera coordinate system can be recovered. Specifically: acquiring, at a preset time period and via the light emitted by the laser module, an original point cloud of the laser line center in the pixel coordinate system while the mobile robot executes the planar task; establishing a first constraint from the original point cloud of the laser line center, based on the mapping from the camera coordinate system to the pixel coordinate system; establishing a second constraint based on the line laser light-plane equation in the camera coordinate system; and determining a frame of initial three-dimensional point cloud in the camera coordinate system by combining the first constraint and the second constraint.
When a time period is reached, the line laser module scans the environment while the mobile robot executes the planar task, yielding an original point cloud p = (u, v) of the laser line center in the pixel coordinate system, from which the corresponding point P_C = (X, Y, Z) in the camera coordinate system can be recovered. Specifically, according to the mapping K from the camera coordinate system to the pixel coordinate system (the camera intrinsic matrix, shown as formula 1)

K = [[f_x, 0, u_0], [0, f_y, v_0], [0, 0, 1]]  (formula 1)

the following first constraint shown as formula 2 may be established:

Z · [u, v, 1]^T = K · [X, Y, Z]^T, i.e. u = f_x · X/Z + u_0, v = f_y · Y/Z + v_0  (formula 2)

where f_x and f_y are the focal lengths in pixels, (u_0, v_0) is the principal point, and (u, v) represents any point of the original point cloud.
Moreover, since a point illuminated by the line laser must lie on the line laser light plane, the following second constraint shown as formula 3 can be established from the light-plane equation in the camera coordinate system:

A_laser · X + B_laser · Y + C_laser · Z + D_laser = 0  (formula 3)

where A_laser, B_laser, C_laser, and D_laser are the coefficients of the line laser light-plane equation.
Finally, combining the first constraint and the second constraint and solving yields one frame of initial three-dimensional point cloud in the camera coordinate system, comprising the three-dimensional coordinates (X, Y, Z) of a number of points, as shown in formula 4:

Z = −D_laser / (A_laser · (u − u_0)/f_x + B_laser · (v − v_0)/f_y + C_laser), X = (u − u_0) · Z / f_x, Y = (v − v_0) · Z / f_y  (formula 4)
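The per-pixel solve of formulas 1 to 4 can be sketched as follows. The intrinsic parameters f_x, f_y, u_0, v_0 and the light-plane coefficients are placeholder values standing in for the pre-calibrated camera intrinsics and line-laser plane calibration; they are not values from the application.

```python
import numpy as np

# Assumed calibration values (placeholders for the pre-calibrated parameters).
FX, FY = 600.0, 600.0            # focal lengths in pixels
U0, V0 = 320.0, 240.0            # principal point
A, B, C, D = 0.0, -0.96, 0.28, -0.15  # light-plane coefficients (formula 3)

def stripe_pixel_to_camera_point(u, v):
    """Recover the camera-frame 3D point for one laser-stripe pixel (u, v).

    Combines the projection constraint X = (u - u0) * Z / fx,
    Y = (v - v0) * Z / fy (formula 2) with the plane constraint
    A*X + B*Y + C*Z + D = 0 (formula 3) and solves for Z (formula 4).
    """
    a = (u - U0) / FX
    b = (v - V0) / FY
    denom = A * a + B * b + C
    if abs(denom) < 1e-9:
        return None  # viewing ray is parallel to the light plane
    Z = -D / denom
    return np.array([a * Z, b * Z, Z])
```

Applying this to every stripe pixel of one image yields one frame of initial three-dimensional point cloud in the camera coordinate system.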
FIG. 4 shows a schematic diagram of a mobile robot during operation according to an embodiment of the present application. As can be seen from FIG. 4, the mobile robot occupies different positions while performing a planar task, and by the above method an initial three-dimensional point cloud can be obtained at each of those positions.
It should be noted that in FIG. 4 the laser module of the mobile robot is shown emitting a single laser line; this is only an exemplary illustration, indicating that each time the mobile robot reaches a position it can obtain, via the laser module, an initial three-dimensional point cloud covering different angles of the obstacle. In a practical scenario, the laser shown in FIG. 4 is not limited to the form of the laser lines shown in FIG. 2-a or FIG. 2-b. The three-dimensional point cloud recovered by the mobile robot at one position using the above method is denoted as one frame of initial three-dimensional point cloud. The mobile robot moves continuously; each time a time period elapses it has reached a different position and recovers one frame of initial three-dimensional point cloud, so that over time multiple frames of initial three-dimensional point clouds in the camera coordinate systems at different positions are obtained.
Step S120: mapping each frame of initial three-dimensional point cloud to an environment three-dimensional point cloud in the world coordinate system, and fusing the obtained multi-frame environment three-dimensional point clouds to obtain the observed three-dimensional point cloud of the target obstacle.
Because obstacle detection is performed in the world coordinate system, the initial three-dimensional point cloud needs to be mapped into the world coordinate system. Specifically, it can be projected into the world coordinate system by combining the pose information of the robot in the world coordinate system given by the mobile robot's positioning module.
More specifically, in some embodiments of the present application, mapping the initial three-dimensional point cloud of each frame to an environmental three-dimensional point cloud in a world coordinate system includes: determining a first transformation matrix of the camera coordinate system to the robot coordinate system; reading and determining a first coordinate of a designated axis of the robot coordinate system under a world coordinate system according to the pose of the mobile robot under the world coordinate system; determining a second transformation matrix from the robot coordinate system to the world coordinate system according to the first coordinate and the designated axis unit coordinate of the world coordinate system; and determining the environment three-dimensional point cloud of the initial three-dimensional point cloud of each frame under the world coordinate system according to the initial three-dimensional point cloud of each frame, the first transformation matrix and the second transformation matrix.
That is, the transformation to the world coordinate system can be performed via a first transformation matrix, the transformation matrix from the camera coordinate system to the robot coordinate system, and a second transformation matrix, the transformation matrix from the robot coordinate system to the world coordinate system.
First, a first transformation matrix T_RC from the camera coordinate system to the robot coordinate system is determined. The first transformation matrix T_RC can be represented as formula 5:

T_RC = [[R_RC, t_RC], [0, 1]]  (formula 5)

In general, the first transformation matrix T_RC is known, or can be obtained from the hardware configuration of the mobile robot; here R_RC (a 3×3 rotation) and t_RC (a 3×1 translation) are the elements of T_RC at the corresponding positions.
For the second transformation matrix, the pose information of the mobile robot can be read from its positioning module. For the planar-motion case considered here, this is usually the coordinates x and y of the robot in the world coordinate system and the rotation angle θ of the robot coordinate system about the z-axis of the world coordinate system; from these, the second transformation matrix T_WR from the robot coordinate system to the world coordinate system can be obtained as shown in formula 6:

T_WR = [[cos θ, −sin θ, 0, x], [sin θ, cos θ, 0, y], [0, 0, 1, 0], [0, 0, 0, 1]]  (formula 6)

where R_WR (the rotation block) and t_WR (the translation block) are the elements of the second transformation matrix T_WR at the corresponding positions.
Finally, from the initial three-dimensional point cloud coordinates P_C = (x, y, z) in the camera coordinate system, combined with the first transformation matrix T_RC from the camera coordinate system to the robot coordinate system and the second transformation matrix T_WR from the robot coordinate system to the world coordinate system, the coordinates P_W of the initial three-dimensional point cloud in the world coordinate system can be obtained, as shown in formula 7:

P_W = T_WR · T_RC · P_C  (formula 7)

with P_C and P_W expressed in homogeneous coordinates.
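A minimal sketch of the chain of formulas 5 to 7, assuming the planar-robot pose (x, y, θ) described above; the helper names are illustrative, not from the application.

```python
import numpy as np

def make_T_RC(R_RC, t_RC):
    """Homogeneous camera->robot transform [[R, t], [0, 1]] (formula 5)."""
    T = np.eye(4)
    T[:3, :3] = R_RC
    T[:3, 3] = t_RC
    return T

def make_T_WR(x, y, theta):
    """Robot->world transform for a planar robot at (x, y), rotated by
    theta about the world z-axis (formula 6)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:2, 3] = [x, y]
    return T

def camera_points_to_world(P_C, T_RC, T_WR):
    """Map an (N, 3) camera-frame cloud to the world frame:
    P_W = T_WR @ T_RC @ P_C in homogeneous coordinates (formula 7)."""
    P_h = np.hstack([P_C, np.ones((P_C.shape[0], 1))])
    return (T_WR @ T_RC @ P_h.T).T[:, :3]
```

For example, with an identity camera-to-robot transform and the robot at (1, 2) rotated 90 degrees, the camera point (1, 0, 0) maps to the world point (1, 3, 0).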
The obtained multi-frame environment three-dimensional point clouds are fused to obtain the observed three-dimensional point cloud of the target obstacle, which can be defined as the observed point cloud of the small obstacle's surface and of the local environment in which it is located. The fusion can match points according to the degree of matching between the point clouds of each frame of environment three-dimensional point cloud, and then merge the point clouds together.
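The fusion step is not pinned down in detail above. One minimal sketch, assuming all frames are already in the world coordinate system, is to concatenate the frames and merge points that fall into the same voxel; the 5 mm voxel size is an assumed parameter, not a value from the application.

```python
import numpy as np

def fuse_frames(frames, voxel=0.005):
    """Fuse multi-frame world-frame clouds into one observed cloud.

    Since every frame shares the world coordinate system, the frames are
    stacked and points landing in the same voxel cell are merged,
    keeping one representative point per cell.
    """
    cloud = np.vstack(frames)
    keys = np.floor(cloud / voxel).astype(np.int64)   # voxel index per point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return cloud[np.sort(idx)]
```

More elaborate fusion (e.g. matching-degree-based registration between frames, as the paragraph above allows) would replace the simple voxel merge.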
Step S130: sequentially segmenting and clustering the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle.
Finally, the obtained observed three-dimensional point cloud is sequentially segmented and clustered. The purpose of segmentation is to eliminate noise data points, such as points belonging to the ground or to surrounding walls; the retained points are then clustered, each resulting cluster being taken as a target obstacle, whose points form the three-dimensional reconstruction surface profile of the detected small target obstacle.
Specifically, in some embodiments of the present application, the sequentially segmenting and clustering the observed three-dimensional point cloud to obtain a three-dimensional reconstructed surface profile of the target obstacle includes: filtering the observed three-dimensional point cloud based on ground point cloud filtering conditions to obtain a first intermediate point cloud; filtering the first intermediate point cloud based on the wall point cloud filtering condition to obtain a second intermediate point cloud; clustering the second intermediate point cloud to obtain at least one point cloud set; and determining whether the point cloud set is the three-dimensional reconstruction surface profile of the target obstacle according to a preset obstacle volume threshold value.
The obtained observed three-dimensional point cloud may contain noise data, such as points belonging to the ground or to a wall surface, which would affect the reconstruction result, so the noise should be excluded. Specifically, the observed three-dimensional point cloud may be filtered based on a ground point cloud filtering condition to obtain the first intermediate point cloud, which more specifically includes: determining whether the height value of each point in the observed three-dimensional point cloud is larger than a preset ground height threshold, and if so, attributing that point to the first intermediate point cloud.
A relatively small ground height threshold, such as 1 cm, may be set. For each point of the observed three-dimensional point cloud, it is determined whether the height of the point is greater than the preset ground height threshold; if so, the point is retained, i.e. attributed to the first intermediate point cloud, and if its height is less than or equal to the threshold, the point is excluded. Applying this to every point of the observed three-dimensional point cloud eliminates the points belonging to the ground and yields the first intermediate point cloud.
Noise data belonging to the wall surface can be eliminated based on the wall point cloud filtering condition, which specifically includes: acquiring the wall surface distance collected by a laser radar mounted on the mobile robot; and determining whether the distance value of each point in the first intermediate point cloud is smaller than the wall surface distance, and if so, attributing the corresponding point to the second intermediate point cloud.
Besides the line laser module, the mobile robot is generally also equipped with several other sensors for collecting environmental information around it, such as a laser radar, a color camera, or a depth camera. The wall distance collected by the laser radar or another sensor, such as a camera, is read, and it is determined whether the distance value of each point in the first intermediate point cloud is smaller than the collected wall distance; if so, the point is retained, i.e., attributed to the second intermediate point cloud, and if the distance value of the point is greater than or equal to the collected wall distance, the point is removed. Performing this processing on every point in the first intermediate point cloud removes the points belonging to the wall surface and yields the second intermediate point cloud. Here, the wall distance may be defined as the distance between a sensor carried on the mobile robot and a wall in the environment, and the distance value of a point may be defined as the distance between that point and the robot.
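The wall filter can be sketched in the same style. This is a hedged sketch under the assumption that each point's "distance value" is its planar range from the robot origin, so points at or beyond the measured wall distance are treated as wall returns; the function name and layout are illustrative.

```python
import numpy as np

def filter_wall(points, wall_distance):
    """Keep points strictly closer to the robot than the measured wall distance.

    points: (N, 3) array of [x, y, z] in the robot frame; wall_distance is
    the sensor-to-wall range in metres. Points whose planar (x, y) range is
    >= wall_distance are assumed to lie on the wall and are discarded.
    """
    ranges = np.linalg.norm(points[:, :2], axis=1)
    return points[ranges < wall_distance]
```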
After the observed three-dimensional point cloud is segmented, the second intermediate point cloud is retained and clustered. The clustering basis may be the distance between points, including but not limited to the Euclidean distance, cosine distance, and the like. One or more point cloud sets are obtained after clustering, and each point cloud set can serve as a candidate target obstacle.
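A distance-based clustering step of the kind described can be sketched as a greedy Euclidean region-growing pass. This is an illustrative O(N²) sketch, not the patent's algorithm; a practical system would use a k-d tree (e.g. PCL's Euclidean cluster extraction), and the tolerance value is an assumption.

```python
import numpy as np

def euclidean_cluster(points, tol=0.05):
    """Greedy Euclidean clustering: points within `tol` metres of any
    existing member join that member's cluster. Returns a list of index
    arrays, one per cluster. Naive O(N^2) sketch for illustration."""
    n = len(points)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        queue = [seed]
        visited[seed] = True
        members = []
        while queue:
            i = queue.pop()
            members.append(i)
            # all unvisited points within tol of point i join the cluster
            near = np.where(~visited &
                            (np.linalg.norm(points - points[i], axis=1) <= tol))[0]
            visited[near] = True
            queue.extend(near.tolist())
        clusters.append(np.array(members))
    return clusters
```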
Finally, determining whether a point cloud set is the three-dimensional reconstructed surface profile of the target obstacle according to the preset obstacle volume threshold includes: determining whether the volume of the point cloud set is smaller than the obstacle volume threshold; if so, determining the point cloud set as a target obstacle and as the three-dimensional reconstructed surface profile of the target obstacle; if not, ignoring the point cloud set.
That is, the segmented point clouds are clustered, and a point cloud set that conforms to a preset small-obstacle volume threshold is judged to be the three-dimensional point cloud set of a small obstacle; the surface and profile formed by that point cloud set constitute the three-dimensional reconstructed surface profile of the target obstacle.
More specifically, it is determined whether the volume of a point cloud set is smaller than the preset obstacle volume threshold; if so, the point cloud set is determined to be a target obstacle to be identified and is taken as the three-dimensional reconstructed surface profile of the target obstacle; if the volume of a point cloud set is greater than or equal to the preset obstacle volume threshold, the point cloud set is ignored.
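The volume gate can be sketched as below. As a hedged simplification, the cluster's volume is approximated here by its axis-aligned bounding box; the patent does not specify how the volume is computed, and the threshold value is an assumption.

```python
import numpy as np

def is_small_obstacle(cluster_points, volume_threshold=0.01):
    """Accept a cluster as a small-obstacle candidate only if its
    axis-aligned bounding-box volume (m^3) is below the threshold.

    cluster_points: (N, 3) array for one clustered point cloud set.
    """
    extents = cluster_points.max(axis=0) - cluster_points.min(axis=0)
    return float(np.prod(extents)) < volume_threshold
```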
As can be seen from the method shown in fig. 1, when the mobile robot executes a planar task, the method acquires multi-frame initial three-dimensional point clouds under camera coordinate systems at different positions, maps each frame of initial three-dimensional point cloud into an environment three-dimensional point cloud under the world coordinate system, and fuses the obtained multi-frame environment three-dimensional point clouds to obtain an observed three-dimensional point cloud of the obstacle surface and the local environment where the obstacle is located. Finally, the observed three-dimensional point cloud is sequentially segmented and filtered, and the retained points are clustered to obtain the three-dimensional reconstructed surface profile of the target obstacle. This three-dimensional reconstruction method has the characteristics of low cost, light weight, small computational load, high real-time performance, and easy deployment. It is mainly suitable for detecting small obstacles when the mobile robot executes a planar task, and can accurately acquire the shape of a small obstacle, thereby helping the mobile robot execute the corresponding task; the method can also be used for auxiliary identification of small obstacles.
In some embodiments of the present application, the method further comprises: extracting surface structural features of the target obstacle according to the three-dimensional reconstruction surface profile of the target obstacle, wherein the surface structural features at least comprise: curvature, corner points; acquiring image information and texture information acquired by an image sensor mounted on the mobile robot; the type of the target obstacle is determined in combination with the surface structure features, the image information, and the texture information.
After the three-dimensional reconstructed surface profile of the small obstacle is obtained, its surface structural features, including curvature, corner points, and the like, can be extracted. Further combined with images (RGB or depth maps) obtained by an image sensor, or other texture information, the obstacle type can be identified. A pre-trained target classification model may be used for this, for example one built and trained on the basis of a neural network; when the three-dimensional reconstructed surface profile information of the obstacle, the images obtained by the image sensor, and the texture information are input into the target classification model, the shape and type of the obstacle can be determined, so that the mobile robot can clean and avoid obstacles more intelligently.
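One common way to obtain a curvature-like surface structural feature from a reconstructed point set is the PCA-based "surface variation" measure. This is offered as a hedged sketch of one such feature extractor, not the patent's specific method; the function name and neighbourhood handling are assumptions.

```python
import numpy as np

def surface_variation(neighborhood):
    """PCA-based surface-variation estimate, a common stand-in for curvature:
    lambda_min / (lambda_0 + lambda_1 + lambda_2) of the neighbourhood's
    covariance matrix. It is 0 for a perfect plane and approaches 1/3 for
    isotropic scatter.

    neighborhood: (N, 3) array of points around the query point.
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    eigvals = np.linalg.eigvalsh(cov)  # ascending order
    total = eigvals.sum()
    return float(eigvals[0] / total) if total > 0 else 0.0
```

The resulting scalar, together with corner responses and image/texture features, could form the input vector of a classifier of the kind the text describes.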
Fig. 5 shows a flow chart of a three-dimensional reconstruction method of an obstacle according to another embodiment of the present application, and as can be seen from fig. 5, the present embodiment includes:
in the process of executing a planar task, the mobile robot acquires an original point cloud of the laser line center under the pixel coordinate system according to a preset time period, and restores multi-frame initial three-dimensional point clouds under camera coordinate systems at different positions from the original point cloud.
And mapping the initial three-dimensional point cloud of each frame into an environment three-dimensional point cloud under a world coordinate system, and fusing the obtained multi-frame environment three-dimensional point cloud to obtain an observation three-dimensional point cloud of the target obstacle.
For any point cloud in the observed three-dimensional point cloud, determining whether the height value of the point cloud is larger than a ground height threshold; if not, the corresponding point cloud is eliminated; if yes, the corresponding point cloud is attributed to the first intermediate point cloud.
For any point cloud in the first intermediate point cloud, determining whether the distance value of the point cloud is smaller than the wall surface distance, and if not, eliminating the corresponding point cloud; if yes, the corresponding point cloud is attributed to the second intermediate point cloud.
And clustering the second intermediate point cloud to obtain a plurality of point cloud sets.
For any point cloud set, determining whether the volume of one point cloud set is smaller than an obstacle volume threshold, and if not, ignoring the point cloud set; if yes, determining the point cloud set as a target obstacle, and determining the point cloud set as a three-dimensional reconstruction surface contour of the target obstacle.
And extracting the surface structure characteristics of the target obstacle, and determining the type of the target obstacle by combining the image information and the texture information acquired by an image sensor mounted on the mobile robot.
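The segment-filter-cluster-gate portion of the Fig. 5 flow above can be condensed into one end-to-end sketch. All function and parameter names, the bounding-box volume approximation, and the default thresholds are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def reconstruct_small_obstacles(points, ground_h=0.01, wall_d=3.0,
                                cluster_tol=0.05, vol_thresh=0.01):
    """Sketch of the pipeline: ground filter, wall filter, Euclidean
    clustering, volume gating. Returns the surviving clusters, each a
    candidate small-obstacle surface profile."""
    pts = points[points[:, 2] > ground_h]                    # ground filter
    pts = pts[np.linalg.norm(pts[:, :2], axis=1) < wall_d]   # wall filter
    # naive O(N^2) Euclidean clustering by region growing
    visited = np.zeros(len(pts), dtype=bool)
    clusters = []
    for seed in range(len(pts)):
        if visited[seed]:
            continue
        queue, members = [seed], []
        visited[seed] = True
        while queue:
            i = queue.pop()
            members.append(i)
            near = np.where(~visited &
                            (np.linalg.norm(pts - pts[i], axis=1) <= cluster_tol))[0]
            visited[near] = True
            queue.extend(near.tolist())
        clusters.append(pts[members])
    # volume gating via an axis-aligned bounding box
    keep = []
    for c in clusters:
        ext = c.max(axis=0) - c.min(axis=0)
        if float(np.prod(ext)) < vol_thresh:
            keep.append(c)
    return keep
```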
Fig. 6 shows a schematic structural diagram of a three-dimensional reconstruction apparatus of an obstacle according to an embodiment of the present application, the apparatus comprising:
an acquiring unit 610, configured to acquire a multi-frame initial three-dimensional point cloud under a camera coordinate system at different positions when the mobile robot performs a planar task;
the mapping unit 620 is configured to map the initial three-dimensional point cloud of each frame into an environmental three-dimensional point cloud under a world coordinate system, and fuse the obtained multi-frame environmental three-dimensional point clouds to obtain an observed three-dimensional point cloud of the obstacle;
and the reconstruction unit 630 is configured to sequentially segment and cluster the observed three-dimensional point cloud, so as to obtain a three-dimensional reconstructed surface profile of the target obstacle.
In some embodiments of the present application, in the above apparatus, the mobile robot is equipped with a line laser module; the acquiring unit 610 is configured to acquire, according to a preset time period, an original point cloud of the laser line center under the pixel coordinate system from the light emitted by the laser module; establish a first constraint according to the original point cloud of the laser line center, based on the mapping from the camera coordinate system to the pixel coordinate system; establish a second constraint based on the line laser light plane equation under the camera coordinate system; and determine a frame of initial three-dimensional point cloud under the camera coordinate system by combining the first constraint and the second constraint.
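Combining the two constraints amounts to standard line-laser triangulation: the first (pinhole) constraint fixes the back-projection ray of each laser-line pixel, and the second fixes the light plane the point must lie on. The sketch below assumes a pinhole camera with intrinsic matrix K and a plane aX + bY + cZ + d = 0 in the camera frame; the exact parameterization in the patent is not specified.

```python
import numpy as np

def triangulate_laser_point(u, v, K, plane):
    """Intersect the back-projection ray of pixel (u, v) (first constraint)
    with the line-laser light plane a*X + b*Y + c*Z + d = 0 (second
    constraint), yielding one 3D point in the camera frame.

    K: 3x3 pinhole intrinsic matrix; plane: (a, b, c, d) coefficients.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # direction at Z = 1
    a, b, c, d = plane
    t = -d / (a * ray[0] + b * ray[1] + c)  # depth (Z) along the ray
    return t * ray
```

Applying this to every laser-line-center pixel of one image yields one frame of initial three-dimensional point cloud under the camera coordinate system.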
In some embodiments of the present application, in the above apparatus, the mapping unit 620 is configured to determine a first transformation matrix of the camera coordinate system to the robot coordinate system; reading and determining a first coordinate of a designated axis of the robot coordinate system under a world coordinate system according to the pose of the mobile robot under the world coordinate system; determining a second transformation matrix from the robot coordinate system to the world coordinate system according to the first coordinate and the designated axis unit coordinate of the world coordinate system; and determining the environment three-dimensional point cloud of the initial three-dimensional point cloud of each frame under the world coordinate system according to the initial three-dimensional point cloud of each frame, the first transformation matrix and the second transformation matrix.
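The two transformation matrices described above chain together as ordinary rigid-body transforms. A minimal sketch, assuming 4x4 homogeneous matrices (the patent does not fix the representation):

```python
import numpy as np

def to_world(points_cam, T_robot_cam, T_world_robot):
    """Map a frame of camera-coordinate points into the world frame via two
    rigid transforms: camera->robot (first matrix), then robot->world
    (second matrix). Each T is a 4x4 homogeneous matrix; points are (N, 3)."""
    homog = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    world = (T_world_robot @ T_robot_cam @ homog.T).T
    return world[:, :3]
```

Accumulating the output over successive frames gives the fused environment three-dimensional point cloud under the world coordinate system.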
In some embodiments of the present application, in the foregoing apparatus, the reconstruction unit 630 is configured to filter the observed three-dimensional point cloud based on a ground point cloud filtering condition to obtain a first intermediate point cloud; filtering the first intermediate point cloud based on the wall point cloud filtering condition to obtain a second intermediate point cloud; clustering the second intermediate point cloud to obtain at least one point cloud set; and determining whether the point cloud set is the three-dimensional reconstruction surface profile of the target obstacle according to a preset obstacle volume threshold value.
In some embodiments of the present application, in the foregoing apparatus, the reconstruction unit 630 is configured to determine whether the height value of each point in the observed three-dimensional point cloud is greater than a preset ground height threshold, and if so, attribute the corresponding point to the first intermediate point cloud; and is further configured to acquire the wall surface distance collected by a laser radar mounted on the mobile robot, determine whether the distance value of each point in the first intermediate point cloud is smaller than the wall surface distance, and if so, attribute the corresponding point to the second intermediate point cloud.
In some embodiments of the present application, in the foregoing apparatus, the reconstruction unit 630 is configured to determine whether a volume of one point cloud set is smaller than the obstacle volume threshold, if so, determine the point cloud set as a target obstacle, and determine the point cloud set as a three-dimensional reconstructed surface contour of the target obstacle.
In some embodiments of the present application, the apparatus further comprises: the object classification unit is used for extracting the surface structure characteristics of the object obstacle according to the three-dimensional reconstruction surface profile of the object obstacle, and the surface structure characteristics at least comprise: curvature, corner points; acquiring image information and texture information acquired by an image sensor mounted on the mobile robot; the type of the target obstacle is determined in combination with the surface structure features, the image information, and the texture information.
It should be noted that, the three-dimensional reconstruction device of the obstacle may implement the three-dimensional reconstruction method of the obstacle, which is not described herein.
Fig. 7 is a schematic structural view of a mobile robot according to an embodiment of the present application. Referring to fig. 7, at the hardware level, the mobile robot includes a processor and, optionally, an internal bus, a network interface, and a memory. The memory may include a volatile memory, such as a Random-Access Memory (RAM), and may further include a non-volatile memory, such as at least one disk memory. Of course, the mobile robot may also include hardware required for other services.
The processor, network interface, and memory may be interconnected by an internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, or an EISA (Extended Industry Standard Architecture) bus, among others. Buses may be classified as address buses, data buses, control buses, etc. For ease of illustration, only one bi-directional arrow is shown in fig. 7, but this does not mean there is only one bus or one type of bus.
And the memory is used for storing programs. In particular, the program may include program code including computer-operating instructions. The memory may include memory and non-volatile storage and provide instructions and data to the processor.
The processor reads the corresponding computer program from the nonvolatile memory to the memory and then runs the computer program to form the three-dimensional reconstruction device of the barrier on the logic level. And the processor is used for executing the program stored in the memory and particularly used for executing the method.
The method performed by the three-dimensional reconstruction device of the obstacle disclosed in the embodiment shown in fig. 6 of the present application may be applied to a processor or implemented by the processor. The processor may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in a processor or by instructions in the form of software. The processor may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; it may also be a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed thereby. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be embodied directly in hardware, in a decoding processor, or in a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or register. The storage medium is located in the memory, and the processor reads the information in the memory and, in combination with its hardware, performs the steps of the above method.
The mobile robot may also execute the method executed by the three-dimensional reconstruction device of the obstacle in fig. 6, and implement the function of the three-dimensional reconstruction device of the obstacle in the embodiment shown in fig. 6, which is not described herein.
The embodiments of the present application also provide a computer-readable storage medium storing one or more programs, the one or more programs including instructions, which when executed by a mobile robot including a plurality of application programs, enable the mobile robot to perform a method performed by a three-dimensional reconstruction apparatus of an obstacle in the embodiment shown in fig. 6, and in particular to perform the aforementioned method.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises that element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (10)

1. A method for three-dimensional reconstruction of an obstacle, comprising:
acquiring multi-frame initial three-dimensional point clouds under camera coordinate systems at different positions when the mobile robot executes a plane task;
mapping the initial three-dimensional point cloud of each frame into an environment three-dimensional point cloud under a world coordinate system, and fusing the obtained multi-frame environment three-dimensional point cloud to obtain an observation three-dimensional point cloud of a target obstacle;
and sequentially dividing and clustering the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle.
2. The method of claim 1, wherein the mobile robot is equipped with a line laser module;
the method for acquiring the multi-frame initial three-dimensional point cloud under the camera coordinate systems of different positions when the mobile robot executes the plane task comprises the following steps:
acquiring an original point cloud of the center of a laser line under a pixel coordinate system according to a preset time period through the light rays emitted by the laser module;
based on the mapping from the camera coordinate system to the pixel coordinate system, establishing a first constraint according to the original point cloud of the laser line center;
establishing a second constraint based on a line laser light plane equation under the camera coordinate system;
And determining a frame of initial three-dimensional point cloud under a camera coordinate system by combining the first constraint and the second constraint.
3. The method of claim 1, wherein mapping the initial three-dimensional point cloud for each frame to an ambient three-dimensional point cloud in a world coordinate system comprises:
determining a first transformation matrix of the camera coordinate system to the robot coordinate system;
reading and determining a first coordinate of a designated axis of the robot coordinate system under a world coordinate system according to the pose of the mobile robot under the world coordinate system;
determining a second transformation matrix from the robot coordinate system to the world coordinate system according to the first coordinate and the designated axis unit coordinate of the world coordinate system;
and determining the environment three-dimensional point cloud of the initial three-dimensional point cloud of each frame under the world coordinate system according to the initial three-dimensional point cloud of each frame, the first transformation matrix and the second transformation matrix.
4. The method of claim 1, wherein the sequentially segmenting and clustering the observed three-dimensional point cloud to obtain a three-dimensional reconstructed surface profile of the target obstacle comprises:
filtering the observed three-dimensional point cloud based on ground point cloud filtering conditions to obtain a first intermediate point cloud;
Filtering the first intermediate point cloud based on the wall point cloud filtering condition to obtain a second intermediate point cloud;
clustering the second intermediate point cloud to obtain at least one point cloud set;
and determining whether the point cloud set is the three-dimensional reconstruction surface profile of the target obstacle according to a preset obstacle volume threshold value.
5. The method of claim 4, wherein filtering the observed three-dimensional point cloud based on ground point cloud filtering conditions to obtain a first intermediate point cloud comprises:
determining whether the height value of each point cloud in the observed three-dimensional point cloud is larger than a preset ground height threshold value, if so, attributing the corresponding point cloud to the first intermediate point cloud;
the filtering of the first intermediate point cloud based on the wall point cloud filtering condition to obtain a second intermediate point cloud includes:
acquiring the wall surface distance collected by a laser radar mounted on the mobile robot;
and determining whether the distance value of each point cloud in the first intermediate point cloud is smaller than the wall surface distance, and if so, attributing the corresponding point cloud to the second intermediate point cloud.
6. The method of claim 4, wherein the determining whether the point cloud set is a three-dimensional reconstructed surface contour of the target obstacle according to a preset obstacle volume threshold comprises:
Determining whether the volume of one point cloud set is smaller than the barrier volume threshold, if so, determining the point cloud set as a target barrier, and determining the point cloud set as a three-dimensional reconstruction surface profile of the target barrier.
7. The method according to claim 1, wherein the method further comprises:
extracting surface structural features of the target obstacle according to the three-dimensional reconstruction surface profile of the target obstacle, wherein the surface structural features at least comprise: curvature, corner points;
acquiring image information and texture information acquired by an image sensor mounted on the mobile robot;
the type of the target obstacle is determined in combination with the surface structure features, the image information, and the texture information.
8. A three-dimensional reconstruction apparatus of an obstacle, the apparatus comprising:
the mobile robot comprises an acquisition unit, a calculation unit and a calculation unit, wherein the acquisition unit is used for acquiring multi-frame initial three-dimensional point clouds under camera coordinate systems at different positions when the mobile robot executes a plane task;
the mapping unit is used for mapping the initial three-dimensional point cloud of each frame into an environment three-dimensional point cloud under a world coordinate system, and fusing the obtained multi-frame environment three-dimensional point cloud to obtain an observation three-dimensional point cloud of the obstacle;
And the reconstruction unit is used for sequentially dividing and clustering the observed three-dimensional point cloud to obtain the three-dimensional reconstruction surface profile of the target obstacle.
9. A mobile robot, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of three-dimensional reconstruction of an obstacle according to any one of claims 1 to 7 via execution of the executable instructions.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the three-dimensional reconstruction method of an obstacle according to any one of claims 1 to 7.
CN202310477426.5A 2023-04-27 2023-04-27 Three-dimensional reconstruction method and device for obstacle, mobile robot and medium Pending CN116468876A (en)

Publications (1)

Publication Number Publication Date
CN116468876A true CN116468876A (en) 2023-07-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination