CN107845114B - Map construction method and device and electronic equipment - Google Patents

Map construction method and device and electronic equipment

Info

Publication number
CN107845114B
CN107845114B (application CN201711104367.8A)
Authority
CN
China
Prior art keywords
depth image
pose
current
target
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711104367.8A
Other languages
Chinese (zh)
Other versions
CN107845114A (en)
Inventor
王民航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN201711104367.8A priority Critical patent/CN107845114B/en
Publication of CN107845114A publication Critical patent/CN107845114A/en
Application granted granted Critical
Publication of CN107845114B publication Critical patent/CN107845114B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 — Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 — Geographical information databases
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 — 2D [Two Dimensional] image generation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/70 — Determining position or orientation of objects or cameras
    • G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 — Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The application provides a map construction method, a map construction apparatus, and an electronic device. One embodiment of the map construction method includes the following steps: determining the current pose of an acquisition device according to the current frame depth image acquired by the acquisition device; optimizing that pose according to historical depth images acquired by the acquisition device to obtain a target pose; and constructing a map according to the target pose. Because this embodiment optimizes the current pose of the acquisition device against the historical depth images, it yields a higher-precision target pose, and constructing the map from that target pose reduces the errors that gradually accumulate over time and improves map construction accuracy.

Description

Map construction method and device and electronic equipment
Technical Field
The present disclosure relates to the field of navigation positioning technologies, and in particular, to a method and an apparatus for constructing a map, and an electronic device.
Background
Currently, with the wide adoption of robots, unmanned aerial vehicles, and similar technologies, visual positioning within navigation and positioning technology, and map construction built on visual positioning, are becoming more and more important. In the related art, when a map is constructed visually, the errors produced accumulate gradually over time, so the map construction error grows larger and larger and is difficult to eliminate, which reduces the accuracy of map construction.
Disclosure of Invention
To solve one of the above technical problems, the present application provides a map construction method, a map construction apparatus, and an electronic device.
According to a first aspect of an embodiment of the present application, there is provided a map construction method, including:
determining the pose of the acquisition equipment according to the current frame of depth image acquired by the acquisition equipment;
optimizing the pose according to the historical depth image acquired by the acquisition equipment to obtain a target pose;
and constructing a map according to the target pose.
Optionally, the determining the current pose of the acquisition device according to the current frame depth image acquired by the acquisition device includes:
when a preset event has not occurred, determining the current pose of the acquisition device based on the current motion data of the acquisition device and the current frame depth image.
Optionally, the determining the current pose of the acquisition device according to the current frame depth image acquired by the acquisition device further includes:
when a preset event occurs, determining the current pose of the acquisition device based on the historical depth images acquired by the acquisition device and the current frame depth image.
Optionally, the optimizing the pose according to the historical depth image acquired by the acquisition device includes:
acquiring part or all of key frame images in the historical depth image as target images;
and optimizing the pose according to the target image.
Optionally, the acquiring part or all of the key frame images among the historical depth images as the target image includes:
acquiring the target image from pre-stored target data, wherein the key frame images among the historical depth images are recorded in the target data.
Optionally, the target data is obtained by storing in the following manner:
determining a first key frame image from the depth image acquired by the acquisition equipment;
storing and recording the first key frame image into the target data;
after the first key frame image is determined, detecting each frame of depth image sequentially acquired by the acquisition device, so as to sequentially determine key frame images; for any sequentially acquired frame of depth image, if the detection result indicates that the parallax between that frame and the previous key frame image is larger than a preset parallax and that frame has more than a preset number of matched landmark points, determining that frame of depth image to be a key frame image;
and sequentially storing and recording each frame of sequentially determined key frame image into the target data.
Optionally, the optimizing the pose according to the target image includes:
and when a preset event does not occur, optimizing the pose according to the target image and the current motion data of the acquisition equipment.
Optionally, the optimizing the pose according to the target image further includes:
and when a preset event occurs, optimizing the pose according to the target image and the current frame depth image.
According to a second aspect of embodiments of the present application, there is provided a map building apparatus, including:
the determining module is used for determining the pose of the acquisition equipment according to the current frame of depth image acquired by the acquisition equipment;
the optimizing module is used for optimizing the pose according to the historical depth image acquired by the acquisition equipment to obtain a target pose;
and the construction module is used for constructing a map according to the target pose.
According to a third aspect of embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of constructing a map according to any one of the first aspects when executing the program.
The technical scheme provided by the embodiment of the application can comprise the following beneficial effects:
according to the map construction method, the map construction device and the electronic equipment, the pose of the current acquisition equipment is determined according to the current frame of depth image acquired by the acquisition equipment, the pose is optimized according to the history depth image acquired by the acquisition equipment, the target pose is obtained, and the map is constructed according to the target pose. According to the method and the device for constructing the map, the pose of the current acquisition device can be optimized according to the historical depth image acquired by the acquisition device, the target pose with higher precision is obtained, and the map is constructed based on the target pose, so that errors gradually accumulated due to time can be reduced, and the accuracy of map construction is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of a method of constructing a map according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of another map construction method illustrated in accordance with an exemplary embodiment of the present application;
FIG. 3 is a flow chart of another map construction method illustrated in accordance with an exemplary embodiment of the present application;
FIG. 4 is a block diagram of a map construction apparatus according to an exemplary embodiment of the present application;
FIG. 5 is a block diagram of another map construction apparatus according to an exemplary embodiment of the present application;
FIG. 6 is a block diagram of another map construction apparatus according to an exemplary embodiment of the present application;
FIG. 7 is a block diagram of another map construction apparatus according to an exemplary embodiment of the present application;
fig. 8 is a schematic structural view of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present application, as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, a first message may also be referred to as a second message, and similarly, a second message may also be referred to as a first message, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
To implement the map construction scheme provided in the present application, an acquisition device (for example, a navigation and positioning robot or an unmanned aerial vehicle) for collecting visual information and inertial information may first be prepared, and a visual information acquisition apparatus (for example, a binocular camera or an RGBD camera for acquiring depth images) and an inertial information acquisition apparatus (for example, an inertial measurement unit for acquiring motion data) may be disposed on the acquisition device. The acquisition device is then allowed to move freely in the target area while collecting visual information and inertial information in real time. The collected visual and inertial information can be processed in real time to construct a map of the target area. It should be noted that the acquisition device may carry its own processor, which directly processes the collected visual and inertial information to construct the map; alternatively, the acquisition device may transmit the collected visual and inertial information over a network to another electronic device for map construction. The present application is not limited in this respect.
As shown in fig. 1, fig. 1 is a flowchart illustrating a map construction method according to an exemplary embodiment; the method may be applied to an electronic device. Those skilled in the art will appreciate that the electronic device may include, but is not limited to, a navigation and positioning robot, a drone, or a mobile terminal device such as a smart phone, a laptop, a tablet, or a desktop computer. The method comprises the following steps:
in step 101, the pose of the current acquisition device is determined according to the current frame depth image acquired by the acquisition device.
In this embodiment, the current pose of the acquisition device may be determined based on the current motion data of the acquisition device and the current frame depth image, or based on the historical depth images acquired by the acquisition device and the current frame depth image. It is to be understood that the present application is not limited in this respect.
In this embodiment, the acquisition device may include a visual information acquisition apparatus, and may further include an inertial information acquisition apparatus. The visual information acquisition apparatus is used to acquire depth images in real time and may be any device capable of acquiring depth images, including, but not limited to, a binocular camera or an RGBD camera. The inertial information acquisition apparatus is used to acquire motion data in real time and may include, but is not limited to, an IMU (Inertial Measurement Unit). The present application does not limit the specific hardware provided on the acquisition device.
The current motion data is the inertial information currently collected by the acquisition device, such as acceleration and angular velocity along each axis of three-dimensional space. The current frame depth image is the frame of depth-image data currently acquired by the acquisition device. The current pose of the acquisition device can be determined from the current motion data of the acquisition device and/or the historical depth images acquired by the device, in combination with the current frame depth image, where the pose may include both position and orientation.
In step 102, the pose of the current acquisition device is optimized according to the historical depth image acquired by the acquisition device, and the target pose is obtained.
In this embodiment, the error in the current pose of the acquisition device accumulates and grows over time, so the current pose may be optimized according to the historical depth images acquired by the acquisition device, thereby obtaining a more accurate target pose. A historical depth image is a depth image acquired by the acquisition device before the current moment. Specifically, part or all of the key frame images among the historical depth images can be taken as target images, and the current pose of the acquisition device optimized according to those target images. For example, the optimization can proceed as follows: the current pose may be optimized according to the target images and the current motion data of the acquisition device, or according to the target images and the current frame depth image. It is to be understood that the present application is not limited to these particular optimization approaches.
In this embodiment, a key frame image may be a historical depth image saved while the acquisition device captures depth images. Generally, after the first key frame image is determined (usually the first depth frame acquired), each subsequently acquired depth frame is checked against a preset condition: the parallax between that frame and the previous key frame image must be greater than a preset parallax, and the frame must have more than a preset number of matched landmark points. If the condition is met, the frame is determined to be a key frame image and is saved after use. If the condition is not met, the frame is not a key frame image and is discarded after use.
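The keyframe rule above can be sketched as follows. This is a minimal illustration; the thresholds (`min_parallax`, `min_matches`) and the tuple representation of a frame are assumptions for demonstration, not values or structures taken from the patent:

```python
def select_keyframes(frames, min_parallax=20.0, min_matches=30):
    """frames: list of (frame_id, parallax_to_last_keyframe, matched_landmarks).

    The first frame is always kept as the first keyframe; every later frame
    becomes a keyframe only if its parallax w.r.t. the previous keyframe
    exceeds min_parallax AND it still shares more than min_matches matched
    landmark points.
    """
    keyframes = []
    for frame_id, parallax, matches in frames:
        if not keyframes:
            keyframes.append(frame_id)          # first keyframe
        elif parallax > min_parallax and matches > min_matches:
            keyframes.append(frame_id)          # passes both checks
        # otherwise: frame is used for tracking, then discarded
    return keyframes
```

For example, `select_keyframes([(0, 0, 0), (1, 5, 80), (2, 25, 60), (3, 30, 10)])` keeps frames 0 and 2: frame 1 fails the parallax check, and frame 3 fails the matched-landmark check.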
In this embodiment, all key frame images may be used as target images, or only those key frame images acquired within a period closer to the current time.
In step 103, a map is constructed from the target pose.
In the present embodiment, first, the newly added landmark points in the current frame depth image (i.e., landmark points newly added compared to the previous frame depth image) may be determined. Then, from the depth information of the current frame depth image and the target pose, the relative positions between each newly added landmark point and the matched landmark points (i.e., the landmark points matched with the previous frame depth image) can be determined, and the position of each newly added landmark point in the current frame depth image determined from those relative positions and the known positions of the matched landmark points. Finally, the map is constructed in real time from the positions of the newly added landmark points.
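The coordinate computation in this step can be sketched in a deliberately simplified 2-D form: the optimized target pose maps landmark observations from the camera frame into the world frame, and only landmarks not matched in the previous frame are added to the map. The planar pose `(x, y, theta)` and the point/id formats are simplifying assumptions; a real system works in 3-D with full rotation matrices:

```python
import math

def camera_to_world(pose, point):
    """pose = (x, y, theta): optimized target pose of the device in the world frame.
    point = (px, py): landmark position in the camera frame, from the depth image."""
    x, y, theta = pose
    px, py = point
    c, s = math.cos(theta), math.sin(theta)
    # rotate the camera-frame offset by theta, then translate by the device position
    return (x + c * px - s * py, y + s * px + c * py)

def add_new_landmarks(pose, current_frame_points, known_ids):
    """Keep only landmarks not matched in the previous frame and map them
    into the world frame using the optimized target pose."""
    return {pid: camera_to_world(pose, pt)
            for pid, pt in current_frame_points.items() if pid not in known_ids}
```

Anchoring new landmarks on the optimized target pose (rather than the raw pose estimate) is what lets the more accurate pose propagate into more accurate map points.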
With the map construction method provided by this embodiment of the application, the current pose of the acquisition device is determined from the current frame depth image acquired by the device, the pose is optimized according to the historical depth images acquired by the device to obtain a target pose, and a map is constructed from the target pose. Because the current pose is optimized against the historical depth images, a higher-precision target pose is obtained, and constructing the map from that target pose reduces the errors that gradually accumulate over time and improves the accuracy of map construction.
As shown in fig. 2, fig. 2 is a flowchart illustrating another map construction method according to an exemplary embodiment, which describes a process of determining a pose of a current acquisition device, and the method may be applied to an electronic device, including the steps of:
in step 201, when the preset event does not occur, the pose of the current acquisition device is determined based on the current motion data of the acquisition device and the current depth image of the previous frame acquired by the acquisition device.
In this embodiment, when the preset event has not occurred, the current pose of the acquisition device may be determined based on the device's current motion data and the current frame depth image acquired by the device. Specifically, taking an acquisition device equipped with a binocular camera and an IMU as an example, the current motion data collected by the IMU may first be pre-integrated, and a current IMU error term determined from the pre-integration result. A current feature reprojection error term is determined from the current frame depth image captured by the binocular camera. A first objective function is then formed from the current IMU error term and the current feature reprojection error term. Finally, the first objective function is solved with the LM (Levenberg-Marquardt) algorithm, yielding the current pose of the acquisition device.
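The two-error-term objective can be illustrated with a deliberately tiny 1-D stand-in: the IMU samples are pre-integrated into a motion prediction, and a damped Gauss-Newton (Levenberg-Marquardt-style) iteration minimizes the sum of a squared IMU error and a squared visual ("reprojection") error. The scalar pose, the weights, and the residual forms are illustrative assumptions; the patent's actual objective operates on full poses with genuine reprojection residuals:

```python
def preintegrate(accels, dt, v0=0.0, p0=0.0):
    """Integrate raw accelerometer samples between two frames into a
    predicted position and velocity (the essence of IMU pre-integration)."""
    v, p = v0, p0
    for a in accels:
        p += v * dt + 0.5 * a * dt * dt
        v += a * dt
    return p, v

def solve_pose(p0, imu_pred, vis_pred, w_imu=1.0, w_vis=1.0, iters=25, lam=1e-3):
    """Minimize w_imu*(p - imu_pred)**2 + w_vis*(p - vis_pred)**2 with a
    damped (LM-style) Gauss-Newton update on the scalar pose p."""
    p = p0
    for _ in range(iters):
        grad = 2 * w_imu * (p - imu_pred) + 2 * w_vis * (p - vis_pred)
        hess = 2 * w_imu + 2 * w_vis
        p -= grad / (hess + lam)      # lam damps the step, as in LM
    return p
```

With equal weights the minimizer sits midway between the IMU prediction and the visual prediction; unequal weights pull the solution toward the more trusted sensor, which is the basic behavior of tightly coupled visual-inertial fusion.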
In step 202, when the preset event occurs, the current pose of the acquisition device is determined based on the historical depth images acquired by the acquisition device and the current frame depth image acquired by the acquisition device.
In this embodiment, when the preset event occurs, the current pose of the acquisition device may be determined based on the historical depth images acquired by the device and the current frame depth image. Specifically, a reference image may be selected from the historical depth images, the reference image sharing more than a preset number of matched landmark points with the current frame depth image. The landmark points in the current frame depth image that match the reference image are determined, the pose corresponding to the reference image is obtained, and the current pose of the acquisition device is determined from the depth information of the current frame depth image and the pose corresponding to the reference image.
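Selecting the reference image can be sketched as a scan over stored keyframes for one sharing enough matched landmarks with the current frame. The keyframe record layout, the use of landmark-id sets as a stand-in for feature matching, and the threshold are assumptions for illustration:

```python
def pick_reference(keyframes, current_landmark_ids, min_shared=30):
    """keyframes: list of dicts {'id': ..., 'landmarks': set(...), 'pose': ...}.

    Returns the most recent keyframe sharing more than min_shared landmark
    ids with the current frame, or None if no keyframe qualifies."""
    for kf in reversed(keyframes):          # prefer the most recent keyframe
        if len(kf["landmarks"] & current_landmark_ids) > min_shared:
            return kf
    return None
```

Once a reference is found, its stored pose plus the current frame's depth information anchor the current pose, which is why this path survives IMU dropouts or tracking loss.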
In this embodiment, the preset event may be an abnormality in the data collected by the acquisition device. For example, the preset event may include: an event in which the current motion data of the acquisition device is abnormal; or an event in which the number of matched landmark points between the current frame depth image and the previous frame depth image is smaller than the preset number. It will be appreciated that the preset event may also include other events; the present application does not limit the specific content of the preset event.
In step 203, the pose of the current acquisition device is optimized according to the historical depth image acquired by the acquisition device, so as to obtain the target pose.
In step 204, a map is constructed from the target pose.
It should be noted that, for the same steps as those in the embodiment of fig. 1, the description of the steps in the embodiment of fig. 2 is omitted, and the related content may be referred to the embodiment of fig. 1.
With the map construction method provided by this embodiment of the application, when the preset event has not occurred, the current pose of the acquisition device is determined based on the device's current motion data and the current frame depth image acquired by the device; when the preset event occurs, the current pose is determined based on the historical depth images acquired by the device and the current frame depth image. The current pose is then optimized according to the historical depth images to obtain the target pose, and a map is constructed according to the target pose. Because this embodiment can fall back on the historical depth images and the current frame depth image to determine the pose when an abnormal event occurs, it avoids the situation where the map cannot be constructed because the data collected by the acquisition device is abnormal, which improves the efficiency of map construction and further improves its accuracy.
As shown in fig. 3, fig. 3 is a flowchart illustrating another map construction method according to an exemplary embodiment, detailing the pose-optimization process; the method may be applied to an electronic device and includes the following steps:
in step 301, when a preset event does not occur, the pose of the current acquisition device is determined based on the current motion data of the acquisition device and the current depth image of the previous frame acquired by the acquisition device.
In step 302, when a preset event occurs, the pose of the current acquisition device is determined based on the historical depth image acquired by the acquisition device and the current frame depth image.
In step 303, a part or all of the key frame images in the history depth image are acquired as target images.
In this embodiment, the target image may be acquired from pre-stored target data in which the key frame images among the historical depth images are recorded. Specifically, the target data may be built up as follows. The first key frame image is determined from the depth images acquired by the acquisition device and stored into the target data. After the first key frame image is determined, each subsequently acquired depth frame is checked in turn to determine further key frame images: for any sequentially acquired depth frame, if the detection result indicates that the parallax between that frame and the previous key frame image (i.e., the key frame image nearest in acquisition time to that frame) is greater than the preset parallax, and that frame has more than the preset number of matched landmark points, the frame is determined to be a key frame image. Each key frame image so determined is stored into the target data in sequence.
In step 304, when the preset event does not occur, the pose of the current acquisition device is optimized according to the target image and the current motion data of the acquisition device, so as to obtain the target pose.
In this embodiment, when the preset event has not occurred, the current pose of the acquisition device may be optimized according to the target images and the device's current motion data. Specifically, taking an acquisition device equipped with a binocular camera and an IMU as an example, the current motion data collected by the IMU may first be pre-integrated, and a current IMU error term determined from the pre-integration result. Feature reprojection error terms corresponding to the target images are determined from the target images captured by the binocular camera. A second objective function is then formed from the current IMU error term and the feature reprojection error terms corresponding to the target images, and constraint conditions are determined from the current pose of the acquisition device to be optimized. Finally, the second objective function is solved under the constraint conditions with the LM (Levenberg-Marquardt) algorithm, yielding the target pose.
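The second objective function can again be illustrated with a 1-D stand-in: the keyframe-derived (target-image) predictions are held fixed and act as constraints, and only the current pose is updated by the damped iteration. The scalar pose, the per-keyframe residual form, and the weights are illustrative assumptions, not the patent's actual formulation:

```python
def optimize_current_pose(p0, imu_pred, keyframe_preds, w_imu=1.0, w_kf=1.0,
                          iters=30, lam=1e-3):
    """Minimize w_imu*(p - imu_pred)**2 + sum_k w_kf*(p - k)**2, treating the
    keyframe-derived predictions k as fixed constraints (only p is updated),
    with a damped LM-style iteration."""
    p = p0
    for _ in range(iters):
        grad = 2 * w_imu * (p - imu_pred)
        grad += sum(2 * w_kf * (p - k) for k in keyframe_preds)
        hess = 2 * w_imu + 2 * w_kf * len(keyframe_preds)
        p -= grad / (hess + lam)      # lam damps the step, as in LM
    return p
```

Keeping the keyframe poses fixed is what makes this a lightweight refinement of the current pose rather than a full bundle adjustment: each stored keyframe simply pulls the current estimate toward consistency with the history.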
In step 305, when the preset event occurs, the current pose of the acquisition device is optimized according to the target images and the current frame depth image, obtaining the target pose.
In this embodiment, when the preset event occurs, the current pose of the acquisition device may be optimized according to the target images and the current frame depth image. Specifically, the landmark points in the current frame depth image that match the target images can be determined, the poses corresponding to the target images obtained, and the target pose determined from the depth information of the current frame depth image and the poses corresponding to the target images.
In step 306, a map is constructed from the target pose.
It should be noted that, for the same steps as those in the embodiment of fig. 1 and 2, the description of the embodiment of fig. 3 is omitted, and the related content may be referred to the embodiment of fig. 1 and 2.
With the map construction method provided by this embodiment of the application, when the preset event has not occurred, the current pose of the acquisition device is determined from the device's current motion data and the current frame depth image acquired by the device; when the preset event occurs, it is determined from the historical depth images and the current frame depth image. Part or all of the key frame images among the historical depth images are acquired as target images. When the preset event has not occurred, the current pose is optimized according to the target images and the current motion data to obtain the target pose; when the preset event occurs, it is optimized according to the target images and the current frame depth image to obtain the target pose, and a map is constructed from the target pose. Whether or not the preset event occurs, this embodiment can therefore optimize the current pose against the historical depth images to obtain a higher-precision target pose and build the map from it; this avoids the situation where the pose cannot be optimized because the data collected by the acquisition device is abnormal, which improves the efficiency of map construction and further improves its accuracy.
It should be noted that although the operations of the methods of the present application are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the steps depicted in the flowcharts may be executed in a different order; certain steps may additionally or alternatively be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Corresponding to the foregoing embodiments of the map construction method, the present application further provides embodiments of a map construction apparatus.
As shown in fig. 4, fig. 4 is a block diagram of a map construction apparatus according to an exemplary embodiment of the present application, which may include: a determination module 401, an optimization module 402 and a construction module 403.
The determining module 401 is configured to determine the pose of the current acquisition device according to the current frame of depth image acquired by the acquisition device.
The optimizing module 402 is configured to optimize the pose of the current acquisition device according to the historical depth images acquired by the acquisition device to obtain a target pose.
A construction module 403, configured to construct a map according to the target pose.
As shown in fig. 5, fig. 5 is a block diagram of another map construction apparatus according to an exemplary embodiment of the present application, where the determining module 401 may include: the first determination submodule 501.
The first determining submodule 501 is configured to determine, when the preset event does not occur, the pose of the current acquisition device based on the current motion data of the acquisition device and the current frame of depth image.
As shown in fig. 6, fig. 6 is a block diagram of another map construction apparatus according to an exemplary embodiment of the present application, where the determining module 401 may further include: the second determination submodule 502.
The second determining submodule 502 is configured to determine, when the preset event occurs, the pose of the current acquisition device based on the historical depth images acquired by the acquisition device and the current frame of depth image.
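The preset event itself is specified only as the number of matched landmark points falling below a preset number; the following is a minimal sketch of how such a check might look, with an assumed nearest-neighbour descriptor matcher and assumed threshold values (the patent leaves both unspecified).

```python
import numpy as np

def count_matched_landmarks(desc_prev, desc_cur, max_dist=0.7):
    """Count landmark points of the current frame whose descriptor has a
    nearest neighbour in the previous frame closer than max_dist.
    (A simple stand-in matcher for illustration only.)"""
    matches = 0
    for d in desc_cur:
        if np.linalg.norm(desc_prev - d, axis=1).min() < max_dist:
            matches += 1
    return matches

def preset_event_occurred(desc_prev, desc_cur, preset_number=30):
    """True when too few landmark points match between the previous and
    current frames of depth image -- the trigger for the second submodule."""
    return count_matched_landmarks(desc_prev, desc_cur) < preset_number
```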
As shown in fig. 7, fig. 7 is a block diagram of another map construction apparatus according to an exemplary embodiment of the present application, where the optimization module 402 may include: an acquisition sub-module 701 and an optimization sub-module 702.
The acquiring sub-module 701 is configured to acquire part or all of the key frame images in the historical depth images as the target image.
An optimizing sub-module 702, configured to optimize the pose of the current acquisition device according to the target image.
In some alternative embodiments, the acquiring sub-module 701 is configured to acquire the target image from pre-stored target data, wherein the key frame images in the historical depth images are recorded in the target data.
In other alternative embodiments, the optimizing sub-module 702 is configured to: when the preset event does not occur, optimize the pose of the current acquisition device according to the target image and the current motion data of the acquisition device.
In still other alternative embodiments, the optimizing sub-module 702 is configured to: when the preset event occurs, optimize the pose of the current acquisition device according to the target image and the current frame of depth image.
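The no-event optimization relies on minimizing error terms with the Levenberg-Marquardt algorithm. The sketch below is deliberately reduced: a translation-only pose, invented pinhole intrinsics, and a hand-rolled damped Gauss-Newton loop standing in for a production LM solver, with the IMU error term omitted so that only the feature re-projection error term remains.

```python
import numpy as np

FX = FY = 500.0          # assumed pinhole intrinsics (illustrative values)
CX = CY = 320.0

def project(points_cam):
    """Pinhole projection of (N, 3) camera-frame points to (N, 2) pixels."""
    return np.column_stack((FX * points_cam[:, 0] / points_cam[:, 2] + CX,
                            FY * points_cam[:, 1] / points_cam[:, 2] + CY))

def residuals(t, landmarks_w, observed_px):
    """Feature re-projection error: predicted pixels under the pose guess
    (translation t, rotation fixed to identity) minus the observations."""
    return (project(landmarks_w - t) - observed_px).ravel()

def refine_pose_lm(t0, landmarks_w, observed_px, iters=50, lam=1e-3):
    """Damped Gauss-Newton / LM iterations on the re-projection objective."""
    t = np.array(t0, dtype=float)
    for _ in range(iters):
        r = residuals(t, landmarks_w, observed_px)
        J = np.empty((r.size, 3))            # numerical Jacobian w.r.t. t
        for k in range(3):
            dt = np.zeros(3)
            dt[k] = 1e-6
            J[:, k] = (residuals(t + dt, landmarks_w, observed_px) - r) / 1e-6
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        t += step
        if np.linalg.norm(step) < 1e-10:     # converged
            break
    return t
```

A full implementation would parameterize rotation as well (e.g. on SO(3)), add the IMU pre-integration error term to the objective, and adapt the damping factor as LM prescribes.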
It should be understood that the apparatus may be preset in an electronic device or a server, or may be loaded into the electronic device or the server by downloading or the like. The corresponding modules in the apparatus cooperate with modules in the electronic device or the server to implement the map construction scheme.
For the apparatus embodiments, since they essentially correspond to the method embodiments, reference may be made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative: units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purposes of the present application. Those of ordinary skill in the art can understand and implement the solution without creative effort.
The embodiments of the application further provide a computer-readable storage medium storing a computer program, where the computer program can be used to execute the map construction method provided by any one of the embodiments of fig. 1 to 3.
Corresponding to the foregoing map construction method, an embodiment of the application further provides a schematic structural diagram of an electronic device according to an exemplary embodiment of the application, shown in fig. 8. Referring to fig. 8, at the hardware level the electronic device includes a processor, an internal bus, a network interface, memory, and non-volatile storage, and may further include hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into memory and then runs it, forming the map construction apparatus at the logical level. Of course, the present application does not exclude other implementations, such as logic devices or combinations of hardware and software; that is, the execution subject of the processing flows is not limited to logic units and may also be hardware or logic devices.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the present application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (5)

1. A method of constructing a map, the method comprising:
determining a current pose of an acquisition device according to a current frame of depth image acquired by the acquisition device;
optimizing the pose according to historical depth images acquired by the acquisition device to obtain a target pose; and
constructing a map according to the target pose; wherein the current frame of depth image is the frame of depth image acquired by the acquisition device at a current moment, and the historical depth images are depth images acquired by the acquisition device before the current moment;
wherein the determining of the current pose of the acquisition device according to the current frame of depth image acquired by the acquisition device specifically comprises:
when a preset event does not occur, determining the current pose of the acquisition device based on current motion data of the acquisition device and the current frame of depth image;
when the preset event occurs, determining the current pose of the acquisition device based on the historical depth images acquired by the acquisition device and the current frame of depth image;
wherein the preset event comprises: an event in which the number of landmark points in the current frame of depth image that match the previous frame of depth image is smaller than a preset number;
wherein the optimizing of the pose according to the historical depth images acquired by the acquisition device comprises:
acquiring part or all of key frame images in the historical depth images as a target image;
when the preset event does not occur, pre-integrating the current motion data acquired by an IMU, determining a current IMU error term according to the pre-integration result, determining a feature re-projection error term corresponding to the target image according to the target image acquired by a binocular camera, obtaining a second objective function based on the current IMU error term and the feature re-projection error term corresponding to the target image, determining constraint conditions according to the to-be-optimized pose of the current acquisition device, and solving the second objective function under the constraint conditions by means of the Levenberg-Marquardt (LM) algorithm to obtain the target pose; and when the preset event occurs, determining the landmark points in the current frame of depth image that match the target image, acquiring the pose corresponding to the target image, and determining the current pose of the acquisition device according to the depth information of the current frame of depth image and the pose corresponding to the target image.
2. The method according to claim 1, wherein the acquiring part or all of the key frame images in the historical depth images as the target image comprises:
acquiring the target image from pre-stored target data, wherein the key frame images in the historical depth images are recorded in the target data.
3. The method according to claim 2, wherein the target data is stored by:
determining a first key frame image from the depth images acquired by the acquisition device;
storing and recording the first key frame image into the target data;
after the first key frame image is determined, detecting each frame of depth image sequentially acquired by the acquisition device so as to sequentially determine key frame images, wherein, for any one frame of depth image acquired in sequence, if the detection result indicates that the parallax between that frame of depth image and the previous key frame image is greater than a preset parallax and that frame of depth image contains more than a preset number of matched landmark points, that frame of depth image is determined as a key frame image; and
sequentially storing and recording each sequentially determined key frame image into the target data.
4. A map construction apparatus, the apparatus comprising:
a determining module, configured to determine a current pose of an acquisition device according to a current frame of depth image acquired by the acquisition device;
an optimizing module, configured to optimize the pose according to historical depth images acquired by the acquisition device to obtain a target pose; and
a construction module, configured to construct a map according to the target pose; wherein the current frame of depth image is the frame of depth image acquired by the acquisition device at a current moment, and the historical depth images are depth images acquired by the acquisition device before the current moment;
wherein the determining module is specifically configured to: when a preset event does not occur, determine the current pose of the acquisition device based on current motion data of the acquisition device and the current frame of depth image; when the preset event occurs, determine the current pose of the acquisition device based on the historical depth images acquired by the acquisition device and the current frame of depth image; wherein the preset event comprises: an event in which the number of landmark points in the current frame of depth image that match the previous frame of depth image is smaller than a preset number;
wherein the optimizing module is specifically configured to: acquire part or all of key frame images in the historical depth images as a target image; when the preset event does not occur, pre-integrate the current motion data acquired by an IMU, determine a current IMU error term according to the pre-integration result, determine a feature re-projection error term corresponding to the target image according to the target image acquired by a binocular camera, obtain a second objective function based on the current IMU error term and the feature re-projection error term corresponding to the target image, determine constraint conditions according to the to-be-optimized pose of the current acquisition device, and solve the second objective function under the constraint conditions by means of the Levenberg-Marquardt (LM) algorithm to obtain the target pose; and when the preset event occurs, determine the landmark points in the current frame of depth image that match the target image, acquire the pose corresponding to the target image, and determine the current pose of the acquisition device according to the depth information of the current frame of depth image and the pose corresponding to the target image.
5. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 3 when executing the program.
CN201711104367.8A 2017-11-10 2017-11-10 Map construction method and device and electronic equipment Active CN107845114B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711104367.8A CN107845114B (en) 2017-11-10 2017-11-10 Map construction method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN107845114A CN107845114A (en) 2018-03-27
CN107845114B true CN107845114B (en) 2024-03-22



Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120037270A (en) * 2010-10-11 2012-04-19 삼성전자주식회사 Voxel map generator and method thereof
CN105783913A (en) * 2016-03-08 2016-07-20 中山大学 SLAM device integrating multiple vehicle-mounted sensors and control method of device
CN106052683A (en) * 2016-05-25 2016-10-26 速感科技(北京)有限公司 Robot motion attitude estimating method
CN106997614A (en) * 2017-03-17 2017-08-01 杭州光珀智能科技有限公司 A kind of large scale scene 3D modeling method and its device based on depth camera
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A kind of design method of the visual odometry based on depth camera
CN107160395A (en) * 2017-06-07 2017-09-15 中国人民解放军装甲兵工程学院 Map constructing method and robot control system


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Multi-level mapping: Real-time dense monocular SLAM; W. Nicholas Greene et al.; 2016 IEEE International Conference on Robotics and Automation (ICRA); 2016-06-09; pp. 833-840 *
Real-time SLAM algorithm based on RGB-D data; Fu Mengyin et al.; Robot; 2015-11-15; vol. 37, no. 6, pp. 684-686, sections 2-3 *
Video scene abrupt-change detection combined with the SIFT algorithm; Li Feng et al.; Chinese Optics; 2016-02-15; vol. 9, no. 1, pp. 74-80 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant