CN112198878A - Instant map construction method and device, robot and storage medium

Info

Publication number: CN112198878A (granted as CN112198878B)
Application number: CN202011063174.4A
Authority: CN (China)
Prior art keywords: data, preset, data frame, current data, key frame
Other languages: Chinese (zh)
Inventor: 林李泽
Current assignee: Shenzhen Silver Star Intelligent Group Co Ltd
Original assignee: Shenzhen Silver Star Intelligent Technology Co Ltd
Application filed by Shenzhen Silver Star Intelligent Technology Co Ltd
Priority to CN202011063174.4A
Legal status: Active (granted)

Classifications

    • G05D 1/0238: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D 1/024: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G01S 17/89: Lidar systems specially adapted for specific applications for mapping or imaging
    • G05D 1/0214: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D 1/0221: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D 1/0223: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D 1/0251: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting 3D information from a plurality of images taken from different locations, e.g. stereo vision
    • G05D 1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D 1/0276: Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Optics & Photonics (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application is applicable to the technical field of map construction, and provides an instant map construction method and device, a robot, and a storage medium. The method comprises the following steps: acquiring a current data frame, a key frame sequence, and a preset map, wherein the current data frame is map data of the current area acquired by an environment detection device, the key frame sequence is a set of stored data frames, and the preset map is constructed based on map data acquired by the environment detection device; judging whether a preset matching condition is satisfied according to the current data frame, the key frame sequence, and the preset map; if so, filtering out the current data frame; if not, updating the key frame sequence and the preset map according to the current data frame. By judging the current data frame against the key frame sequence and the preset map to decide whether it can be stored in the key frame sequence as a key frame, key frame redundancy can be effectively avoided, making data processing simpler and faster when the map is constructed later.

Description

Instant map construction method and device, robot and storage medium
Technical Field
The application belongs to the technical field of map construction, and particularly relates to an instant map construction method, an instant map construction device, a robot and a storage medium.
Background
In recent years, unmanned aerial vehicles, robots, autonomous driving, and similar equipment have been widely developed and applied. Maps serve as a basic supporting element for unmanned aerial vehicles, robots, and autonomous driving, so improving the precision of the map improves their accuracy during travel.
Constructing maps with Simultaneous Localization and Mapping (SLAM) technology is becoming more and more popular, and the key step when constructing a map with SLAM is to determine key frames and construct the map based on them. At present, SLAM uses a motion filter to determine key frames; however, a motion filter cannot automatically identify positions in the map that have already been built, so key frames at already-mapped positions may be repeatedly inserted into the map, resulting in key frame redundancy. Redundant key frames increase the amount of computation and memory usage.
Disclosure of Invention
The embodiment of the application provides an instant map construction method, an instant map construction device, a robot and a storage medium, and can solve the problem that a key frame determined in map construction has a redundancy phenomenon.
In a first aspect, an embodiment of the present application provides an instant map construction method, including:
acquiring a current data frame, a key frame sequence and a preset map, wherein the current data frame is map data of a current area acquired by an environment detection device, the key frame sequence is a stored data frame set, and the preset map is constructed based on the map data acquired by the environment detection device;
judging whether a preset matching condition is met or not according to the current data frame, the key frame sequence and the preset map;
if the judgment result is yes, filtering the current data frame;
if not, updating the key frame sequence and the preset map according to the current data frame.
In a second aspect, an embodiment of the present application provides an instant map building apparatus, including:
the data acquisition module is used for acquiring a current data frame, a key frame sequence and a preset map, wherein the current data frame is map data of a current area acquired by an environment detection device, the key frame sequence is a stored data frame set, and the preset map is generated based on the key frame sequence;
the judging module is used for judging whether a preset matching condition is met or not according to the current data frame, the key frame sequence and the preset map;
a first result generation module, configured to filter the current data frame if the determination result is yes;
and the second result generation module is used for updating the key frame sequence and the preset map according to the current data frame if the judgment result is negative.
In a third aspect, an embodiment of the present application provides a robot, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the instant map construction method of any one of the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the instant map construction method according to any one of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the instant map construction method according to any one of the above first aspects.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Compared with the prior art, the embodiments of the application have the following beneficial effects: first, a current data frame containing map data of the current area acquired by an environment detection device, a key frame sequence formed from stored data frames, and a preset map generated based on the key frame sequence are acquired; whether a preset matching condition is satisfied is then judged according to the current data frame, the key frame sequence, and the preset map. If the preset matching condition is satisfied, the current data frame is filtered out, that is, it is not a key frame; if not, the key frame sequence and the preset map are updated according to the current data frame. By judging the current data frame against the key frame sequence and the preset map to decide whether it can be stored in the key frame sequence as a key frame, key frame redundancy can be effectively avoided, making data processing simpler and faster when the map is constructed later.
Drawings
In order to illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those of ordinary skill in the art without inventive effort.
Fig. 1 is a schematic view of an application scenario of an instant map construction method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a method for constructing an instant map according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for determining that a preset matching condition is satisfied in fig. 2 according to an embodiment of the present application;
fig. 4 is a schematic flowchart illustrating a method for determining similarity measure provided in an embodiment of the present application;
FIG. 5 is a flowchart illustrating a method for determining similarity measure according to another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an instant map building apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Simultaneous Localization and Mapping (SLAM) is a technique used for robot localization and map construction. At present, a motion filter is mostly used to determine key frames in SLAM, and a point cloud map is then constructed based on the key frames. When a motion filter is used to determine key frames, the parts of the map that have already been constructed cannot be identified, so key frames at some positions may be repeatedly inserted into the map. This causes key frame redundancy, complicates map construction, and can even introduce errors into the map because of the large amount of computation. The motion filter screens data frames by motion filtering, which compares the motion relationship between two adjacent frames of data, such as translation, rotation, and time interval, to obtain key frames.
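As a rough illustration of that motion-filtering idea (the prior-art baseline, not the method of this application), a data frame passes the filter when its translation, rotation, or time interval relative to the previous key frame exceeds a threshold. The following minimal Python sketch assumes poses as dictionaries; the function name and all threshold values are illustrative assumptions:

```python
import math

def motion_filter_keep(prev_kf, curr, min_trans=0.1,
                       min_rot=math.radians(5.0), min_dt=1.0):
    """Keep `curr` as a key frame only if it has moved, rotated, or waited
    long enough relative to the previous key frame `prev_kf`.

    Frames are dicts with keys 'x', 'y' (meters), 'theta' (radians), and
    't' (seconds). Threshold values here are assumptions of this sketch.
    """
    trans = math.hypot(curr['x'] - prev_kf['x'], curr['y'] - prev_kf['y'])
    rot = abs(curr['theta'] - prev_kf['theta'])
    dt = curr['t'] - prev_kf['t']
    return trans >= min_trans or rot >= min_rot or dt >= min_dt
```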
The instant map construction method provided by the application can be applied to SLAM; the application determines whether the current data frame is a key frame from the current data frame and the previously determined key frames, thereby reducing key frame redundancy.
Fig. 1 is a schematic view of an application scenario of the instant map construction method according to an embodiment of the present application; the method may be used to screen key frames from the data frames acquired by an acquisition device. The acquisition device 10 is configured to acquire a data frame of the current area, and the processor 20 is configured to obtain the data frame from the acquisition device 10, screen it, and complete the updating of the key frames and the preset map.
The instant map construction method according to the embodiment of the present application is described in detail below with reference to fig. 1.
Fig. 2 shows a schematic flow chart of the instant map construction method provided by the present application, and with reference to fig. 2, the method is described in detail as follows:
s101, acquiring a current data frame, a key frame sequence and a preset map, wherein the current data frame is map data of a current area acquired by an environment detection device, the key frame sequence is a stored data frame set, and the preset map is constructed based on the map data acquired by the environment detection device.
In this embodiment, the data frames stored in the key frame sequence may also be called key frames, and the map is constructed by using the key frames.
In this embodiment, a data frame is the data obtained by one acquisition of the target area by the acquisition device; the current data frame is the frame for which it needs to be determined whether it is a key frame. A data frame may be two-dimensional or three-dimensional point cloud data, and the current data frame may be data collected by the environment detection device in real time.
As an example, the environment detection device acquires three data frames at three different moments, namely a first data frame, a second data frame and a third data frame, and when judging whether the first data frame is a key frame, the current data frame is the first data frame; when judging whether the second data frame is a key frame, the second data frame is the current data frame; and when judging whether the third data frame is the key frame, the third data frame is the current data frame.
The preset map is constructed according to map data acquired by the environment detection device and needs to be updated according to the current data frame.
The key frame sequence is composed of data frames which are determined to be key frames in the data frames collected by the environment detection device, and the key frame sequence can be used for subsequently judging whether the current data frame is the key frame.
S102, judging whether a preset matching condition is met according to the current data frame, the key frame sequence and the preset map.
In this embodiment, whether the current data frame can be stored in the key frame sequence as a key frame may be determined from the matching score between the current data frame and the preset map, together with the comparison between the current data frame and the key frame sequence. The preset matching condition can be set as required.
And S103, if the judgment result is yes, filtering the current data frame.
In this embodiment, if the preset matching condition is satisfied, it indicates that the current data frame cannot be placed in the key frame sequence as a key frame, so the current data frame is filtered out, that is, it is not processed further.
And S104, if not, updating the key frame sequence and the preset map according to the current data frame.
In this embodiment, if the preset matching condition is not satisfied, it indicates that the current data frame can be placed in the key frame sequence as a key frame; the frame is placed into the key frame sequence to complete the updating of the key frame sequence, and the current data frame is used to update the preset map, for example by storing the current data frame into the preset map.
In the embodiment of the application, the key frame sequence and the preset map are used for judging the current data frame, and whether the current data frame can be stored in the key frame sequence as the key frame is judged, so that the redundancy phenomenon of the key frame can be effectively avoided, and the data processing is simpler and quicker when the map is constructed at the later stage.
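A minimal Python sketch of the overall flow of steps S101 to S104 follows. The three helper callables stand in for the matching, pose-similarity, and map-update steps detailed in the later embodiments; their names and signatures are assumptions of this sketch, not terms from the application:

```python
def process_frame(current_frame, key_frames, preset_map,
                  matches_map, similar_pose, update_map):
    """One pass of the key-frame filter (steps S101 to S104), as a sketch.

    matches_map(frame, map) -> bool      (matching score >= preset score)
    similar_pose(frame, key_frames) -> bool (similar pose condition holds)
    update_map(map, frame)               (insert the frame into the map)
    """
    # S102: the preset matching condition is met only when the frame both
    # matches the preset map well AND is pose-similar to stored key frames.
    if matches_map(current_frame, preset_map) and similar_pose(current_frame, key_frames):
        return False                  # S103: filter the current frame out
    key_frames.append(current_frame)  # S104: store it as a new key frame...
    update_map(preset_map, current_frame)  # ...and update the preset map
    return True
```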
As shown in fig. 3, in a possible implementation manner, the implementation process of step S102 may include:
and S1021, determining a matching score according to the current data frame and the preset map.
In this embodiment, the matching score between the current data frame and the preset map reflects their degree of matching: the current data frame may be mapped onto the preset map, and the score is computed at the position where the point cloud data of the current data frame coincides best with the preset map. Specifically, the matching score may be determined using a correlation scan matching method.
The matching score can be computed once the current data frame has been acquired, using correlation scan matching or a similar method.
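The following is a minimal sketch of how such a correlation-based matching score could be computed, assuming a 2D laser scan and an occupancy-grid preset map. The function names, the 0.5 occupancy cutoff, the grid axis convention, and the brute-force x/y search window are assumptions of this sketch, not details from the application:

```python
import numpy as np

def match_score(scan_xy, grid, resolution, pose):
    """Score one candidate pose: fraction of scan points on occupied cells.

    scan_xy    : (N, 2) array of scan points in the sensor frame (meters)
    grid       : 2D float array of occupancy probabilities (the preset map)
    resolution : meters per grid cell
    pose       : (x, y, theta) candidate pose of the sensor in the map frame
    """
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    world = scan_xy @ R.T + np.array([x, y])          # scan -> map frame
    cells = np.floor(world / resolution).astype(int)  # world -> grid indices
    inside = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[0]) &
              (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[1]))
    if not inside.any():
        return 0.0
    hits = grid[cells[inside, 0], cells[inside, 1]] > 0.5  # occupied cells
    return hits.mean()

def correlative_match(scan_xy, grid, resolution, pose_guess,
                      window=0.5, step=0.05):
    """Exhaustively score poses around pose_guess and keep the best
    (x/y search only, for brevity). Returns (score, best pose)."""
    x0, y0, th = pose_guess
    best = (0.0, pose_guess)
    for dx in np.arange(-window, window + step, step):
        for dy in np.arange(-window, window + step, step):
            p = (x0 + dx, y0 + dy, th)
            best = max(best, (match_score(scan_xy, grid, resolution, p), p),
                       key=lambda t: t[0])
    return best
```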
And S1022, when the matching score is greater than or equal to a preset score, judging whether the current data frame and the key frame sequence meet similar pose conditions, if so, meeting preset matching conditions, and if not, not meeting the preset matching conditions.
In this embodiment, if the matching score is greater than or equal to the preset score, it indicates that the current data frame may be redundant with a data frame in the key frame sequence, that is, the current data frame may be similar to a data frame in the key frame sequence. It therefore cannot be directly determined whether the current data frame is a key frame; it is also necessary to judge whether the current data frame and the key frame sequence satisfy the similar pose condition, and to use that condition to decide whether the current data frame can be a key frame.
When the current data frame and the key frame sequence do not satisfy the similar pose condition, it is judged that the preset matching condition is not met, that is, the current data frame can serve as a key frame and is not redundant with the data frames in the key frame sequence. Conversely, the current data frame cannot serve as a key frame, since it is redundant with a data frame in the key frame sequence.
For example, if the preset score is 0.65 and the matching score is 0.8, the matching score is greater than the preset score, and it is necessary to judge whether the current data frame and the key frame sequence satisfy the similar pose condition. If the similar pose condition is satisfied, the current data frame is filtered out; if not, the current data frame is taken as a key frame and used to update the key frame sequence and the preset map.
And S1023, if the matching score is smaller than a preset score, the preset matching condition is not met.
In this embodiment, when the matching score is smaller than the preset score, it indicates that the key frame sequence is not redundant with the current data frame, that is, the overlap between the data in the current data frame and the data frames already in the key frame sequence is low; the current data frame can therefore be stored in the key frame sequence as a key frame to complete the updating of the key frames and the preset map.
In the embodiments of the application, the matching score is compared with the preset score; when the matching score is smaller than the preset score, the current data frame can directly be judged to be a key frame. When the matching score is greater than or equal to the preset score, the judgment of the similar pose condition is added: when the similar pose condition is not satisfied, the current data frame can be judged to be a key frame. Using the matching score and the similar pose condition together to judge whether the current data frame is a key frame allows current data frames that are redundant with data frames in the key frame sequence to be filtered out, improving the data processing speed.
As shown in fig. 4, in a possible implementation manner, the implementation process of determining whether the current data frame and the key frame sequence satisfy the similar pose condition in step S1022 may include:
s10221, acquiring first bit attitude data corresponding to a current data frame and second bit attitude data corresponding to any data frame in the key frame sequence, wherein the key frame sequence comprises i data frames which are sequentially arranged, and i is greater than or equal to 1.
In this embodiment, the bit data may determine whether the two data frames are similar or not, and is also a key for determining whether the two data frames are redundant, so that the bit data of the two data frames need to be acquired separately. The current data frame needs to be compared with all the data frames in the key frame sequence, so the second bit data of each data frame in the key frame sequence needs to be acquired.
The first position data may include first position data and first pose data for the current data frame. The first position data refers to the real position of the acquisition equipment in the target area when acquiring the current data frame, and the real position refers to the position with the highest coincidence degree of the point cloud data of the current data frame and the preset map. The first pose data refers to the angle at which the acquisition device was at when the current data frame was acquired.
The second position data may include second position data and second pose data.
The first and second position data may be obtained from a database, or may be obtained from a front-end scan registration module of the SLAM, or may be obtained from a processor storing a computational model, which is not limited herein.
Specifically, the first position data may be obtained by using an iterative closest point method, correlation scan matching, optimization-based method, normal distribution transformation, feature-based matching, and other matching algorithms. The specific implementation process of the correlation scanning matching is as follows: and (3) placing the laser radar on each grid of the grid map, and determining whether the data frame point cloud is overlapped with the map at the position, wherein the position with the highest overlapping degree is the real position of the laser radar. Since the data frame in the key frame sequence is determined from the data frame before the current data frame, the determination method of the second position data is the same as the determination method of the first position data, and is not described herein again. The method for obtaining the first posture data and the second posture data is the same as the method for obtaining the first position data, please refer to the method for obtaining the first position data, and will not be described herein again.
S10222, determining whether the current data frame and the sequence of key frames satisfy a similar pose condition based on the first pose data and the second pose data.
In this embodiment, after the first pose data and the second pose data are acquired, whether the current data frame and the key frame sequence satisfy the similar pose condition can be determined according to the first pose data and the second pose data; the specific method is described below.
As shown in fig. 5, in one possible implementation manner, the implementation process of step S10222 may include:
s102221, determining j position and posture change data based on the first position and posture data of the current data frame and second position and posture data corresponding to j data frame in the key frame sequence, wherein j is more than or equal to 1 and less than or equal to i, and i is the number of data frames in the key frame sequence.
In this embodiment, the pose change data includes at least two of an abscissa change value, an ordinate change value, and an attitude angle change value, where the abscissa change value is the difference between the abscissa of the pose corresponding to the current data frame and the abscissa of the pose corresponding to the j-th data frame, the ordinate change value is the difference between the ordinates of those two poses, and the attitude angle change value is the difference between the attitude angles of those two poses.
In this embodiment, the first position data within the first pose data includes an abscissa and an ordinate, and the first attitude data includes an attitude angle. The second position data within the second pose data likewise includes an abscissa and an ordinate, and the second attitude data likewise includes an attitude angle.
Specifically, the pose change data can be calculated according to the following formulas:
x′ = |x1 − x2|, where x′ is the abscissa change value, x1 is the abscissa of the current data frame, and x2 is the abscissa of one data frame in the key frame sequence;
y′ = |y1 − y2|, where y′ is the ordinate change value, y1 is the ordinate of the current data frame, and y2 is the ordinate of one data frame in the key frame sequence;
θ′ = |θ1 − θ2|, where θ′ is the attitude angle change value, θ1 is the attitude angle of the current data frame, and θ2 is the attitude angle of one data frame in the key frame sequence.
In this embodiment, when the number of data frames in the key frame sequence is greater than 1, the pose change between the current data frame and each data frame in the key frame sequence needs to be calculated separately.
By way of example, if the first pose data of the current data frame is (3, 6, 30°) and the second pose data of the 3rd data frame in the key frame sequence is (4, 8, 40°), then the 3rd pose change data is (1, 2, 10°).
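A minimal sketch of this computation follows; the function name is an assumption of this sketch, and the assertion reproduces the worked example above:

```python
def pose_change(current_pose, key_frame_pose):
    """Pose change data between the current data frame's pose and one key
    frame's pose. Poses are (x, y, theta) tuples, theta in degrees to
    match the example above."""
    dx = abs(current_pose[0] - key_frame_pose[0])      # x' = |x1 - x2|
    dy = abs(current_pose[1] - key_frame_pose[1])      # y' = |y1 - y2|
    dtheta = abs(current_pose[2] - key_frame_pose[2])  # theta' = |theta1 - theta2|
    return dx, dy, dtheta

# Reproducing the example: current pose (3, 6, 30°) vs the 3rd key frame (4, 8, 40°)
assert pose_change((3, 6, 30), (4, 8, 40)) == (1, 2, 10)
```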
S102222, determining whether the first pose data corresponding to the current data frame and the second pose data corresponding to the j-th data frame in the key frame sequence satisfy a preset condition based on the j-th pose change data, and counting the number of data frames in the key frame sequence that satisfy the preset condition.
In this embodiment, after the pose change data is calculated, whether it satisfies a preset condition can be judged, and the number of data frames in the key frame sequence whose pose change data relative to the current data frame satisfies the preset condition is counted.
The pose change data includes the abscissa change value and the ordinate change value, and may further include the attitude angle change value. Accordingly, satisfying the preset condition may mean that the abscissa change value and the ordinate change value satisfy their conditions, or that the abscissa change value, the ordinate change value, and the attitude angle change value all satisfy their conditions simultaneously.
Specifically, the implementation process of step S102222 may include:
When the abscissa change value of the j-th pose change data is smaller than a second preset threshold and the ordinate change value of the j-th pose change data is smaller than a third preset threshold, it is determined that the preset condition is satisfied.
When the abscissa change value of the j-th pose change data is greater than or equal to the second preset threshold, or the ordinate change value of the j-th pose change data is greater than or equal to the third preset threshold, it is determined that the preset condition is not satisfied.
In this embodiment, the second preset threshold and the third preset threshold are both set as needed, and the second preset threshold and the third preset threshold may be the same or different, for example, the second preset threshold is 0.5 m.
Satisfying the preset condition means that the abscissa change value is smaller than the second preset threshold and the ordinate change value is smaller than the third preset threshold.
For example, if the second and third preset thresholds are both 0.5 m and the 4th pose change data is (0.3, 0.4), then since 0.3 < 0.5 and 0.4 < 0.5, the first pose data of the current data frame and the second pose data of the 4th data frame in the key frame sequence satisfy the preset condition.
If the 5th pose change data is (0.6, 0.4), then since 0.6 > 0.5, even though 0.4 < 0.5, the first pose data of the current data frame and the second pose data of the 5th data frame in the key frame sequence do not satisfy the preset condition.
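A minimal sketch of this per-axis check follows; the function name and the default threshold values (which echo the 0.5 m example above) are assumptions of this sketch:

```python
def meets_condition_xy(change, second_thresh=0.5, third_thresh=0.5):
    """Per-axis variant of the preset condition: both coordinate change
    values must stay under their thresholds (second and third preset
    thresholds)."""
    dx, dy = change[0], change[1]
    return dx < second_thresh and dy < third_thresh

assert meets_condition_xy((0.3, 0.4))        # 4th pose change data: satisfied
assert not meets_condition_xy((0.6, 0.4))    # 5th pose change data: not satisfied
```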
Optionally, the implementation process of step S102222 may include:
calculating the Euclidean distance of the j-th pose change data;
when the Euclidean distance of the j-th pose change data is smaller than a seventh preset threshold, determining that the preset condition is satisfied, and otherwise determining that it is not satisfied.
In this embodiment, the preset condition includes that the Euclidean distance of the j-th pose change data is smaller than the seventh preset threshold. The Euclidean distance is calculated as L = √(x′² + y′²), where L is the Euclidean distance, x′ is the abscissa change value, and y′ is the ordinate change value.
As an example, if the seventh preset threshold is 0.6 m and the 6th pose change data is (0.3, 0.4), then L = √(0.3² + 0.4²) = √0.25 = 0.5 < 0.6, so the first pose data of the current data frame and the second pose data of the 6th data frame in the key frame sequence satisfy the preset condition.
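A minimal sketch of the Euclidean-distance variant follows; the function name and the 0.6 m default (echoing the example above) are assumptions of this sketch:

```python
import math

def meets_condition_euclidean(change, seventh_thresh=0.6):
    """Euclidean-distance variant of the preset condition: the planar
    distance between the two poses must stay under the seventh preset
    threshold."""
    L = math.hypot(change[0], change[1])  # L = sqrt(x'^2 + y'^2)
    return L < seventh_thresh

assert meets_condition_euclidean((0.3, 0.4))  # L = 0.5 < 0.6: satisfied
```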
Specifically, the implementation process of step S102222 may further include:
and when the abscissa change value of the jth position posture change data is smaller than a fourth preset threshold value, and the ordinate change value of the jth position posture change data is smaller than a fifth preset threshold value, and the posture angle change value of the jth position posture change data is smaller than a sixth preset threshold value, determining that the preset condition is met.
And when the abscissa variation value of the jth posture variation data is greater than or equal to a fourth preset threshold value, and the ordinate variation value of the jth posture variation data is greater than or equal to a fifth preset threshold value, and the posture angle variation value of the jth posture variation data is greater than or equal to a sixth preset threshold value, determining that the preset condition is not met.
In this embodiment, the condition that the preset condition is satisfied includes that the abscissa variation value, the ordinate variation value, and the attitude angle variation value all satisfy the condition, and it can be determined that the preset condition is satisfied only when the position variation and the attitude variation both satisfy the preset condition.
The fourth and fifth preset thresholds may be the same or different. The sixth preset threshold is the maximum permitted attitude angle change, set as needed.
S102223, when the number of data frames in the key frame sequence that satisfy the preset condition is greater than or equal to a first preset threshold, determining that the current data frame and the key frame sequence satisfy the similar pose condition.
In this embodiment, after comparing the current data frame with each data frame in the key frame sequence, the number of data frames meeting the preset condition in the key frame sequence is determined, and when the number is greater than or equal to a first preset threshold, it may be determined that the current data frame and the key frame sequence meet the similar pose condition, and it may be determined that the current data frame is not the key frame.
In this embodiment, the first preset threshold may be set as needed, for example, the first preset threshold may be 1, 2, or 3, etc.
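A self-contained sketch of this counting test follows; the function name, the per-axis condition used inside (the variant from the sketches above), and the default threshold values are assumptions of this sketch:

```python
def similar_pose(current_pose, key_frame_poses,
                 x_thresh=0.5, y_thresh=0.5, count_thresh=1):
    """Similar-pose test (S102221 to S102223) as a sketch: count the key
    frames whose pose change relative to the current frame stays under the
    per-axis thresholds, then compare the count against the first preset
    threshold. Poses are (x, y, theta) tuples."""
    n = 0
    for kx, ky, ktheta in key_frame_poses:
        dx = abs(current_pose[0] - kx)  # abscissa change value x'
        dy = abs(current_pose[1] - ky)  # ordinate change value y'
        if dx < x_thresh and dy < y_thresh:  # preset condition (per-axis)
            n += 1
    return n >= count_thresh  # first preset threshold

# Example: with thresholds 0.5 m and count threshold 1, a key frame at
# (3.3, 6.4, 35) makes the current pose (3.0, 6.0, 30) "similar".
assert similar_pose((3.0, 6.0, 30), [(3.3, 6.4, 35), (9.0, 9.0, 0)])
```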
In the embodiments of the application, the pose change between the current data frame and each data frame in the key frame sequence is calculated, and whether the current data frame is a key frame is determined from that pose change; current data frames whose pose is similar to that of a data frame already in the key frame sequence can thus be excluded, further reducing the redundancy of the data frames in the key frame sequence.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 6 shows a block diagram of an instant map construction device provided in the embodiment of the present application, which corresponds to the instant map construction method described in the above embodiment, and only shows a part related to the embodiment of the present application for convenience of description.
Referring to fig. 6, the apparatus 600 may include: a data acquisition module 610, a judgment module 620, a first result generation module 630 and a second result generation module 640.
The data acquisition module 610 is configured to acquire a current data frame, a key frame sequence and a preset map, where the current data frame is map data of a current area acquired by an environment detection device, the key frame sequence is a stored data frame set, and the preset map is constructed based on the map data acquired by the environment detection device;
a judging module 620, configured to judge whether a preset matching condition is met according to the current data frame, the key frame sequence, and the preset map;
a first result generating module 630, configured to filter the current data frame if the determination result is yes;
and a second result generating module 640, configured to, if the determination result is negative, update the sequence of key frames and the preset map according to the current data frame.
In a possible implementation manner, the determining module 620 may specifically include:
the score acquisition unit is used for determining a matching score according to the current data frame and the preset map;
the first judging unit is used for judging, when the matching score is greater than or equal to a preset score, whether the current data frame and the key frame sequence satisfy the similar pose condition; if so, the preset matching condition is satisfied, and if not, the preset matching condition is not satisfied;
and the second judging unit is used for not meeting the preset matching condition when the matching score is smaller than the preset score.
In a possible implementation manner, the first determining unit may specifically include:
the data acquisition subunit is used for acquiring first pose data corresponding to the current data frame and second pose data corresponding to any data frame in the key frame sequence, wherein the key frame sequence comprises i sequentially arranged data frames, and i is greater than or equal to 1;
and the judging subunit is used for determining whether the current data frame and the key frame sequence meet the similar pose condition or not based on the first pose data and the second pose data.
In a possible implementation manner, the determining subunit may specifically be configured to:
determining the j-th pose change data based on the first pose data of the current data frame and the second pose data corresponding to the j-th data frame in the key frame sequence, wherein j is greater than or equal to 1 and less than or equal to i, and i is the number of data frames in the key frame sequence;
determining whether the first pose data corresponding to the current data frame and the second pose data corresponding to the j-th data frame in the key frame sequence satisfy a preset condition based on the j-th pose change data, and counting the number of data frames in the key frame sequence that satisfy the preset condition;
when the number of data frames satisfying the preset condition in the key frame sequence is greater than or equal to a first preset threshold, determining that the current data frame and the key frame sequence satisfy the similar pose condition.
In a possible implementation manner, the pose change data includes at least two of an abscissa change value, an ordinate change value, and an attitude angle change value, where the abscissa change value is the difference between the abscissa of the pose corresponding to the current data frame and the abscissa of the pose corresponding to the j-th data frame, the ordinate change value is the difference between the ordinates of those two poses, and the attitude angle change value is the difference between the attitude angles of those two poses.
In one possible implementation, the pose change data includes an abscissa change value and an ordinate change value;
the determining subunit may be specifically configured to:
and when the abscissa change value of the j-th pose change data is smaller than a second preset threshold and the ordinate change value of the j-th pose change data is smaller than a third preset threshold, determining that the preset condition is satisfied.
In one possible implementation, the pose change data includes an abscissa change value, an ordinate change value, and an attitude angle change value;
the determining subunit may be specifically configured to:
and when the abscissa change value of the j-th pose change data is smaller than a fourth preset threshold, the ordinate change value of the j-th pose change data is smaller than a fifth preset threshold, and the attitude angle change value of the j-th pose change data is smaller than a sixth preset threshold, determining that the preset condition is satisfied.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
Embodiments of the present application further provide a robot, and referring to fig. 7, the robot 700 may include: at least one processor 710, a memory 720, and a computer program stored in the memory 720 and operable on the at least one processor 710, wherein the processor 710, when executing the computer program, implements the steps of any of the method embodiments described above, such as the steps S101 to S104 in the embodiment shown in fig. 2. Alternatively, the processor 710, when executing the computer program, implements the functions of the modules/units in the above-described device embodiments, such as the functions of the modules 610 to 640 shown in fig. 6.
Illustratively, the computer program may be divided into one or more modules/units, which are stored in the memory 720 and executed by the processor 710 to accomplish the present application. The one or more modules/units may be a series of computer program segments capable of performing certain functions, which are used to describe the execution of the computer program in the robot 700.
Those skilled in the art will appreciate that fig. 7 is merely an example of the robot and does not constitute a limitation; the robot may include more or fewer components than shown, combine some components, or have different components, such as input/output devices, network access devices, buses, etc.
The Processor 710 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 720 may be an internal storage unit of the terminal device, or may be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like. The memory 720 is used for storing the computer programs and other programs and data required by the terminal device. The memory 720 may also be used to temporarily store data that has been output or is to be output.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program can implement the steps in the embodiments of the instant map building method described above.
The embodiment of the present application provides a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in each embodiment of the instant map building method when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. An instant map construction method, comprising:
acquiring a current data frame, a key frame sequence and a preset map, wherein the current data frame is map data of a current area acquired by an environment detection device, the key frame sequence is a stored data frame set, and the preset map is constructed based on the map data acquired by the environment detection device;
judging whether a preset matching condition is met or not according to the current data frame, the key frame sequence and the preset map;
if the judgment result is yes, filtering the current data frame;
if not, updating the key frame sequence and the preset map according to the current data frame.
2. The instant map construction method of claim 1, wherein the judging whether a preset matching condition is met according to the current data frame, the key frame sequence and the preset map comprises:
determining a matching score according to the current data frame and the preset map;
when the matching score is greater than or equal to a preset score, judging whether the current data frame and the key frame sequence meet a similar pose condition; if so, the preset matching condition is met, and if not, the preset matching condition is not met;
and when the matching score is smaller than the preset score, the preset matching condition is not met.
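Continuing the illustrative sketch above, claim 2's judgment is a two-stage gate: a map-matching score first, then the similar pose condition, and the frame is filtered only when both hold. The match_score method and the 0.85 preset score are assumptions, not values from the patent; one plausible reading is that the score comes from scan-to-map matching, so a high score means the frame adds little new information.

    def matches_existing_data(frame, key_frames, preset_map, preset_score=0.85):
        """Claim 2: the preset matching condition is met only when the
        matching score is high AND the pose is similar to stored key frames."""
        score = preset_map.match_score(frame)  # assumed scan-to-map matching score
        if score < preset_score:
            return False  # low score: the frame likely observes new territory
        return meets_similar_pose_condition(frame, key_frames)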
3. The instant map construction method according to claim 2, wherein the judging whether the current data frame and the key frame sequence meet a similar pose condition comprises:
acquiring first pose data corresponding to the current data frame and second pose data corresponding to each data frame in the key frame sequence, wherein the key frame sequence comprises i sequentially arranged data frames, i being greater than or equal to 1;
determining whether the current data frame and the key frame sequence meet the similar pose condition based on the first pose data and the second pose data.
4. The instant map construction method of claim 3, wherein the determining whether the current data frame and the key frame sequence meet the similar pose condition based on the first pose data and the second pose data comprises:
determining jth pose change data based on the first pose data of the current data frame and the second pose data corresponding to the jth data frame in the key frame sequence, wherein j is greater than or equal to 1 and less than or equal to i, and i is the number of data frames in the key frame sequence;
determining, based on the jth pose change data, whether the first pose data corresponding to the current data frame and the second pose data corresponding to the jth data frame in the key frame sequence meet a preset condition, and counting the number of data frames in the key frame sequence that meet the preset condition;
and when the number of data frames in the key frame sequence that meet the preset condition is greater than or equal to a first preset threshold, determining that the current data frame and the key frame sequence meet the similar pose condition.
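Claims 3 and 4 amount to comparing the current pose against every key frame pose and counting how many comparisons meet the per-frame preset condition. A hedged sketch, where the count threshold of 3 is an invented placeholder for the first preset threshold and pose_change_within_limits (claims 5 to 7) is sketched after claim 7:

    def meets_similar_pose_condition(frame, key_frames, first_preset_threshold=3):
        """Claims 3-4: the similar pose condition holds when enough key
        frames have a pose close to the current frame's pose."""
        count = sum(
            1 for key_frame in key_frames
            if pose_change_within_limits(frame.pose, key_frame.pose)
        )
        return count >= first_preset_threshold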
5. The instant map construction method of claim 4, wherein the pose change data comprises at least two of an abscissa change value, an ordinate change value and a pose angle change value, the abscissa change value being the difference between the abscissa of the pose corresponding to the current data frame and the abscissa of the pose corresponding to the ith data frame in the key frame sequence, the ordinate change value being the difference between the ordinate of the pose corresponding to the current data frame and the ordinate of the pose corresponding to the ith data frame, and the pose angle change value being the difference between the pose angle of the pose corresponding to the current data frame and the pose angle of the pose corresponding to the ith data frame.
6. The instant map construction method of claim 5, wherein the pose change data comprises an abscissa change value and an ordinate change value;
the determining, based on the jth pose change data, whether the first pose data corresponding to the current data frame and the second pose data corresponding to the jth data frame in the key frame sequence meet the preset condition comprises:
when the abscissa change value of the jth pose change data is smaller than a second preset threshold and the ordinate change value of the jth pose change data is smaller than a third preset threshold, determining that the preset condition is met.
7. The instant map construction method of claim 5, wherein the pose change data comprises an abscissa change value, an ordinate change value and a pose angle change value;
the determining, based on the jth pose change data, whether the first pose data corresponding to the current data frame and the second pose data corresponding to the jth data frame in the key frame sequence meet the preset condition comprises:
when the abscissa change value of the jth pose change data is smaller than a fourth preset threshold, the ordinate change value of the jth pose change data is smaller than a fifth preset threshold, and the pose angle change value of the jth pose change data is smaller than a sixth preset threshold, determining that the preset condition is met.
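Claims 5 to 7 threshold the individual components of the pose change data. The sketch below uses absolute differences and invented threshold values (the patent only names the second to sixth preset thresholds without fixing them); the commented-out line shows the claim 6 variant that ignores the pose angle:

    def pose_change_within_limits(p1, p2, x_max=0.05, y_max=0.05, theta_max=0.1):
        """Claims 6-7: the preset condition is met when every thresholded
        component of the pose change is small enough."""
        dx = abs(p1.x - p2.x)              # abscissa change value
        dy = abs(p1.y - p2.y)              # ordinate change value
        dtheta = abs(p1.theta - p2.theta)  # pose angle change value
        # Claim 6 variant (abscissa and ordinate only):
        #     return dx < x_max and dy < y_max
        # Claim 7 variant (abscissa, ordinate and pose angle):
        return dx < x_max and dy < y_max and dtheta < theta_max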
8. An instant map construction apparatus, comprising:
a data acquisition module configured to acquire a current data frame, a key frame sequence and a preset map, wherein the current data frame is map data of a current area acquired by an environment detection device, the key frame sequence is a stored set of data frames, and the preset map is generated based on the key frame sequence;
a judging module configured to judge whether a preset matching condition is met according to the current data frame, the key frame sequence and the preset map;
a first result generation module configured to filter the current data frame if the judgment result is yes;
and a second result generation module configured to update the key frame sequence and the preset map according to the current data frame if the judgment result is no.
9. A robot comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the instant map construction method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the instant map construction method according to any one of claims 1 to 7.
CN202011063174.4A 2020-09-30 2020-09-30 Instant map construction method and device, robot and storage medium Active CN112198878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011063174.4A CN112198878B (en) 2020-09-30 2020-09-30 Instant map construction method and device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN112198878A true CN112198878A (en) 2021-01-08
CN112198878B CN112198878B (en) 2021-09-28

Family

ID=74013558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011063174.4A Active CN112198878B (en) 2020-09-30 2020-09-30 Instant map construction method and device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN112198878B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140320593A1 (en) * 2013-04-30 2014-10-30 Qualcomm Incorporated Monocular visual slam with general and panorama camera movements
CN105678842A (en) * 2016-01-11 2016-06-15 湖南拓视觉信息技术有限公司 Manufacturing method and device for three-dimensional map of indoor environment
CN106485744A (en) * 2016-10-10 2017-03-08 成都奥德蒙科技有限公司 A kind of synchronous superposition method
KR20180113060A (en) * 2017-04-05 2018-10-15 충북대학교 산학협력단 Keyframe extraction method for graph-slam and apparatus using thereof
CN111274847A (en) * 2018-12-04 2020-06-12 上海汽车集团股份有限公司 Positioning method
CN109887029A (en) * 2019-01-17 2019-06-14 江苏大学 A kind of monocular vision mileage measurement method based on color of image feature
CN110561416A (en) * 2019-08-01 2019-12-13 深圳市银星智能科技股份有限公司 Laser radar repositioning method and robot
CN110501017A (en) * 2019-08-12 2019-11-26 华南理工大学 A kind of Mobile Robotics Navigation based on ORB_SLAM2 ground drawing generating method
CN110816833A (en) * 2019-11-08 2020-02-21 广东工业大学 Unmanned aerial vehicle flying rice transplanting system and rice transplanting method
CN111024100A (en) * 2019-12-20 2020-04-17 深圳市优必选科技股份有限公司 Navigation map updating method and device, readable storage medium and robot
CN111339228A (en) * 2020-02-18 2020-06-26 Oppo广东移动通信有限公司 Map updating method, device, cloud server and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Kong Dehui et al.: "An Improved Camera Pose Estimation Method for SLAM Systems", Journal of South China University of Technology (Natural Science Edition) *
Zhang Guoliang et al.: "SLAM and VSLAM Methods for Mobile Robots", 31 October 2018, Xi'an Jiaotong University Press *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112904365A (en) * 2021-02-10 2021-06-04 广州视源电子科技股份有限公司 Map updating method and device
CN112904365B (en) * 2021-02-10 2024-05-10 广州视源电子科技股份有限公司 Map updating method and device
CN113624222A (en) * 2021-07-30 2021-11-09 深圳市优必选科技股份有限公司 Map updating method, robot and readable storage medium
CN113739819A (en) * 2021-08-05 2021-12-03 上海高仙自动化科技发展有限公司 Verification method and device, electronic equipment, storage medium and chip
CN113739819B (en) * 2021-08-05 2024-04-16 上海高仙自动化科技发展有限公司 Verification method, verification device, electronic equipment, storage medium and chip

Also Published As

Publication number Publication date
CN112198878B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN112198878B (en) Instant map construction method and device, robot and storage medium
US20210063577A1 (en) Robot relocalization method and apparatus and robot using the same
CN111815754B (en) Three-dimensional information determining method, three-dimensional information determining device and terminal equipment
CN111612841B (en) Target positioning method and device, mobile robot and readable storage medium
CN112595323A (en) Robot and drawing establishing method and device thereof
CN112348921A (en) Mapping method and system based on visual semantic point cloud
CN111754579A (en) Method and device for determining external parameters of multi-view camera
CN111402413B (en) Three-dimensional visual positioning method and device, computing equipment and storage medium
CN114897669A (en) Labeling method and device and electronic equipment
CN111368860B (en) Repositioning method and terminal equipment
CN110673607A (en) Feature point extraction method and device in dynamic scene and terminal equipment
CN113459088B (en) Map adjustment method, electronic device and storage medium
CN112215887A (en) Pose determination method and device, storage medium and mobile robot
CN114119749A (en) Monocular 3D vehicle detection method based on dense association
CN112212851B (en) Pose determination method and device, storage medium and mobile robot
CN112258647A (en) Map reconstruction method and device, computer readable medium and electronic device
CN111104965A (en) Vehicle target identification method and device
CN113721240B (en) Target association method, device, electronic equipment and storage medium
CN115388878A (en) Map construction method and device and terminal equipment
CN109919998B (en) Satellite attitude determination method and device and terminal equipment
CN113255405A (en) Parking space line identification method and system, parking space line identification device and storage medium
CN111223139A (en) Target positioning method and terminal equipment
CN112711965B (en) Drawing recognition method, device and equipment
CN113570667B (en) Visual inertial navigation compensation method and device and storage medium
CN116152048A (en) Data space synchronization method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 518000 1701, building 2, Yinxing Zhijie, No. 1301-72, sightseeing Road, Xinlan community, Guanlan street, Longhua District, Shenzhen, Guangdong Province

Patentee after: Shenzhen Yinxing Intelligent Group Co.,Ltd.

Address before: 518000 building A1, Yinxing hi tech Industrial Park, Guanlan street, Longhua District, Shenzhen City, Guangdong Province

Patentee before: Shenzhen Silver Star Intelligent Technology Co.,Ltd.
