WO2019219077A1 - Positioning method, positioning device, positioning system, storage medium, and method for constructing an offline map database - Google Patents
Positioning method, positioning device, positioning system, storage medium, and method for constructing an offline map database
- Publication number: WO2019219077A1 (application PCT/CN2019/087411)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- map
- visual
- image information
- positioning
- current image
- Prior art date
Classifications
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06T7/74: Determining position or orientation using feature-based methods involving reference images or patches
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06V20/10: Scenes; terrestrial scenes
- G06V20/56: Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06T2207/10016: Image acquisition modality; video; image sequence
- G06T2207/10032: Satellite or aerial image; remote sensing
- G06T2207/10044: Radar image
- G06T2207/30244: Subject of image; camera pose
Definitions
- Embodiments of the present disclosure relate to a positioning method, a positioning device, a positioning system, a storage medium, and a method of constructing an offline map database.
- Traditional positioning methods locate the user through GPS satellite positioning.
- As the scale of buildings grows ever larger, the demand for indoor positioning keeps increasing.
- At least one embodiment of the present disclosure provides a positioning method, including: acquiring current image information and extracting visual features from the current image information; matching the visual features in the current image information against key frames in an offline map database to determine a candidate key frame similar to the visual features in the current image information, where the offline map database is generated from a global grid map and a visual map; and determining the pose corresponding to the candidate key frame and converting the pose into coordinate values.
- the positioning method is used in an indoor environment.
- the current image information is current indoor image information.
- the server receives the current image information sent by the mobile terminal, and extracts visual features in the current image information.
- the coordinate value is sent by the server to the mobile terminal.
- a candidate key frame similar to the visual features in the current image information is determined by a visual bag-of-words model matching algorithm.
- the positioning method provided by at least one embodiment of the present disclosure further includes: acquiring a global grid map constructed by a lidar; acquiring a visual map constructed by a vision system; and generating the offline map database according to the global grid map and the visual map.
- acquiring the global grid map constructed by the lidar includes: initializing the coordinate system of the map constructed by the lidar to a global coordinate system; estimating first positioning information of the environment region scanned by the lidar, and using the first positioning information as the input of the particle-filter sampling step to obtain a prior distribution of the particles; and generating particles according to the prior distribution of the particles, then updating the particle poses and map data according to the particle filter algorithm fused with the odometer pose transformation, to generate the global grid map.
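The particle-filter flow above (a prior seeded by the lidar's coarse pose, then pose updates fused with the odometer) can be sketched as follows. This is a hedged, minimal illustration of the motion-update half of the filter only; the particle count, noise levels, and seed pose are invented for the example.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def propagate_particles(particles, d_trans, d_rot, trans_noise=0.05, rot_noise=0.02):
    """Propagate each particle (x, y, theta, weight) by the odometer pose
    change (d_trans, d_rot), adding motion noise, as in the map-update step."""
    updated = []
    for x, y, theta, w in particles:
        t = d_trans + random.gauss(0.0, trans_noise)
        theta_new = theta + d_rot + random.gauss(0.0, rot_noise)
        updated.append((x + t * math.cos(theta_new),
                        y + t * math.sin(theta_new),
                        theta_new, w))
    return updated

# The lidar's coarse pose (the "first positioning information") seeds the
# prior distribution around (1.0 m, 2.0 m, 0 rad).
prior = [(1.0 + random.gauss(0, 0.1), 2.0 + random.gauss(0, 0.1),
          random.gauss(0, 0.05), 1.0 / 100) for _ in range(100)]
particles = propagate_particles(prior, d_trans=0.5, d_rot=0.1)
```

In a real system the particle weights would also be re-evaluated against the lidar scan and the set resampled; only the odometry-fused propagation step is shown here.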
- acquiring the visual map constructed by the vision system includes: initializing the imaging device, and obtaining, from the relative installation positions of the imaging device and the lidar, the conversion relationship between the coordinate system of the visual map and the coordinate system of the global grid map; determining key frames according to inter-frame features of the image frames acquired by the imaging device, and determining second positioning information of the key frames according to the conversion relationship; determining a corrected scale factor according to the positioning information of the lidar and of the camera; and establishing a sparse map according to the corrected scale factor and the key frames.
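The "conversion relationship" between the visual-map frame and the grid-map frame is, in 2D, a rigid transform (a rotation plus a translation). A minimal sketch, with the rotation angle and translation invented for illustration:

```python
import math

def visual_to_grid(point, translation, theta):
    """Map a point from the visual-map coordinate system into the global
    grid-map coordinate system via a 2D rigid transform (rotate by theta,
    then translate). The transform would be derived from the relative
    installation positions of the camera and the lidar."""
    x, y = point
    tx, ty = translation
    return (math.cos(theta) * x - math.sin(theta) * y + tx,
            math.sin(theta) * x + math.cos(theta) * y + ty)

# A point 1 m ahead in the visual frame, with the camera rotated 90 degrees
# relative to the grid frame and offset 0.2 m along x.
p = visual_to_grid((1.0, 0.0), translation=(0.2, 0.0), theta=math.pi / 2)
```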
- the lidar constructs the global grid map and the vision system constructs the visual map in parallel; the positioning method further includes optimizing the coordinate values corresponding to the key frames by loop-closure detection.
- the positioning method provided by at least one embodiment of the present disclosure further includes: when the visual features are matched against key frames in the offline map database but no candidate key frame similar to the visual features can be determined, acquiring velocity information and angle information of frames adjacent to the current image information, and estimating the coordinate values of the current image information from the velocity information and the angle information.
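This fallback amounts to plain dead reckoning from the last known position. A hedged sketch (the function name and units are ours, not the patent's):

```python
import math

def dead_reckon(last_coord, speed, heading, dt):
    """Estimate the current coordinate from the last known coordinate using
    speed and heading (angle) information taken from adjacent frames, used
    when no candidate key frame can be matched."""
    x, y = last_coord
    return (x + speed * dt * math.cos(heading),
            y + speed * dt * math.sin(heading))

# Last matched position (10 m, 5 m), moving at 1.2 m/s along heading 0 for 2 s.
estimate = dead_reckon((10.0, 5.0), speed=1.2, heading=0.0, dt=2.0)
```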
- At least one embodiment of the present disclosure further provides a positioning apparatus, including an acquiring unit, an extracting unit, a matching unit, a first determining unit, a second determining unit, and a converting unit.
- An acquiring unit configured to acquire current image information; an extracting unit configured to extract visual features from the current image information; a matching unit configured to match the visual features extracted by the extracting unit against key frames in an offline map database; a first determining unit configured to determine a candidate key frame similar to the visual features in the current image information; a second determining unit configured to determine the pose corresponding to the candidate key frame determined by the first determining unit; and a converting unit configured to convert the pose determined by the second determining unit into coordinate values.
- the acquiring unit is configured to receive, by the server, the current image information sent by the mobile terminal to acquire current image information.
- the positioning apparatus further includes: a sending unit, a first building unit, a second building unit, a generating unit, and a processing unit.
- a sending unit configured to send the coordinate value converted by the converting unit to the mobile terminal;
- a first building unit configured to start a lidar to construct a global grid map; and
- a second building unit configured to activate the vision system Constructing a visual map;
- the generating unit is configured to generate an offline map database according to the global grid map constructed by the first building unit and the visual map constructed by the second building unit;
- the processing unit is configured to, when the visual features are matched against key frames in the offline map database but no candidate key frame similar to the visual features can be determined, acquire velocity information and angle information of frames adjacent to the current image information and estimate the coordinate values of the current image information from the velocity information and the angle information.
- At least one embodiment of the present disclosure also provides a positioning apparatus comprising: a processor; and a memory storing one or more computer program modules. The one or more computer program modules are stored in the memory and configured to be executed by the processor, and comprise instructions for performing the positioning method provided by any embodiment of the present disclosure.
- At least one embodiment of the present disclosure also provides a positioning system including a mobile terminal and a server. The mobile terminal is configured to collect current image information and send it to the server. The server is configured to: receive the current image information sent by the mobile terminal and extract visual features from it; match the visual features in the current image information against key frames in an offline map database and determine a candidate key frame similar to the visual features in the current image information, where the offline map database is generated from a global grid map and a visual map; and determine the pose corresponding to the candidate key frame, convert the pose into coordinate values, and send the coordinate values to the mobile terminal.
- At least one embodiment of the present disclosure also provides a storage medium that non-transiently stores computer-readable instructions which, when executed by a computer, can perform the positioning method provided by any embodiment of the present disclosure.
- At least one embodiment of the present disclosure further provides a method for constructing an offline map database, comprising: starting a lidar to construct a global grid map; starting a vision system to construct a visual map; and generating the offline map database according to the global grid map and the visual map.
- starting the lidar to construct the global grid map includes: initializing the coordinate system of the map constructed by the lidar to a global coordinate system; estimating first positioning information of the environment region scanned by the lidar, and using the first positioning information as the input of the particle-filter sampling step to obtain a prior distribution of the particles; and generating particles according to the prior distribution of the particles, then updating the particle poses and map data according to the particle filter algorithm fused with the odometer pose transformation, to generate the global grid map. Starting the vision system to construct the visual map includes: initializing the camera device, and obtaining, from the relative installation positions of the camera device and the lidar, the conversion relationship between the coordinate system of the visual map and the coordinate system of the global grid map; and determining key frames according to inter-frame features of the image frames acquired by the camera device, and determining second positioning information of the key frames according to the conversion relationship.
- starting the lidar to construct the global grid map and starting the vision system to construct the visual map are performed in parallel.
- FIG. 1A is a flowchart of a positioning method according to at least one embodiment of the present disclosure
- FIG. 1B is a flowchart of some examples of step S102 shown in FIG. 1A;
- FIG. 2 is a flow chart showing another positioning method provided by at least one embodiment of the present disclosure.
- FIG. 3 is a schematic diagram of a robot platform provided by at least one embodiment of the present disclosure.
- FIG. 4 is a structural block diagram of a positioning apparatus according to at least one embodiment of the present disclosure.
- FIG. 5 is a structural block diagram of another positioning apparatus according to at least one embodiment of the present disclosure.
- FIG. 6 is a structural block diagram of still another positioning apparatus according to at least one embodiment of the present disclosure.
- FIG. 7 is a structural block diagram of a positioning system according to at least one embodiment of the present disclosure.
- FIG. 8 is a schematic diagram of a storage medium according to at least one embodiment of the present disclosure.
- GPS positioning is inaccurate indoors, or GPS positioning systems cannot be used at all, because GPS signals are received poorly indoors. Therefore, the reliability and accuracy of GPS positioning data cannot meet commercial needs indoors or in other poor-signal environments.
- At least one embodiment of the present disclosure provides a positioning method, including: acquiring current image information and extracting visual features from it; matching the visual features in the current image information against key frames in an offline map database to determine a candidate key frame similar to the visual features in the current image information, where the offline map database is generated from a global grid map and a visual map; and determining the pose corresponding to the candidate key frame and converting the pose into coordinate values.
- At least one embodiment of the present disclosure also provides a positioning device, a positioning system, and a storage medium corresponding to the positioning method described above.
- the positioning method provided by the foregoing embodiment of the present disclosure can determine the corresponding coordinate value by analyzing the current image information to achieve accurate positioning.
- the positioning method can be used for positioning in an indoor environment, and can also be used for positioning in other outdoor environments.
- the positioning method is used in the indoor environment as an example for description.
- the positioning method can be implemented at least in part in software and loaded and executed by a processor in the positioning device, or at least partially implemented in hardware or firmware to achieve precise positioning.
- FIG. 1A is a flowchart of a positioning method according to at least one embodiment of the present disclosure. As shown in FIG. 1A, the positioning method includes steps S101 to S103. Steps S101 to S103 of the positioning method and their respective exemplary implementations are respectively described below.
- Step S101 Acquire current image information, and extract visual features in the current image information.
- Step S102 matching the visual features with key frames in the offline map database to determine candidate key frames similar to the visual features.
- Step S103 determining a pose corresponding to the candidate key frame, and converting the pose into a coordinate value.
- the current image information is current indoor image information
- the current image information is current outdoor image information, which may be determined according to actual conditions.
- the embodiment of the present disclosure does not limit this.
- the following is an example in which the current image information is used as the indoor image information, but the embodiment of the present disclosure does not limit this.
- current image information transmitted by the mobile terminal may be received by the server (that is, the current image information is acquired), and the subsequent positioning method is performed by a processor, such as a central processing unit (CPU), in the server; in other examples, the current image information may also be acquired by a device such as a camera in the mobile terminal, and the subsequent positioning method is executed directly by a processor such as a CPU in the mobile terminal.
- the server receives the current image information sent by the mobile terminal (that is, acquires the current image information), and a processor such as a central processing unit (CPU) in the server executes the subsequent positioning method, but embodiments of the present disclosure are not limited thereto.
- the coordinate value is sent by the server to the mobile terminal to achieve positioning.
- the user transmits the current indoor image information captured by the user to the backend server through the application in the mobile terminal device.
- the mobile device can be held or fixed on the shopping cart, and a certain elevation angle with the horizontal plane is ensured during the shooting.
- the purpose is that if the camera were parallel to the horizontal plane, the captured indoor image might include the flow of people, which would affect the accuracy of the positioning information.
- the mobile terminal in the embodiment of the present disclosure may be a mobile phone, a tablet, or the like.
- the current indoor image information may be acquired using an imaging device of the mobile terminal or other separate imaging device.
- the server receives the current indoor image information sent by the mobile terminal, analyzes the current indoor image information, and extracts the visual features in the current indoor image information.
- feature extraction may be implemented by, but is not limited to, methods such as ORB.
- for the specific implementation of the ORB feature, reference may be made to descriptions in the related art, which are not repeated here.
- the server may perform image pre-processing on the received current indoor image information before analyzing it; alternatively, the mobile terminal may pre-process the current indoor image information and then send the pre-processed image information to the server, to improve the server's processing speed.
- the embodiment of the present disclosure does not limit this.
- an acquisition unit and an extraction unit may be provided, with current image information acquired by the acquisition unit and the visual features in the current image information extracted by the extraction unit; for example, the acquisition unit and the extraction unit may be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, together with corresponding computer instructions.
- the processing unit may be a general purpose processor or a dedicated processor, and may be an X86 or ARM architecture based processor or the like.
- step S102: for example, when constructing the offline map database, all key frames of the indoor environment are stored; in a shopping mall, for example, key frames of image information on each floor and along each corridor are stored.
- the server matches the visual features of the current image information against the visual features of the key frames in the offline map database to determine key frames containing visual features similar to those of the current image information, as candidate key frames.
- the exclusion method may be used, but the embodiment of the present disclosure is not limited to the exclusion method, and other methods in the field may also be used, and details are not described herein again.
- the exclusion method may include: excluding the least relevant or least similar key frames according to the visual features in the current image information, narrowing the search range, and matching candidate key frames within the smaller range. More than one candidate key frame may be determined at first, but only one candidate key frame is finally confirmed, namely the key frame most similar or identical to the visual features of the current image information.
- the visual bag-of-words model matches the visual features in the current indoor image information against the visual features of each key frame; if the number of similar or identical visual features between the current indoor image information and a key frame reaches a certain proportion (for example, 80% of the total visual features), that key frame is determined to be a candidate key frame.
- the visual bag-of-words model can be preset using algorithms known in the art, and details are not described here again.
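A toy version of the matching criterion described above: count the visual words a query image shares with each key frame and keep frames whose overlap reaches the threshold. The 80% figure comes from the text; the word IDs and frame names are invented for illustration.

```python
from collections import Counter

def bow_similarity(query_words, frame_words):
    """Fraction of the query's visual words that also occur in a key frame."""
    q, f = Counter(query_words), Counter(frame_words)
    shared = sum(min(q[w], f[w]) for w in q)
    return shared / max(sum(q.values()), 1)

def candidate_keyframes(query_words, keyframes, threshold=0.8):
    """Return the ids of key frames whose word overlap meets the threshold."""
    return [kf_id for kf_id, words in keyframes.items()
            if bow_similarity(query_words, words) >= threshold]

keyframes = {
    "kf_01": [3, 7, 7, 12, 40, 55],
    "kf_02": [1, 2, 9, 9, 21],
}
query = [3, 7, 7, 12, 40, 99]  # shares 5 of its 6 words with kf_01
matches = candidate_keyframes(query, keyframes)  # ['kf_01']
```

Real bag-of-words retrieval (e.g. DBoW-style systems) weights words by inverse document frequency rather than counting raw overlap; this sketch only illustrates the thresholding idea.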
- a matching unit and a first determining unit may be provided, with the visual features matched against key frames in the map database by the matching unit and the candidate key frame similar to the visual features determined by the first determining unit; for example, the matching unit and the first determining unit may be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, together with corresponding computer instructions.
- the feature-point matching iterative algorithm under the RANSAC framework is used to solve the pose, and if a key frame has enough inliers, the pose optimized from that key frame is selected as the user's current pose.
- the feature point matching iterative algorithm in the RANSAC framework can be implemented by using algorithms in the field, and details are not described herein.
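The RANSAC idea (hypothesize from a minimal sample, count inliers, keep the best hypothesis) can be shown on a toy 2D problem. The real pose solve works on 2D-3D feature matches; this hedged stand-in estimates only a translation between matched points, to illustrate the inlier-counting loop.

```python
import random

def ransac_translation(src, dst, iters=200, tol=0.1, seed=0):
    """Estimate a 2D translation between matched point lists by RANSAC:
    sample one correspondence, hypothesize a translation, count inliers,
    and keep the hypothesis with the most inliers."""
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        i = rng.randrange(len(src))
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = [j for j, (s, d) in enumerate(zip(src, dst))
                   if abs(d[0] - s[0] - tx) < tol and abs(d[1] - s[1] - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

src = [(0, 0), (1, 0), (0, 1), (2, 2), (5, 5)]
dst = [(1, 2), (2, 2), (1, 3), (3, 4), (9, 9)]  # last pair is a bad match
t, inliers = ransac_translation(src, dst)
```

The bad correspondence is rejected: the winning hypothesis is the translation (1, 2) supported by the first four matches.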
- the pose may be the spatial coordinates, shooting angle, and the like of the camera on the user's mobile terminal device, that is, it represents the user's spatial position.
- the pose is converted into the coordinate value of the user.
- the server returns the plane-projection coordinate values of the obtained pose to the user's mobile terminal device through wireless transmission methods such as Bluetooth or WiFi.
- the coordinate origin of the coordinate value is set in the upper left corner of the current image information.
- a second determining unit and a converting unit may be provided, with the pose corresponding to the candidate key frame determined by the second determining unit and the pose converted into coordinate values by the converting unit; for example, the second determining unit and the converting unit may be implemented by a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a field-programmable gate array (FPGA), or another form of processing unit with data processing and/or instruction execution capabilities, together with corresponding computer instructions.
- a sending unit may be further provided, and the coordinate value converted by the converting unit is transmitted to the mobile terminal by the sending unit.
- the sending unit may be implemented as a wireless transmission or a wired transmission, which is not limited in the embodiment of the present disclosure.
- the positioning method provided by the present disclosure can acquire current indoor image information, extract visual features in the current indoor image information, match the visual features with key frames in the offline map database, and determine candidate key frames similar to the visual features, and determine The pose corresponding to the candidate key frame converts the pose into a coordinate value. Therefore, the positioning method can determine the corresponding indoor coordinate value by analyzing the current indoor image information, thereby achieving accurate indoor positioning.
- when generating the offline map database, it may be implemented by, but is not limited to, the following: starting a lidar to construct a global grid map, starting a vision system to construct a visual map, and generating the offline map database from the global grid map and the visual map.
- the offline map database is used as the basis for real-time indoor positioning and includes two parts: a global grid map and a visual map.
- the following describes the construction method of the global raster map and the visual map in detail.
- FIG. 1B is a flowchart of acquiring an offline map database according to at least one embodiment of the present disclosure. That is, FIG. 1B is a flowchart of some examples of step S102 shown in FIG. 1A. In some embodiments, as shown in FIG. 1B, the above step S102 includes steps S1021 to S1023.
- Step S1021 Acquire a global grid map constructed by the lidar.
- step S1021 includes initializing a coordinate system of the lidar mapping to a global coordinate system.
- the purpose is that the coordinate system of the map constructed by the lidar may differ from the coordinate system of the camera; initializing both into the global coordinate system allows the current image information to be located with unified coordinates in both the global grid map and the visual map.
- the method for obtaining the pose by the odometer can be implemented by some methods in the art, and will not be described here.
- the laser radar and the camera are mounted on a robot system that can move autonomously or remotely, and the odometer information of the robot system is used to obtain the posture.
- the scanning surface of the lidar is parallel to the horizontal plane, and the mounting position of the camera is at an angle of elevation with the horizontal plane, such as in a supermarket facing the upper shelf and/or the ceiling, in order to avoid crowds.
- the robot needs to move continuously in the indoor space until it covers the entire area, thus constructing an offline map database that combines 3D visual map data and 2D raster maps.
- the coarse positioning result of the lidar is obtained and used as the input of the particle-filter sampling step, that is, to generate the prior distribution of the particles; particles are generated from this prior distribution, and the particle filter algorithm, fused with the odometer pose transformation, updates the particle poses and map data; this process repeats, and finally the generated global two-dimensional grid map data is saved.
- the robot may upload the global raster map and the visual map that it acquires to the server to construct an offline map database, and may also construct an offline map database through its own internal processor, which is not limited by the embodiment of the present disclosure.
- Step S1022 Acquire a visual map constructed by the visual system.
- step S1022 specifically includes: initializing the camera device, with the robot system remaining stationary until initialization of the camera coordinate system and of the lidar coordinate system on the robot system is complete; obtaining the conversion relationship between the visual-map coordinate system and the global grid-map coordinate system from the relative installation positions of the camera device and the lidar; determining key frames according to inter-frame features of the image frames acquired by the camera device, and determining second positioning information of the key frames according to the conversion relationship; determining a corrected scale factor according to the positioning information of the lidar and of the camera device; and establishing a sparse map, that is, sparse visual offline map data, according to the corrected scale factor and the key frames.
- the specific correction method includes: over an initial motion interval Δt, the motion change obtained from the lidar positioning result is (Δx_1, Δy_1), and the motion change obtained from the visual positioning result is (Δx_2, Δy_2); the corrected scale factor is then s = sqrt(Δx_1² + Δy_1²) / sqrt(Δx_2² + Δy_2²).
- the position of the obtained key frame is updated according to the corrected scale factor.
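The scale correction above can be sketched as follows (a minimal illustration, assuming 2D motion vectors and, for brevity, 2D keyframe positions; the function names are not from the original):

```python
import math

def corrected_scale_factor(lidar_delta, visual_delta):
    """s = |lidar motion| / |visual motion| over the same interval dt;
    used to put monocular visual estimates on a metric scale."""
    dx1, dy1 = lidar_delta
    dx2, dy2 = visual_delta
    return math.hypot(dx1, dy1) / math.hypot(dx2, dy2)

def rescale_keyframe(position, scale):
    # Update a keyframe position by the corrected scale factor.
    x, y = position
    return (x * scale, y * scale)

# The lidar observed 2 m of motion while the (scale-free) visual
# estimate reported 0.5 units, so the scale factor is 4.
s = corrected_scale_factor((2.0, 0.0), (0.5, 0.0))
kf = rescale_keyframe((1.0, 2.0), s)
```

This is the standard way a metric sensor corrects the unknown scale of monocular visual odometry: both systems observe the same physical motion, so their ratio recovers the scale.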
- the loop detection method is used to optimize the coordinate values corresponding to the key frame, and finally the key frame position coordinates and the three-dimensional sparse point map data in the global coordinate system are obtained.
- the loop closure detection can be implemented with any suitable method in the field, and details are not described here again.
- the lidar mapping step S1021 and the visual mapping step S1022 described above may be performed in parallel.
- Step S1023: Generate an offline map database according to the global grid map and the visual map.
- the offline map database fuses the three-dimensional visual map data (the visual map) and the two-dimensional grid map (the global grid map).
- the step of constructing the offline map database only needs to run once, when the system is first deployed, or again to update the offline map database when the environment changes. During subsequent indoor positioning, the generated or updated offline map database is used directly.
- At least one embodiment of the present disclosure also provides another positioning method. As shown in FIG. 2, the positioning method further includes steps S201 to S204. Steps S201 to S204 of the positioning method and their respective exemplary implementations are respectively described below.
- Step S201: The server receives current indoor image information sent by the mobile terminal and extracts visual features in the current indoor image information (refer to step S101).
- Step S202: Match the visual features with the key frames in the offline map database to determine a candidate key frame similar to the visual features (refer to step S102).
- Step S203: Determine the pose corresponding to the candidate key frame, convert the pose into a coordinate value, and send the coordinate value to the mobile terminal (refer to step S103).
- Step S204: When the visual features are matched with the key frames in the offline map database and no candidate key frame similar to the visual features can be determined, velocity information and angle information of the frames adjacent to the current indoor image information are acquired, and the coordinate value of the current indoor image information (that is, the coordinate value of the user's current position) is estimated from the velocity information and the angle information.
- steps S201 to S203 are similar to steps S101 to S103 and are not described here again.
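The key-frame matching in step S202 is elsewhere described as a visual bag-of-words comparison. A minimal sketch of that idea, assuming visual words have already been quantized to string labels (real systems use ORB descriptors and a vocabulary tree, neither of which is shown here):

```python
import math
from collections import Counter

def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-visual-words histograms."""
    words = set(hist_a) | set(hist_b)
    dot = sum(hist_a.get(w, 0) * hist_b.get(w, 0) for w in words)
    na = math.sqrt(sum(v * v for v in hist_a.values()))
    nb = math.sqrt(sum(v * v for v in hist_b.values()))
    return dot / (na * nb) if na and nb else 0.0

def candidate_keyframes(query_words, keyframes, threshold=0.5):
    """Return ids of keyframes whose word histogram is similar to the query."""
    query = Counter(query_words)
    scored = [(kid, bow_similarity(query, Counter(words)))
              for kid, words in keyframes.items()]
    return [kid for kid, score in scored if score >= threshold]

# Two database keyframes; the query shares two visual words with kf1.
keyframes = {"kf1": ["door", "window", "lamp"], "kf2": ["tree", "car"]}
matches = candidate_keyframes(["door", "lamp", "chair"], keyframes)
```

The threshold is an assumed tuning parameter; in practice the best-scoring candidates are further verified geometrically before a pose is accepted.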
- the camera of a mobile terminal device is generally a conventional rolling-shutter camera; when the device moves or rotates too fast, image blur is likely to occur, causing the key frame matching on the server to fail and the positioning tracking to be lost. The matching-based positioning therefore needs to be reinitialized.
- speed information and angle information of adjacent frames of the current indoor image information are acquired by sensors in the mobile terminal.
- for example, the position coordinates of the kth and (k+1)th frames are estimated from the position coordinates acquired for the most recent k-1 frames together with the acceleration information and angle information of the (k-1)th and kth frames. At the same time, the candidate frames matching the (k+1)th frame are filtered based on its estimated position coordinates, that is, candidate frames whose distance from the estimated position coordinates exceeds a certain threshold are excluded, thereby speeding up positioning initialization.
- in this way, the present disclosure uses the inertial sensors (acceleration sensor and gyroscope) provided by the mobile terminal to narrow the screening range of the matching candidate key frames in the positioning initialization stage, thereby improving the positioning accuracy of the initialization stage.
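The inertial fallback and candidate screening described above can be sketched as follows (an illustrative simplification: constant speed and heading over the interval, a flat 2D model, and an assumed 3 m distance threshold, none of which are fixed by the original):

```python
import math

def dead_reckon(last_position, speed, heading, dt):
    """Estimate the next position from the last known position plus
    speed and heading taken from the terminal's inertial sensors."""
    x, y = last_position
    return (x + speed * dt * math.cos(heading),
            y + speed * dt * math.sin(heading))

def filter_candidates(estimate, candidates, max_dist=3.0):
    """Drop candidate keyframes whose stored position is farther than
    max_dist from the dead-reckoned estimate."""
    ex, ey = estimate
    return [kid for kid, (x, y) in candidates.items()
            if math.hypot(x - ex, y - ey) <= max_dist]

# Walking 1 m/s along the x-axis for 2 s from the origin.
est = dead_reckon((0.0, 0.0), speed=1.0, heading=0.0, dt=2.0)
candidates = {"kf1": (2.5, 0.5), "kf2": (10.0, 10.0)}
near = filter_candidates(est, candidates)
```

Only keyframes consistent with the inertial estimate remain, which is what shrinks the matching search space during reinitialization.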
- the embodiments of the present disclosure combine a two-dimensional grid map constructed by a lidar with a sparse visual map to construct an offline map. For example, the grid map is used for path planning in an indoor environment and provides scale correction for monocular visual positioning and mapping, while the sparse visual offline map is matched against the image taken by the user's mobile phone to obtain the user's current location information.
- the offline map construction method proposed by the embodiments of the present disclosure can achieve fast matching to obtain real-time location information and meet the needs of indoor navigation.
- At least one embodiment of the present disclosure also provides a positioning device that can be applied, for example, to an indoor environment, for example, for positioning a user based on indoor image information.
- the positioning device embodiments correspond to the foregoing positioning method embodiments. For brevity, the device embodiments do not repeat the details described in the method embodiments, but it should be clear that the positioning device in these embodiments can implement them correspondingly.
- FIG. 4 is a schematic block diagram of a positioning device according to at least one embodiment of the present disclosure.
- the positioning device 100 includes an obtaining unit 31, an extracting unit 32, a matching unit 33, a first determining unit 34, a second determining unit 35, and a converting unit 36; in still other embodiments, the positioning device 100 further includes a transmitting unit 37.
- these units/modules can be implemented in software, hardware, firmware, or any combination thereof.
- the obtaining unit 31 is configured to acquire current image information, and the extracting unit 32 is configured to extract visual features in the current image information.
- the current image information may be current indoor image information.
- the obtaining unit 31 and the extracting unit 32 may implement the step S101, and the specific implementation method may refer to the related description of step S101, and details are not described herein again.
- the matching unit 33 is configured to match the visual features extracted by the extracting unit 32 with the key frames in the offline map database; the first determining unit 34 is configured to determine, when the matching unit 33 matches the visual features with the key frames in the offline map database, a candidate key frame similar to the visual features in the current image information.
- the matching unit 33 and the first determining unit 34 may implement step S102; for the specific implementation method, reference may be made to the related description of step S102, and details are not described here again.
- the second determining unit 35 is configured to determine a pose corresponding to the candidate key frame determined by the first determining unit 34; and the converting unit 36 is configured to convert the pose determined by the second determining unit 35 into a coordinate value.
- the second determining unit 35 and the converting unit 36 may implement the step S103, and the specific implementation method may refer to the related description of step S103, and details are not described herein again.
- when the obtaining unit 31 is configured to receive, at the server, the current image information transmitted by the mobile terminal, that is, when the positioning method shown in FIG. 1A is executed by the server, the positioning apparatus 100 further includes a transmitting unit 37.
- the transmitting unit 37 is configured to transmit the converted coordinate values of the conversion unit 36 to the mobile terminal.
- the positioning device 100 further includes a first building unit 38, a second building unit 39, and a generating unit 310.
- the first building unit 38 is configured to start the lidar to construct a global grid map.
- the second building unit 39 is configured to activate the vision system to construct a visual map.
- the generating unit 310 is configured to generate an offline map database according to the global raster map constructed by the first building unit 38 and the visual map constructed by the second building unit 39.
- the first construction unit 38 includes a first initialization subunit 381, a prediction subunit 382, an input subunit 383, and a first generation subunit 384.
- the first initialization sub-unit 381 is configured to initialize the coordinate system of the map constructed by the laser radar to a global coordinate system.
- the prediction subunit 382 is configured to estimate first positioning information of the indoor area scanned by the laser radar.
- the input sub-unit 383 is configured to use the first positioning information estimated by the prediction sub-unit 382 as an input of the particle filter sample to obtain a prior distribution of the particles.
- the first generation subunit 384 is configured to generate particles according to the prior distribution of the particles, and update the particle pose and the map data according to the particle filter algorithm and the fusion odometer pose transformation to generate a global raster map.
- the second construction unit 39 includes a second initialization subunit 391, a first determining subunit 392, a second determining subunit 393, a third determining subunit 394, a fourth determining subunit 395, and a second generation subunit 396.
- the second initialization subunit 391 is configured to initialize the camera.
- the first determining subunit 392 is configured to obtain a conversion relationship between the visual map coordinate system and the global raster map coordinate system according to the relative mounting position of the imaging device and the laser radar.
- the second determining sub-unit 393 is configured to determine a key frame according to an inter-frame feature of the image frame acquired by the camera.
- the third determining subunit 394 is configured to determine second positioning information of the key frame according to the conversion relationship determined by the second determining subunit 393.
- the fourth determining subunit 395 is configured to determine the corrected scale factor according to the positioning information of the laser radar and the imaging device.
- the second generation sub-unit 396 is configured to establish a sparse map according to the corrected scale factor determined by the fourth determination sub-unit 395 and the key frame.
- the second building unit 39 further includes an optimization sub-unit 397.
- the optimization sub-unit 397 is configured to optimize the coordinate values corresponding to the key frames by using loopback detection.
- the positioning device 100 further includes a processing unit 311.
- the processing unit 311 is configured to: when the visual features are matched with the key frames in the offline map database and no candidate key frame similar to the visual features can be determined, acquire, through sensors in the mobile terminal, the velocity information and angle information of the frames adjacent to the current indoor image information, and estimate the coordinate value of the current indoor image information according to the velocity information and the angle information.
- the positioning device may include more or fewer circuits or units; the connection relationship between the circuits or units is not limited and may be determined according to actual needs.
- the specific configuration of each circuit is not limited, and may be composed of an analog device according to the circuit principle, a digital chip, or other suitable manner.
- FIG. 6 is a schematic block diagram of another positioning device according to at least one embodiment of the present disclosure.
- the positioning device 200 includes a processor 210, a memory 220, and one or more computer program modules 221.
- processor 210 is coupled to memory 220 via bus system 230.
- one or more computer program modules 221 are stored in memory 220.
- one or more computer program modules 221 include instructions for performing the positioning method provided by any of the embodiments of the present disclosure.
- instructions in one or more computer program modules 221 can be executed by processor 210.
- the bus system 230 can be a conventional serial or parallel communication bus, etc.; embodiments of the present disclosure do not limit this.
- the processor 210 can be a central processing unit (CPU), a field programmable gate array (FPGA), or another form of processing unit with data processing capability and/or instruction execution capability; it can be a general-purpose processor or a dedicated processor, and can control other components in the positioning device 200 to perform the desired functions.
- Memory 220 can include one or more computer program products, which can include various forms of computer readable storage media, such as volatile memory and/or nonvolatile memory.
- the volatile memory may include, for example, random access memory (RAM) and/or cache or the like.
- the nonvolatile memory may include, for example, a read only memory (ROM), a hard disk, a flash memory, or the like.
- One or more computer program instructions can be stored on the computer-readable storage medium, and the processor 210 can execute the program instructions to implement the functions of the disclosed embodiments, for example the positioning method, and/or other desired functions.
- Various applications and various data may also be stored in the computer readable storage medium, such as the starting coordinates of each rectangular area, the ending coordinates, and various data used and/or generated by the application.
- for clarity, the embodiments of the present disclosure do not show all constituent units of the positioning device 200. Those skilled in the art can provide and arrange other constituent units not shown according to specific needs, which is not limited by the embodiments of the present disclosure.
- FIG. 7 is a schematic diagram of a positioning system according to at least one embodiment of the present disclosure.
- the positioning system 300 includes the positioning device 100 or 200 provided by any of the above embodiments.
- the positioning system 300 also includes a mobile terminal and a server (not shown).
- the positioning device and system in the indoor environment provided by the present disclosure can determine the corresponding indoor coordinate value by analyzing the current indoor image information, thereby achieving accurate indoor positioning.
- the positioning device includes a processor and a memory; the obtaining unit, the extracting unit, the matching unit, the first and second determining units, the converting unit, the transmitting unit, and so on are all stored in the memory as program units, and the processor executes these program units stored in the memory to implement the corresponding functions. The processor contains a kernel, and the kernel fetches the corresponding program unit from the memory. One or more kernels can be provided, and accurate indoor positioning can be achieved by adjusting the kernel parameters.
- the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM) in a computer-readable medium; the memory includes at least one memory chip.
- At least one embodiment of the present disclosure also provides a positioning system including a mobile terminal and a server.
- the mobile terminal is configured to collect current image information and send current image information to a server;
- the server is configured to receive the current image information sent by the mobile terminal and extract the visual features in the current image information; match the visual features in the current image information with the key frames in the offline map database to determine a candidate key frame similar to the visual features, where the offline map database is generated according to the global grid map and the visual map; and determine the pose corresponding to the candidate key frame, convert the pose into a coordinate value, and send the coordinate value to the mobile terminal.
- FIG. 8 is a schematic diagram of a storage medium according to at least one embodiment of the present disclosure.
- the storage medium 400 non-transitorily stores computer-readable instructions 401, and the non-transitory computer-readable instructions 401, when executed by a computer (including a processor), can perform the positioning method provided by any embodiment of the present disclosure.
- the storage medium can be any combination of one or more computer-readable storage media; for example, one computer-readable storage medium contains computer-readable program code for extracting visual features in current image information, and another contains computer-readable program code for determining candidate key frames similar to the visual features in the current image information.
- the computer can execute the program code stored in the computer storage medium to perform a positioning method such as provided by any of the embodiments of the present disclosure.
- the storage medium may include a memory card of a smart phone, a storage unit of a tablet, a hard disk of a personal computer, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), portable compact disc read-only memory (CD-ROM), flash memory, any combination of the above storage media, or other suitable storage media.
- At least one embodiment of the present disclosure also provides a method for constructing an offline map database.
- the construction method may be implemented by the first construction unit 38 and the second construction unit 39.
- the method for constructing the offline map database comprises: starting a laser radar to construct a global raster map; starting a visual system to construct a visual map; and generating an offline map database according to the global raster map and the visual map.
- starting the lidar to construct the global grid map includes: initializing the coordinate system of the map constructed by the lidar to a global coordinate system; estimating first positioning information of the environmental region scanned by the lidar, and using the first positioning information as the input of particle filter sampling to obtain the prior distribution of the particles; and generating particles according to the prior distribution of the particles, and updating the particle poses and map data according to the particle filter algorithm and the odometer pose transformation, to generate the global grid map.
- starting the vision system to construct the visual map includes: initializing the camera device, and obtaining the conversion relationship between the coordinate system of the visual map and the coordinate system of the global grid map according to the relative installation positions of the camera device and the lidar; determining key frames according to the inter-frame features of the image frames acquired by the camera device, and determining second positioning information of the key frames according to the conversion relationship; determining the corrected scale factor according to the positioning information of the lidar and the imaging device; and establishing a sparse map according to the corrected scale factor and the key frames.
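The conversion relationship between the two coordinate systems mentioned above is, in the planar case, a rigid transform determined by the relative installation pose of the camera and the lidar. A minimal sketch (the 2D rotation-plus-translation model and the example offset are illustrative assumptions, not values from the original):

```python
import math

def make_transform(tx, ty, theta):
    """2D rigid transform (rotation by theta, then translation by (tx, ty))
    from the visual-map frame to the global grid-map frame, derived from
    the relative installation positions of the camera and the lidar."""
    c, s = math.cos(theta), math.sin(theta)
    def apply(point):
        x, y = point
        return (c * x - s * y + tx, s * x + c * y + ty)
    return apply

# Example: camera frame rotated 90 degrees and offset 1 m along x
# relative to the grid frame.
to_grid = make_transform(1.0, 0.0, math.pi / 2)
p = to_grid((1.0, 0.0))
```

Once this transform is known, every keyframe pose estimated in the visual frame can be expressed in the global grid frame, which is what makes the second positioning information of the key frames comparable with the lidar map.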
- the method of constructing the offline map database may optimize the coordinate values corresponding to the key frames by using a loopback detection method.
- starting the lidar to construct the global grid map and starting the vision system to construct the visual map are performed in parallel.
- At least one embodiment of the present disclosure further provides an electronic device including a processor, a memory, and a program stored in the memory and executable on the processor; the positioning method provided by any embodiment of the present disclosure is implemented when the processor executes the program.
- the device in this document can be a tablet, a mobile phone, or the like.
- At least one embodiment of the present disclosure also provides a computer program product which, when executed on a data processing device, implements the positioning method provided by any embodiment of the present disclosure.
- the computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- these computer program instructions can also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device then provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- in a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
- embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or a combination of software and hardware aspects. Moreover, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
Claims (20)
- A positioning method, comprising: acquiring current image information and extracting visual features in the current image information; matching the visual features in the current image information with key frames in an offline map database to determine a candidate key frame similar to the visual features in the current image information, wherein the offline map database is generated according to a global grid map and a visual map; and determining a pose corresponding to the candidate key frame and converting the pose into a coordinate value.
- The positioning method according to claim 1, wherein the positioning method is used in an indoor environment.
- The positioning method according to claim 2, wherein the current image information is current indoor image information.
- The positioning method according to any one of claims 1-3, wherein a server receives the current image information sent by a mobile terminal and extracts the visual features in the current image information.
- The positioning method according to claim 4, wherein, after the pose is converted into the coordinate value, the server sends the coordinate value to the mobile terminal.
- The positioning method according to any one of claims 1-5, wherein the candidate key frame similar to the visual features in the current image information is determined by a visual bag-of-words matching algorithm.
- The positioning method according to any one of claims 4-6, further comprising: acquiring a global grid map constructed by a lidar; acquiring a visual map constructed by a vision system; and generating the offline map database according to the global grid map and the visual map.
- The positioning method according to claim 7, wherein acquiring the global grid map constructed by the lidar comprises: initializing the coordinate system of the map provided by the lidar to a global coordinate system; estimating first positioning information of the environmental region scanned by the lidar, and using the first positioning information as the input of particle filter sampling to obtain a prior distribution of particles; and generating the particles according to the prior distribution of the particles, and updating the poses of the particles and the map data according to a particle filter algorithm fused with odometer pose transformation, to generate the global grid map.
- The positioning method according to claim 8, wherein acquiring the visual map constructed by the vision system comprises: initializing a camera device, and obtaining a conversion relationship between the coordinate system of the visual map and the coordinate system of the global grid map according to the relative installation positions of the camera device and the lidar; determining the key frames according to inter-frame features of image frames acquired by the camera device, and determining second positioning information of the key frames according to the conversion relationship; determining a corrected scale factor according to the positioning information of the lidar and the camera device; and establishing a sparse map according to the corrected scale factor and the key frames.
- The positioning method according to claim 9, wherein the construction of the global grid map by the lidar and the construction of the visual map by the vision system are performed in parallel; and the positioning method further comprises: optimizing the coordinate values corresponding to the key frames by a loop closure detection method.
- The positioning method according to any one of claims 1-10, further comprising: when the visual features are matched with the key frames in the offline map database and no candidate key frame similar to the visual features can be determined, acquiring velocity information and angle information of the frames adjacent to the current image information, and estimating the coordinate value of the current image information according to the velocity information and the angle information.
- A positioning device, comprising: an obtaining unit configured to acquire current image information; an extracting unit configured to extract visual features in the current image information; a matching unit configured to match the visual features in the current image information extracted by the extracting unit with key frames in an offline map database; a first determining unit configured to determine a candidate key frame similar to the visual features in the current image information; a second determining unit configured to determine a pose corresponding to the candidate key frame determined by the first determining unit; and a converting unit configured to convert the pose determined by the second determining unit into a coordinate value.
- The positioning device according to claim 12, wherein the obtaining unit is configured to acquire the current image information by receiving, at a server, the current image information sent by a mobile terminal.
- The positioning device according to claim 13, further comprising: a transmitting unit, a first construction unit, a second construction unit, a generating unit, and a processing unit; wherein the transmitting unit is configured to send the coordinate value converted by the converting unit to the mobile terminal; the first construction unit is configured to start a lidar to construct a global grid map; the second construction unit is configured to start a vision system to construct a visual map; the generating unit is configured to generate an offline map database according to the global grid map constructed by the first construction unit and the visual map constructed by the second construction unit; and the processing unit is configured to, when the visual features are matched with the key frames in the offline map database and no candidate key frame similar to the visual features can be determined, acquire velocity information and angle information of the frames adjacent to the current image information, and estimate the coordinate value of the current image information according to the velocity information and the angle information.
- A positioning device, comprising: a processor; and a memory storing one or more computer program modules; wherein the one or more computer program modules are stored in the machine-readable storage medium and configured to be executed by the processor, and the one or more computer program modules comprise instructions for performing the positioning method according to any one of claims 1-11.
- A positioning system, comprising a mobile terminal and a server; wherein the mobile terminal is configured to collect current image information and send the current image information to the server; and the server is configured to receive the current image information sent by the mobile terminal and extract visual features in the current image information; match the visual features in the current image information with key frames in an offline map database to determine a candidate key frame similar to the visual features in the current image information, wherein the offline map database is generated according to a global grid map and a visual map; and determine a pose corresponding to the candidate key frame, convert the pose into a coordinate value, and send the coordinate value to the mobile terminal.
- A storage medium, non-transitorily storing computer-readable instructions, wherein the non-transitory computer-readable instructions, when executed by a computer, can perform the positioning method according to any one of claims 1-11.
- A method for constructing an offline map database, comprising: starting a lidar to construct a global grid map; starting a vision system to construct a visual map; and generating the offline map database according to the global grid map and the visual map.
- The construction method according to claim 18, wherein starting the lidar to construct the global grid map comprises: initializing the coordinate system of the map constructed by the lidar to a global coordinate system; estimating first positioning information of the environmental region scanned by the lidar, and using the first positioning information as the input of particle filter sampling to obtain a prior distribution of particles; and generating the particles according to the prior distribution of the particles, and updating the poses of the particles and the map data according to a particle filter algorithm fused with odometer pose transformation, to generate the global grid map; and wherein starting the vision system to construct the visual map comprises: initializing a camera device, and obtaining a conversion relationship between the coordinate system of the visual map and the coordinate system of the global grid map according to the relative installation positions of the camera device and the lidar; determining key frames according to inter-frame features of image frames acquired by the camera device, and determining second positioning information of the key frames according to the conversion relationship; determining a corrected scale factor according to the positioning information of the lidar and the camera device; and establishing a sparse map according to the corrected scale factor and the key frames.
- The construction method according to claim 18 or 19, wherein starting the lidar to construct the global grid map and starting the vision system to construct the visual map are performed in parallel.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/641,359 US11295472B2 (en) | 2018-05-18 | 2019-05-17 | Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810482202.2 | 2018-05-18 | ||
CN201810482202.2A CN108717710B (zh) | 2018-05-18 | 2018-05-18 | 室内环境下的定位方法、装置及*** |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019219077A1 true WO2019219077A1 (zh) | 2019-11-21 |
Family
ID=63899999
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/087411 WO2019219077A1 (zh) | 2018-05-18 | 2019-05-17 | 定位方法、定位装置、定位***、存储介质及离线地图数据库的构建方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11295472B2 (zh) |
CN (1) | CN108717710B (zh) |
WO (1) | WO2019219077A1 (zh) |
CN114608569B (zh) * | 2022-02-22 | 2024-03-01 | 杭州国辰机器人科技有限公司 | 三维位姿估计方法、***、计算机设备及存储介质 |
CN114563795B (zh) * | 2022-02-25 | 2023-01-17 | 湖南大学无锡智能控制研究院 | 基于激光里程计和标签融合算法的定位追踪方法及*** |
CN117036663B (zh) * | 2022-04-18 | 2024-07-09 | 荣耀终端有限公司 | 视觉定位方法、设备和存储介质 |
CN115439536B (zh) * | 2022-08-18 | 2023-09-26 | 北京百度网讯科技有限公司 | 视觉地图更新方法、装置及电子设备 |
CN115267725B (zh) * | 2022-09-27 | 2023-01-31 | 上海仙工智能科技有限公司 | 一种基于单线激光雷达的建图方法及装置、存储介质 |
CN115375870B (zh) * | 2022-10-25 | 2023-02-10 | 杭州华橙软件技术有限公司 | 回环检测优化方法、电子设备及计算机可读存储装置 |
CN116662600B (zh) * | 2023-06-08 | 2024-05-14 | 北京科技大学 | 一种基于轻量结构化线地图的视觉定位方法 |
CN117191021B (zh) * | 2023-08-21 | 2024-06-04 | 深圳市晅夏机器人有限公司 | 室内视觉循线导航方法、装置、设备及存储介质 |
CN117112043B (zh) * | 2023-10-20 | 2024-01-30 | 深圳市智绘科技有限公司 | 视觉惯性***的初始化方法、装置、电子设备及介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106352877A (zh) * | 2016-08-10 | 2017-01-25 | 纳恩博(北京)科技有限公司 | Mobile apparatus and positioning method therefor |
CN107742311A (zh) * | 2017-09-29 | 2018-02-27 | 北京易达图灵科技有限公司 | Visual positioning method and apparatus |
CN108717710A (zh) * | 2018-05-18 | 2018-10-30 | 京东方科技集团股份有限公司 | Positioning method, apparatus, and system in an indoor environment |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7689321B2 (en) * | 2004-02-13 | 2010-03-30 | Evolution Robotics, Inc. | Robust sensor fusion for mapping and localization in a simultaneous localization and mapping (SLAM) system |
US8427472B2 (en) * | 2005-02-08 | 2013-04-23 | Seegrid Corporation | Multidimensional evidence grids and system and methods for applying same |
KR100843085B1 (ko) | 2006-06-20 | 2008-07-02 | 삼성전자주식회사 | Method and apparatus for generating a grid map of a mobile robot, and region segmentation method and apparatus using the same |
KR101682175B1 (ko) * | 2010-01-20 | 2016-12-02 | 삼성전자주식회사 | Grid map generation apparatus and method |
WO2016050290A1 (en) * | 2014-10-01 | 2016-04-07 | Metaio Gmbh | Method and system for determining at least one property related to at least part of a real environment |
GB2541884A (en) * | 2015-08-28 | 2017-03-08 | Imp College Of Science Tech And Medicine | Mapping a space using a multi-directional camera |
US10788836B2 (en) * | 2016-02-29 | 2020-09-29 | AI Incorporated | Obstacle recognition method for autonomous robots |
CN105865449B (zh) * | 2016-04-01 | 2020-05-05 | 深圳市杉川机器人有限公司 | Hybrid positioning method for a mobile robot based on laser and vision |
US10739142B2 (en) * | 2016-09-02 | 2020-08-11 | Apple Inc. | System for determining position both indoor and outdoor |
CN107193279A (zh) * | 2017-05-09 | 2017-09-22 | 复旦大学 | Robot localization and map construction system based on monocular vision and IMU information |
US10482619B2 (en) * | 2017-07-27 | 2019-11-19 | AI Incorporated | Method and apparatus for combining data to construct a floor plan |
CN107677279B (zh) * | 2017-09-26 | 2020-04-24 | 上海思岚科技有限公司 | Localization and mapping method and system |
US11210804B2 (en) * | 2018-02-23 | 2021-12-28 | Sony Group Corporation | Methods, devices and computer program products for global bundle adjustment of 3D images |
US10657691B2 (en) * | 2018-03-27 | 2020-05-19 | Faro Technologies, Inc. | System and method of automatic room segmentation for two-dimensional floorplan annotation |
EP3833177B1 (en) * | 2018-08-08 | 2022-07-27 | The Toro Company | Handle and method for training an autonomous vehicle, and methods of storing same |
WO2020048618A1 (en) * | 2018-09-07 | 2020-03-12 | Huawei Technologies Co., Ltd. | Device and method for performing simultaneous localization and mapping |
US20210042958A1 (en) * | 2019-08-09 | 2021-02-11 | Facebook Technologies, Llc | Localization and mapping utilizing visual odometry |
2018
- 2018-05-18 CN CN201810482202.2A patent/CN108717710B/zh active Active

2019
- 2019-05-17 WO PCT/CN2019/087411 patent/WO2019219077A1/zh active Application Filing
- 2019-05-17 US US16/641,359 patent/US11295472B2/en active Active
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110887490A (zh) * | 2019-11-29 | 2020-03-17 | 上海有个机器人有限公司 | Keyframe selection method, medium, terminal, and apparatus for laser positioning and navigation |
CN110887490B (zh) * | 2019-11-29 | 2023-08-29 | 上海有个机器人有限公司 | Keyframe selection method, medium, terminal, and apparatus for laser positioning and navigation |
CN111025364B (zh) * | 2019-12-17 | 2023-05-16 | 南京航空航天大学 | Satellite-assisted machine-vision positioning system and method |
CN111025364A (zh) * | 2019-12-17 | 2020-04-17 | 南京航空航天大学 | Satellite-assisted machine-vision positioning system and method |
EP4153940A4 (en) * | 2020-07-09 | 2024-01-17 | Zhejiang Dahua Technology Co., Ltd. | SYSTEMS AND METHODS FOR ATTENTION DETERMINATION |
CN111964665A (zh) * | 2020-07-23 | 2020-11-20 | 武汉理工大学 | Intelligent vehicle positioning method, system, and storage medium based on vehicle-mounted surround-view images |
CN111964665B (zh) * | 2020-07-23 | 2022-07-12 | 武汉理工大学 | Intelligent vehicle positioning method, system, and storage medium based on vehicle-mounted surround-view images |
CN111862215A (zh) * | 2020-07-29 | 2020-10-30 | 上海高仙自动化科技发展有限公司 | Computer device positioning method and apparatus, computer device, and storage medium |
CN111862215B (zh) * | 2020-07-29 | 2023-10-03 | 上海高仙自动化科技发展有限公司 | Computer device positioning method and apparatus, computer device, and storage medium |
CN111897365A (zh) * | 2020-08-27 | 2020-11-06 | 中国人民解放军国防科技大学 | Contour-guideline-based three-dimensional path planning method for autonomous vehicles |
CN114102577A (zh) * | 2020-08-31 | 2022-03-01 | 北京极智嘉科技股份有限公司 | Robot and positioning method applied to the robot |
CN114102577B (zh) * | 2020-08-31 | 2023-05-30 | 北京极智嘉科技股份有限公司 | Robot and positioning method applied to the robot |
CN112907644A (zh) * | 2021-02-03 | 2021-06-04 | 中国人民解放军战略支援部队信息工程大学 | Machine-map-oriented visual positioning method |
CN112907644B (zh) * | 2021-02-03 | 2023-02-03 | 中国人民解放军战略支援部队信息工程大学 | Machine-map-oriented visual positioning method |
CN113325433A (zh) * | 2021-05-28 | 2021-08-31 | 上海高仙自动化科技发展有限公司 | Positioning method and apparatus, electronic device, and storage medium |
CN113838129A (zh) * | 2021-08-12 | 2021-12-24 | 高德软件有限公司 | Method, apparatus, and system for obtaining pose information |
CN113838129B (zh) * | 2021-08-12 | 2024-03-15 | 高德软件有限公司 | Method, apparatus, and system for obtaining pose information |
WO2023087758A1 (zh) * | 2021-11-16 | 2023-05-25 | 上海商汤智能科技有限公司 | Positioning method, positioning apparatus, computer-readable storage medium, and computer program product |
CN113804222B (zh) * | 2021-11-16 | 2022-03-04 | 浙江欣奕华智能科技有限公司 | Positioning accuracy test method, apparatus, device, and storage medium |
CN113804222A (zh) * | 2021-11-16 | 2021-12-17 | 浙江欣奕华智能科技有限公司 | Positioning accuracy test method, apparatus, device, and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US11295472B2 (en) | 2022-04-05 |
CN108717710B (zh) | 2022-04-22 |
US20200226782A1 (en) | 2020-07-16 |
CN108717710A (zh) | 2018-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019219077A1 (zh) | Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing an offline map database | |
US11393173B2 (en) | Mobile augmented reality system | |
JP6812404B2 (ja) | Method, apparatus, computer-readable storage medium, and computer program for fusing point cloud data | |
Walch et al. | Image-based localization using lstms for structured feature correlation | |
CN109074667B (zh) | Predictor-corrector-based pose detection | |
CN105283905B (zh) | Robust tracking using point and line features | |
CN105143907B (zh) | Positioning system and method | |
TWI483215B (zh) | Enhancing image data based on related 3D point cloud data | |
JP2019087229A (ja) | Information processing apparatus, control method for information processing apparatus, and program | |
JP2020067439A (ja) | Moving-object position estimation system and moving-object position estimation method | |
CN112197764B (zh) | Real-time pose determination method and apparatus, and electronic device | |
WO2022247548A1 (zh) | Positioning method and apparatus, electronic device, and storage medium | |
JP2014186004A (ja) | Measurement apparatus, method, and program | |
CN113610702B (zh) | Mapping method and apparatus, electronic device, and storage medium | |
Zingoni et al. | Real-time 3D reconstruction from images taken from an UAV | |
WO2022205750A1 (zh) | Point cloud data generation method and apparatus, electronic device, and storage medium | |
Bao et al. | Robust tightly-coupled visual-inertial odometry with pre-built maps in high latency situations | |
US11557059B2 (en) | System and method for determining position of multi-dimensional object from satellite images | |
CN115773759A (zh) | Indoor positioning method, apparatus, device, and storage medium for autonomous mobile robots | |
CN111581322B (zh) | Method, apparatus, and device for displaying a region of interest from a video in a map window | |
JP2008203991A (ja) | Image processing apparatus | |
Su et al. | Accurate Pose Tracking for Uncooperative Targets via Data Fusion of Laser Scanner and Optical Camera | |
CN116576866B (zh) | Navigation method and device | |
WO2022153910A1 (ja) | Detection system, detection method, and program | |
WO2024084925A1 (en) | Information processing apparatus, program, and information processing method
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19802857 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19802857 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.04.2021) |
|