CN110909671B - Grid map obstacle detection method integrating probability and height information - Google Patents

Grid map obstacle detection method integrating probability and height information

Info

Publication number
CN110909671B
CN110909671B (application CN201911145461.7A)
Authority
CN
China
Prior art keywords
probability
grid
center
clustering
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911145461.7A
Other languages
Chinese (zh)
Other versions
CN110909671A (en)
Inventor
仲维
陈圣伦
李豪杰
王智慧
刘日升
樊鑫
罗钟铉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian University of Technology
Original Assignee
Dalian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian University of Technology filed Critical Dalian University of Technology
Priority to CN201911145461.7A priority Critical patent/CN110909671B/en
Priority to PCT/CN2020/077975 priority patent/WO2021098082A1/en
Priority to US17/280,745 priority patent/US11302105B2/en
Publication of CN110909671A publication Critical patent/CN110909671A/en
Application granted granted Critical
Publication of CN110909671B publication Critical patent/CN110909671B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/77 Determining position or orientation of objects or cameras using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/04 Indexing scheme for image data processing or generation, in general involving 3D image data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
    • G06T2207/30261 Obstacle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a grid map obstacle detection method integrating probability and height information, belonging to the field of image processing and computer vision. A high-performance computing platform is constructed using the GPU, and a high-performance solving algorithm is built to obtain obstacle information from the map. The system is easy to construct, and the procedure is simple and easy to implement; the position of the obstacle is obtained in a multi-layer grid map by fusing probability and height information, and the method has strong robustness and high precision.

Description

Grid map obstacle detection method integrating probability and height information
Technical Field
The present invention is in the field of image processing and computer vision. After a grid map of a scene is generated using a ranging sensor, the positions of obstacles are acquired in a multi-layer grid map by fusing probability and height information.
Background
In recent years, with the development of artificial intelligence, mobile robots and autonomous driving of automobiles have been receiving more and more attention, and obstacle detection is one of the main problems to be solved. The grid map is the most common map in unmanned navigation, so how to use the grid map to complete obstacle detection has become a very important problem. Obstacle detection methods using grid maps mainly rely on Bayesian inference and the classic Dempster combination rule within the evidence-theory framework, and are therefore generally applied to probability grid maps. However, the probability grid map is a two-dimensional map in which a detected obstacle has only planar information. If more accurate obstacle detection is to be accomplished, it needs to be done in three-dimensional space. In the grid map model, a grid map containing the height information of obstacles is called an elevation map. How to fuse the two kinds of information to obtain accurate results has become a difficult point of research. The invention proposes a method that uses the probability grid map and the elevation map to detect obstacles in three-dimensional space, integrating the two maps within one algorithm. Compared with traditional algorithms, the method fuses probability and height information and can detect the position of an obstacle more accurately.
Disclosure of Invention
The invention provides a grid map obstacle detection method integrating probability and height information. A grid map is a representation of space in which the current scene information is represented on a plane. To describe the specific algorithm, the invention adopts the following settings: a spatial rectangular coordinate system XYZ is set, with the X axis pointing horizontally to the right, the Y axis vertically upward and the Z axis forward; the grid map is established on the XOZ plane, and the XOZ plane reflects the current horizontal plane. The probability grid map is denoted by P, and the elevation map is denoted by H.
The specific technical scheme of the invention comprises the following steps:
1) Bayesian inference
In the P map each grid contains the probability that it is occupied, i.e. the state of the grid. Given the existing state of the grid and the grid state of the current measurement, the grid state after the current measurement is obtained by Bayesian inference. If the probability of the grid is greater than P_min, the H map is then used to restore the virtual points of the grid.
2) Restoring three-dimensional boundaries
The H map contains the height information of the grid, i.e. the Y coordinate. The X and Z coordinates of a grid are calculated from the position of the grid. All grid vertices C are recovered from the H map; since these points may not exist in the actual measurement, they are called virtual points in the algorithm. The virtual points describe the upper boundary of the scene. If the probability corresponding to a grid is less than P_min, or its Y coordinate is less than h_min, the point is not recovered.
3) Virtual point clustering
3-1) selecting initial value
Virtual points are clustered, for example by KMeans clustering. The initial values of KMeans are selected by spatial division, with sampling carried out using a sliding window. First, the current field of view is divided into f parts, where f is a non-zero integer, and each part is sampled separately. If the number N_w of grids containing virtual points inside the sliding window is greater than N_min, the virtual point of the grid with the maximum probability in the window is selected as a candidate initial value C_i. The candidate initial values are then merged: candidate values C_i and C_j whose weighted distance is within d_1 are assigned to the same class k. If a subset of the virtual points in a class has probabilities significantly greater than the others, i.e. the ratio of the other probabilities to these high probabilities falls below a constant between 0 and 1, then only these high-probability points are used to compute the class center C_k; otherwise all points are used. When the class center is computed, the probability of each virtual point is taken as its weight: the class center is the weighted average of the selected virtual points, and the probability of the class center is the mean probability of the virtual points participating in the computation. Selection stops when K centers have been chosen.
3-2) clustering and extracting bounding boxes
After the initial value of K is selected, all the virtual points are clustered, and when the distance to the clustering center is calculated, the distance is weighted by using the probability of the current clustering center. And calculating the weighted average value of all samples in the class as a new clustering center when updating the clustering center, wherein the probability of the center is the probability average value of the virtual points in the class. After clustering is completed, bounding boxes for each category are extracted.
3-3) Merge Categories
When clustering is completed, excessive classification is possible, and a merging operation is required. If the difference between the probabilities of the centers of two classes does not exceed p_1 and their weighted distance is less than d_2, the two categories are merged; if the bounding boxes of the two classes have a distance in the X and Z directions that is less than d_3, the two categories are merged.
3-4) modifying the bounding box
Both bounding-box overlap and bounding-box separation can occur after merging. When bounding boxes overlap, the bounding boxes are traversed and the overlapped area is removed; when bounding boxes are separated, the bounding boxes around the empty area are checked, and if the sum of the growth of a bounding box in the X direction and in the Z direction does not exceed T, the area is merged into that bounding box; otherwise a new bounding box is built in the empty area. Because the probability value of such an area is low, the newly built bounding box is called a fuzzy obstacle.
In the algorithm, P_min, h_min, N_min, d_1, d_2, d_3, p_1 and T are set thresholds.
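For illustration only, the thresholds named above can be grouped as in the following Python sketch; the default values are placeholders and are not taken from the invention.

```python
# A minimal sketch grouping the thresholds named in this section.
# All default values below are illustrative placeholders, not patent values.
from dataclasses import dataclass

@dataclass
class DetectorThresholds:
    p_min: float = 0.5   # minimum grid occupancy probability
    h_min: float = 0.2   # minimum height (Y coordinate) of a virtual point
    n_min: int = 3       # minimum number of virtual-point grids in a sliding window
    d1: float = 1.0      # weighted distance for merging candidate initial values
    d2: float = 1.5      # weighted distance for merging categories
    d3: float = 2.0      # bounding-box gap for merging categories
    p1: float = 0.1      # maximum center-probability difference for merging
    T: int = 4           # allowed bounding-box growth when absorbing empty regions
```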
The invention has the beneficial effects that:
The invention designs a grid map obstacle detection method integrating probability and height information. Obstacles in space are detected by fusing probability and height information through Bayesian inference and a clustering algorithm, and obstacle selection is completed through a rigorous screening and merging process. The method has the following characteristics:
1. The program is simple and easy to implement;
2. The algorithm is efficient and has strong real-time performance;
3. The fusion of multiple inputs makes detection highly reliable.
Drawings
FIG. 1 is the overall flow of the algorithm.
Fig. 2 is a detailed flow of cluster inference.
Fig. 3 is an input grid map.
FIG. 4 shows the results of the detection, where (a) shows the result of clustering and (b) shows the result after combination.
Detailed Description
The invention provides a grid map obstacle detection method integrating probability and height information, which is described in detail in combination with the accompanying drawings and embodiments as follows:
The overall flow is shown in FIG. 1. First, a grid map is input; FIG. 3 shows a simulated input grid map, which can be divided into a probability grid map (P map) and an elevation grid map (H map). The grid state update is then completed using the P map together with Bayesian inference. Obstacle clustering detection is then completed using the H map and the updated result. Finally, the result is output.
To describe the specific algorithm, the invention adopts the following settings: a spatial rectangular coordinate system XYZ is set, with the X axis pointing horizontally to the right, the Y axis vertically upward and the Z axis forward; the grid map is established on the XOZ plane, and the XOZ plane reflects the current horizontal plane. P denotes the probability grid map, and H denotes the elevation grid map. On this basis, the obstacle detection method of the grid map is described; as shown in FIG. 2, the method comprises the following steps:
1) Bayesian inference
As shown in FIG. 2, only the P map enters the Bayesian inference stage. Bayesian inference here means obtaining the measured grid state by combining the grid state at time t-1 with the grid state measured at the current time t. Bayesian inference requires the posterior probability p(m | z_1:t) of each grid, where p(m) denotes the initialization probability of the map and z_1:t denotes the measurements from time 1 to time t. Writing Bayesian inference in log-odds form yields a recursion over time; with l_t denoting the log-odds at time t, the standard log-odds recursion is

l_t = l_(t-1) + log[ p(m | z_t) / (1 - p(m | z_t)) ] - log[ p(m) / (1 - p(m)) ],

and the probability value of the local map at time t is finally computed as

p(m | z_1:t) = 1 - 1 / (1 + exp(l_t)).

If the probability of a grid is greater than P_min, the H map is then used to help recover the virtual points of that grid;
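As an illustration only, a minimal Python sketch of this log-odds update is given below; the function names and the probability-based grid representation are assumptions, not part of the claimed method.

```python
# A minimal sketch (assumed names) of the log-odds Bayesian update on a
# probability grid map P. Works element-wise on scalars or NumPy arrays.
import numpy as np

def log_odds(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def update_grid(p_prev, p_meas, p_prior=0.5):
    """Fuse the previous grid probability p_prev (i.e. p(m | z_1:t-1)) with the
    probability p_meas inferred from the current measurement z_t, using the
    standard log-odds recursion, and return p(m | z_1:t)."""
    l_t = log_odds(p_prev) + log_odds(p_meas) - log_odds(p_prior)
    return 1.0 - 1.0 / (1.0 + np.exp(l_t))

# Example: a cell previously believed occupied with p=0.7, measured again at p=0.8
# print(update_grid(0.7, 0.8))  # -> about 0.90, i.e. a higher occupancy probability
```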
2) extracting virtual points and restoring three-dimensional boundary
The H map shown in FIG. 2 and the updated grid state are used together to extract virtual points. The H map contains the height information of the grid, i.e. the Y coordinate. The X and Z coordinates of a grid can be calculated from its position in the grid map. Thus all grid vertices can be recovered from the H map; these points are called virtual points in the algorithm. The virtual points describe the upper boundary of the scene. If the probability corresponding to a grid is less than P_min, or its Y coordinate is less than h_min, the point is not recovered;
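The following minimal sketch illustrates how virtual points could be extracted from the P map and H map under the thresholds P_min and h_min; the grid resolution, origin and array layout are illustrative assumptions.

```python
# A minimal sketch of virtual-point extraction; grid_size, x0, z0 and the
# row/column-to-Z/X mapping are assumptions for illustration only.
import numpy as np

def extract_virtual_points(P, H, p_min, h_min, grid_size=0.1, x0=0.0, z0=0.0):
    """Return an array of (x, y, z, probability) virtual points.
    P and H are 2-D arrays over the XOZ grid; y comes from the H map,
    x and z from the grid indices."""
    points = []
    rows, cols = P.shape
    for i in range(rows):          # index along Z
        for j in range(cols):      # index along X
            if P[i, j] < p_min or H[i, j] < h_min:
                continue           # skip: low occupancy probability or too low a point
            x = x0 + (j + 0.5) * grid_size
            z = z0 + (i + 0.5) * grid_size
            points.append((x, H[i, j], z, P[i, j]))
    return np.array(points)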
3) virtual point clustering
3-1) dividing the field angle, and selecting the initial value by using a sliding window
First, the current field of view is divided into f parts, and a sliding window is then used to sample within each part. Let the sliding window be W with size a × b. If the number N_w of grids containing virtual points inside the sliding window is greater than N_min, the virtual point of the grid with the maximum probability in the window is selected as a candidate initial value C_i, with probability value P_i;
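A minimal sketch of the sliding-window selection of candidate initial values follows; the non-overlapping window stride and the representation of one field-of-view sector as a 2-D array are assumptions for illustration.

```python
# A minimal sketch of candidate initial-value selection with an a x b sliding
# window over one sector of the field of view (here given as arrays P, H).
import numpy as np

def select_candidates(P, H, p_min, h_min, n_min, a, b):
    """Slide an a x b window (stride = window size, an assumption) over the grid;
    if the window contains more than n_min grids holding virtual points, take the
    virtual point with the highest probability as a candidate initial value C_i."""
    candidates = []
    valid = (P >= p_min) & (H >= h_min)          # grids that hold virtual points
    rows, cols = P.shape
    for i in range(0, rows - a + 1, a):
        for j in range(0, cols - b + 1, b):
            win_valid = valid[i:i + a, j:j + b]
            if win_valid.sum() <= n_min:
                continue
            win_P = np.where(win_valid, P[i:i + a, j:j + b], -1.0)
            di, dj = np.unravel_index(np.argmax(win_P), win_P.shape)
            candidates.append(((i + di, j + dj), P[i + di, j + dj]))
    return candidates   # list of ((row, col), probability P_i)
```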
3-2) merging and calculating initial values
After the candidate initial values are obtained, the initial-value merging and calculation stage shown in FIG. 2 is entered. If the weighted distance between candidate initial values C_i and C_j, with the probability used as the weight, is within d_1, the two are assigned to the same class k. If a subset of the virtual points in a class has probabilities significantly greater than the others, i.e. the ratio of the other probabilities to these high probabilities falls below a constant between 0 and 1, the class center C_k is calculated using only these high-probability points; otherwise all points are used. The class center C_k takes the probability of each virtual point as its weight and is the weighted average of the selected virtual points,

C_k = (sum_i P_i * C_i) / (sum_i P_i),

and the center probability P_k is the mean probability of these virtual points,

P_k = (1/n) * sum_i P_i.

Selection stops when K centers have been chosen;
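The sketch below illustrates one possible implementation of merging candidate initial values and computing the weighted class centers; the single-pass grouping, the choice of which probability weights the inter-candidate distance, and the ratio alpha identifying the high-probability subset are assumptions standing in for the formulas referenced above.

```python
# A minimal sketch of merging candidate initial values and computing class centers.
# The distance weighting (probs[i]) and alpha are illustrative assumptions.
import numpy as np

def merge_candidates(cands, probs, d1, alpha=0.5):
    """cands: (N, 3) candidate virtual points; probs: (N,) probabilities.
    Candidates within weighted distance d1 are grouped into one class (simple
    single-pass grouping used for illustration); each class center is the
    probability-weighted mean of its selected points."""
    labels = -np.ones(len(cands), dtype=int)
    k = 0
    for i in range(len(cands)):
        if labels[i] >= 0:
            continue
        labels[i] = k
        for j in range(i + 1, len(cands)):
            if labels[j] < 0 and probs[i] * np.linalg.norm(cands[i] - cands[j]) < d1:
                labels[j] = k
        k += 1
    centers, center_probs = [], []
    for c in range(k):
        pts, w = cands[labels == c], probs[labels == c]
        high = w >= alpha * w.max()            # keep only clearly dominant points, if any
        if high.sum() < len(w):
            pts, w = pts[high], w[high]
        centers.append((w[:, None] * pts).sum(0) / w.sum())  # weighted average C_k
        center_probs.append(w.mean())                         # mean probability P_k
    return np.array(centers), np.array(center_probs), labels
```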
3-3) clustering and extracting bounding boxes
After the initial values are selected, the clustering stage is entered. A KMeans clustering algorithm is applied to all virtual points. During clustering, the weighted distance from every virtual point to each cluster center C_k is computed as P_k * |C_i - C_k|_p, where |·|_p is the p-norm; if the weighted distance from a virtual point to every cluster center exceeds d_max, the point is rejected. When the cluster centers are updated, the weighted average of all samples in a class is computed as the new cluster center, and the probability of the center is the mean probability of the virtual points in the class. After clustering is finished, the bounding box of each category is extracted. Each bounding box is described by 7 values: the cluster center C_k and its probability P_k, the maximum Y coordinate Y_max, the maximum and minimum grid indices X_max, X_min in the X direction, and the maximum and minimum grid indices Z_max, Z_min in the Z direction. The bounding boxes are visualized on the XOZ plane; FIG. 4(a) illustrates the expected effect at this stage.
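Below is a minimal sketch of the probability-weighted KMeans step and the 7-value bounding-box extraction; the choice p = 2 for the norm, the fixed iteration count and the dictionary-based box representation are illustrative assumptions.

```python
# A minimal sketch of probability-weighted KMeans and bounding-box extraction.
# p = 2 (Euclidean norm) and the box layout below are assumptions.
import numpy as np

def weighted_kmeans(points, probs, centers, center_probs, d_max, iters=10):
    """points: (N, 3) virtual points (X, Y, Z); probs: (N,) probabilities;
    centers: (K, 3) NumPy array of initial cluster centers; center_probs: (K,) array.
    Points whose weighted distance to every center exceeds d_max get label -1."""
    for _ in range(iters):
        # weighted distance P_k * |C_i - C_k|_2 to every center
        dist = center_probs[None, :] * np.linalg.norm(
            points[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        labels[dist.min(axis=1) > d_max] = -1         # reject far-away points
        for k in range(len(centers)):
            m = labels == k
            if m.any():
                w = probs[m]
                centers[k] = (w[:, None] * points[m]).sum(0) / w.sum()
                center_probs[k] = w.mean()
    return labels, centers, center_probs

def bounding_box(points_k, grid_xz, center, center_prob):
    """7-value box: center C_k, probability P_k, Y_max, X_max/X_min, Z_max/Z_min.
    grid_xz holds the (X, Z) grid indices of the points in the cluster."""
    return dict(center=center, prob=center_prob,
                y_max=points_k[:, 1].max(),
                x_max=grid_xz[:, 0].max(), x_min=grid_xz[:, 0].min(),
                z_max=grid_xz[:, 1].max(), z_min=grid_xz[:, 1].min())
```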
The last two stages of the clustering algorithm shown in fig. 2 are the second aggregation stage, which includes merging categories and modifying bounding boxes.
3-4) Merge Categories
When KMeans clustering is completed, categories that lie close to each other may appear, and a merging operation is needed. If the weighted distance between two category centers C_i and C_j, with probabilities P_i and P_j, is less than d_2, the two categories are merged; if the distance between the bounding boxes of the two categories in either the X or Z direction is less than d_3, the two categories are merged;
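A minimal sketch of this merge test follows; which probability weights the inter-center distance is not fixed by the text above, so the choice here is an assumption, and the boxes use the dictionary layout of the previous sketch.

```python
# A minimal sketch of the category-merge test; d2 and d3 are the thresholds
# named in the text, the weighting by p_i is an assumption.
def should_merge(center_i, p_i, center_j, p_j, box_i, box_j, d2, d3):
    """Merge if the weighted distance between centers is below d2, or if the
    box gap in either the X or Z direction is below d3."""
    weighted_dist = p_i * ((center_i[0] - center_j[0]) ** 2 +
                           (center_i[2] - center_j[2]) ** 2) ** 0.5
    if weighted_dist < d2:
        return True
    # gap is negative when the boxes overlap along that axis
    gap_x = max(box_i["x_min"], box_j["x_min"]) - min(box_i["x_max"], box_j["x_max"])
    gap_z = max(box_i["z_min"], box_j["z_min"]) - min(box_i["z_max"], box_j["z_max"])
    return gap_x < d3 or gap_z < d3
```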
3-5) modifying the bounding box
Both bounding-box overlap and bounding-box separation can occur after merging. When bounding boxes overlap, the bounding boxes are traversed and the overlapped area is removed. When bounding boxes are separated, the bounding boxes around each empty region are checked: if merging the empty region into a neighboring bounding box increases that box in the X direction plus the Z direction by no more than T in total, the region is merged into that box; otherwise the empty regions adjacent to it are checked, and if the number of empty regions that cannot be merged in this way exceeds num, a new bounding box is built around these empty regions. This new bounding box is described by 5 values: the center B_i, the maximum and minimum grid indices X_max, X_min in the X direction, and the maximum and minimum grid indices Z_max, Z_min in the Z direction. Because the probability value of the empty region itself is very low, the newly built bounding box is called a fuzzy obstacle, and such a box is used to mark the region, as shown in FIG. 4(b).
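The growth test applied when a separated empty region is considered for absorption into a neighboring bounding box can be sketched as follows; the box representation matches the earlier sketches and is an assumption.

```python
# A minimal sketch of the "growth no more than T" absorption test for a
# separated empty region; box and region use the dictionary layout above.
def can_absorb(box, region, T):
    """Return True if enlarging `box` to cover `region` grows it by at most T
    grid cells in the X and Z directions combined."""
    grow_x = (max(box["x_max"], region["x_max"]) - min(box["x_min"], region["x_min"])) \
             - (box["x_max"] - box["x_min"])
    grow_z = (max(box["z_max"], region["z_max"]) - min(box["z_min"], region["z_min"])) \
             - (box["z_max"] - box["z_min"])
    return grow_x + grow_z <= T
```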
In the algorithm, P_min, h_min, N_min, d_1, d_2, d_3, p_1, T and num are set thresholds.

Claims (3)

1. A grid map obstacle detection method integrating probability and height information is characterized by comprising the following steps:
1) Bayesian inference
Obtaining the grid state after the current measurement by Bayesian inference given the existing state of the grid and the grid state of the current measurement, and, if the probability P_i of the grid is greater than P_min, using the H map to recover the virtual points of the grid; the H map is an elevation grid map;
2) extracting virtual points and restoring three-dimensional boundary
Calculating the X and Z coordinates of the grid according to the position of the grid; recovering the highest point of the grid from the H map, this highest point being called a virtual point in the algorithm; if the probability corresponding to a grid is less than P_min, or its Y coordinate is less than h_min, not recovering;
3) virtual point clustering
3-1) dividing the field angle, and selecting the initial value by using a sliding window
Firstly, dividing the current field of view into f parts, then sampling each part with a sliding window, the sliding window being W with window size a × b; if the number N_w of grids containing virtual points inside the sliding window is greater than N_min, selecting the virtual point of the grid with the maximum probability in the sliding window as a candidate initial value C_i, with probability value P_i;
3-2) merging and calculating initial values
If the weighted distance between candidate initial values C_i and C_j, with the probability used as the weight, is within d_1, setting the two as the same class k; if a subset of the virtual points in the class has probabilities significantly greater than the others, i.e. the ratio of the other probabilities to these high probabilities falls below a constant between 0 and 1, calculating the class center C_k using only these high-probability points, otherwise using all of them; the class center C_k takes the probability of each virtual point as its weight and is the weighted average of the selected virtual points, C_k = (sum_i P_i * C_i) / (sum_i P_i), and the center probability P_k is the mean probability of these virtual points, P_k = (1/n) * sum_i P_i;
Stopping when K centers are selected;
3-3) clustering and extracting bounding boxes
Using a KMeans clustering algorithm on all virtual points; during clustering, calculating the weighted distance from every virtual point to each cluster center C_k as P_k * |C_i - C_k|_p, where |·|_p is the p-norm, and rejecting a virtual point if its weighted distance to every cluster center exceeds d_max; when updating the cluster centers, calculating the weighted average of all samples in the class as the new cluster center, the probability of the center being the mean probability of the virtual points in the class; after clustering is finished, extracting the bounding box of each category, each bounding box being described by 7 values: the cluster center C_k and its probability P_k, the maximum Y coordinate Y_max, the maximum and minimum grid indices X_max, X_min in the X direction, and the maximum and minimum grid indices Z_max, Z_min in the Z direction;
3-4) Merge Categories
3-5) modifying the bounding box
After merging, either bounding-box overlap or bounding-box separation may occur; when bounding boxes overlap, traversing the bounding boxes and removing the overlapped area; when bounding boxes are separated, checking the bounding boxes around each empty region, and if merging the empty region into a bounding box increases that box in the X direction plus the Z direction by no more than T in total, merging them into one bounding box, otherwise checking the empty regions adjacent to it; if the number of empty regions that cannot be merged exceeds num, building a new bounding box around these empty regions, the new bounding box being described by 5 values: the center B_i, the maximum and minimum grid indices X_max, X_min in the X direction, and the maximum and minimum grid indices Z_max, Z_min in the Z direction.
2. The grid map obstacle detection method integrating probability and height information according to claim 1, wherein P_min, h_min, N_min, d_1, T and num are set thresholds.
3. The grid map obstacle detection method integrating probability and height information according to claim 1, wherein step 3-4) merges the categories as follows: if the weighted distance between two category centers C_i and C_j, with probabilities P_i and P_j, is less than d_2, the two categories are merged; if the distance between the bounding boxes of the two categories in either the X or Z direction is less than d_3, the two categories are merged; wherein d_2 and d_3 are set thresholds.
CN201911145461.7A 2019-11-21 2019-11-21 Grid map obstacle detection method integrating probability and height information Active CN110909671B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911145461.7A CN110909671B (en) 2019-11-21 2019-11-21 Grid map obstacle detection method integrating probability and height information
PCT/CN2020/077975 WO2021098082A1 (en) 2019-11-21 2020-03-05 Obstacle detection method based on grid map integrated with probability and height information
US17/280,745 US11302105B2 (en) 2019-11-21 2020-03-05 Grid map obstacle detection method fusing probability and height information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911145461.7A CN110909671B (en) 2019-11-21 2019-11-21 Grid map obstacle detection method integrating probability and height information

Publications (2)

Publication Number Publication Date
CN110909671A CN110909671A (en) 2020-03-24
CN110909671B true CN110909671B (en) 2020-09-29

Family

ID=69818348

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911145461.7A Active CN110909671B (en) 2019-11-21 2019-11-21 Grid map obstacle detection method integrating probability and height information

Country Status (3)

Country Link
US (1) US11302105B2 (en)
CN (1) CN110909671B (en)
WO (1) WO2021098082A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652976B (en) * 2020-06-03 2023-05-05 鲁东大学 View analysis method based on DEM raster data
CN111950501B (en) * 2020-08-21 2024-05-03 东软睿驰汽车技术(沈阳)有限公司 Obstacle detection method and device and electronic equipment
CN112394726B (en) * 2020-10-20 2023-08-04 自然资源部第一海洋研究所 Unmanned ship obstacle fusion detection method based on evidence theory
CN112716401B (en) * 2020-12-30 2022-11-04 北京奇虎科技有限公司 Obstacle-detouring cleaning method, device, equipment and computer-readable storage medium
CN114357099B (en) * 2021-12-28 2024-03-19 福瑞莱环保科技(深圳)股份有限公司 Clustering method, clustering system and storage medium
CN114490909B (en) * 2022-01-26 2024-03-12 北京百度网讯科技有限公司 Object association method and device and electronic equipment
CN114577233B (en) * 2022-05-05 2022-07-29 腾讯科技(深圳)有限公司 Vehicle navigation method and device, computer equipment and storage medium
CN115691026B (en) * 2022-12-29 2023-05-05 湖北省林业科学研究院 Intelligent early warning monitoring management method for forest fire prevention
CN116226697B (en) * 2023-05-06 2023-07-25 北京师范大学 Spatial data clustering method, system, equipment and medium
CN116358561B (en) * 2023-05-31 2023-08-15 自然资源部第一海洋研究所 Unmanned ship obstacle scene reconstruction method based on Bayesian multi-source data fusion

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101975951B (en) * 2010-06-09 2013-03-20 北京理工大学 Field environment barrier detection method fusing distance and image information
KR102096398B1 (en) * 2013-07-03 2020-04-03 삼성전자주식회사 Method for recognizing position of autonomous mobile robot
WO2018115917A1 (en) * 2016-12-20 2018-06-28 Toyota Motor Europe Electronic device, system and method for augmenting image data of a passive optical sensor
US10195992B2 (en) * 2017-04-03 2019-02-05 Ford Global Technologies, Llc Obstacle detection systems and methods
CN107064955A (en) * 2017-04-19 2017-08-18 北京汽车集团有限公司 barrier clustering method and device
CN108931246B (en) * 2017-05-26 2020-12-11 杭州海康机器人技术有限公司 Method and device for detecting existence probability of obstacle at unknown position
KR102452550B1 (en) * 2017-10-17 2022-10-07 현대자동차주식회사 Apparatus for aggregating object based on Lidar data, system having the same and method thereof
CN108226951B (en) * 2017-12-23 2020-12-01 天津国科嘉业医疗科技发展有限公司 Laser sensor based real-time tracking method for fast moving obstacle
CN108764108A (en) 2018-05-22 2018-11-06 湖北省专用汽车研究院 A kind of Foregut fermenters method based on Bayesian inference
CN110335282B (en) * 2018-12-25 2023-04-18 广州启明星机器人有限公司 Contour line segment feature extraction method based on grids

Also Published As

Publication number Publication date
US11302105B2 (en) 2022-04-12
US20210312197A1 (en) 2021-10-07
WO2021098082A1 (en) 2021-05-27
CN110909671A (en) 2020-03-24

Similar Documents

Publication Publication Date Title
CN110909671B (en) Grid map obstacle detection method integrating probability and height information
CN109559320B (en) Method and system for realizing visual SLAM semantic mapping function based on hole convolution deep neural network
Simonelli et al. Disentangling monocular 3d object detection: From single to multi-class recognition
CN111563415B (en) Binocular vision-based three-dimensional target detection system and method
CN109829398B (en) Target detection method in video based on three-dimensional convolution network
JP4979840B2 (en) Moving body detection apparatus and moving body detection method
CN110427797B (en) Three-dimensional vehicle detection method based on geometric condition limitation
CN114424250A (en) Structural modeling
CN110728751A (en) Construction method of indoor 3D point cloud semantic map
Xing et al. DE‐SLAM: SLAM for highly dynamic environment
CN113705636A (en) Method and device for predicting trajectory of automatic driving vehicle and electronic equipment
CN113744311A (en) Twin neural network moving target tracking method based on full-connection attention module
CN115147809B (en) Obstacle detection method, device, equipment and storage medium
CN110889362B (en) Obstacle detection method using grid map height information
CN113420648B (en) Target detection method and system with rotation adaptability
Grotz et al. Graph-based visual semantic perception for humanoid robots
CN110348311B (en) Deep learning-based road intersection identification system and method
CN115294176B (en) Double-light multi-model long-time target tracking method and system and storage medium
CN109657577B (en) Animal detection method based on entropy and motion offset
WO2023030062A1 (en) Flight control method and apparatus for unmanned aerial vehicle, and device, medium and program
CN116664851A (en) Automatic driving data extraction method based on artificial intelligence
CN113920254B (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN115685237A (en) Multi-mode three-dimensional target detection method and system combining viewing cones and geometric constraints
CN112258575B (en) Method for quickly identifying object in synchronous positioning and map construction
CN114549825A (en) Target detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant