CN111488812B - Obstacle position recognition method and device, computer equipment and storage medium - Google Patents

Obstacle position recognition method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN111488812B
Authority
CN
China
Prior art keywords
information
plane
calculating
obstacle
preset position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010250438.0A
Other languages
Chinese (zh)
Other versions
CN111488812A (en)
Inventor
党文冰
董远强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010250438.0A priority Critical patent/CN111488812B/en
Publication of CN111488812A publication Critical patent/CN111488812A/en
Application granted granted Critical
Publication of CN111488812B publication Critical patent/CN111488812B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to an obstacle position identification method, an obstacle position identification device, computer equipment and a storage medium. The method comprises the following steps: acquiring depth information, plane area information and main body area information corresponding to an obstacle; determining a target obstacle when the depth information, the plane area information and the main body area information match one another in similarity; acquiring preset position information corresponding to the target obstacle, and respectively calculating distance error information between the preset position information and each of the depth information, the plane area information and the main body area information; calculating a conditional probability distribution of the sensor observation information under the preset position information according to the distance error information; and estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle. By adopting the method, the accuracy of estimating the position of an obstacle that has only partially entered the camera's field of view can be improved, thereby improving automatic driving performance.

Description

Obstacle position recognition method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for identifying the position of an obstacle, a computer device, and a storage medium.
Background
With the development of artificial intelligence technology, automatic driving technology has emerged. An automatic driving automobile relies on the cooperation of artificial intelligence, visual computing, radar, monitoring devices and a global positioning system, so that a computer can operate the motor vehicle automatically and safely without any active human operation. Automatic driving can be achieved through sensor fusion: the position information of a front obstacle in three-dimensional space is estimated through vehicle tail detection and depth detection, using an image that contains the whole vehicle tail.
However, current sensor fusion approaches require two-dimensional vehicle tail detection on an image containing the entire vehicle tail, so sensor fusion takes place only after the whole obstacle has entered the camera's field of view. That is, when an obstacle has only partially entered the camera view, image detection cannot be performed using the vehicle body (for example, a vehicle head entering the camera view) or a partial vehicle tail; the position of such an obstacle can therefore be estimated using only the depth sensor information, the optimal estimated position of the obstacle cannot be obtained, and automatic driving performance suffers.
Disclosure of Invention
In view of the above, it is necessary to provide an obstacle position recognition method, apparatus, computer device and storage medium capable of improving the accuracy of obstacle position recognition during automatic driving, thereby improving automatic driving performance.
An obstacle position identification method, the method comprising:
acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to an obstacle; performing similarity matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information match one another; acquiring preset position information corresponding to the target obstacle, and respectively calculating distance error information between the preset position information and each of the depth information, the plane area information and the main body area information; calculating a conditional probability distribution of the sensor observation information under the preset position information according to the distance error information; and estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
An obstacle position identifying apparatus, the apparatus comprising:
the information acquisition module is used for acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to an obstacle;
the information matching module is used for performing similarity matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information match one another;
the error calculation module is used for acquiring preset position information corresponding to the target obstacle and respectively calculating distance error information between the preset position information and each of the depth information, the plane area information and the main body area information;
the conditional distribution calculation module is used for calculating a conditional probability distribution of the sensor observation information under the preset position information according to the distance error information;
and the position obtaining module is used for estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to an obstacle; performing similarity matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information match one another; acquiring preset position information corresponding to the target obstacle, and respectively calculating distance error information between the preset position information and each of the depth information, the plane area information and the main body area information; calculating a conditional probability distribution of the sensor observation information under the preset position information according to the distance error information; and estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to an obstacle; performing similarity matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information match one another; acquiring preset position information corresponding to the target obstacle, and respectively calculating distance error information between the preset position information and each of the depth information, the plane area information and the main body area information; calculating a conditional probability distribution of the sensor observation information under the preset position information according to the distance error information; and estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
According to the above obstacle position identification method and apparatus, computer device and storage medium, the sensor observation information is matched to determine the target obstacle, the distance error information of the target obstacle is then calculated, the conditional probability distribution of the sensor observation information under the preset position information is calculated using the distance error information, and the conditional probability distribution is estimated to obtain the target position information corresponding to the target obstacle. In this way, the accuracy of estimating the position of an obstacle that has only partially entered the camera's field of view can be improved, thereby improving automatic driving performance.
Drawings
FIG. 1 is a diagram of an exemplary environment in which a method for identifying obstacle locations may be implemented;
FIG. 2 is a flow diagram illustrating a method for obstacle location identification in one embodiment;
FIG. 2a is a schematic flow chart of a method for identifying obstacle positions in another embodiment;
FIG. 3 is a schematic diagram of a process for obtaining sensor observation information in one embodiment;
FIG. 3a is a diagram illustrating a pixel-based image segmentation result in one embodiment;
FIG. 4 is a diagram illustrating sensor observations displayed by the in-vehicle terminal in an exemplary embodiment;
FIG. 5 is a schematic flow chart illustrating the determination of a target obstacle in one embodiment;
FIG. 6 is a schematic flow chart illustrating the determination of a target obstacle according to another embodiment;
FIG. 7 is a schematic flowchart of a method for obstacle position identification in yet another embodiment;
FIG. 8 is a schematic plan view of the position of an obstacle in one embodiment;
fig. 9 is a flowchart illustrating an obstacle position identifying method in accordance with still another embodiment;
FIG. 10 is a schematic flow chart illustrating the determination of a conditional probability distribution in one embodiment;
FIG. 11 is a schematic illustration of an obstacle in a flat position in one embodiment;
FIG. 12 is a diagram illustrating a pixel plane observation displayed by the in-vehicle terminal in one embodiment;
FIG. 13 is a schematic flow chart illustrating obstacle tracking in one embodiment;
FIG. 13a is a schematic view of the vehicle terminal of FIG. 13 showing the vehicle attitude of the right side portion of the vehicle body;
FIG. 14 is a schematic flow chart diagram illustrating a method for obstacle location identification in one embodiment;
fig. 15 is a block diagram showing the structure of an obstacle position recognition apparatus in one embodiment;
FIG. 16 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. The basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems and mechatronics. Artificial intelligence software technologies mainly include computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
Computer Vision (CV) technology is a science that studies how to make machines "see"; it uses cameras and computers in place of human eyes to identify, track and measure targets, and performs further image processing so that the processed image is more suitable for human eyes to observe or for transmission to an instrument for detection. As a scientific discipline, computer vision studies related theories and techniques and attempts to build artificial intelligence systems that can capture information from images or multidimensional data. Computer vision technologies generally include image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technologies, virtual reality, augmented reality, simultaneous localization and mapping, and other technologies, and also include common biometric technologies such as face recognition and fingerprint recognition.
Machine Learning (ML) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer simulates or implements human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence, is the fundamental way to make computers intelligent, and is applied in all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning and learning from instruction.
Automatic driving technology generally includes technologies such as high-precision maps, environment perception, behavior decision-making, path planning and motion control; autonomous driving technology has broad application prospects.
With the research and progress of artificial intelligence technology, the artificial intelligence technology is developed and applied in a plurality of fields, such as common smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, unmanned aerial vehicles, robots, smart medical care, smart customer service, and the like.
The scheme provided by the embodiment of the application relates to the computer vision technology of artificial intelligence, the machine learning technology, the automatic driving technology and the like, and is specifically explained by the following embodiments:
The obstacle position identification method provided by the application can be applied to the application environment shown in fig. 1, in which the sensor 102 communicates with the in-vehicle terminal 104 through a network. The vehicle-mounted terminal 104 acquires sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to an obstacle; the vehicle-mounted terminal 104 performs similarity matching on the depth information, the plane area information and the main body area information, and determines a target obstacle when the depth information, the plane area information and the main body area information match one another; the vehicle-mounted terminal 104 acquires preset position information corresponding to the target obstacle, and respectively calculates distance error information between the preset position information and each of the depth information, the plane area information and the main body area information; the vehicle-mounted terminal 104 calculates the conditional probability distribution of the sensor observation information under the preset position information according to the distance error information; and the vehicle-mounted terminal 104 estimates the conditional probability distribution to obtain target position information corresponding to the target obstacle. The in-vehicle terminal 104 may be, but is not limited to, various computers, laptops, smartphones, tablets, and portable wearable devices. The sensor may be, but is not limited to, a video image sensor and a depth sensor.
In one embodiment, as shown in fig. 2, an obstacle position identification method is provided, which is described by taking the method as an example applied to the vehicle-mounted terminal in fig. 1, and includes the following steps:
s202, acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to the obstacle.
The sensor refers to a device installed on the vehicle and capable of observing a certain range around the vehicle, such as a video image sensor for observing video images, including but not limited to a camera and a video camera, and a depth sensor. Depth sensors are used to observe depth information, including but not limited to lidar and millimeter wave radar. Sensor observation information may be acquired by various sensors. An obstacle is an object that can cause an obstruction to a moving vehicle, including other vehicles, objects, pedestrians and animals, and so forth.
The depth information refers to information of an obstacle observed by using a depth sensor, and the depth information is used for representing the three-dimensional position of the obstacle. For example, if the center of the vehicle is used as the origin of coordinates, the obtained depth information is the three-dimensional coordinates of the obstacle. The plane area information is two-dimensional information of the obstacle obtained by performing two-dimensional detection on the obstacle according to a video image shot by a camera or a video camera. The subject refers to an object mainly represented in the video image, such as a vehicle, a person, an object, and the like in the image. The main body area information refers to each pixel area in a video image obtained by performing pixel-level target segmentation on the video image shot by a camera or a video camera, wherein the same type of object is the same type of pixel, and each object has a corresponding pixel area. For example, a pixel area corresponding to each vehicle, a pixel area corresponding to each person, a pixel area corresponding to each object, and the like.
Specifically, the vehicle-mounted terminal may directly acquire sensor observation information from the sensors, where the sensor observation information includes depth information, plane area information, and body area information corresponding to an obstacle. That is, each sensor processes its original observation information as it is obtained, and the processed sensor observation information is then sent to the vehicle-mounted terminal. When the camera acquires a video image, two-dimensional detection and pixel-level target segmentation are carried out on the video image to obtain the plane area information and the main body area information, which are sent to the vehicle-mounted terminal. The depth sensor obtains original laser reflection point information, converts it to obtain the depth information corresponding to the obstacle, and sends the depth information to the vehicle-mounted terminal.
In one embodiment, the in-vehicle terminal may acquire raw observation information from the sensor and then process the raw observation information to obtain sensor observation information. The vehicle-mounted terminal can acquire a video image from the camera, and the vehicle-mounted terminal performs two-dimensional detection and pixel-level target segmentation on the video image to obtain plane area information and main body area information. Meanwhile, the vehicle-mounted terminal can acquire original laser reflection point information from the laser radar and then convert the laser reflection point information to obtain depth information corresponding to the obstacle.
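For illustration only, the following minimal sketch (Python, not part of the patent) shows one way the three kinds of sensor observation information could be grouped on the vehicle-mounted terminal before matching and fusion; all field names are assumptions.

```python
from dataclasses import dataclass
from typing import Tuple

import numpy as np


@dataclass
class SensorObservation:
    # Depth information: a representative 3D point of the obstacle in the
    # vehicle coordinate system (origin at the vehicle center), in meters.
    depth_point: np.ndarray                    # shape (3,): [x, y, z]
    # Plane area information: a 2D detection frame in pixel coordinates.
    plane_box: Tuple[int, int, int, int]       # (u_min, v_min, u_max, v_max)
    # Main body area information: pixels belonging to the obstacle's
    # segmentation mask, one (u, v) pair per row.
    body_pixels: np.ndarray                    # shape (N, 2)
```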
And S204, performing similarity matching on the depth information, the plane area information and the main body area information, and determining the target obstacle when the depth information, the plane area information and the main body area information match one another.
The similarity matching means that the depth information, the plane area information and the main area information are matched pairwise by using a similarity matching algorithm and used for judging whether the depth information, the plane area information and the main area information are the same obstacle information. For example, the similarity between the projection information of the depth information in the planar image and the planar region information and the main body region information is calculated by using the Euclidean distance similarity algorithm, a matching result is obtained according to the similarity, and when the depth information is successfully matched with the planar region information and the main body region information, the successfully matched depth information, planar region information and main body region information are determined to belong to the same obstacle. The target obstacle is an obstacle having depth information, plane area information, and body area information at the same time.
Specifically, when there are a plurality of obstacles around the vehicle, some obstacles cannot be observed by all sensors at the same time because the observation ranges of different sensors are not identical. For example, an obstacle may be observed by the camera but not by the lidar; in this case, only the plane area information and the main body area information of that obstacle are obtained, and the depth information cannot be obtained. Therefore, the vehicle-mounted terminal performs similarity matching on all the obtained depth information, plane area information and main body area information, and when the depth information, the plane area information and the main body area information match one another, it indicates that they belong to the same obstacle. At this time, the obstacle whose depth information, plane area information and body area information match one another is taken as the target obstacle.
S206, acquiring preset position information corresponding to the target obstacle, and respectively calculating distance error information between the preset position information and each of the depth information, the plane area information and the main body area information.
The preset position information refers to three-dimensional position information corresponding to the target obstacle that is assumed in advance. For example, the preset position information may be assumed three-dimensional coordinate information. The distance error information refers to the three-dimensional position distance error information between the depth information and the preset position information, the plane position distance error information between the plane area information and the preset position information, or the plane distance error information between the body area information and the preset position information. The three-dimensional position distance error information is distance error information in the three spatial dimensions. The plane distance error information refers to the two-dimensional distance error information on the plane, calculated when the preset position information is projected onto the plane.
Specifically, the vehicle-mounted terminal acquires preset position information corresponding to the target obstacle, and then calculates three-dimensional position distance error information between the depth information and the preset position information, plane position distance error information between the plane area information and the preset position information, and plane distance error information between the main body area information and the preset position information, respectively.
S212, calculating conditional probability distribution of the sensor observation information under the preset position information according to the distance error information.
S214, estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
For a pair of random variables (X, Y), the conditional probability distribution refers to the probability distribution of one variable under the condition that the other variable takes a (possibly) fixed value; the probability distribution of X or Y obtained in this way is called a conditional probability distribution, or conditional distribution for short. The target position information is used to characterize the actual position of the target obstacle and may include a coordinate point and a heading angle in the three-dimensional world.
Specifically, the vehicle-mounted terminal can calculate conditional probability distribution of the sensor observation information under the preset position information according to each distance error information, and estimate the conditional probability distribution when the conditional probability distribution is maximized to obtain target position information corresponding to the target obstacle. For example, the target position information corresponding to the target obstacle may be acquired by using a maximum likelihood estimation algorithm based on the conditional probability distribution. Target position information corresponding to the target obstacle may also be obtained using a probability distribution estimation algorithm.
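As a hedged sketch of this estimation step (one possible realization, not the patent's exact algorithm), the target position can be found by minimizing the negative log of the conditional probability over candidate positions; the `error_terms` callables stand in for the distance-error calculations described in the following sections.

```python
import numpy as np
from scipy.optimize import minimize


def negative_log_likelihood(candidate_position, observation, error_terms):
    # Each error term returns a scalar distance error and its assumed
    # variance at the given candidate (preset) position.
    nll = 0.0
    for error_fn in error_terms:
        err, var = error_fn(candidate_position, observation)
        nll += 0.5 * err ** 2 / var          # Gaussian error model assumption
    return nll


def estimate_target_position(observation, error_terms, initial_guess):
    # Maximizing the conditional probability is equivalent to minimizing
    # the negative log-likelihood.
    result = minimize(negative_log_likelihood, np.asarray(initial_guess),
                      args=(observation, error_terms))
    return result.x                          # estimated target position information
```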
According to the above obstacle position identification method, apparatus, computer device and storage medium, the sensor observation information is obtained and then used for fusing the sensor observations: the depth information, the plane area information and the main body area information are used to respectively calculate the distance error information, the conditional probability distribution of the sensor observation information under the preset position information is then calculated from the distance error information, and the conditional probability distribution is estimated to obtain the target position information corresponding to the target obstacle. In this way, sensor fusion can be performed even for obstacles that are only partially within a sensor's field of view, improving the quality of position estimation for obstacles at the boundary of the sensor field of view and thereby improving automatic driving performance.
In one embodiment, as shown in fig. 3, the step S202 of acquiring sensor observation information, where the sensor observation information includes depth information, plane area information, and body area information corresponding to an obstacle, includes the steps of:
s302, acquiring each initial three-dimensional point information corresponding to the obstacle, converting the initial three-dimensional point information to obtain target three-dimensional point information, and taking the target three-dimensional point information as depth information corresponding to the obstacle.
The initial three-dimensional point information refers to three-dimensional point information of an obstacle, which is obtained by the depth sensor, and is the most original obstacle, for example, three-dimensional positions of a plurality of laser reflection points on the obstacle, which are obtained by observation through a laser radar. When the millimeter wave radar is used, the speed information of the obstacle can be acquired.
Specifically, the vehicle-mounted terminal acquires, through the depth sensor, the initial three-dimensional point information corresponding to each observed obstacle, and converts the initial three-dimensional point information corresponding to each obstacle to obtain the information of a three-dimensional point at a preset fixed position on the obstacle. The fixed position may be a corner point or an edge center of the obstacle, etc. The accuracy of the fixed-position conversion is determined by the accuracy of the sensor measurements and the number of reflection points on the obstacle.
S304, acquiring a video image, detecting the obstacle in the video image, and obtaining the plane area information corresponding to the obstacle.
The video image refers to an image captured by an image sensor, for example, an image captured by a camera mounted on a vehicle.
Specifically, the vehicle-mounted terminal acquires the video image captured by the image sensor, and can perform two-dimensional target detection on obstacles in the video image using a pre-established machine learning model, where the machine learning algorithm may be a convolutional neural network algorithm. The plane area information corresponding to the obstacle is obtained from the detection result and may be a two-dimensional frame in the video image. The plane area information may be plane area information of a partial obstacle. For example, as shown in fig. 4, for the front-left vehicle only a two-dimensional frame area of the vehicle body (the white two-dimensional frame in the figure) is detected, and this is the plane area information of that vehicle.
And S306, performing main body segmentation on the video image based on the pixels to obtain main body area information corresponding to the obstacle.
Specifically, subject segmentation refers to image segmentation of the subjects in the video image. Image segmentation is a technique and process that divides an image into several specific regions with unique properties and extracts objects of interest. The video image may be segmented on a pixel basis using a machine learning algorithm, which may be a deep learning algorithm such as a convolutional neural network (CNN), a deep belief network (DBN) or a stacked auto-encoder network, to generate a pixel segmentation result, from which the main body area information corresponding to each obstacle in the video image is obtained. In a specific embodiment, the obtained pixel segmentation result is shown in fig. 3a, where the human body is displayed as a yellow pixel area, the vehicle as a blue pixel area, the backpack on the human body as a red area, and so on.
In the embodiment, the initial three-dimensional point information and the video image are processed in the vehicle-mounted terminal to obtain the depth information, the plane area information and the main area information, so that the accuracy of obtaining the depth information, the plane area information and the main area information is improved.
In one embodiment, converting the initial three-dimensional point information to obtain the target three-dimensional point information includes:
carrying out geometric fitting on each initial three-dimensional point information to obtain geometric information; and selecting target three-dimensional point information from the geometric information.
Specifically, the geometric fitting refers to fitting of a geometric shape to each piece of initial three-dimensional point information, and the geometric information refers to information of the geometric shape obtained by fitting each piece of initial three-dimensional point information. For example, the geometric information may be information of a straight line or information of an "L" shape. The end point information of the geometric shape can be selected from the geometric information as the target three-dimensional point information, the corner point information can be selected from the geometric information as the target three-dimensional point information, and the central point information in the geometric information can be selected as the target three-dimensional point information.
In the embodiment, the target three-dimensional point information is selected from the geometric information by performing geometric fitting on each piece of initial three-dimensional point information, so that the obtained target three-dimensional point information can be more accurate.
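A simplified sketch of the geometric-fitting idea is given below: it fits a straight line to the reflection points in the X-Z plane and keeps the projection nearest the ego vehicle as the target three-dimensional point. Selecting the nearest point (rather than an end point or corner point) and the least-squares line fit are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np


def fit_target_point(points_xyz: np.ndarray) -> np.ndarray:
    """points_xyz: (N, 3) raw laser reflection points in vehicle coordinates."""
    xz = points_xyz[:, [0, 2]]
    # Least-squares line fit z = a * x + b in the X-Z plane.
    a, b = np.polyfit(xz[:, 0], xz[:, 1], deg=1)
    # Project every point onto the fitted line.
    direction = np.array([1.0, a]) / np.hypot(1.0, a)
    anchor = np.array([0.0, b])
    t = (xz - anchor) @ direction
    projections = anchor + np.outer(t, direction)
    # Keep the projected point nearest the origin (the ego vehicle) as the
    # representative fixed position; reuse the mean height for Y.
    nearest = projections[np.argmin(np.linalg.norm(projections, axis=1))]
    return np.array([nearest[0], points_xyz[:, 1].mean(), nearest[1]])
```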
In one embodiment, as shown in fig. 5, step S204, performing similarity matching on the depth information, the plane area information and the body area information, and determining the target obstacle when the depth information, the plane area information and the body area information are matched in a similar manner includes the steps of:
s502a, the depth information is similarly matched with the plane area information.
S502b, the depth information is similarly matched with the body region information.
S504, it is determined whether the depth information coincides with the plane area information and the body area information, respectively.
And S506, when the depth information is respectively consistent with the plane area information and the main body area information, determining the target obstacle.
Specifically, depth information is used to perform similarity matching with all plane area information and main area information in the image, when the depth information is consistent with the plane area information and the depth information is consistent with the main area information, it is indicated that the plane area information is consistent with the main area information, and at this time, it is indicated that the depth information, the plane area information and the main area information are information corresponding to the same obstacle, and the same obstacle is taken as a target obstacle. When the depth information is inconsistent with all the plane area information and the depth information is inconsistent with all the main body area information, it is indicated that the plane area information and the main body area information corresponding to the obstacle corresponding to the depth information are not observed, and the obstacle corresponding to the depth information is not the target obstacle at this time.
In one embodiment, the plane area information may be used to perform similarity matching with all depth information and all body area information, respectively, and the target obstacle may be determined when the body area information coincides with the depth information of one of the obstacles and the body area information of one of the obstacles, respectively.
In one embodiment, the body area information may be used to perform similarity matching with the depth information and the plane area information, respectively, and the target obstacle may be determined when the body area information coincides with the depth information of one of the obstacles and the plane area information of one of the obstacles, respectively.
In one embodiment, when the obstacle corresponding to the depth information is not the target obstacle, the depth information can be used directly for position tracking of that obstacle. For example, suppose 9 obstacles are within the fields of view of both the camera and the radar sensor, 1 obstacle is within the camera field of view only, and 3 obstacles are within the radar field of view only. When matching is performed, 9 target obstacles and 4 obstacles that cannot be matched are obtained. Position tracking can then be performed for the 4 unmatched obstacles based on their respective observation information without sensor information fusion, so that every obstacle is tracked.
In the above embodiment, the depth information is respectively subjected to similar matching with the plane area information and the main body area information to determine the target obstacle, so that the information observed by different sensors is unified, that is, the depth information, the plane area information and the main body area information corresponding to the same obstacle are determined, thereby facilitating subsequent sensor information fusion.
In one embodiment, as shown in fig. 6, the step S502a of matching the depth information and the plane area information similarly includes the steps of:
s602a, calculates the plane projection information corresponding to the depth information, and calculates the corresponding plane projection area information from the plane projection information.
S604a, calculating the plane similarity between the plane projection area information and the plane area information, and determining the plane matching result according to the plane similarity.
The plane projection information refers to the projection of the depth information onto a two-dimensional plane; for example, if the depth information is a three-dimensional coordinate, the plane projection information may be the two-dimensional coordinate obtained by projecting that three-dimensional coordinate onto the two-dimensional plane. The plane projection area information is information of an area generated from the plane projection information; for example, a plane area centered on the projected two-dimensional point coordinates is generated within the two-dimensional plane. The plane similarity refers to the degree of coincidence between the plane area information and the plane projection area information: the higher the coincidence, the higher the plane similarity. For example, the distance between each coordinate point of the plane area information and each coordinate point of the plane projection area information can be calculated and the degree of coincidence determined from these distances; the more coordinate points with a distance of 0, the higher the degree of coincidence and the higher the plane similarity. Alternatively, the coordinate extreme points of the plane area information and of the plane projection area information can be obtained, the distance between the coordinate extreme points calculated, and the degree of coincidence, and finally the plane similarity, determined from that distance. The plane matching result refers to whether the depth information is determined, according to the plane similarity, to be consistent with the plane area information.
Specifically, the vehicle-mounted terminal calculates the plane similarity between the plane projection area information and the plane area information by using a similarity algorithm, and when the plane similarity exceeds a preset threshold value, a plane matching result with the depth information consistent with the plane area information is obtained. And when the plane similarity does not exceed the preset threshold, obtaining a plane matching result with inconsistent depth information and plane area information. The similarity algorithm may use a euclidean distance similarity algorithm, a cosine similarity algorithm, or the like.
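One possible realization of the degree of coincidence (an assumption for illustration; the patent does not fix the formula) is the intersection-over-union of the projected area and the detected two-dimensional frame:

```python
def plane_similarity(projected_box, detected_box):
    """Both boxes are (u_min, v_min, u_max, v_max) in pixel coordinates."""
    u1 = max(projected_box[0], detected_box[0])
    v1 = max(projected_box[1], detected_box[1])
    u2 = min(projected_box[2], detected_box[2])
    v2 = min(projected_box[3], detected_box[3])
    inter = max(0, u2 - u1) * max(0, v2 - v1)
    area_a = (projected_box[2] - projected_box[0]) * (projected_box[3] - projected_box[1])
    area_b = (detected_box[2] - detected_box[0]) * (detected_box[3] - detected_box[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def plane_match(projected_box, detected_box, threshold=0.3):
    # The threshold value is an illustrative assumption.
    return plane_similarity(projected_box, detected_box) >= threshold
```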
In one embodiment, as shown in fig. 6, the step S502b, performing similarity matching on the depth information and the body region information, includes the steps of:
s602b, calculates the plane projection information corresponding to the depth information.
S604b, calculating the subject similarity between the plane projection information and the subject region information, and determining the subject matching result according to the subject similarity.
The main body similarity refers to the similarity between the depth information and the main body region information. The body matching result is a matching result of whether the depth information and the body region information are consistent.
Specifically, the vehicle-mounted terminal calculates plane projection information corresponding to the depth information, then calculates subject similarity according to the plane projection information and the subject region information by directly using a similarity calculation method, obtains a matching result of the depth information being consistent with the subject region information when the subject similarity exceeds a preset subject similarity threshold, and obtains a matching result of the depth information being inconsistent with the subject region information when the subject similarity does not exceed the preset subject similarity threshold.
In the above embodiment, the similarity between the depth information and the main body region information is calculated by using a similarity algorithm, and a matching result of whether the depth information is consistent with the main body region information is determined according to the similarity, so that the accuracy of the obtained matching result is improved.
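The following sketch illustrates one assumed form of the subject similarity: the depth point is projected into the pixel plane and scored by how close it falls to the obstacle's segmentation pixels. The inverse-distance score and the threshold are illustrative assumptions only.

```python
import numpy as np


def body_similarity(projected_uv: np.ndarray, body_pixels: np.ndarray) -> float:
    """projected_uv: (2,) pixel coordinates; body_pixels: (N, 2) mask pixels."""
    nearest = np.min(np.linalg.norm(body_pixels - projected_uv, axis=1))
    return 1.0 / (1.0 + nearest)     # equals 1.0 when the point lies on the mask


def body_match(projected_uv, body_pixels, threshold=0.2):
    return body_similarity(projected_uv, body_pixels) >= threshold
```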
In one embodiment, as shown in fig. 2a, the step S206 of calculating distance error information between the depth information, the plane area information, and the body area information and the preset position information respectively includes the steps of:
s206a, calculating three-dimensional position error information between the depth information and the preset position information.
The three-dimensional position error information refers to a position error between preset position information and depth information in a three-dimensional world. The position error may be a displacement converted from the depth information to the preset position information, or may include an observation error of the depth information itself.
Specifically, the vehicle-mounted terminal acquires position information corresponding to a target obstacle assumed in advance, and calculates three-dimensional position error information between the depth information and preset position information.
S206b, obtaining the mapping relation between the space and the plane, calculating the plane line information corresponding to the preset position information according to the mapping relation, and calculating the plane line distance error information of the main body area information and the plane line information.
The mapping relation between the space and the plane refers to the mapping relation from the vehicle spatial coordinate system to the pixel plane, and comprises an abscissa mapping relation and an ordinate mapping relation; the vehicle spatial coordinate system is a spatial coordinate system with the center of the vehicle carrying the vehicle-mounted terminal as the origin. The plane line information is obtained when the preset position information is projected into the plane image; it is the straight line where the edge of the obstacle intersects the ground, namely the edge close to the vehicle carrying the vehicle-mounted terminal. The plane line distance error information refers to the distance error between the body region information and the plane line information, and this distance error may be the sum of the distance errors between all pixel points in the body region information and the plane line information.
Specifically, the vehicle-mounted terminal obtains a mapping relation between a space and a plane, projects preset position information according to the mapping relation to obtain an obstacle pixel plane, obtains corresponding plane line information, calculates distance error information of each pixel point in the main body area information and the plane line information, and then performs summation calculation on the distance error information corresponding to each pixel point to obtain plane line distance error information.
And S206c, calculating the plane horizontal and vertical information corresponding to the preset position information according to the mapping relation, and calculating the plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information.
Specifically, the plane horizontal and vertical information refers to plane horizontal coordinate information corresponding to projection of horizontal coordinate information in the preset position information onto an image plane, or plane vertical coordinate information corresponding to projection of vertical coordinate information in the preset position information onto the image plane. The plane horizontal and vertical error information refers to error information between horizontal coordinate information in the plane area information and plane horizontal coordinate information corresponding to the preset position information, or refers to error information between vertical coordinate information in the plane area information and plane vertical coordinate information corresponding to the preset position information. And the vehicle-mounted terminal calculates plane horizontal and vertical information corresponding to the preset position information according to the mapping relation, and then calculates plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information.
In the above embodiment, the three-dimensional position error information, the plane line distance error information, and the plane lateral and longitudinal error information are obtained through respective calculation, and then the conditional probability distribution of the sensor observation information under the preset position information is calculated by using the three-dimensional position error information, the plane line distance error information, and the plane lateral and longitudinal error information, so that a more accurate conditional probability distribution can be obtained.
In one embodiment, the step S206a of calculating the three-dimensional position error information between the depth information and the preset position information includes the steps of:
and acquiring the incidence relation between the preset position information and the depth information, and calculating the three-dimensional position error information between the depth information and the preset position information according to the incidence relation.
The association relation is a relation between preset position information and depth information, and the relation refers to a position where the sum of the position of the depth information and the error displacement of the three-dimensional position error information is the preset position information.
Specifically, the vehicle-mounted terminal obtains an incidence relation between preset position information and depth information, and calculates three-dimensional position error information between the depth information and the preset position information according to the incidence relation.
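In code form, the incidence relation amounts to the preset position being the depth-observed position plus the error displacement, so the three-dimensional position error is simply their difference (a minimal sketch under that reading):

```python
import numpy as np


def three_d_position_error(preset_position: np.ndarray,
                           depth_point: np.ndarray) -> np.ndarray:
    # Both arguments are (x, y, z) in the vehicle spatial coordinate system.
    return preset_position - depth_point
```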
In one embodiment, the step S206b of calculating the plane line distance error information of the body region information and the plane line information includes the steps of:
calculating the distance between the main body area information and the plane line information, and acquiring an adjusting constant; and calculating the plane line distance error information of the main body area information and the plane line information according to the distance and the adjusting constant.
Specifically, the in-vehicle terminal calculates the distance between each divided pixel in the body region information and the plane line in the plane line information. The adjustment constant is used to adjust that each divided pixel in the subject region information is on the same side of the plane line, and needs to be set in advance. And calculating the square of each distance, multiplying the square of each distance by an adjusting constant to obtain the distance error of each divided pixel, and summing the distance errors of each divided pixel to obtain the plane line distance error information of the main body area information and the plane line information.
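A sketch of this calculation is given below, with the plane line assumed to be parameterized in the pixel plane as u = k·v + d; applying the adjustment constant as a uniform weight is an illustrative simplification of the side-consistency adjustment described above.

```python
import numpy as np


def plane_line_distance_error(body_pixels: np.ndarray, k: float, d: float,
                              adjust_const: float) -> float:
    """body_pixels: (N, 2) rows of (u, v) segmentation pixels."""
    u, v = body_pixels[:, 0], body_pixels[:, 1]
    # Point-to-line distance for the line u - k*v - d = 0.
    dist = (u - k * v - d) / np.sqrt(1.0 + k ** 2)
    # Square each distance, multiply by the preset adjustment constant,
    # and sum over all segmented pixels.
    return float(np.sum(adjust_const * dist ** 2))
```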
In one embodiment, as shown in fig. 7, the step S206c of calculating the horizontal and vertical plane information corresponding to the preset position information according to the mapping relationship, and calculating the horizontal and vertical plane error information of the plane area information and the horizontal and vertical plane information includes the steps of:
s702, when the preset position information is at the first position of the target obstacle, acquiring an abscissa mapping relation in the mapping relation.
S704, determining a plane abscissa extreme value according to the abscissa mapping relation and the preset position information, and calculating plane abscissa error information according to the plane abscissa extreme value and the abscissa mapping relation.
Wherein the first position of the target obstacle is a position in front of the target obstacle. For example, as shown in fig. 8, in the schematic diagram of the obstacle vehicle in the X-Z plane, Z denotes a direction parallel to the vehicle of the in-vehicle terminal, X denotes a direction perpendicular to the vehicle body of the in-vehicle terminal, and a broken line denotes a visual field boundary of the front view camera. In the left-hand illustration, the tail of the obstacle vehicle has not yet appeared in the field of view. The first position of the obstacle vehicle is at the front corner point of the obstacle vehicle, i.e. the corner point close to the origin, i.e. the black point in the left image.
The plane abscissa extreme value refers to the sum of preset position information and plane abscissa error information in the plane area information and is used for representing the abscissa position between the obstacle and the vehicle.
Specifically, when the preset position information is at the first position of the target obstacle, at this time, the ordinate position of the obstacle cannot be determined. Then, the abscissa mapping relationship in the mapping relationship can be obtained, and the planar abscissa extremum is determined according to the abscissa mapping relationship and the preset position information. And calculating the error information of the plane abscissa according to the extreme value of the plane abscissa and the mapping relation of the abscissa.
In one embodiment, as shown in fig. 9, in step S206c, calculating the horizontal and vertical plane information corresponding to the preset position information according to the mapping relationship, and calculating the horizontal and vertical plane error information between the plane area information and the horizontal and vertical plane information includes:
and S902, acquiring a vertical coordinate mapping relation in the mapping relation when the preset position information is at the second position of the target obstacle.
And S904, determining a plane ordinate extreme value according to the ordinate mapping relation and preset position information, and calculating plane ordinate error information according to the plane ordinate extreme value and the ordinate mapping relation.
Wherein the second position of the target obstacle is a position behind the target obstacle. For example, as shown in fig. 8, in the schematic diagram of the obstacle vehicle in the X-Z plane, Z denotes the direction parallel to the vehicle of the in-vehicle terminal, X denotes the direction perpendicular to the vehicle body of the in-vehicle terminal, and the broken line denotes the visual field boundary of the front-view camera. In the right-hand schematic, the rear portion of the obstacle vehicle appears in the field of view. The second position of the obstacle vehicle is at the rear corner point of the obstacle vehicle, i.e. the corner point close to the origin, i.e. the black point in the right-hand image.
The plane ordinate extreme value is the sum of preset position information and plane ordinate error information in the plane area information and is used for representing the ordinate position between the obstacle and the vehicle.
Specifically, when the preset position information is at the second position of the target obstacle, the abscissa position of the obstacle cannot be determined at this time. The ordinate mapping relation in the mapping relation can then be obtained, and the plane ordinate extreme value is determined according to the ordinate mapping relation and the preset position information. The plane ordinate error information is then calculated according to the plane ordinate extreme value and the ordinate mapping relation.
In the above embodiment, by determining the error information between the preset position information and the depth information, the plane area information, and the main body area information, the position error information when a part of the obstacles enters the visual field can be made more accurate.
In one embodiment, the distance error information includes three-dimensional position error information, planar line distance error information and planar lateral and longitudinal error information, and then as shown in fig. 10, step S212, which is to calculate a conditional probability distribution of the sensor observation information under the preset position information according to the three-dimensional position error information, the planar line distance error information and the planar lateral and longitudinal error information, includes the steps of:
s1002, calculating a first conditional probability distribution of the depth information under the preset position information according to the three-dimensional position error information.
And S1004, calculating a second conditional probability distribution of the main body region information under the preset position information according to the plane line distance error information.
S1006, calculating a third conditional probability distribution of the plane area information under the preset position information according to the plane horizontal and vertical error information.
And S1008, determining the conditional probability distribution of the sensor observation information under the preset position information according to the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution.
Specifically, when the three-dimensional position error information, the plane line distance error information and the plane horizontal and vertical error information are obtained, the corresponding conditional probability distributions are calculated respectively, that is, the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution are obtained, and the product of the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution is calculated to obtain the conditional probability distribution of the sensor observation information under the preset position information.
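As a minimal illustration of this fusion step (the function names and the log-domain variant below are our own additions, not part of the original disclosure), the three conditional probability distributions evaluated at a candidate position can be combined by a simple product:

import numpy as np

def fuse_conditional_probability(p_depth, p_body, p_plane):
    """Combine the three conditional probabilities of the sensor observations
    under a candidate preset position by taking their product (independence
    between the three observation channels is assumed here)."""
    return p_depth * p_body * p_plane

def fuse_log_probability(log_p_depth, log_p_body, log_p_plane):
    # Working in the log domain avoids numerical underflow when the
    # individual probabilities are very small.
    return log_p_depth + log_p_body + log_p_plane

# toy usage
print(fuse_conditional_probability(0.8, 0.6, 0.7))              # 0.336
print(np.exp(fuse_log_probability(*np.log([0.8, 0.6, 0.7]))))   # 0.336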
In a specific embodiment, the obstacle position may be estimated by a conditional probability distribution of sensor observation information under preset position information, specifically:
the vehicle spatial coordinate system is defined as follows: the vehicle center corresponding to the vehicle-mounted terminal is used as an original point, the Z axis is parallel to the vehicle body, the X axis is perpendicular to the vehicle body, and the Y axis vertically points to the ground. As shown in FIG. 11, the rectangle is a schematic representation of the obstacle in the X-Z plane of the vehicle's spatial coordinate system. If the corresponding coordinate of the key point position of the obstacle is assumed to be (x)p,yp,zp) Then xpCan be characterized as xp=x0+czpWherein c is the slope of the straight line in the X-Z plane corresponding to the left edge of the obstacle, X0At the intersection of the line with the x-axis, i.e. using c and x0To characterize the positional constraint of the left edge of the obstacle in the X-Z plane. y ispThe position of the key point is the height from the ground. z is a radical ofpThe depth distance of the key point position from the vehicle. Suppose ypWhen known, as the vehicle is moving continuously, i.e. zpWhen changed, the coordinates (x)p,yp,zp) Moving along the left edge of the obstacle. Thus, the keypoint location may be defined by the variable x0C and zpAnd (4) uniquely determining.
At this time, the point coordinates x_r and z_r in the depth information are acquired using the millimeter-wave radar. The relation between the key point position and the depth information is obtained as shown in formula (1):
x_0 + c·z_p = x_r + n_x
z_p = z_r + n_z        Formula (1)
where n_x and n_z are the position errors between the key point position and the depth information. When the probability distribution of the position errors is known, the conditional probability distribution of the depth information under the assumed x_0, c and z_p is as shown in formula (2):
f(depth observation | x_0, c, z_p) = f_{n_x n_z}(x_0 + c·z_p − x_r, z_p − z_r)        Formula (2)
where f_{n_x n_z} is the joint probability distribution function of the position errors.
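A minimal sketch of formula (2), assuming for illustration only that the position errors n_x and n_z are independent zero-mean Gaussians; the standard deviation values and the function name below are hypothetical:

import numpy as np

def depth_conditional_probability(x_r, z_r, x0, c, z_p,
                                  sigma_x=0.3, sigma_z=0.5):
    """Evaluate f(depth observation | x0, c, z_p) as in formula (2), with the
    joint error density f_{nx,nz} taken to be an independent zero-mean
    Gaussian in each coordinate (an assumption, not part of the original
    disclosure)."""
    n_x = x0 + c * z_p - x_r   # abscissa error between key point and radar point
    n_z = z_p - z_r            # depth error between key point and radar point
    gauss = lambda e, s: np.exp(-0.5 * (e / s) ** 2) / (np.sqrt(2 * np.pi) * s)
    return gauss(n_x, sigma_x) * gauss(n_z, sigma_z)

# toy usage: radar point at (1.9, 10.2), candidate key point parameters
print(depth_conditional_probability(x_r=1.9, z_r=10.2, x0=0.5, c=0.14, z_p=10.0))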
Fig. 12 is a schematic diagram showing the observation result in the pixel plane, where the pixel abscissa is u and the pixel ordinate is v. The dashed line in the figure is the projection, onto the pixel plane, of the obstacle edge closest to the origin of the vehicle coordinate system. For the left obstacle vehicle, the extreme value of the abscissa toward the image center in the plane area information (the white two-dimensional frame in the figure) is u_inner, and the minimum value of the ordinate toward the image center is v_bottom; the intersection of the white two-dimensional frame and the dashed line (the white dot in the figure) is the projected position of the key point position in the pixel plane. At this time, the relationship between any point in the vehicle space coordinate system and its projection point is shown in formula (3):
u = g(x, y, z)
v = h(x, y, z)        Formula (3)
where the g(x, y, z) function and the h(x, y, z) function represent the mapping from the vehicle space coordinate system to the pixel abscissa and ordinate, respectively. From formula (1) and formula (3), the projection onto the pixel plane of the grounding straight line of the left edge of the obstacle vehicle is obtained as shown in formula (4):
u_lvl: u = g(x_0 + c·z_p, y_p, z_p), v = h(x_0 + c·z_p, y_p, z_p), with z_p varying along the grounding straight line        Formula (4)
where u_lvl represents the grounding straight line of the left edge of the obstacle vehicle projected onto the pixel plane; it is a constraint line that constrains the segmented pixels to the same side of the line. It can be known from the above formula that the pixel segmentation result in the pixel plane is tangent to the straight line u_lvl. Then the conditional probability distribution of the pixel-based image segmentation under the assumed x_0, c and z_p is as shown in formula (5):
f(segmentation observation | x_0, c, z_p) = F( Σ_{n=1}^{N} c_n·(distance from the n-th segmented pixel to the constraint line)² )        Formula (5)
where F is a monotonically decreasing function and N represents the number of pixels in the main body region information corresponding to the obstacle.
The squared distance from the n-th segmented pixel to the plane line indicates the plane line distance error information, and c_n is the adjustment constant: when the pixel is outside the plane line, c_n takes the value 1; when the pixel is inside the plane line, c_n takes a constant greater than 1. By means of the adjustment constant, the influence of segmentation boundary errors on the probability distribution can be effectively suppressed.
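A sketch of the segmentation term of formula (5) under illustrative assumptions: the constraint line is given in implicit form line_a·u + line_b·v + line_c = 0, the sign convention for inside/outside, the value 2 for the inside adjustment constant, and the choice F(s) = exp(−s) are all ours and are not fixed by the original text:

import numpy as np

def segmentation_conditional_probability(pixels_uv, line_a, line_b, line_c,
                                         inside_constant=2.0):
    """Evaluate f(segmentation | x0, c, z_p) in the spirit of formula (5).

    pixels_uv : (N, 2) array of segmented pixel coordinates (u, v).
    The constraint line u_lvl is given here in implicit form a*u + b*v + c = 0;
    pixels with a*u + b*v + c > 0 are treated as lying outside the line (the
    sign convention is an assumption). F is chosen as exp(-s), one admissible
    monotonically decreasing function, not the one prescribed by the patent."""
    pixels_uv = np.asarray(pixels_uv, dtype=float)
    signed = line_a * pixels_uv[:, 0] + line_b * pixels_uv[:, 1] + line_c
    dist = np.abs(signed) / np.hypot(line_a, line_b)       # point-to-line distance
    c_n = np.where(signed > 0, 1.0, inside_constant)       # adjustment constants
    s = np.sum(c_n * dist ** 2)                            # weighted squared distances
    return np.exp(-s)

# toy usage: three segmented pixels tested against the line u - 100 = 0
print(segmentation_conditional_probability([[101, 50], [99, 60], [100, 70]],
                                           line_a=1.0, line_b=0.0, line_c=-100.0))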
When the key point position is at the front of the obstacle vehicle, as shown in the left diagram of fig. 8, the extreme value, toward the image center, of the pixel abscissa of the plane area information of the obstacle vehicle satisfies formula (6):
u_inner = g(x_0 + c·z_p, y_p, z_p) + n_u        Formula (6)
where n_u indicates the abscissa error in the plane area information. When the probability distribution of this error is known, the conditional probability distribution of the abscissa extreme value in the plane area information under the assumed x_0, c and z_p is as shown in formula (7):
f(u_inner | x_0, c, z_p) = f_{n_u}(u_inner − g(x_0 + c·z_p, y_p, z_p))        Formula (7)
where u_inner − g(x_0 + c·z_p, y_p, z_p) indicates the abscissa error information in the plane area information.
f_{n_u} is the probability distribution function of the abscissa error of the plane area information.
When the key point position is at the rear of the obstacle vehicle, as shown in the right diagram of fig. 8, the minimum value, toward the image center, of the pixel ordinate of the plane area information of the obstacle vehicle satisfies formula (8):
v_bottom = h(x_0 + c·z_tire, y_p, z_tire) + n_v        Formula (8)
where n_v is the ordinate error in the plane area information, and z_tire refers to the depth of the wheel grounding point, in the obstacle vehicle image, closest to the vehicle of the vehicle-mounted terminal. Then the conditional probability distribution of the ordinate minimum value in the plane area information under the assumed x_0, c and z_p is as shown in formula (9):
f(v_bottom | x_0, c, z_p) = f_{n_v}(v_bottom − h(x_0 + c·z_tire, y_p, z_tire))        Formula (9)
where v_bottom − h(x_0 + c·z_tire, y_p, z_tire) indicates the ordinate error information in the plane area information.
f_{n_v} is the probability distribution function of the ordinate error of the plane area information.
Therefore, from formula (7) and formula (9), the conditional probability distribution of the plane area information under the assumed x_0, c and z_p is as shown in formula (10):
f(plane area observation | x_0, c, z_p) =
    f_{n_u}(u_inner − g(x_0 + c·z_p, y_p, z_p)), when the key point position is at the front of the obstacle vehicle (formula (7));
    f_{n_v}(v_bottom − h(x_0 + c·z_tire, y_p, z_tire)), when the key point position is at the rear of the obstacle vehicle (formula (9))        Formula (10)
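A sketch of the plane-area term of formulas (7), (9) and (10); the pinhole-style projections used for g and h in the toy usage, the Gaussian error densities and the explicit front/rear switch are illustrative assumptions only:

import numpy as np

def gauss(e, s):
    # zero-mean Gaussian density, used here as an assumed error distribution
    return np.exp(-0.5 * (e / s) ** 2) / (np.sqrt(2 * np.pi) * s)

def plane_area_conditional_probability(box_obs, x0, c, z_p, y_p, z_tire,
                                       g, h, keypoint_in_front,
                                       sigma_u=4.0, sigma_v=4.0):
    """Evaluate f(plane area observation | x0, c, z_p) in the spirit of
    formula (10): use the abscissa extremum u_inner (formula (7)) when the
    key point is at the front of the obstacle, otherwise the ordinate minimum
    v_bottom (formula (9)). g(x, y, z) and h(x, y, z) map vehicle-frame points
    to pixel u and v; they are passed in as callables because the concrete
    camera model is not reproduced in this sketch."""
    if keypoint_in_front:
        u_pred = g(x0 + c * z_p, y_p, z_p)
        return gauss(box_obs["u_inner"] - u_pred, sigma_u)   # formula (7)
    v_pred = h(x0 + c * z_tire, y_p, z_tire)
    return gauss(box_obs["v_bottom"] - v_pred, sigma_v)      # formula (9)

# toy usage with a crude pinhole projection (fx = fy = 1000, cx = 640, cy = 360)
g = lambda x, y, z: 1000 * x / z + 640
h = lambda x, y, z: 1000 * y / z + 360
box = {"u_inner": 830.0, "v_bottom": 520.0}
print(plane_area_conditional_probability(box, x0=0.5, c=0.14, z_p=10.0,
                                         y_p=1.6, z_tire=10.0, g=g, h=h,
                                         keypoint_in_front=True))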
In summary, the conditional probability distribution of the sensor observation information under the assumed key point position parameters x_0, c and z_p is as shown in formula (11):
f(observation information | x_0, c, z_p) = f(depth observation | x_0, c, z_p) · f(segmentation observation | x_0, c, z_p) · f(plane area observation | x_0, c, z_p)        Formula (11)
At this time, the maximum likelihood estimate of the parameters x_0, c and z_p can be written as in formula (12):
(x_0, c, z_p)̂ = argmax over (x_0, c, z_p) of f(observation information | x_0, c, z_p)        Formula (12)
Then the mean and covariance of the parameters x_0, c and z_p are calculated to obtain the position information of the key point.
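Formulas (11) and (12) amount to maximizing the product of the three conditional terms over (x_0, c, z_p). The brute-force grid search below (the grid ranges, step sizes and the joint_likelihood callable are assumptions) only illustrates the idea; in practice a numerical optimizer would usually be preferred:

import itertools
import numpy as np

def maximum_likelihood_estimate(joint_likelihood, x0_grid, c_grid, zp_grid):
    """Formula (12): argmax over (x0, c, z_p) of f(observation | x0, c, z_p),
    the product of the depth, segmentation and plane-area conditionals
    (formula (11)). joint_likelihood(x0, c, zp) must return that product and
    is supplied by the caller."""
    best, best_val = None, -np.inf
    for x0, c, zp in itertools.product(x0_grid, c_grid, zp_grid):
        val = joint_likelihood(x0, c, zp)
        if val > best_val:
            best, best_val = (x0, c, zp), val
    return best, best_val

# toy usage with a synthetic likelihood peaked near (0.5, 0.1, 10.0)
toy = lambda x0, c, zp: np.exp(-((x0 - 0.5) ** 2 + (c - 0.1) ** 2 + (zp - 10.0) ** 2))
params, value = maximum_likelihood_estimate(
    toy,
    x0_grid=np.linspace(0.0, 1.0, 21),
    c_grid=np.linspace(-0.2, 0.2, 21),
    zp_grid=np.linspace(8.0, 12.0, 41),
)
print(params, value)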
In one embodiment, when the prior probability distribution of the parameters x_0, c and z_p is known, the conditional expectation of the parameters x_0, c and z_p with respect to all the sensor observation information is calculated to obtain their mean and covariance, as shown in formula (13):
E[(x_0, c, z_p) | observation information] = ∫∫∫ (x_0, c, z_p) · f(x_0, c, z_p | observation information) dx_0 dc dz_p        Formula (13)
Using the Bayesian formula, the conditional probability of the parameters x_0, c and z_p with respect to all the sensor observation information is obtained as shown in formula (14):
f(x_0, c, z_p | observation information) = k · f(observation information | x_0, c, z_p) · f(x_0, c, z_p)        Formula (14)
where f(x_0, c, z_p) is the prior probability distribution of the parameters x_0, c and z_p, and k is a normalization factor, which can be solved approximately by requiring the conditional probability f(x_0, c, z_p | observation information) to sum (integrate) to 1. Then the conditional expectation of the parameters x_0, c and z_p with respect to all the sensor observation information is:
E[(x_0, c, z_p) | observation information] = ∫∫∫ (x_0, c, z_p) · k · f(observation information | x_0, c, z_p) · f(x_0, c, z_p) dx_0 dc dz_p        Formula (15)
According to formula (15) and formula (13), the conditional expectation of the parameters x_0, c and z_p with respect to the observation information is obtained. When the conditional expectation is determined, the conditional covariance may be estimated from a plurality of sampled parameter points according to the definition of covariance, and the position information of the key point is then calculated from the mean and covariance of the parameters x_0, c and z_p. In one embodiment, the importance sampling method can be used to reduce the computational complexity of the integration and improve the computational efficiency.
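A sketch of the conditional mean and covariance computation of formulas (13)-(15) using self-normalized importance sampling with the prior as the proposal distribution; the Gaussian prior, the sample count and the synthetic likelihood in the toy usage are illustrative assumptions:

import numpy as np

def posterior_mean_and_cov(joint_likelihood, prior_sampler, n_samples=5000,
                           rng=np.random.default_rng(0)):
    """Approximate E[(x0, c, z_p) | observation] and the conditional covariance
    (formulas (13)-(15)) by self-normalized importance sampling, drawing
    candidate parameters from the prior so that the importance weights reduce
    to the likelihood values. The normalization factor k of formula (14) is
    handled implicitly by normalizing the weights to sum to 1."""
    samples = np.array([prior_sampler(rng) for _ in range(n_samples)])   # (n, 3)
    weights = np.array([joint_likelihood(*s) for s in samples])
    weights /= weights.sum()                        # implicit normalization (k)
    mean = weights @ samples                        # formula (15): conditional expectation
    centred = samples - mean
    cov = (weights[:, None] * centred).T @ centred  # weighted sample covariance
    return mean, cov

# toy usage: Gaussian prior around (0.5, 0.1, 10.0), synthetic likelihood
prior = lambda rng: rng.normal([0.5, 0.1, 10.0], [0.3, 0.05, 1.0])
toy = lambda x0, c, zp: np.exp(-((x0 - 0.6) ** 2 + (c - 0.12) ** 2 + (zp - 9.8) ** 2))
mean, cov = posterior_mean_and_cov(toy, prior)
print(mean)
print(cov)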
In the embodiment, the values of the parameters are obtained through calculation, and then the position information of the key points is obtained, so that the accuracy of obtaining the position of the obstacle is improved.
In one embodiment, after step S214, that is, after estimating the conditional probability distribution to obtain the target position information corresponding to the target obstacle, the method further includes the steps of:
and acquiring lane line information, and determining the corresponding intention of the target barrier according to the target position information and the lane line information.
The lane line information is information of the range markings of the lane in which the vehicle is traveling on the road, for example the white markings on the road surface shown in fig. 12. The vehicle-mounted terminal acquires the video image through the camera and identifies the lane line information in the video image; for example, the lane line information in the video image can be identified through a pre-established machine learning algorithm model, where the machine learning algorithm can be a convolutional neural network algorithm or the like. Then, time-series target position information corresponding to the target obstacle within a period of time may be acquired, or sensor observation information may be acquired at a preset time interval so that each piece of target position information corresponding to the target obstacle is obtained. Whether the target obstacle has the intention of crossing the lane line and cutting in is determined by calculating the distance between each piece of target position information and the lane line information. When the distance between the successive pieces of target position information and the lane line information becomes smaller and smaller, it indicates that the target obstacle intends to cross the lane line and cut in; when that distance remains stable, it indicates that the target obstacle has no such cut-in intention.
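A minimal sketch of the cut-in intention check described above; the nearest-point distance, the linear trend test and the threshold value are illustrative choices, not prescribed by the original text:

import numpy as np

def has_cut_in_intention(target_positions, lane_line_points,
                         shrink_threshold=-0.05):
    """Decide whether a target obstacle intends to cut in across the lane line.

    target_positions : list of (x, z) target position estimates over time.
    lane_line_points : (M, 2) polyline of the lane line in the same plane.
    The obstacle is judged to have a cut-in intention when its distance to the
    lane line decreases over time (negative trend); a stable distance means no
    cut-in intention. The threshold value is an assumption for illustration."""
    lane = np.asarray(lane_line_points, dtype=float)
    dists = []
    for p in np.asarray(target_positions, dtype=float):
        dists.append(np.min(np.linalg.norm(lane - p, axis=1)))   # nearest lane point
    trend = np.polyfit(np.arange(len(dists)), dists, deg=1)[0]   # slope over time
    return trend < shrink_threshold

# toy usage: obstacle drifting toward a straight lane line at x = 0
lane = [(0.0, z) for z in np.arange(0.0, 30.0, 1.0)]
positions = [(1.5 - 0.1 * t, 10.0 + t) for t in range(8)]
print(has_cut_in_intention(positions, lane))   # True: distance keeps shrinking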
In an embodiment, the passable area of the vehicle corresponding to the vehicle-mounted terminal in the video image may also be identified, and the intention corresponding to the target obstacle may be determined according to the passable area and the target position information. That is, the passable area in the video image may be identified by using a machine learning algorithm model established in advance, where the passable area may be an area obtained according to the lane lines on the left and right sides of the vehicle corresponding to the vehicle-mounted terminal. Then, time-series target position information of each target obstacle within a period of time can be acquired, or sensor observation information can be acquired at preset time intervals so that each piece of target position information corresponding to the target obstacle is obtained, and the distance between each piece of target position information and the position of the left or right lane line is calculated. When the distance becomes smaller and smaller, it is determined that the target obstacle intends to cut into the lane; when the distance remains stable, it is determined that the target obstacle has no intention of cutting into the lane.
In one embodiment, after the conditional probability distribution is estimated to obtain the target position information corresponding to the target obstacle, the sensor observation information may be continuously obtained, and the corresponding target position information may be obtained, so that the position information of the target obstacle may be tracked.
In one embodiment, after the conditional probability distribution is estimated to obtain the target position information corresponding to the target obstacle, the target position information is marked in the video image, and a top view of the obstacle is generated according to the target position information for display.
In an embodiment, as shown in fig. 13, obstacle tracking using the obstacle position identification method specifically includes: depth information is detected by a depth sensor, and a visual two-dimensional detection result and an instantiated image segmentation result are obtained through deep learning detection on the camera image. The visual two-dimensional detection result is matched with the depth information, the instantiated image segmentation result is matched with the depth information, and the obstacle for which the visual two-dimensional detection result, the instantiated image segmentation result and the depth information all exist is determined. The sensor observation information is then fused by a fusion algorithm using the visual two-dimensional detection result, the instantiated image segmentation result and the depth information so as to estimate the obstacle position information, and the obstacle is tracked using the obstacle position information. Fig. 13a is a schematic diagram of the cut-in intention of the right-hand vehicle displayed by the vehicle-mounted terminal. The right-hand vehicle is only partially detected; its obstacle position information is obtained by recognition, the obstacle position information at a plurality of consecutive time instants is then obtained, and the distances between these positions and the lane line are calculated to be smaller and smaller, so the right-hand vehicle is judged to intend to cut into the own lane from the adjacent lane, and the automatic driving of the vehicle-mounted terminal then determines that the own vehicle may need to decelerate and creep.
In a specific embodiment, as shown in fig. 14, the method for identifying the position of an obstacle specifically includes the following steps (a high-level sketch of this flow is given after the list):
s1302, acquiring each initial three-dimensional point information corresponding to the obstacle, converting the initial three-dimensional point information to obtain target three-dimensional point information, and taking the target three-dimensional point information as depth information corresponding to the obstacle.
And S1304, acquiring a video image, detecting the obstacle in the video image, and obtaining plane area information corresponding to the obstacle.
S1306, subject segmentation is performed on the video image based on the pixels, and subject region information corresponding to the obstacle is obtained.
S1308, performing similarity matching on the depth information and the plane region information, and performing similarity matching on the depth information and the body region information.
S1310, when the depth information coincides with the plane area information and the body area information, respectively, a target obstacle is determined.
S1312, acquiring preset position information corresponding to the target obstacle, acquiring a correlation between the preset position information and the depth information, calculating three-dimensional position error information between the depth information and the preset position information according to the correlation, and calculating a first conditional probability distribution of the depth information under the preset position information according to the three-dimensional position error information.
And S1314, acquiring a mapping relation between the space and the plane, calculating plane line information corresponding to the preset position information according to the mapping relation, calculating a distance between the main body area information and the plane line information, acquiring an adjustment constant, calculating plane line distance error information of the main body area information and the plane line information according to the distance and the adjustment constant, and calculating a second conditional probability distribution of the main body area information under the preset position information according to the plane line distance error information.
S1316, according to the mapping relation, plane horizontal and vertical information corresponding to the preset position information is calculated, plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information is calculated, and according to the plane horizontal and vertical error information, third condition probability distribution of the plane area information under the preset position information is calculated.
S1318, determining conditional probability distribution of the sensor observation information under the preset position information according to the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution.
S1320, estimating the conditional probability distribution to obtain the target position information corresponding to the target obstacle.
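The end-to-end flow of steps S1302-S1320 can be sketched at a high level as follows; every callable passed in is a hypothetical stand-in for the corresponding module described above, not an actual interface of this method:

def identify_obstacle_position(depth_info, plane_area_info, body_area_info,
                               match_fn, conditional_fn, estimate_fn):
    """High-level sketch of steps S1308-S1320 once the raw inputs of steps
    S1302-S1306 have been produced. Every callable is a hypothetical stand-in
    for the corresponding module, not an actual API."""
    targets = match_fn(depth_info, plane_area_info, body_area_info)  # S1308-S1310
    results = []
    for target in targets:
        conditional = conditional_fn(target)       # S1312-S1318: fused conditional distribution
        results.append(estimate_fn(conditional))   # S1320: target position estimate
    return results

# toy usage with trivial stand-ins
print(identify_obstacle_position(
    depth_info=[(1.9, 10.2)], plane_area_info=[{"u_inner": 830}], body_area_info=[[]],
    match_fn=lambda d, p, b: [{"depth": d[0], "box": p[0], "mask": b[0]}],
    conditional_fn=lambda target: (lambda x0, c, zp: 1.0),
    estimate_fn=lambda cond: (0.5, 0.1, 10.0),
))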
It should be understood that, although the steps in the flow charts of fig. 2-14 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in fig. 2-14 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 15, there is provided an obstacle position recognition apparatus 1500, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, and specifically includes: an information acquisition module 1502, an information matching module 1504, an error calculation module 1506, a conditional distribution calculation module 1508, and a location derivation module 1510, wherein:
an information obtaining module 1502, configured to obtain sensor observation information, where the sensor observation information includes depth information, plane area information, and main area information corresponding to an obstacle;
the information matching module 1504 is used for performing similar matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information are in similar matching consistency;
the error calculation module 1506 is configured to obtain preset position information corresponding to the target obstacle, and calculate depth information, plane area information, and distance error information between the main body area information and the preset position information, respectively;
a conditional distribution calculating module 1508, configured to calculate a conditional probability distribution of the sensor observation information under the preset position information according to the distance error information;
a position obtaining module 1510, configured to estimate the conditional probability distribution to obtain target position information corresponding to the target obstacle.
In one embodiment, the information acquisition module 1502 includes:
the device comprises a depth information obtaining unit, a depth information obtaining unit and a depth information obtaining unit, wherein the depth information obtaining unit is used for obtaining each initial three-dimensional point information corresponding to an obstacle, converting the initial three-dimensional point information to obtain target three-dimensional point information, and using the target three-dimensional point information as the depth information corresponding to the obstacle;
the plane area information obtaining unit is used for obtaining a video image, detecting an obstacle in the video image and obtaining plane area information corresponding to the obstacle;
and a main body area information obtaining unit, configured to perform main body segmentation on the video image based on the pixels to obtain main body area information corresponding to the obstacle.
In one embodiment, the depth information obtaining unit is further configured to perform geometric fitting on each piece of initial three-dimensional point information to obtain geometric information; and selecting target three-dimensional point information from the geometric information.
In one embodiment, the information matching module 1504 further includes:
the similarity matching unit is used for performing similarity matching on the depth information and the plane area information and performing similarity matching on the depth information and the main body area information;
a target determination unit for determining a target obstacle when the depth information coincides with the plane area information and the body area information, respectively.
In one embodiment, the similarity matching unit is further configured to calculate plane projection information corresponding to the depth information, and calculate corresponding plane projection area information according to the plane projection information; and calculating the plane similarity of the plane projection area information and the plane area information, and determining a plane matching result according to the plane similarity.
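One common way to realize such a plane-similarity computation is box intersection over union; the original text does not fix a particular metric, so the IoU below is only an illustrative choice:

def plane_similarity_iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned pixel boxes
    (u_min, v_min, u_max, v_max). IoU is used here only as one possible
    plane-similarity measure between the plane projection area information
    and the plane area information."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union > 0 else 0.0

# toy usage: projected depth-information box vs. detected two-dimensional box
projected = (100, 200, 180, 300)
detected = (110, 210, 190, 310)
print(plane_similarity_iou(projected, detected))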
In one embodiment, the similarity matching unit is further configured to calculate planar projection information corresponding to the depth information; and calculating the main body similarity of the plane projection information and the main body region information, and determining a main body matching result according to the main body similarity.
In one embodiment, the error calculation module 1506 includes:
and the depth error calculation unit is used for acquiring the incidence relation between the preset position information and the depth information and calculating the three-dimensional position error information between the depth information and the preset position information according to the incidence relation.
In one embodiment, the error calculation module 1506 includes:
a distance error calculation unit for calculating a distance between the main body region information and the plane line information, and acquiring an adjustment constant; and calculating the plane line distance error information of the main body area information and the plane line information according to the distance and the adjusting constant.
In one embodiment, the error calculation module 1506 includes:
the abscissa error calculation unit is used for acquiring an abscissa mapping relation in the mapping relation when the preset position information is at the first position of the target obstacle; and determining a plane abscissa extreme value according to the abscissa mapping relation and the preset position information, and calculating plane abscissa error information according to the plane abscissa extreme value and the abscissa mapping relation.
In one embodiment, the error calculation module 1506 includes:
the vertical coordinate error calculation unit is used for acquiring a vertical coordinate mapping relation in the mapping relation when the preset position information is at the second position of the target obstacle; and determining a plane longitudinal coordinate extreme value according to the longitudinal coordinate mapping relation and the preset position information, and calculating plane longitudinal coordinate error information according to the plane longitudinal coordinate extreme value and the longitudinal coordinate mapping relation.
In one embodiment, the conditional distribution calculating module 1508 is further configured to calculate a first conditional probability distribution of the depth information under the preset position information according to the three-dimensional position error information; calculating second conditional probability distribution of the main body region information under the preset position information according to the plane line distance error information; calculating third conditional probability distribution of the plane area information under the preset position information according to the plane horizontal and longitudinal error information; and determining the conditional probability distribution of the sensor observation information under the preset position information according to the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution.
In one embodiment, the obstacle position identifying apparatus 1500 further includes:
and the intention determining module is used for acquiring lane line information and determining the intention corresponding to the target obstacle according to the target position information and the lane line information.
For the specific definition of the obstacle position identification device, reference may be made to the above definition of the obstacle position identification method, which is not described herein again. Each module in the obstacle position recognition apparatus may be wholly or partially implemented by software, hardware, or a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 16. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement an obstacle position identification method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 16 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or fewer components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is further provided, which includes a memory and a processor, the memory stores a computer program, and the processor implements the steps of the above method embodiments when executing the computer program.
In an embodiment, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by hardware instructions of a computer program, which can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database or other medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical storage, or the like. Volatile Memory can include Random Access Memory (RAM) or external cache Memory. By way of illustration and not limitation, RAM can take many forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM), among others.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (26)

1. An obstacle position recognition method, characterized by comprising:
acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to an obstacle;
performing similar matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information are in similar matching consistency;
acquiring preset position information corresponding to the target obstacle, and respectively calculating the depth information, the plane area information and distance error information between the main body area information and the preset position information, wherein the distance error information comprises three-dimensional position error information between the depth information and the preset position information, plane line distance error information between the plane area information and plane line information corresponding to the preset position information and plane horizontal and vertical error information between the main body area information and plane horizontal and vertical information corresponding to the preset position information;
calculating conditional probability distribution of the sensor observation information under the preset position information according to the distance error information;
and estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
2. The method of claim 1, wherein the obtaining sensor observation information including depth information, planar area information, and body area information corresponding to an obstacle comprises:
acquiring initial three-dimensional point information corresponding to an obstacle, converting the initial three-dimensional point information to obtain target three-dimensional point information, and taking the target three-dimensional point information as depth information corresponding to the obstacle;
acquiring a video image, and detecting an obstacle in the video image to obtain plane area information corresponding to the obstacle;
and carrying out main body segmentation on the video image based on pixels to obtain main body area information corresponding to the obstacle.
3. The method of claim 2, wherein converting the initial three-dimensional point information to obtain target three-dimensional point information comprises:
carrying out geometric fitting on each piece of initial three-dimensional point information to obtain geometric information;
and selecting target three-dimensional point information from the geometric information.
4. The method of claim 1, wherein the similarity matching the depth information, the plane area information, and the subject area information, and when the similarity matching of the depth information, the plane area information, and the subject area information is consistent, determining a target obstacle comprises:
performing similarity matching on the depth information and the plane area information, and performing similarity matching on the depth information and the main body area information;
and when the depth information is consistent with the plane area information and the main body area information respectively, determining a target obstacle.
5. The method of claim 4, wherein the similarity matching the depth information with the planar region information comprises:
calculating plane projection information corresponding to the depth information, and calculating corresponding plane projection area information according to the plane projection information;
and calculating the plane similarity of the plane projection area information and the plane area information, and determining a plane matching result according to the plane similarity.
6. The method of claim 4, wherein the similarity matching the depth information with the subject region information comprises:
calculating plane projection information corresponding to the depth information;
and calculating the main body similarity of the plane projection information and the main body region information, and determining a main body matching result according to the main body similarity.
7. The method according to claim 1, wherein the calculating distance error information between the depth information, the plane area information, and the body area information and the preset position information, respectively, comprises:
calculating three-dimensional position error information between the depth information and the preset position information;
acquiring a mapping relation between a space and a plane, calculating plane line information corresponding to the preset position information according to the mapping relation, and calculating plane line distance error information of the main body area information and the plane line information;
and calculating plane horizontal and vertical information corresponding to the preset position information according to the mapping relation, and calculating plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information.
8. The method according to claim 7, wherein said calculating three-dimensional position error information between the depth information and the preset position information comprises:
acquiring the incidence relation between the preset position information and the depth information, and calculating the three-dimensional position error information between the depth information and the preset position information according to the incidence relation.
9. The method of claim 7, wherein said calculating plane line distance error information of said body region information and said plane line information comprises:
calculating the distance between the main body area information and the plane line information, and acquiring an adjusting constant;
and calculating the plane line distance error information of the main body area information and the plane line information according to the distance and the adjusting constant.
10. The method according to claim 7, wherein the calculating, according to the mapping relationship, the plane horizontal and vertical information corresponding to the preset position information, and the calculating of the plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information includes:
when the preset position information is at the first position of the target obstacle, acquiring an abscissa mapping relation in the mapping relation;
and determining a plane abscissa extreme value according to the abscissa mapping relation and the preset position information, and calculating plane abscissa error information according to the plane abscissa extreme value and the abscissa mapping relation.
11. The method according to claim 7, wherein the calculating, according to the mapping relationship, the plane horizontal and vertical information corresponding to the preset position information, and the calculating of the plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information includes:
when the preset position information is at the second position of the target obstacle, acquiring a vertical coordinate mapping relation in the mapping relation;
and determining a plane longitudinal coordinate extreme value according to the longitudinal coordinate mapping relation and the preset position information, and calculating plane longitudinal coordinate error information according to the plane longitudinal coordinate extreme value and the longitudinal coordinate mapping relation.
12. The method of claim 7, wherein the distance error information includes three-dimensional position error information, planar line distance error information and planar lateral and longitudinal error information, and the calculating the conditional probability distribution of the sensor observation information under the preset position information according to the distance error information includes:
calculating a first conditional probability distribution of the depth information under the preset position information according to the three-dimensional position error information;
calculating second conditional probability distribution of the main body region information under the preset position information according to the plane line distance error information;
calculating third conditional probability distribution of the plane area information under the preset position information according to the plane horizontal and longitudinal error information;
and determining the conditional probability distribution of the sensor observation information under the preset position information according to the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution.
13. An obstacle position recognition apparatus, characterized in that the apparatus comprises:
the information acquisition module is used for acquiring sensor observation information, wherein the sensor observation information comprises depth information, plane area information and main body area information corresponding to the obstacle;
the information matching module is used for performing similar matching on the depth information, the plane area information and the main body area information, and determining a target obstacle when the depth information, the plane area information and the main body area information are in similar matching consistency;
an error calculation module, configured to obtain preset position information corresponding to the target obstacle, and calculate distance error information between the depth information, the plane area information, and the main body area information and the preset position information, respectively, where the distance error information includes three-dimensional position error information between the depth information and the preset position information, plane line distance error information between the plane area information and plane line information corresponding to the preset position information, and plane horizontal and vertical error information between the main body area information and plane horizontal and vertical information corresponding to the preset position information;
the conditional distribution calculation module is used for calculating the conditional probability distribution of the sensor observation information under the preset position information according to the distance error information;
and the position obtaining module is used for estimating the conditional probability distribution to obtain target position information corresponding to the target obstacle.
14. The apparatus of claim 13, wherein the information obtaining module comprises:
the device comprises a depth information obtaining unit, a depth information obtaining unit and a depth information obtaining unit, wherein the depth information obtaining unit is used for obtaining each initial three-dimensional point information corresponding to an obstacle, converting the initial three-dimensional point information to obtain target three-dimensional point information, and using the target three-dimensional point information as the depth information corresponding to the obstacle;
the plane area information obtaining unit is used for obtaining a video image, detecting an obstacle in the video image and obtaining plane area information corresponding to the obstacle;
and a main body area information obtaining unit, configured to perform main body segmentation on the video image based on the pixels to obtain main body area information corresponding to the obstacle.
15. The apparatus according to claim 14, wherein the depth information obtaining unit is further configured to perform geometric fitting on each piece of initial three-dimensional point information to obtain geometric information; and selecting target three-dimensional point information from the geometric information.
16. The apparatus of claim 13, wherein the information matching module comprises:
the similarity matching unit is used for performing similarity matching on the depth information and the plane area information and performing similarity matching on the depth information and the main body area information;
a target determination unit for determining a target obstacle when the depth information coincides with the plane area information and the body area information, respectively.
17. The apparatus of claim 16, wherein the similarity matching unit is further configured to calculate planar projection information corresponding to the depth information, and calculate corresponding planar projection area information according to the planar projection information; and calculating the plane similarity of the plane projection area information and the plane area information, and determining a plane matching result according to the plane similarity.
18. The apparatus of claim 16, wherein the similarity matching unit is further configured to calculate planar projection information corresponding to the depth information; and calculating the main body similarity of the plane projection information and the main body region information, and determining a main body matching result according to the main body similarity.
19. The apparatus of claim 13, wherein the error calculation module comprises:
a depth error calculation unit for calculating three-dimensional position error information between the depth information and the preset position information;
the distance error calculation unit is used for acquiring a mapping relation between a space and a plane, calculating plane line information corresponding to the preset position information according to the mapping relation, and calculating plane line distance error information of the main body area information and the plane line information;
and the horizontal and vertical coordinate error calculation unit is used for calculating plane horizontal and vertical information corresponding to the preset position information according to the mapping relation and calculating plane horizontal and vertical error information of the plane area information and the plane horizontal and vertical information.
20. The apparatus according to claim 19, wherein the depth error calculating unit is further configured to obtain a correlation between the preset position information and the depth information, and calculate three-dimensional position error information between the depth information and the preset position information according to the correlation.
21. The apparatus according to claim 19, wherein the distance error calculation unit is further configured to calculate a distance between the body region information and the plane line information, and obtain an adjustment constant; and calculating the plane line distance error information of the main body area information and the plane line information according to the distance and the adjusting constant.
22. The apparatus according to claim 19, wherein the abscissa error calculation unit is further configured to obtain an abscissa mapping relationship among the mapping relationships when the preset position information is at the first position of the target obstacle; and determining a plane abscissa extreme value according to the abscissa mapping relation and the preset position information, and calculating plane abscissa error information according to the plane abscissa extreme value and the abscissa mapping relation.
23. The apparatus according to claim 19, wherein the abscissa and ordinate error calculating unit is further configured to obtain an ordinate mapping relationship among the mapping relationships when the preset position information is at the second position of the target obstacle; and determining a plane longitudinal coordinate extreme value according to the longitudinal coordinate mapping relation and the preset position information, and calculating plane longitudinal coordinate error information according to the plane longitudinal coordinate extreme value and the longitudinal coordinate mapping relation.
24. The apparatus of claim 19, wherein the distance error information includes three-dimensional position error information, planar line distance error information, and planar lateral and longitudinal error information, and the conditional distribution calculating module is further configured to calculate a first conditional probability distribution of the depth information under the preset position information according to the three-dimensional position error information; calculating second conditional probability distribution of the main body region information under the preset position information according to the plane line distance error information; calculating third conditional probability distribution of the plane area information under the preset position information according to the plane horizontal and longitudinal error information; and determining the conditional probability distribution of the sensor observation information under the preset position information according to the first conditional probability distribution, the second conditional probability distribution and the third conditional probability distribution.
25. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method of any of claims 1 to 12.
26. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 12.
CN202010250438.0A 2020-04-01 2020-04-01 Obstacle position recognition method and device, computer equipment and storage medium Active CN111488812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010250438.0A CN111488812B (en) 2020-04-01 2020-04-01 Obstacle position recognition method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010250438.0A CN111488812B (en) 2020-04-01 2020-04-01 Obstacle position recognition method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111488812A CN111488812A (en) 2020-08-04
CN111488812B true CN111488812B (en) 2022-02-22

Family

ID=71810887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010250438.0A Active CN111488812B (en) 2020-04-01 2020-04-01 Obstacle position recognition method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111488812B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112417967B (en) * 2020-10-22 2021-12-14 腾讯科技(深圳)有限公司 Obstacle detection method, obstacle detection device, computer device, and storage medium
CN112697188B (en) * 2020-12-08 2022-12-23 北京百度网讯科技有限公司 Detection system test method and device, computer equipment, medium and program product
CN112883909B (en) * 2021-03-16 2024-06-14 东软睿驰汽车技术(沈阳)有限公司 Obstacle position detection method and device based on bounding box and electronic equipment
CN113781539A (en) * 2021-09-06 2021-12-10 京东鲲鹏(江苏)科技有限公司 Depth information acquisition method and device, electronic equipment and computer readable medium
CN113792655A (en) * 2021-09-14 2021-12-14 京东鲲鹏(江苏)科技有限公司 Intention identification method and device, electronic equipment and computer readable medium
CN114296458B (en) * 2021-12-29 2023-08-01 深圳创维数字技术有限公司 Vehicle control method, device and computer readable storage medium
CN114926508B (en) * 2022-07-21 2022-11-25 深圳市海清视讯科技有限公司 Visual field boundary determining method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102338874A (en) * 2011-06-24 2012-02-01 浙江大学 Global probability data correlation method used for passive multi-sensor target tracking
CN108764108A (en) * 2018-05-22 2018-11-06 湖北省专用汽车研究院 A kind of Foregut fermenters method based on Bayesian inference
CN109459723A (en) * 2018-11-06 2019-03-12 西北工业大学 A kind of Pure orientation Passive Location based on first heuristic algorithm
CN109740632A (en) * 2018-12-07 2019-05-10 百度在线网络技术(北京)有限公司 Similarity model training method and device based on the more measurands of multisensor
CN110070570A (en) * 2019-03-20 2019-07-30 重庆邮电大学 A kind of obstacle detection system and method based on depth information
CN110864670A (en) * 2019-11-27 2020-03-06 苏州智加科技有限公司 Method and system for acquiring position of target obstacle
CN110913344A (en) * 2018-08-27 2020-03-24 香港科技大学 Cooperative target tracking system and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228110B (en) * 2016-07-07 2019-09-20 浙江零跑科技有限公司 A kind of barrier and drivable region detection method based on vehicle-mounted binocular camera


Also Published As

Publication number Publication date
CN111488812A (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN111488812B (en) Obstacle position recognition method and device, computer equipment and storage medium
CN112417967B (en) Obstacle detection method, obstacle detection device, computer device, and storage medium
CN111666921B (en) Vehicle control method, apparatus, computer device, and computer-readable storage medium
EP3581890B1 (en) Method and device for positioning
US10679075B2 (en) Dense correspondence estimation with multi-level metric learning and hierarchical matching
WO2021160184A1 (en) Target detection method, training method, electronic device, and computer-readable medium
US9990736B2 (en) Robust anytime tracking combining 3D shape, color, and motion with annealed dynamic histograms
CN113819890B (en) Distance measuring method, distance measuring device, electronic equipment and storage medium
Erbs et al. Moving vehicle detection by optimal segmentation of the dynamic stixel world
CN112947419B (en) Obstacle avoidance method, device and equipment
Andreasson et al. Mini-SLAM: Minimalistic visual SLAM in large-scale environments based on a new interpretation of image similarity
CN110992424B (en) Positioning method and system based on binocular vision
CN113240734B (en) Vehicle cross-position judging method, device, equipment and medium based on aerial view
CN114063098A (en) Multi-target tracking method, device, computer equipment and storage medium
CN112699834A (en) Traffic identification detection method and device, computer equipment and storage medium
CN114692720A (en) Image classification method, device, equipment and storage medium based on aerial view
CN112257668A (en) Main and auxiliary road judging method and device, electronic equipment and storage medium
Li et al. High-precision motion detection and tracking based on point cloud registration and radius search
EP4001965A1 (en) Lidar localization using optical flow
CN110864670B (en) Method and system for acquiring position of target obstacle
CN114170499A (en) Target detection method, tracking method, device, visual sensor and medium
CN116403191A (en) Three-dimensional vehicle tracking method and device based on monocular vision and electronic equipment
Xiong et al. A 3d estimation of structural road surface based on lane-line information
CN114119757A (en) Image processing method, apparatus, device, medium, and computer program product
WO2021114775A1 (en) Object detection method, object detection device, terminal device, and medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40027434

Country of ref document: HK

SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20221019

Address after: 518057 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 floors

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: 518000 Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen City, Guangdong Province, 35 Floors

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.