CN111126363B - Object recognition method and device for automatic driving vehicle

Info

Publication number
CN111126363B
CN111126363B (application CN202010233505.8A)
Authority
CN
China
Prior art keywords
data
target
information
automatic driving
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010233505.8A
Other languages
Chinese (zh)
Other versions
CN111126363A (en)
Inventor
成晟
陈刚
毛克成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Xuzhitong Information Technology Co ltd
Jiangsu Guangyu Technology Industry Development Co ltd
Original Assignee
Nanjing Xuzhitong Information Technology Co ltd
Jiangsu Guangyu Technology Industry Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Xuzhitong Information Technology Co ltd and Jiangsu Guangyu Technology Industry Development Co ltd
Priority to CN202010233505.8A
Publication of CN111126363A
Application granted
Publication of CN111126363B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06 Systems determining position data of a target
    • G01S17/08 Systems determining position data of a target for measuring distance only
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an object recognition method and device for an autonomous vehicle, relating to the technical field of automatic driving and solving the problem that existing autonomous vehicles have difficulty accurately recognizing obstacles. Video data is used to recognize object appearance data, and hybrid detection by a high-precision imaging radar serves as a secondary supplement to that appearance detection, while the radar's precise ranging is used to project multiple data types into a single space. Instead of relying on an assumption of ground flatness, the method uses radar or lidar data together with data from only one camera, projecting the radar data into a 3D model reconstructed from the video data. In this way object classification in a 3D view is genuinely achieved for automatic driving, and the precise distance between the autonomous vehicle and each class of target is distinguished.

Description

Object recognition method and device for automatic driving vehicle
Technical Field
The invention relates to the technical field of automatic driving, in particular to an object identification method and device of an automatic driving vehicle.
Background
In L3- to L5-level automatic driving, accurately judging the distance between the autonomous vehicle and each target is very important to the driving process. Humans have two high-resolution, highly synchronized vision sensors, the eyes, which measure distance through the brain's stereoscopic processing. In an autonomous vehicle, however, it is very difficult to perform visual ranging by imitating the human eyes with a binocular camera, because high-precision synchronization of the two cameras is hard to achieve and any mismatch causes distance estimation errors.
Therefore, existing automatic driving technology mostly uses data from a monocular camera to recognize objects and estimate the distance to obstacles. The estimation roughly proceeds as follows: first, the ground on which the autonomous vehicle travels is assumed to be flat; then three-dimensional modeling is performed from the two-dimensional information in the images captured by the monocular camera; finally, the distance between an obstacle and the autonomous vehicle is estimated by geometrical optics.
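For illustration only, the following is a minimal sketch of such a flat-ground, geometric-optics estimate, assuming a pinhole camera of known mounting height and focal length; the function and variable names are hypothetical and not taken from the patent.

```python
import math

def flat_ground_distance(v_pixel: float, v_horizon: float,
                         focal_px: float, camera_height_m: float) -> float:
    """Estimate the distance to the ground-contact point of an obstacle.

    Assumes a pinhole camera looking along a perfectly flat road: the pixel
    row of the obstacle's bottom edge (v_pixel), the row of the horizon
    (v_horizon), the focal length in pixels and the camera mounting height
    give the range by similar triangles.
    """
    dv = v_pixel - v_horizon           # pixels below the horizon
    if dv <= 0:
        return math.inf                # at or above the horizon: no ground intersection
    return camera_height_m * focal_px / dv

# Example: camera 1.5 m above the road, 1000 px focal length,
# obstacle bottom edge 50 px below the horizon -> about 30 m away.
print(flat_ground_distance(v_pixel=550, v_horizon=500,
                           focal_px=1000, camera_height_m=1.5))
```

The estimate fails exactly where the next paragraph says it does: on a slope the camera height relative to the obstacle's ground plane changes, so the similar-triangles relation no longer holds.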
However, this method has drawbacks. When the autonomous vehicle drives up or down a slope, the actual ground is not flat and the estimate becomes inaccurate; since adaptive cruise control, lane-change safety checks and lane-change execution all depend on a correct judgment of the distance to obstacles, a wrong prediction can have negative effects. Radar can be used to measure distance, but its resolution is low: it can detect that an obstacle exists but cannot confirm the obstacle's specific type. Moreover, although the Doppler signature of radar can distinguish a moving car from a stationary background such as an overpass, a car that remains stationary may not be recognized and may be treated as part of the stationary background.
Disclosure of Invention
The invention provides an object recognition method and device for an autonomous vehicle, aiming to solve the problem that existing autonomous vehicles have difficulty accurately recognizing obstacles.
In a first aspect, the present invention provides an object recognition method for an autonomous vehicle, the method comprising:
acquiring video data in front of an autonomous vehicle;
extracting two-dimensional data of an environment in front of the autonomous vehicle from the video data;
obtaining depth data of an environment in front of an autonomous vehicle;
establishing a three-dimensional model of the environment in front of the automatic driving vehicle according to the two-dimensional data and the depth data;
identifying a target object in the three-dimensional model to obtain appearance data of the target object;
identifying first class information of the target object according to the appearance data;
judging the static motion information of the target object;
forming a comprehensive view according to the three-dimensional model and the static motion information;
acquiring radar data in front of an autonomous vehicle;
identifying target objects in front of the automatic driving vehicle from the radar data to obtain radar perception data of each target object;
identifying second category information of the target object according to the radar perception data;
performing category matching on the target object identified from the three-dimensional model and the target object identified from the radar data according to the first category information and the second category information;
projecting the radar data into the comprehensive view according to a category matching result to obtain a comprehensive model;
and projecting the distance data between the automatic driving vehicle and the target object in the radar data to the comprehensive model.
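Purely as a reading aid, the sketch below shows data structures that the steps above could hand off to one another; the class and field names are hypothetical assumptions for illustration, not terms from the claims.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RecognizedTarget:
    """One target object after video and radar recognition are combined."""
    appearance: dict                 # appearance data from the three-dimensional model
    first_category: str              # first category information (from video)
    second_category: Optional[str]   # second category information (from radar)
    is_moving: bool                  # static-motion information
    distance_m: Optional[float]      # radar distance to the autonomous vehicle

@dataclass
class ComprehensiveModel:
    """Comprehensive view with radar data projected into it."""
    three_d_model: dict                                  # reconstructed front environment
    targets: List[RecognizedTarget] = field(default_factory=list)

# Minimal usage: one fused target carried by the comprehensive model.
model = ComprehensiveModel(three_d_model={})
model.targets.append(RecognizedTarget(
    appearance={"width_m": 1.8, "height_m": 1.5},
    first_category="vehicle", second_category="vehicle",
    is_moving=True, distance_m=42.0))
print(model.targets[0].distance_m)
```

Step-by-step sketches of several individual steps are given in the detailed description below.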
With reference to the first aspect, in a first implementable manner of the first aspect, the obtaining depth data of an environment in front of the autonomous vehicle includes:
identifying and distinguishing in-road information, road boundary information and out-of-boundary information from video data in front of an autonomous vehicle;
identifying feature object information from the in-road information, the road boundary information, and the out-of-boundary information, respectively;
positioning a two-dimensional coordinate system data group or a data matrix of the characteristic object information according to a video depth analysis algorithm;
and respectively calculating a distance data matrix between each piece of characteristic object information and the automatic driving vehicle according to the attribute of the characteristic object information and the relation between the characteristic object information and the automatic driving vehicle so as to form depth data of the environment in front of the automatic driving vehicle according to the two-dimensional coordinate system data set or the data matrix and the distance data matrix.
With reference to the first aspect, in a second implementable manner of the first aspect, identifying, according to the appearance data, first category information of the target object includes:
extracting basic features of the target object from the appearance data;
performing feature matching on the basic features and reference features in a deep learning library;
and determining the category information of the reference features as first category information of the target object according to the feature matching result.
With reference to the first implementable manner of the first aspect, in a third implementable manner of the first aspect, the determining the stationary motion information of the target object includes:
defining the characteristic object information as a static object and a movable object according to the attribute of the characteristic object information;
marking all the static targets and the movable targets in a three-dimensional model of the environment in front of the automatic driving vehicle, and modeling the primarily recognized static targets as initial static areas;
when the autonomous vehicle is running, comparing and analyzing, through a video frame difference recognition algorithm, the difference variation of each frame of the stationary target in the video relative to the previous 1-3 frames to obtain the change rule of the stationary target and the stationary area, and adjusting or updating the stationary target and the stationary area according to the difference variation;
comparing and analyzing, through the video frame difference recognition algorithm, the difference variation of each frame of the movable target in the video relative to the previous 1-3 frames to obtain the change rule of the movable target, and comparing the change rule of the movable target with the change rules of the stationary target and the stationary area;
if the change rule of the movable target is the same as the change rule of the stationary target and the stationary region, judging that the movable target is in a stationary state;
if the change rule of the movable target is different from the change rule of the static target and the static area, judging that the movable target is in a motion state;
according to the change of the movable object in the motion state and the static area, the motion track of the movable object in the motion state is determined, and the track of the subsequent possible motion of the movable object in the motion state can be deduced.
With reference to the first aspect, in a fourth implementable manner of the first aspect, identifying, according to radar sensing data, second category information of the target object includes:
forming a gridding radiation unit by using the minimum angular resolution and the minimum distance resolution of the radar, and establishing a corresponding space model;
storing radar sensing data in different gridding radiation units according to a time axis;
determining different target objects according to different gridding radiation units and data of different time differences, recording morphological characteristic attributes, distance attributes and motion and static attributes of each target object, and realizing three-dimensional shadow imaging of the target objects in a space model while determining the target objects;
establishing a radar imaging library of all target objects, and recording and marking the characteristic attributes of the target objects acquired by each radar;
based on a deep learning technology, training the radar to recognize and image different angles and different states of different target objects according to the differences of the shapes, sizes, association degrees, intervals and motion states of the objects;
in the driving process of the automatic driving vehicle, comparing a radar imaging library according to the differences of the shapes, sizes, association degrees, intervals and motion states of target objects in data acquired by a radar so as to identify and distinguish different target objects;
extracting second category information of the target object from the corresponding radar imaging library;
and incorporating the new characteristic attribute of the target object newly acquired by the radar into an imaging library, providing a new analysis sample for deep learning, and correspondingly adjusting the second category information of the target object according to the new characteristic attribute.
In a second aspect, the present invention provides an object recognition apparatus for an autonomous vehicle, the apparatus comprising:
a first acquisition unit for acquiring video data ahead of the autonomous vehicle,
an extraction unit configured to extract two-dimensional data of an environment ahead of an autonomous vehicle from the video data;
a second acquisition unit for acquiring depth data of an environment in front of the autonomous vehicle;
the establishing unit is used for establishing a three-dimensional model of the environment in front of the automatic driving vehicle according to the two-dimensional data and the depth data;
the first identification unit is used for identifying a target object in the three-dimensional model to obtain appearance data of the target object;
the second identification unit is used for identifying the first class information of the target object according to the appearance data;
the judging unit is used for judging the static motion information of the target object;
the forming unit is used for forming a comprehensive view according to the three-dimensional model and the static motion information;
a third acquisition unit for acquiring radar data in front of the autonomous vehicle;
the third identification unit is used for identifying target objects in front of the automatic driving vehicle from the radar data to obtain radar perception data of each target object;
the fourth identification unit is used for identifying second category information of the target object according to the radar perception data;
a matching unit, configured to perform category matching on a target object identified from the three-dimensional model and a target object identified from the radar data according to the first category information and the second category information;
the projection unit is used for projecting the radar data into the comprehensive view according to the category matching result to obtain a comprehensive model;
and the projection unit is used for projecting the distance data between the automatic driving vehicle and the target object in the radar data to the comprehensive model.
With reference to the second aspect, in a first implementable manner of the second aspect, the second obtaining unit includes:
the first identification subunit is used for identifying and distinguishing the in-road information, the road boundary information and the out-of-boundary information from the video data in front of the automatic driving vehicle;
a second identifying subunit, configured to identify feature object information from the in-road information, the road boundary information, and the out-of-boundary information, respectively;
the positioning subunit is used for positioning a two-dimensional coordinate system data group or a data matrix of the characteristic object information according to a video depth analysis algorithm;
and the calculating subunit is used for respectively calculating a distance data matrix between each piece of characteristic object information and the automatic driving vehicle according to the attribute of the characteristic object information and the relationship between the characteristic object information and the automatic driving vehicle so as to obtain the depth data of the environment in front of the automatic driving vehicle according to the two-dimensional coordinate system data set or the data matrix and the distance data matrix.
With reference to the second aspect, in a second implementable manner of the second aspect, the second identifying unit includes:
a first extraction subunit, configured to extract a basic feature of the target object from the appearance data;
the matching subunit is used for performing feature matching on the basic features and reference features in a deep learning library;
and the first determining subunit is configured to determine, according to the feature matching result, the category information of the reference feature as the first category information of the target object.
With reference to the first implementable manner of the second aspect, in a third implementable manner of the second aspect, the determining unit includes:
a definition subunit, configured to define the characteristic object information as a stationary object and a movable object according to the attribute of the characteristic object information;
a marking subunit for marking both the stationary target and the movable target in a three-dimensional model of an environment in front of the autonomous vehicle, and modeling the primarily recognized stationary target as an initial stationary region;
the comparison subunit is used for comparing and analyzing the difference variation of each frame of the static target in the video relative to the first 1-3 frames through a video frame difference recognition algorithm when the automatic driving vehicle runs to obtain the variation rule of the static target and the static area, and adjusting or updating the static target and the static area according to the difference variation;
the comparison subunit is further configured to compare and analyze a difference variation of each frame of the movable target in the video with respect to the first 1-3 frames by using a video frame difference recognition algorithm to obtain a variation rule of the movable target, and compare the variation rule of the movable target with the variation rules of the stationary target and the stationary region;
a determination subunit for determining that the movable target is in a stationary state in a case where a change rule of the movable target is the same as a change rule of the stationary target and the stationary area; under the condition that the change rule of the movable target is different from the change rule of the static target and the static area, judging that the movable target is in a motion state;
and the second determining subunit is used for determining the motion track of the movable object in the motion state according to the change of the movable object in the motion state and the static area, and can deduce the track of the subsequent possible motion of the movable object in the motion state.
With reference to the second aspect, in a fourth implementable manner of the second aspect, the fourth identifying unit includes:
the first establishing subunit is used for forming a gridding radiation unit by utilizing the minimum angular resolution and the minimum distance resolution of the radar and establishing a corresponding spatial model;
the storage subunit is used for storing the radar sensing data in different gridding radiation units according to a time axis;
the third determining subunit is used for determining different target objects according to the data of different gridding radiation units and different time differences, recording morphological characteristic attributes, distance attributes and motion and static attributes of each target object, and realizing three-dimensional shadow imaging of the target objects in the space model while determining the target objects;
the second establishing subunit is used for establishing a radar imaging library of all target objects and recording and marking the characteristic attributes of the target objects acquired by each radar;
the training subunit is used for training the recognition and imaging capabilities of the radar to different angles and different states of different target objects according to the differences of the shapes, sizes, association degrees, intervals and motion states of the objects on the basis of a deep learning technology;
the third identification subunit is used for comparing the radar imaging library according to the differences in the aspects of the form, the size, the association degree, the distance and the motion state of the target object in the data acquired from the radar in the running process of the automatic driving vehicle so as to identify and distinguish different target objects;
the second extraction subunit is used for extracting second category information of the target object from the corresponding radar imaging library;
and the adjusting subunit is used for bringing the new characteristic attribute of the target object newly acquired by the radar into the imaging library, providing a new analysis sample for deep learning, and correspondingly adjusting the second category information of the target object according to the new characteristic attribute.
According to the above technical scheme, video is used to recognize object appearance data, hybrid detection by a high-precision imaging radar serves as a secondary supplement to that appearance detection, and the radar's accurate ranging is used to project multiple data types into one space. Using radar or lidar data together with data from only a single camera, the radar data is projected into the 3D model reconstructed from the video data, so object classification in a 3D view is genuinely achieved in automatic driving and the precise distance between the autonomous vehicle and each class of target is distinguished.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings needed in the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of an object recognition method of an autonomous vehicle according to the present invention.
Fig. 2 is a flowchart of an embodiment of an object recognition method for an autonomous vehicle according to the present invention.
Fig. 3 is a flowchart of an embodiment of an object recognition method for an autonomous vehicle according to the present invention.
Fig. 4 is a flowchart of an embodiment of an object recognition method for an autonomous vehicle according to the present invention.
Fig. 5 is a flowchart of an embodiment of an object recognition method for an autonomous vehicle according to the present invention.
Fig. 6 is a schematic diagram of an object recognition apparatus of an autonomous vehicle according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the specific embodiments of the present invention and the accompanying drawings. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The technical solutions provided by the embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of an object recognition method for an autonomous vehicle according to an embodiment of the present invention is shown, where an execution subject of the method may be a processor, and the method specifically includes:
step S101, video data in front of the autonomous vehicle is acquired.
In this embodiment, video data in front of the autonomous vehicle may be collected by a single front-facing camera.
Step S102, two-dimensional data of the environment in front of the automatic driving vehicle is extracted from the video data.
Step S103, obtaining depth data of the environment in front of the automatic driving vehicle.
As shown in fig. 2, in this embodiment, acquiring depth data of an environment in front of an autonomous vehicle may specifically include the following steps:
in step S201, in-road information, road boundary information, and out-of-boundary information are identified and distinguished from video data in front of the autonomous vehicle.
Step S202, respectively identifying characteristic object information from the road interior information, the road boundary information and the boundary exterior information.
Specifically, lane information, lane line information, zebra crossing information, motor/non-motor vehicle information, pedestrian information, traffic light information, other obstacle information, and the like may be identified in the in-road information.
From the road boundary information, landmark information along the two side edges of the road (such as fences, double yellow lines, yellow dotted lines, green belts and footpaths), end-boundary information at the front of the video area (such as the edge of a turning road or of a T-shaped intersection), signboards placed at the traffic edge and the like are identified.
From the out-of-boundary information, green belt information, motor/non-motor vehicle separation information, sidewalk information, opposite-lane information, building and parking lot information, entrance/exit information, information on vehicles traveling outside the boundary, information on pedestrians outside the boundary, information on other moving objects outside the boundary and the like are identified.
And S203, positioning a two-dimensional coordinate system data group or a data matrix of the characteristic object information according to a video depth analysis algorithm.
Step S204, respectively calculating a distance data matrix between each piece of characteristic object information and the automatic driving vehicle according to the attribute of the characteristic object information and the relation between the characteristic object information and the automatic driving vehicle, so as to form depth data of the environment in front of the automatic driving vehicle according to the two-dimensional coordinate system data set or the data matrix and the distance data matrix.
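As an illustration of what the resulting depth data might look like, the sketch below assembles a per-pixel distance matrix from feature objects, each given as a set of pixel coordinates plus an estimated distance; the data layout and names are assumptions for illustration, since the patent does not fix them.

```python
import numpy as np

def build_depth_data(frame_shape, features):
    """Assemble a per-pixel depth matrix from feature objects.

    `features` is a list of (pixel_coords, distance_m) pairs, where
    pixel_coords is an (N, 2) array of (row, col) positions belonging to
    one feature object and distance_m is that object's estimated range.
    Pixels not covered by any feature keep NaN (unknown depth).
    """
    depth = np.full(frame_shape, np.nan, dtype=float)
    for pixel_coords, distance_m in features:
        rows, cols = pixel_coords[:, 0], pixel_coords[:, 1]
        depth[rows, cols] = distance_m
    return depth

# Example: a 4x6 frame with a lane-line feature at ~12 m and a vehicle at ~30 m.
lane = (np.array([[3, 0], [3, 1], [3, 2]]), 12.0)
car = (np.array([[1, 4], [1, 5], [2, 4], [2, 5]]), 30.0)
print(build_depth_data((4, 6), [lane, car]))
```

Combined with the two-dimensional data of step S102, such a depth matrix provides the third coordinate needed for the three-dimensional model of step S104.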
And step S104, establishing a three-dimensional model of the environment in front of the automatic driving vehicle according to the two-dimensional data and the depth data.
And S105, identifying the target object in the three-dimensional model to obtain appearance data of the target object.
And step S106, identifying first class information of the target object according to the appearance data.
As shown in fig. 3, in this embodiment, identifying the first category information of the target object according to the appearance data may specifically include the following steps:
step S301, extracting the basic features of the target object from the appearance data.
And step S302, performing feature matching on the basic features and reference features in a deep learning library.
Step S303, determining the category information of the reference feature as the first category information of the target object according to the feature matching result.
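A minimal sketch of this matching step is shown below, assuming the basic features and reference features are numeric vectors and using cosine similarity as the matching criterion; the patent leaves the actual criterion to the deep learning library, so this is only one possible choice.

```python
import numpy as np

def match_first_category(basic_feature, reference_library, min_similarity=0.8):
    """Return the category of the best-matching reference feature.

    `reference_library` maps category name -> reference feature vector.
    Cosine similarity is used here purely for illustration.
    """
    best_category, best_score = None, -1.0
    f = basic_feature / np.linalg.norm(basic_feature)
    for category, ref in reference_library.items():
        score = float(f @ (ref / np.linalg.norm(ref)))
        if score > best_score:
            best_category, best_score = category, score
    return best_category if best_score >= min_similarity else "unknown"

library = {"pedestrian": np.array([0.9, 0.1, 0.2]),
           "vehicle":    np.array([0.1, 0.9, 0.3])}
print(match_first_category(np.array([0.15, 0.85, 0.25]), library))  # -> "vehicle"
```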
And step S107, judging the static motion information of the target object.
As shown in fig. 4, in this embodiment, the determining the still motion information of the target object may specifically include the following steps:
step S401 defines the characteristic object information as a stationary object and a movable object according to the attribute of the characteristic object information.
Specifically, for the attribute of each feature object information identified and distinguished by video, lane information, lane line information, zebra crossing information, isolated column information, double yellow line information, yellow dotted line information, green belt information, pedestrian path information, sign plate information and the like can be directly defined as static targets; and vehicles, people, animals, etc. are defined as moveable objects.
Step S402 marks both the stationary object and the movable object in the three-dimensional model of the environment in front of the autonomous vehicle, and models the initially recognized stationary object as an initial stationary region.
Step S403, when the autonomous vehicle is running, the difference variation of each frame of the stationary target in the video relative to the previous 1-3 frames is compared and analyzed through a video frame difference recognition algorithm to obtain the change rule of the stationary target and the stationary area, and the stationary target and the stationary area are adjusted or updated according to the difference variation.
And S404, comparing and analyzing the difference variable quantity of each frame of the movable target in the video relative to the previous 1-3 frames through a video frame difference recognition algorithm to obtain the change rule of the movable target, and comparing the change rule of the movable target with the change rules of the static target and the static area.
In step S405, if the change rule of the movable target is the same as the change rule of the stationary target and the stationary area, it is determined that the movable target is in the stationary state.
In step S406, if the change rule of the movable target is different from the change rule of the stationary target and the stationary area, it is determined that the movable target is in a moving state.
In step S407, according to the change of the moving-state movable object relative to the stationary area, the motion trajectory of that object is determined, and its likely subsequent trajectory can be inferred.
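The following sketch illustrates the stationary/moving decision of steps S405 and S406, comparing a movable target's frame-to-frame change against the change observed for the stationary region; the displacement measure and tolerance are assumptions for illustration.

```python
import numpy as np

def classify_motion(target_deltas, static_deltas, tolerance=0.15):
    """Decide whether a movable target is actually moving.

    `target_deltas` and `static_deltas` are per-frame displacement magnitudes
    (e.g. mean pixel shift versus the previous 1-3 frames) for the movable
    target and for the stationary region respectively. If the target changes
    in the same way the stationary background does (the apparent change being
    only the ego-motion of the vehicle), it is treated as stationary;
    otherwise it is treated as moving.
    """
    target = np.asarray(target_deltas, dtype=float)
    static = np.asarray(static_deltas, dtype=float)
    mismatch = np.mean(np.abs(target - static)) / (np.mean(np.abs(static)) + 1e-6)
    return "moving" if mismatch > tolerance else "stationary"

# A parked car changes just like the background as the ego vehicle drives...
print(classify_motion([2.0, 2.1, 1.9], [2.0, 2.0, 2.0]))   # -> stationary
# ...while an oncoming car changes much faster than the background does.
print(classify_motion([5.5, 6.0, 5.8], [2.0, 2.0, 2.0]))   # -> moving
```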
And S108, forming a comprehensive view according to the three-dimensional model and the static motion information.
Step S109, radar data in front of the autonomous vehicle is acquired; the radar in this embodiment is a high-precision imaging radar.
Step S110, target objects in front of the autonomous vehicle are identified from the radar data to obtain radar perception data of each target object.
And step S111, identifying second category information of the target object according to the radar perception data.
The radar adopted by the invention is a high-resolution radar using active remote sensing, so it can capture information about the target object from multiple angles, on a high-density grid and over very long distances, and it has a certain penetration capability, thereby obtaining more details of the target object.
As shown in fig. 5, in this embodiment, identifying the second category information of the target object according to the radar sensing data may specifically include the following steps:
and S501, forming a gridding radiation unit by using the minimum angle resolution and the minimum distance resolution of the radar, and establishing a corresponding space model. The gridding radiation unit is particularly a high-density gridding radiation unit.
Step S502, the radar sensing data are stored in the different gridding radiation units along the time axis; in particular, the time axis is divided finely enough across the different grid cells.
Step S503, according to the data of different gridding radiation units and different time differences, different target objects are determined, the morphological characteristic attribute, the distance attribute and the motion static attribute of each target object are recorded, and the target objects are determined and simultaneously three-dimensional shadow imaging of the target objects is realized in the space model.
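A minimal sketch of such gridding radiation units is given below, binning radar returns by angle and range at the radar's minimum resolutions and keying them by time step; the resolution values and the class interface are illustrative assumptions, not parameters specified by the patent.

```python
from collections import defaultdict

class GriddedRadiationUnits:
    """Bin radar returns into (angle, range) cells at the radar's minimum
    angular and distance resolution, keyed by time step.

    The resolution values below are placeholders; a real imaging radar
    would supply its own minimum angular and range resolution.
    """
    def __init__(self, angle_res_deg=0.5, range_res_m=0.25):
        self.angle_res = angle_res_deg
        self.range_res = range_res_m
        self.cells = defaultdict(list)   # (t, angle_bin, range_bin) -> intensities

    def add_return(self, t, angle_deg, range_m, intensity):
        a_bin = int(angle_deg // self.angle_res)
        r_bin = int(range_m // self.range_res)
        self.cells[(t, a_bin, r_bin)].append(intensity)

grid = GriddedRadiationUnits()
grid.add_return(t=0, angle_deg=12.3, range_m=35.7, intensity=0.8)
grid.add_return(t=0, angle_deg=12.4, range_m=35.8, intensity=0.7)
print(len(grid.cells))   # the two nearby returns land in adjacent range cells -> 2
```

Tracking how the occupied cells shift along the time axis is what allows different target objects, their distances and their motion or stillness to be separated in the space model.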
Step S504, a radar imaging library of all target objects is established, and the characteristic attributes of the target objects acquired by the radar each time are recorded and labeled.
Step S505, based on the deep learning technology, the radar is trained to recognize and image different angles and different states of different target objects according to differences in the form, size, association degree, distance and motion state of the objects.
Step S506, in the driving process of the automatic driving vehicle, comparing the radar imaging library according to the differences of the form, the size, the association degree, the distance and the motion state of the target object in the data acquired by the radar so as to identify and distinguish different target objects.
And step S507, extracting second category information of the target object from the corresponding radar imaging library.
Step S508, bringing new feature attributes of the target object newly acquired by the radar into the imaging library, providing a new analysis sample for deep learning, and correspondingly adjusting the second category information of the target object according to the new feature attributes.
And step S112, performing category matching on the target object identified from the three-dimensional model and the target object identified from the radar data according to the first category information and the second category information.
Step S113, the radar data is projected into the comprehensive view according to the category matching result to obtain a comprehensive model.
Step S114, the distance data between the autonomous vehicle and the target object in the radar data is projected into the comprehensive model.
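The sketch below illustrates steps S112 to S114 in the simplest possible form, matching video-recognized and radar-recognized targets by category label and attaching the radar range to the fused record; matching by category alone is a simplification for illustration, and the field names are hypothetical.

```python
def fuse_targets(video_targets, radar_targets):
    """Match video- and radar-recognized targets by category and attach
    the radar distance to the fused record.

    `video_targets`: list of dicts with 'id' and 'first_category'.
    `radar_targets`: list of dicts with 'second_category' and 'distance_m'.
    A real system would also use position to disambiguate several objects
    of the same class; category-only matching keeps the sketch short.
    """
    fused, used = [], set()
    for vt in video_targets:
        for i, rt in enumerate(radar_targets):
            if i not in used and rt["second_category"] == vt["first_category"]:
                fused.append({**vt, "distance_m": rt["distance_m"]})
                used.add(i)
                break
    return fused

video = [{"id": "obj-1", "first_category": "vehicle"},
         {"id": "obj-2", "first_category": "pedestrian"}]
radar = [{"second_category": "pedestrian", "distance_m": 18.4},
         {"second_category": "vehicle", "distance_m": 42.0}]
print(fuse_targets(video, radar))   # obj-1 gets 42.0 m, obj-2 gets 18.4 m
```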
It can be seen from the above embodiments that, in the object recognition method for an autonomous vehicle according to the present invention, video is used to recognize object appearance data, hybrid detection by a high-precision imaging radar serves as a secondary supplement to that appearance detection, and the radar's accurate ranging is used to project multiple data types into one space. The method does not depend on a "ground flatness" assumption; instead, using radar or lidar data and the data of only one camera, the radar data is projected into the 3D model reconstructed from the video data, so object classification in a 3D view is genuinely achieved in automatic driving and the precise distance between the autonomous vehicle and each class of target is distinguished.
Referring to fig. 6, the present invention provides an object recognition apparatus for an autonomous vehicle, the apparatus including:
a first acquisition unit 601 for acquiring video data ahead of the autonomous vehicle,
an extraction unit 602 for extracting two-dimensional data of an environment in front of the autonomous vehicle from the video data.
A second acquisition unit 603 for acquiring depth data of the environment in front of the autonomous vehicle.
A building unit 604 for building a three-dimensional model of the environment in front of the autonomous vehicle based on the two-dimensional data and the depth data.
The first identification unit 605 is configured to identify a target object in the three-dimensional model, and obtain appearance data of the target object.
A second identifying unit 606, configured to identify the first category information of the target object according to the appearance data.
A determining unit 607 for determining the still motion information of the target object.
A forming unit 608, configured to form a comprehensive view according to the three-dimensional model and the static motion information.
A third acquisition unit 609 for acquiring radar data in front of the autonomous vehicle.
A third identifying unit 610, configured to identify a target object in front of the autonomous vehicle from the radar data, and obtain radar sensing data of each target object.
A fourth identifying unit 611, configured to identify the second category information of the target object according to the radar sensing data.
A matching unit 612, configured to perform category matching on the target object identified from the three-dimensional model and the target object identified from the radar data according to the first category information and the second category information.
And a projection unit 613, configured to project the radar data into the comprehensive view according to the category matching result, so as to obtain a comprehensive model.
A projection unit 614, configured to project distance data between an autonomous vehicle and the target object in the radar data to the comprehensive model.
In this embodiment, the second obtaining unit 603 may include:
the first identification subunit is used for identifying and distinguishing the in-road information, the road boundary information and the out-of-boundary information from the video data in front of the automatic driving vehicle.
And the second identification subunit is used for identifying characteristic object information from the in-road information, the road boundary information and the out-of-boundary information respectively.
And the positioning subunit is used for positioning the two-dimensional coordinate system data group or the data matrix of the characteristic object information according to a video depth analysis algorithm.
And the calculating subunit is used for respectively calculating a distance data matrix between each piece of characteristic object information and the automatic driving vehicle according to the attribute of the characteristic object information and the relationship between the characteristic object information and the automatic driving vehicle so as to obtain the depth data of the environment in front of the automatic driving vehicle according to the two-dimensional coordinate system data set or the data matrix and the distance data matrix.
In this embodiment, the second identifying unit 606 may include:
and the first extraction subunit is used for extracting the basic features of the target object from the appearance data.
And the matching subunit is used for performing feature matching on the basic features and the reference features in the deep learning library.
And the first determining subunit is configured to determine, according to the feature matching result, the category information of the reference feature as the first category information of the target object.
In this embodiment, the determining unit 607 may include:
and a definition subunit, configured to define the characteristic object information as a stationary object and a movable object according to the attribute of the characteristic object information.
A marking subunit for marking both the stationary object and the movable object in a three-dimensional model of an environment in front of the autonomous vehicle and modeling the initially recognized stationary object as an initial stationary region.
And the comparison subunit is used for comparing and analyzing the difference variation of each frame of the static target in the video relative to the first 1-3 frames through a video frame difference recognition algorithm when the automatic driving vehicle runs to obtain the variation rule of the static target and the static area, and adjusting or updating the static target and the static area according to the difference variation.
The comparison subunit is further configured to compare and analyze a difference variation of each frame of the movable target in the video with respect to the first 1-3 frames by using a video frame difference identification algorithm to obtain a variation rule of the movable target, and compare the variation rule of the movable target with the variation rules of the stationary target and the stationary region.
And a determination subunit for determining that the movable target is in the stationary state in a case where a change rule of the movable target is the same as a change rule of the stationary target and the stationary area. In the case where the change rule of the movable target is different from the change rule of the stationary target and the stationary area, it is determined that the movable target is in a moving state.
And the second determining subunit is used for determining the motion track of the movable object in the motion state according to the change of the movable object in the motion state and the static area, and can deduce the track of the subsequent possible motion of the movable object in the motion state.
In this embodiment, the fourth identifying unit 611 may include:
and the first establishing subunit is used for forming a gridding radiation unit by using the minimum angular resolution and the minimum distance resolution of the radar and establishing a corresponding spatial model.
And the storage subunit is used for storing the radar perception data in different gridding radiation units according to the time axis.
And the third determining subunit is used for determining different target objects according to the data of different gridding radiation units and different time differences, recording morphological characteristic attributes, distance attributes and motion and static attributes of each target object, and realizing three-dimensional shadow imaging of the target objects in the space model while determining the target objects.
And the second establishing subunit is used for establishing a radar imaging library of all the target objects and recording and labeling the characteristic attributes of the target objects acquired by the radar each time.
And the training subunit is used for training the radar to recognize and image the different angles and different states of different target objects according to the differences of the shapes, sizes, association degrees, intervals and motion states of the objects on the basis of the deep learning technology.
And the third identification subunit is used for comparing the radar imaging library according to the differences in the aspects of the form, the size, the association degree, the distance and the motion state of the target object in the data acquired from the radar in the running process of the automatic driving vehicle so as to identify and distinguish different target objects.
And the second extraction subunit is used for extracting the second category information of the target object from the corresponding radar imaging library.
And the adjusting subunit is used for bringing the new characteristic attribute of the target object newly acquired by the radar into the imaging library, providing a new analysis sample for deep learning, and correspondingly adjusting the second category information of the target object according to the new characteristic attribute.
Embodiments of the present invention further provide a storage medium storing a computer program which, when executed by a processor, implements some or all of the steps in each embodiment of the object recognition method for an autonomous vehicle provided by the present invention. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a random access memory (RAM).
Those skilled in the art will readily appreciate that the techniques of the embodiments of the present invention may be implemented as software plus a required general purpose hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The same and similar parts in the various embodiments in this specification may be referred to each other. In particular, as for the embodiment of the object recognition device of the autonomous vehicle, since it is basically similar to the embodiment of the method, the description is relatively simple, and the relevant points can be referred to the description in the embodiment of the method.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (8)

1. An object recognition method of an autonomous vehicle, the method comprising:
acquiring video data in front of an autonomous vehicle;
extracting two-dimensional data of an environment in front of the autonomous vehicle from the video data;
obtaining depth data of an environment in front of an autonomous vehicle;
establishing a three-dimensional model of the environment in front of the automatic driving vehicle according to the two-dimensional data and the depth data;
identifying a target object in the three-dimensional model to obtain appearance data of the target object;
identifying first class information of the target object according to the appearance data;
judging the static motion information of the target object;
forming a comprehensive view according to the three-dimensional model and the static motion information;
acquiring radar data in front of an automatic driving vehicle, wherein the radar data is acquired by a radar which is in active remote sensing and can capture multi-angle and grid information of a target object;
identifying target objects in front of the automatic driving vehicle from the radar data to obtain radar perception data of each target object;
identifying second category information of the target object according to the radar perception data;
performing category matching on the target object identified from the three-dimensional model and the target object identified from the radar data according to the first category information and the second category information;
projecting the radar data into the comprehensive view according to a category matching result to obtain a comprehensive model;
and projecting the distance data between the automatic driving vehicle and the target object in the radar data to the comprehensive model.
2. The method of claim 1, wherein obtaining depth data for an environment in front of an autonomous vehicle comprises:
identifying and distinguishing in-road information, road boundary information and out-of-boundary information from video data in front of an autonomous vehicle;
identifying feature object information from the in-road information, the road boundary information, and the out-of-boundary information, respectively;
positioning a two-dimensional coordinate system data group or a data matrix of the characteristic object information according to a video depth analysis algorithm;
and respectively calculating a distance data matrix between each piece of characteristic object information and the automatic driving vehicle according to the attribute of the characteristic object information and the relation between the characteristic object information and the automatic driving vehicle so as to form depth data of the environment in front of the automatic driving vehicle according to the two-dimensional coordinate system data set or the data matrix and the distance data matrix.
3. The method of claim 1, wherein identifying a first category of information for the target object based on the appearance data comprises:
extracting basic features of the target object from the appearance data;
performing feature matching on the basic features and reference features in a deep learning library;
and determining the category information of the reference features as first category information of the target object according to the feature matching result.
4. The method of claim 2, wherein determining the stationary motion information of the target object comprises:
defining the characteristic object information as a static object and a movable object according to the attribute of the characteristic object information;
marking all the static targets and the movable targets in a three-dimensional model of the environment in front of the automatic driving vehicle, and modeling the primarily recognized static targets as initial static areas;
when the automatic driving vehicle runs, comparing and analyzing the difference variable quantity of each frame of the static target in the video relative to the first 1-3 frames through a video frame difference recognition algorithm to obtain the change rule of the static target and the static area, and adjusting or updating the static target and the static area according to the difference variable quantity;
comparing and analyzing the difference variable quantity of each frame of the movable target in the video relative to the first 1-3 frames by a video frame difference identification algorithm to obtain the change rule of the movable target, and comparing the change rule of the movable target with the change rules of the static target and the static area;
if the change rule of the movable target is the same as the change rule of the stationary target and the stationary region, judging that the movable target is in a stationary state;
if the change rule of the movable target is different from the change rule of the static target and the static area, judging that the movable target is in a motion state;
according to the change of the movable object in the motion state and the static area, the motion track of the movable object in the motion state is determined, and the track of the subsequent possible motion of the movable object in the motion state can be deduced.
5. An object recognition device for an autonomous vehicle, the device comprising:
a first acquisition unit for acquiring video data ahead of the autonomous vehicle,
an extraction unit configured to extract two-dimensional data of an environment ahead of an autonomous vehicle from the video data;
a second acquisition unit for acquiring depth data of an environment in front of the autonomous vehicle;
the establishing unit is used for establishing a three-dimensional model of the environment in front of the automatic driving vehicle according to the two-dimensional data and the depth data;
the first identification unit is used for identifying a target object in the three-dimensional model to obtain appearance data of the target object;
the second identification unit is used for identifying the first class information of the target object according to the appearance data;
the judging unit is used for judging the static motion information of the target object;
the forming unit is used for forming a comprehensive view according to the three-dimensional model and the static motion information;
the third acquisition unit is used for acquiring radar data in front of the automatic driving vehicle, wherein the radar data is acquired by a radar which is in active remote sensing and can capture multi-angle and grid information of a target object;
the third identification unit is used for identifying target objects in front of the automatic driving vehicle from the radar data to obtain radar perception data of each target object;
the fourth identification unit is used for identifying second category information of the target object according to the radar perception data;
a matching unit, configured to perform category matching on a target object identified from the three-dimensional model and a target object identified from the radar data according to the first category information and the second category information;
the projection unit is used for projecting the radar data into the comprehensive view according to the category matching result to obtain a comprehensive model;
and the projection unit is used for projecting the distance data between the automatic driving vehicle and the target object in the radar data to the comprehensive model.
6. The apparatus of claim 5, wherein the second obtaining unit comprises:
the first identification subunit is used for identifying and distinguishing the in-road information, the road boundary information and the out-of-boundary information from the video data in front of the automatic driving vehicle;
a second identifying subunit, configured to identify feature object information from the in-road information, the road boundary information, and the out-of-boundary information, respectively;
the positioning subunit is used for positioning a two-dimensional coordinate system data group or a data matrix of the characteristic object information according to a video depth analysis algorithm;
and the calculating subunit is used for respectively calculating a distance data matrix between each piece of characteristic object information and the automatic driving vehicle according to the attribute of the characteristic object information and the relationship between the characteristic object information and the automatic driving vehicle so as to obtain the depth data of the environment in front of the automatic driving vehicle according to the two-dimensional coordinate system data set or the data matrix and the distance data matrix.
7. The apparatus of claim 5, wherein the second identification unit comprises:
a first extraction subunit, configured to extract a basic feature of the target object from the appearance data;
the matching subunit is used for performing feature matching on the basic features and reference features in a deep learning library;
and the first determining subunit is configured to determine, according to the feature matching result, the category information of the reference feature as the first category information of the target object.
8. The apparatus of claim 6, wherein the determining unit comprises:
the defining subunit is used for classifying the characteristic object information into stationary targets and movable targets according to the attribute of the characteristic object information;
the marking subunit is used for marking both the stationary targets and the movable targets in a three-dimensional model of the environment in front of the automatic driving vehicle, and for modelling the preliminarily recognized stationary targets as an initial stationary region;
the comparison subunit is used for, while the automatic driving vehicle is travelling, comparing and analysing through a video frame difference recognition algorithm the difference variation of each video frame of a stationary target relative to the preceding 1-3 frames, so as to obtain the variation rule of the stationary target and the stationary region, and for adjusting or updating the stationary target and the stationary region according to the difference variation;
the comparison subunit is further used for comparing and analysing, through the video frame difference recognition algorithm, the difference variation of each video frame of a movable target relative to the preceding 1-3 frames, so as to obtain the variation rule of the movable target, and for comparing the variation rule of the movable target with the variation rules of the stationary target and the stationary region;
the determination subunit is used for determining that the movable target is in a stationary state when the variation rule of the movable target is the same as that of the stationary target and the stationary region, and for determining that the movable target is in a motion state when the variation rule of the movable target differs from that of the stationary target and the stationary region;
and the second determining subunit is used for determining the motion trajectory of a movable target in the motion state according to the changes of that target relative to the stationary region, and for inferring the trajectory along which the movable target in the motion state may subsequently move.
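An illustrative sketch of the frame-difference comparison of claim 8: the change pattern of a movable target's image region is compared with that of a known stationary region, and the target is treated as stationary when the two patterns track each other. The region boxes, the tolerance and the grayscale-frame representation are assumptions for illustration, not part of the claim.

    import numpy as np

    def region_change(frames, box):
        # Mean absolute per-pixel difference of an image region over consecutive frames.
        # frames: list of 2-D grayscale arrays; box: (x0, y0, x1, y1).
        x0, y0, x1, y1 = box
        crops = [f[y0:y1, x0:x1].astype(float) for f in frames]
        return np.array([np.abs(b - a).mean() for a, b in zip(crops, crops[1:])])

    def is_stationary(frames, target_box, stationary_box, tol=2.0):
        # If the movable target's frame-to-frame change pattern tracks that of a
        # known stationary region, treat it as stationary; otherwise as moving.
        target_pattern = region_change(frames, target_box)
        reference_pattern = region_change(frames, stationary_box)
        return bool(np.all(np.abs(target_pattern - reference_pattern) < tol))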
CN202010233505.8A 2020-03-30 2020-03-30 Object recognition method and device for automatic driving vehicle Active CN111126363B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010233505.8A CN111126363B (en) 2020-03-30 2020-03-30 Object recognition method and device for automatic driving vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010233505.8A CN111126363B (en) 2020-03-30 2020-03-30 Object recognition method and device for automatic driving vehicle

Publications (2)

Publication Number Publication Date
CN111126363A CN111126363A (en) 2020-05-08
CN111126363B (en) 2020-06-26

Family

ID=70493974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010233505.8A Active CN111126363B (en) 2020-03-30 2020-03-30 Object recognition method and device for automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN111126363B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113359673B (en) * 2020-06-29 2022-09-30 钧捷智能(深圳)有限公司 Automatic driving automobile performance judgment system based on big data
CN112113536B (en) * 2020-08-10 2022-10-04 浙江吉利汽车研究院有限公司 Vehicle-mounted camera ranging method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014133246A1 (en) * 2013-02-28 2014-09-04 Samsung Techwin Co., Ltd Mini integrated control device
CN108062569A (en) * 2017-12-21 2018-05-22 Donghua University Driving decision-making method for unmanned vehicles based on infrared and radar

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017157967A1 (en) * 2016-03-14 2017-09-21 Imra Europe Sas Processing method of a 3d point cloud

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014133246A1 (en) * 2013-02-28 2014-09-04 Samsung Techwin Co., Ltd Mini integrated control device
CN108062569A (en) * 2017-12-21 2018-05-22 Donghua University Driving decision-making method for unmanned vehicles based on infrared and radar

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lidar-IMU and Wheel Odometer Based Autonomous Vehicle Localization System; Shaojiang Zhang et al.; 2019 Chinese Control And Decision Conference; 2019-06-30; 1-6 *
Obstacle detection method for autonomous vehicles based on 3D lidar and depth images; Wang Xinzhu et al.; Journal of Jilin University (Engineering and Technology Edition); 2016-03-31; Vol. 46, No. 2; 360-365 *

Also Published As

Publication number Publication date
CN111126363A (en) 2020-05-08

Similar Documents

Publication Publication Date Title
Oniga et al. Processing dense stereo data using elevation maps: Road surface, traffic isle, and obstacle detection
CN105957342B Lane-level road mapping method and system based on crowdsourced spatio-temporal big data
Toulminet et al. Vehicle detection by means of stereo vision-based obstacles features extraction and monocular pattern analysis
JP4328692B2 (en) Object detection device
US8154594B2 (en) Mobile peripheral monitor
CN110379168B (en) Traffic vehicle information acquisition method based on Mask R-CNN
Nedevschi et al. A sensor for urban driving assistance systems based on dense stereovision
CN104700414A Rapid distance measurement method for pedestrians on the road ahead based on a vehicle-mounted binocular camera
CN108594244B (en) Obstacle recognition transfer learning method based on stereoscopic vision and laser radar
CN108645375B (en) Rapid vehicle distance measurement optimization method for vehicle-mounted binocular system
JP2006053757A (en) Plane detector and detection method
CN110197173B (en) Road edge detection method based on binocular vision
CN106446785A (en) Passable road detection method based on binocular vision
CN112346463B (en) Unmanned vehicle path planning method based on speed sampling
CN111611853A (en) Sensing information fusion method and device and storage medium
CN111880191B (en) Map generation method based on multi-agent laser radar and visual information fusion
CN111126363B (en) Object recognition method and device for automatic driving vehicle
Vu et al. Traffic sign detection, state estimation, and identification using onboard sensors
Petrovai et al. A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices
CN114295139A (en) Cooperative sensing positioning method and system
CN116573017A (en) Urban rail train running clearance foreign matter sensing method, system, device and medium
CN112699748B (en) Human-vehicle distance estimation method based on YOLO and RGB image
Fakhfakh et al. Weighted v-disparity approach for obstacles localization in highway environments
Dueholm et al. Multi-perspective vehicle detection and tracking: Challenges, dataset, and metrics
KR20210098534A (en) Methods and systems for creating environmental models for positioning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant