CN117284320A - Vehicle feature recognition method and system for point cloud data

Vehicle feature recognition method and system for point cloud data

Info

Publication number: CN117284320A
Application number: CN202311119497.4A
Authority: CN (China)
Prior art keywords: vehicle; point cloud data; three-dimensional; image
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 闫军, 王伟, 冯澍
Current/Original Assignee: Smart Intercommunication Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Priority/Filing date: 2023-09-01 (the priority date is an assumption and is not a legal conclusion)
Priority to: CN202311119497.4A
Publication: CN117284320A (pending)

Classifications

    • G: PHYSICS
        • G06: COMPUTING; CALCULATING OR COUNTING
            • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 10/00: Arrangements for image or video recognition or understanding
                    • G06V 10/10: Image acquisition
                        • G06V 10/16: Image acquisition using multiple overlapping images; Image stitching
                    • G06V 10/20: Image preprocessing
                        • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
                            • G06V 10/267: Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
                    • G06V 10/40: Extraction of image or video features
                        • G06V 10/62: Extraction of features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
                    • G06V 10/70: Arrangements using pattern recognition or machine learning
                        • G06V 10/74: Image or video pattern matching; Proximity measures in feature spaces
                            • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; Using context analysis; Selection of dictionaries
                                • G06V 10/752: Contour matching
                            • G06V 10/761: Proximity, similarity or dissimilarity measures
                        • G06V 10/762: Using clustering, e.g. of similar faces in social networks
                • G06V 20/00: Scenes; Scene-specific elements
                    • G06V 20/50: Context or environment of the image
                        • G06V 20/56: Context or environment of the image exterior to a vehicle, by using sensors mounted on the vehicle
                            • G06V 20/588: Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
                    • G06V 20/60: Type of objects
                        • G06V 20/64: Three-dimensional objects
                • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
                    • G06V 2201/08: Detecting or categorising vehicles
    • B: PERFORMING OPERATIONS; TRANSPORTING
        • B60: VEHICLES IN GENERAL
            • B60W: CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
                • B60W 60/00: Drive control systems specially adapted for autonomous road vehicles
                    • B60W 60/001: Planning or execution of driving tasks
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
        • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
            • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
                • Y02T 10/00: Road transport of goods or passengers
                    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
                        • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a vehicle feature recognition method and system for point cloud data, belonging to the field of intelligent driving. The method comprises the following steps: acquiring images of the surrounding environment of a target vehicle to obtain an environment image acquisition result; performing image segmentation on the environment image acquisition result to obtain an environment image segmentation result; performing three-dimensional scene restoration according to the segmentation result to obtain a three-dimensional environment scene of the target vehicle; acquiring laser point cloud data of the target vehicle, distributing the laser point cloud data in three dimensions, and constructing three-dimensional point cloud scene data; extracting vehicle point cloud data from the three-dimensional point cloud scene data and generating a vehicle running prediction track based on the vehicle point cloud data; and performing automatic driving control of the target vehicle according to the predicted track. The scheme solves the technical problems of poor recognition accuracy and poor automatic driving control effect in prior-art vehicle feature recognition systems, and achieves the technical effects of high-precision vehicle feature recognition and accurate automatic driving control.

Description

Vehicle feature recognition method and system for point cloud data
Technical Field
The invention relates to the field of intelligent driving, and in particular to a vehicle feature recognition method and system for point cloud data.
Background
With the development of artificial intelligence technology, environmental perception and automatic driving control for intelligent vehicles have improved greatly. Existing vehicle feature recognition systems mainly recognize vehicle features from two-dimensional images captured by cameras. They can recognize vehicle features and support automatic driving control under specific conditions, but their recognition accuracy is strongly affected by factors such as ambient light and image resolution, and the three-dimensional structure of a vehicle cannot be recognized accurately.
Disclosure of Invention
The application provides a vehicle feature recognition method and system for point cloud data, aiming to solve the technical problems of poor recognition accuracy and poor automatic driving control effect in prior-art vehicle feature recognition systems.
In view of the above, the present application provides a vehicle feature recognition method and system for point cloud data.
In a first aspect of the present disclosure, a method for identifying a vehicle feature of point cloud data is provided, where the method includes: image acquisition is carried out on the surrounding environment of the target vehicle through a plurality of CCD image sensors, and an environment image acquisition result is obtained; image segmentation is carried out on the environmental image acquisition result to obtain an environmental image segmentation result; performing three-dimensional scene restoration according to the environmental image segmentation result to obtain a three-dimensional environmental scene of the target vehicle; acquiring laser point cloud data of a target vehicle through a vehicle-mounted laser radar, and performing three-dimensional distribution on the laser point cloud data based on a three-dimensional environment scene to construct three-dimensional point cloud scene data; extracting vehicle point cloud data in the three-dimensional point cloud scene data, and generating a vehicle running prediction track based on the vehicle point cloud data; and performing automatic driving control on the target vehicle according to the vehicle running prediction track.
In another aspect of the present disclosure, a vehicle feature identification system for point cloud data is provided, the system comprising: the environment image acquisition module is used for acquiring images of the surrounding environment of the target vehicle through a plurality of CCD image sensors to obtain an environment image acquisition result; the environment image segmentation module is used for carrying out image segmentation on the environment image acquisition result to obtain an environment image segmentation result; the three-dimensional scene restoration module is used for carrying out three-dimensional scene restoration according to the environmental image segmentation result to obtain a three-dimensional environmental scene of the target vehicle; the point cloud data distribution module is used for acquiring laser point cloud data of a target vehicle through the vehicle-mounted laser radar, and carrying out three-dimensional distribution on the laser point cloud data based on a three-dimensional environment scene to construct three-dimensional point cloud scene data; the vehicle data extraction module is used for extracting vehicle point cloud data in the three-dimensional point cloud scene data and generating a vehicle running prediction track based on the vehicle point cloud data; and the automatic driving control module is used for carrying out automatic driving control on the target vehicle according to the vehicle running prediction track.
One or more technical solutions provided in the present application have at least the following technical effects or advantages:
environmental images around the target vehicle are acquired through a plurality of CCD image sensors, and image segmentation and three-dimensional scene restoration are performed to obtain three-dimensional environment information around the target vehicle; laser point cloud data of the target vehicle is acquired through a vehicle-mounted laser radar and distributed in three dimensions under registration with the three-dimensional environment scene, constructing a three-dimensional point cloud scene of the target vehicle; within this scene, the vehicle point cloud is extracted and a vehicle running prediction track is generated from it. Automatic driving control of the target vehicle based on the vehicle running prediction track solves the technical problems of poor recognition precision and poor automatic driving control effect in prior-art vehicle feature recognition systems, and achieves the technical effects of high-precision vehicle feature recognition and accurate automatic driving control.
The foregoing is only an overview of the technical solutions of the present application. To make the technical means of the present application clearer and implementable according to the contents of the specification, and to make the above and other objects, features and advantages of the present application more comprehensible, a detailed description of the present application is given below.
Drawings
Fig. 1 is a schematic flow chart of a possible vehicle feature recognition method of point cloud data according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a possible process of obtaining an environmental image acquisition result in a vehicle feature recognition method of point cloud data according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a possible three-dimensional distribution in a vehicle feature recognition method of point cloud data according to an embodiment of the present application;
fig. 4 is a schematic diagram of a possible structure of a vehicle feature recognition system with point cloud data according to an embodiment of the present application.
Reference numerals: environment image acquisition module 11; environment image segmentation module 12; three-dimensional scene restoration module 13; point cloud data distribution module 14; vehicle data extraction module 15; automatic driving control module 16.
Detailed Description
The overall concept of the technical solution provided by the application is as follows:
The embodiment of the application provides a vehicle feature recognition method and system for point cloud data. Firstly, environment images around the target vehicle are acquired through a plurality of CCD image sensors, and the three-dimensional space information around the target vehicle is recovered through image segmentation and three-dimensional scene restoration. Meanwhile, the vehicle-mounted laser radar collects laser point cloud data of the target vehicle. Under registration with the three-dimensional environment scene, the laser point cloud data is distributed in three dimensions to construct a three-dimensional point cloud scene of the target vehicle. Within this scene, the vehicle point cloud is extracted, and a prediction of the vehicle's running track is generated from the data features of the vehicle point cloud. Based on the vehicle running prediction track, automatic driving control of the target vehicle is realized.
Having described the basic principles of the present application, various non-limiting embodiments of the present application will now be described in detail with reference to the accompanying drawings.
Example 1
As shown in Fig. 1, an embodiment of the present application provides a vehicle feature recognition method for point cloud data, where the method is applied to a vehicle feature recognition system, and the system is communicatively connected with a vehicle-mounted laser radar and a CCD image sensor.
In the embodiment of the application, the vehicle feature recognition method for point cloud data is applied to a vehicle feature recognition system that acquires feature information of a target vehicle in order to realize automatic driving control of the target vehicle. The vehicle feature recognition system is communicatively connected, in a wired or wireless manner, with the vehicle-mounted laser radar and the CCD image sensors: the laser radar collects laser point cloud data, and the CCD image sensors collect environment images of the target vehicle's surroundings.
The vehicle feature recognition method comprises the following steps:
step S100: image acquisition is carried out on the surrounding environment of the target vehicle through a plurality of CCD image sensors, and an environment image acquisition result is obtained;
as shown in fig. 2, step S100 further includes the steps of:
step S110: image acquisition is carried out on the surrounding environment of the target vehicle through a plurality of CCD image sensors, so that a plurality of image acquisition results are obtained, wherein the CCD image sensors have position marks;
step S120: image denoising is carried out on a plurality of image acquisition results by using a mean value filtering algorithm, and a plurality of image denoising results are obtained;
step S130: performing image enhancement on a plurality of image denoising results through gray level transformation to obtain a plurality of image enhancement results;
step S140: and performing image stitching on a plurality of image enhancement results according to the position identification of the image sensor to obtain the environment image acquisition result.
In the embodiment of the application, the CCD image sensor is an image acquisition device that captures images of the surrounding environment of the target vehicle. A plurality of CCD image sensors are mounted at different positions on the target vehicle to obtain panoramic coverage of its surroundings, and each sensor carries a position mark indicating its mounting position. First, the CCD image sensors are started to capture original images of the surrounding environment, yielding a plurality of image acquisition results. Then, a mean value filtering algorithm is applied to each acquired image: the average gray value over each pixel's neighborhood is computed and the central pixel is replaced by that average, denoising the image and yielding a plurality of image denoising results. Next, gray level transformation is applied to the denoised images; by adjusting the gray range and gray distribution, contrast and detail are enhanced, yielding a plurality of image enhancement results. Finally, the enhanced images are stitched according to the position marks of the CCD image sensors to obtain a panoramic image of the surrounding environment of the target vehicle, namely the environment image acquisition result.
By acquiring the environment images with CCD image sensors, denoising them with a mean value filtering algorithm, enhancing them by gray level transformation, and stitching them according to the position marks, the environment image acquisition result is obtained. This realizes effective acquisition and preprocessing of the environment image and provides the image information needed for the subsequent environment image segmentation and three-dimensional scene restoration, as sketched below.
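As one way to make steps S110 to S140 concrete, the following Python sketch (OpenCV and NumPy) chains the mean filter, gray level transformation and position-ordered stitching; the dictionary keying, 3x3 kernel and percentile stretch are illustrative assumptions, not prescribed by the application:

```python
import cv2
import numpy as np

def preprocess_and_stitch(images_by_position):
    """Steps S110-S140 in miniature: mean-filter, contrast-stretch, and
    stitch grayscale frames keyed by a sensor position mark.

    images_by_position: dict {position_index: HxW uint8 frame}; equal frame
    heights and a left-to-right position ordering are assumed here.
    """
    processed = []
    for pos in sorted(images_by_position):           # S140: order by position mark
        img = images_by_position[pos]
        img = cv2.blur(img, (3, 3))                  # S120: mean (box) filter denoising
        lo, hi = np.percentile(img, (1, 99))         # S130: gray level transformation
        img = np.clip((img - lo) * 255.0 / max(hi - lo, 1.0), 0, 255).astype(np.uint8)
        processed.append(img)
    return cv2.hconcat(processed)                    # naive stitch; real systems blend overlaps
```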
Step S200: image segmentation is carried out on the environmental image acquisition result to obtain an environmental image segmentation result;
further, the step S200 includes the following steps:
step S210: presetting the number of pixel clusters, and determining N clustering center points in the environmental image acquisition result according to the number of pixel clusters, wherein N is the number of pixel clusters, and N is an integer greater than 1;
step S220: determining N clustering neighborhoods according to the N clustering center points, and calculating gradient values of pixel points in the N clustering neighborhoods;
step S230: transferring N clustering center points to positions with minimum gradient values of pixel points in N clustering neighborhoods according to the gradient value calculation result to obtain N first clustering center points;
step S240: searching the pixel points in N clustering neighborhoods according to preset searching distances based on N first clustering center points, carrying out average distance calculation on the searched pixel points, and determining N second clustering center points according to an average distance calculation result;
step S250: and continuously performing iterative clustering, and obtaining an image segmentation result when the iterative clustering times meet a preset iterative times threshold value, and taking the image segmentation result as an environment image segmentation result.
In the embodiment of the application, the acquired environment image acquisition result is segmented to obtain the environment image segmentation result. First, the number of pixel clusters N is preset, and N pixel points are randomly selected in the environment image acquisition result as initial clustering center points. The number of pixel clusters is the number of region blocks after segmentation, with N greater than 1; a clustering center point is the center position of a region block. Then, around each of the N clustering center points, a neighborhood of preset radius is taken, giving N clustering neighborhoods; each neighborhood contains one clustering center point and its surrounding pixel points. Next, the gradient value of every pixel point in each clustering neighborhood is calculated: the gray values of two adjacent pixel points are taken and their difference is used as the gradient value. Repeating this operation for every pixel point determines the clustering neighborhood of each clustering center point and the gradient values of all pixel points within it.
Then, within each clustering neighborhood, the pixel point with the minimum gradient value is found from the calculated gradients, and the corresponding clustering center point is moved to that pixel position. Doing this for all N neighborhoods shifts the N clustering center points to the minimum-gradient positions, giving N first clustering center points. Next, with each first clustering center point as the center, its neighborhood is searched within a preset search distance, and the average distance from the searched pixel points to the first clustering center point is computed; the pixel point closest to this average distance is selected as the second clustering center point, determining N second clustering center points. The N second clustering center points then serve as new clustering center points, and the procedure of redefining neighborhoods, computing gradient values, shifting center points and searching is repeated. The iteration continues until the number of iterations reaches the preset threshold, at which point the N region blocks delimited by the clustering center points are taken as the image segmentation result, namely the environment image segmentation result.
Segmentation of the environment image acquisition result is thus realized by setting initial clustering center points, determining clustering neighborhoods, computing gradient values, shifting the center points and iterating the clustering, and the resulting environment image segmentation result provides the basis for the subsequent three-dimensional scene restoration. A simplified sketch of this procedure follows.
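For orientation only, here is a compact Python sketch of the clustering segmentation of steps S210 to S250 under simplifying assumptions (grayscale input, fixed random seed, and the S240 average-distance update folded into the gradient shift); it is not the claimed algorithm verbatim:

```python
import numpy as np

def cluster_segment(gray, n_clusters=100, iters=10, radius=5):
    """Random seeding, min-gradient center shifts, iterative refinement.
    Dense final assignment: fine for demo-sized images only."""
    h, w = gray.shape
    gy, gx = np.gradient(gray.astype(float))
    grad = np.hypot(gx, gy)                      # S220: per-pixel gradient values
    rng = np.random.default_rng(0)
    centers = np.stack([rng.integers(0, h, n_clusters),
                        rng.integers(0, w, n_clusters)], axis=1)
    for _ in range(iters):                       # S250: iterative clustering
        moved = []
        for cy, cx in centers:
            y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
            x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
            dy, dx = np.unravel_index(np.argmin(grad[y0:y1, x0:x1]),
                                      (y1 - y0, x1 - x0))
            moved.append((y0 + dy, x0 + dx))     # S230: shift to min-gradient pixel
        centers = np.array(moved)
    yy, xx = np.mgrid[0:h, 0:w]                  # assign each pixel to nearest center
    d2 = (yy[..., None] - centers[:, 0]) ** 2 + (xx[..., None] - centers[:, 1]) ** 2
    return d2.argmin(axis=-1)                    # N region blocks as a label map
```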
Step S300: performing three-dimensional scene restoration according to the environmental image segmentation result to obtain a three-dimensional environmental scene of the target vehicle;
In the embodiment of the application, three-dimensional scene restoration means reconstructing a three-dimensional scene from two-dimensional image information. First, according to the characteristics of each region block in the environment image segmentation result, the category of each block is determined, e.g. sky, road or pedestrian. Depth is then estimated from the aggregation degree and texture information of the pixel points in each region block, giving the depth of each block's pixels in three-dimensional space and thus the three-dimensional information of each region block. Finally, the three-dimensional information of the region blocks is integrated according to their mutual position relations, reconstructing the three-dimensional scene around the target vehicle to obtain the three-dimensional environment scene of the target vehicle.
By computing the depth of each region block in three-dimensional space from the environment image segmentation result and reconstructing the scene around the target vehicle from the positional relations among the blocks, the three-dimensional environment scene is obtained, laying the foundation for the subsequent point cloud scene construction and feature extraction. Once per-region depth is available, lifting pixels into three dimensions is standard geometry, as sketched below.
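The application leaves the per-region depth estimation method open; once a depth value per pixel (or per region block, broadcast to its pixels) is available, the lift into a three-dimensional scene is ordinary pinhole back-projection. The intrinsics below are assumed to come from camera calibration:

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Lift a per-pixel depth map (meters) into camera-frame 3-D points.
    fx, fy are focal lengths in pixels; cx, cy the principal point."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)      # (h, w, 3) scene points
```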
Step S400: acquiring laser point cloud data of the target vehicle through the vehicle-mounted laser radar, and performing three-dimensional distribution on the laser point cloud data based on the three-dimensional environment scene to construct three-dimensional point cloud scene data;
further, as shown in fig. 3, step S400 includes the steps of:
step S410: performing distribution density calculation on the laser point cloud data to obtain a distribution density calculation result;
step S420: removing point cloud data smaller than a preset distribution density threshold value in the distribution density calculation result to obtain density denoising point cloud data;
step S430: and performing point cloud filtering smoothing on the density denoising point cloud data to obtain denoising laser point cloud data.
In the embodiment of the application, the vehicle-mounted laser radar is a point cloud acquisition device used to collect point cloud data of the environment around the target vehicle, namely the laser point cloud data. Three-dimensional point cloud scene data is the point cloud scene information obtained by distributing the laser point cloud data in three dimensions within the three-dimensional environment scene.
First, the vehicle-mounted laser radar is started; its laser pulses scan the scene around the target vehicle, surfaces in the scene reflect the pulses, and the radar receives the reflections, obtaining the range of each surface point. From the received pulses the radar determines points on the scanned surfaces; these point cloud data points carry three-dimensional coordinate information. The radar then computes the three-dimensional coordinates of each point cloud data point from its intrinsic information (such as mounting position and orientation) and the scan angles at which pulses were emitted and received. The resulting coordinates are preprocessed to obtain the laser point cloud data. A generic range-and-angle conversion is sketched below.
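The range-and-angle to Cartesian step is standard lidar geometry; a minimal sketch (sensor frame only, omitting the mount pose and per-beam calibration, which vary by device and are not specified in the application) is:

```python
import numpy as np

def spherical_to_cartesian(r, azimuth, elevation):
    """Convert lidar returns (range r in meters, scan azimuth and beam
    elevation in radians) to x/y/z in the sensor frame; arrays broadcast."""
    xy = r * np.cos(elevation)                   # projection onto the ground plane
    return np.stack([xy * np.cos(azimuth),
                     xy * np.sin(azimuth),
                     r * np.sin(elevation)], axis=-1)
```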
Next, each point in the laser point cloud data is taken in turn as a center point; the point cloud data points within a preset neighborhood radius are searched, their number is counted, and the count is divided by the neighborhood volume to give the point density at that center. Repeating this point density calculation for all points yields the distribution density calculation result. Points whose density falls below the preset distribution density threshold are judged to be low-density outliers and removed; removing all such outliers gives the density denoising point cloud data. Then, for each remaining point, the points inside its neighborhood are collected and statistics such as their mean, median and variance are computed; if a point's coordinate value deviates markedly from these neighborhood statistics, the point is judged to be a noise point and filtered out. Applying this filtering to all points yields the denoising laser point cloud data. Finally, the point cloud scene around the target vehicle is reconstructed from the denoised data to obtain the three-dimensional point cloud scene data.
By acquiring the laser point cloud data from the vehicle-mounted laser radar, determining the three-dimensional coordinates of each point cloud data point within the acquired three-dimensional environment scene, and finally reconstructing the point cloud scene, three-dimensional point cloud information of the environment around the target vehicle is obtained. This lays the foundation for the subsequent extraction and feature recognition of the vehicle point cloud data and realizes comprehensive acquisition of the environment information. A density-filtering sketch follows.
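A minimal sketch of the density filtering of steps S410 and S420, using a k-d tree for the neighborhood search; SciPy is an implementation choice here, and the radius and neighbor count stand in for the preset distribution density threshold:

```python
import numpy as np
from scipy.spatial import cKDTree

def density_denoise(points, radius=0.5, min_neighbors=5):
    """Drop low-density outliers from an Nx3 cloud: count neighbors within
    `radius` of each point (excluding the point itself) and keep only
    sufficiently dense points."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, r=radius)
    counts = np.array([len(lst) - 1 for lst in neighbor_lists])
    return points[counts >= min_neighbors]       # density denoising point cloud data
```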
Step S500: extracting vehicle point cloud data in the three-dimensional point cloud scene data, and generating a vehicle running prediction track based on the vehicle point cloud data;
further, step S500 includes the steps of:
step S510: performing closed curve fitting on the three-dimensional point cloud scene data, and performing region division on the three-dimensional point cloud scene data according to a closed curve fitting result to obtain a plurality of region division results;
step S520: obtaining a plurality of region division profiles based on the plurality of region division results;
step S530: and carrying out similarity comparison on the plurality of regional division outlines according to the vehicle outline characteristics, and extracting point cloud data with similarity comparison results larger than a preset similarity threshold value to obtain vehicle point cloud data.
Step S540: extracting vehicle motion characteristics from the vehicle point cloud data to obtain vehicle motion characteristics, wherein the vehicle motion characteristics comprise vehicle position coordinates, running speed, front wheel motion direction and front wheel deviation angle;
step S550: acquiring a driving environment image of the vehicle point cloud data based on the environment image segmentation result, and acquiring driving environment characteristics, wherein the driving environment characteristics comprise lane driving lines and road signs;
step S560: constructing a vehicle running track prediction model;
step S570: and inputting the vehicle position coordinates, the running speed, the front wheel movement direction, the front wheel deviation angle, the lane running line and the road sign into the vehicle running track prediction model, and outputting the vehicle running prediction track.
In the embodiment of the application, a statistical analysis is first performed on the three-dimensional point cloud scene data; point cloud distribution features such as density and curvature are extracted and a point cloud distribution feature map is constructed. From this feature map, a contour line extraction algorithm extracts the point cloud distribution boundary, completing the closed curve fitting; the scene data is then divided into a plurality of regions according to the closed curve fitting result, giving a plurality of region division results. Second, the boundary line of each region division result is extracted, giving a plurality of region division contours; each contour consists of point cloud data points and represents the outline features of its region. Finally, the region division contours are compared with the vehicle contour features for similarity, and the point cloud data whose similarity exceeds the preset similarity threshold is extracted as the vehicle point cloud data. A shape-matching sketch follows.
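The application does not fix a similarity measure for step S530; one plausible stand-in is Hu-moment shape matching on the projected region contours (OpenCV's matchShapes, where lower scores mean more similar, so the preset threshold inverts accordingly). The template contour and score bound below are assumptions:

```python
import cv2

def match_vehicle_regions(region_contours, vehicle_template, max_score=0.2):
    """Return indices of region division contours whose shape is close to a
    vehicle contour template (both as OpenCV contours, i.e. (N,1,2) int arrays)."""
    hits = []
    for i, contour in enumerate(region_contours):
        score = cv2.matchShapes(contour, vehicle_template,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < max_score:        # small Hu-moment distance ~ high similarity
            hits.append(i)
    return hits
```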
Then, two consecutive frames of vehicle point cloud data are selected and their acquisition time difference is computed, giving the running time interval of the target vehicle; corresponding points are selected in the two clouds. The coordinate difference of the vehicle in three-dimensional space divided by the running time interval gives the velocity vector of the target vehicle, from which its running speed and direction of motion are computed. Points on the vehicle contour in the two consecutive clouds are selected and fitted to obtain the front wheel contour line, and the motion direction and deviation angle of the front wheel are computed from the change of this contour line. The three-dimensional coordinates of the vehicle center point are taken from the point cloud data as the vehicle position coordinates. The vehicle position coordinates, running speed, front wheel motion direction and front wheel deviation angle together form the vehicle motion features; a centroid-based sketch follows.
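The velocity-vector arithmetic above reduces to a few lines. This sketch uses cloud centroids as the corresponding points, a simplification: the application selects matched points and additionally fits the front wheel contour, which is omitted here:

```python
import numpy as np

def motion_features(cloud_t0, cloud_t1, dt):
    """Estimate position, running speed and heading from two consecutive
    vehicle point clouds (Nx3 arrays) separated by dt seconds."""
    c0, c1 = cloud_t0.mean(axis=0), cloud_t1.mean(axis=0)
    velocity = (c1 - c0) / dt                    # coordinate difference / interval
    speed = float(np.linalg.norm(velocity))
    heading = float(np.arctan2(velocity[1], velocity[0]))  # yaw in ground plane, rad
    return c1, speed, heading                    # position coords, speed, direction
```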
Next, according to the environment image segmentation result, the region corresponding to the target vehicle's point cloud data is determined and the running environment image within that region is selected; features such as the lane driving line and road signs are extracted from it as the running environment features. Then, a large number of images and corresponding kinematic data of target vehicles running in different scenes are collected as sample data, and a Bayesian network is applied to these samples to obtain the vehicle running track prediction model. The vehicle motion features (position coordinates, running speed, front wheel motion direction and front wheel deviation angle) and the running environment features (lane driving line and road signs) are then fed into the model, which predicts the running track of the target vehicle over a future period and outputs the vehicle running prediction track, providing an important basis for automatic driving control of the vehicle.
Further, the preferred vehicle running track prediction model is a vehicle driving expert system based on a combination of artificial intelligence and a database.
In a preferred embodiment, a driving knowledge rule base is first established from the driving knowledge and experience of experts; the knowledge rules include driving strategies for various complex road conditions and the like. A machine learning method is then used to learn from and summarize a large amount of target vehicle driving data, yielding driving rules and features. The driving knowledge rule base is fused with the rule features obtained by machine learning to construct a vehicle driving expert system driven by both knowledge and data.
In use, the model judges the current scene from the position, speed, running environment and other information of the target vehicle. It then queries the knowledge rule base for a driving strategy matching the scene and, if one exists, adopts it directly. If none exists, the historical driving data and current information of the target vehicle are input to the machine learning model to obtain a driving prediction result or decision suggestion. The machine learning decision is compared with the related content of the knowledge rule base, the decision weights of the knowledge-driven and data-driven parts are judged, and decision fusion is performed; the final decision yields the vehicle running prediction track used for automatic driving control of the target vehicle.
By constructing a vehicle driving expert system driven by both knowledge and data, automatic driving control of the target vehicle is realized across different scenes. Combining expert driving knowledge with a machine learning method makes the two approaches complementary, producing comprehensive and accurate output and laying the foundation for highly automated driving. A minimal fusion sketch follows.
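The decision flow of the preferred expert system can be summarized in a few lines. Everything named below (the rule-base lookup, the model interface, the 0.6 knowledge weight) is a hypothetical stand-in, since the application fixes neither the representation nor the weights:

```python
def fuse_decision(scene_kind, features, rule_base, ml_model, w_rules=0.6):
    """Query the driving knowledge rule base first; fall back to the machine
    learning model; blend when both produce a candidate trajectory."""
    rule_traj = rule_base.get(scene_kind)        # known scene: expert strategy
    ml_traj = ml_model.predict(features)         # data-driven prediction
    if rule_traj is None:
        return ml_traj                           # no matching rule: use ML alone
    if ml_traj is None:
        return rule_traj
    # decision fusion: weight the knowledge-driven and data-driven parts
    return [w_rules * r + (1.0 - w_rules) * m
            for r, m in zip(rule_traj, ml_traj)]
```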
Step S600: and carrying out automatic driving control on the target vehicle according to the vehicle running prediction track.
In the embodiment of the present application, automatic driving control means controlling the speed, steering and so on of the target vehicle according to the vehicle running prediction track, so as to realize automatic driving. First, the output vehicle running prediction track is obtained; it contains the path and speed of the target vehicle over a period of time. The predicted track is then compared with real-time position and speed information to compute the speed deviation and path deviation. A sliding mode control algorithm computes the speed control quantity and steering control quantity from these deviations. Finally, the computed control quantities are sent to the drive system of the target vehicle to control its speed and steering, performing automatic driving control along the prediction track. Meanwhile, kinematic constraints and environment sensing are used to prevent collisions with other traffic participants, improving automatic driving safety.
By comparing the obtained vehicle running prediction track with real-time information, computing the control quantities and sending them to the vehicle, speed and direction are controlled so that the target vehicle runs along the predicted track, realizing automatic driving control and ensuring safe and stable automated driving. A one-step controller sketch follows.
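For the speed channel, a textbook sliding-mode step looks as follows; the surface choice, gains and tanh saturation are generic illustrations, since the application names sliding mode control but not these particulars:

```python
import numpy as np

def sliding_mode_speed_step(v, v_ref, a, a_ref, lam=1.0, k=2.0, phi=0.5):
    """One control step: drive the speed error toward the sliding surface
    s = e_dot + lam * e, with a tanh-saturated switching term to limit
    chattering. Returns a commanded acceleration adjustment."""
    e = v - v_ref                                # speed deviation vs predicted track
    e_dot = a - a_ref                            # deviation rate
    s = e_dot + lam * e                          # sliding surface
    return -k * float(np.tanh(s / phi))          # bounded control quantity
```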
In summary, the vehicle feature recognition method for point cloud data provided by the embodiment of the application has the following technical effects:
Images of the surrounding environment of the target vehicle are acquired through a plurality of CCD image sensors, giving the environment image acquisition result and thus two-dimensional image information of the vehicle's surroundings. Image segmentation of this result yields the environment image segmentation result, separating the target vehicle and other objects in the environment. Three-dimensional scene restoration on the segmentation result gives the three-dimensional environment scene of the target vehicle, recovering the three-dimensional spatial structure around it. Laser point cloud data acquired by the vehicle-mounted laser radar is distributed in three dimensions based on the environment scene to construct the three-dimensional point cloud scene data. Vehicle point cloud data is extracted from the scene data and the vehicle running prediction track is generated from it; automatic driving control of the target vehicle according to this track achieves the technical effects of high-precision vehicle feature recognition and accurate automatic driving control.
Example two
Based on the same inventive concept as the vehicle feature recognition method for point cloud data in the foregoing embodiment, as shown in Fig. 4, an embodiment of the present application provides a vehicle feature recognition system for point cloud data, comprising:
the environment image acquisition module 11 is used for acquiring images of the surrounding environment of the target vehicle through a plurality of CCD image sensors to obtain an environment image acquisition result;
an environmental image segmentation module 12, configured to perform image segmentation on the environmental image acquisition result to obtain an environmental image segmentation result;
the three-dimensional scene restoration module 13 is used for carrying out three-dimensional scene restoration according to the environmental image segmentation result to obtain a three-dimensional environmental scene of the target vehicle;
the point cloud data distribution module 14 is configured to obtain laser point cloud data of the target vehicle through the vehicle-mounted laser radar, perform three-dimensional distribution on the laser point cloud data based on the three-dimensional environment scene, and construct three-dimensional point cloud scene data;
the vehicle data extraction module 15 is configured to extract vehicle point cloud data in the three-dimensional point cloud scene data, and generate a vehicle running prediction track based on the vehicle point cloud data;
and the automatic driving control module 16 is used for performing automatic driving control on the target vehicle according to the vehicle running prediction track.
Further, the environment image acquisition module 11 is specifically configured to:
image acquisition is carried out on the surrounding environment of the target vehicle through a plurality of CCD image sensors, so that a plurality of image acquisition results are obtained, wherein the CCD image sensors have position marks;
image denoising is carried out on a plurality of image acquisition results by using a mean value filtering algorithm, and a plurality of image denoising results are obtained;
performing image enhancement on a plurality of image denoising results through gray level transformation to obtain a plurality of image enhancement results;
and performing image stitching on a plurality of image enhancement results according to the position identification of the image sensor to obtain the environment image acquisition result.
Further, the environment image segmentation module 12 is specifically configured to:
presetting the number of pixel clusters, and determining N clustering center points in the environmental image acquisition result according to the number of pixel clusters, wherein N is the number of pixel clusters, and N is an integer greater than 1;
determining N clustering neighborhoods according to the N clustering center points, and calculating gradient values of pixel points in the N clustering neighborhoods;
transferring N clustering center points to positions with minimum gradient values of pixel points in N clustering neighborhoods according to the gradient value calculation result to obtain N first clustering center points;
searching the pixel points in N clustering neighborhoods according to preset searching distances based on N first clustering center points, carrying out average distance calculation on the searched pixel points, and determining N second clustering center points according to an average distance calculation result;
and continuously performing iterative clustering, and obtaining an image segmentation result when the iterative clustering times meet a preset iterative times threshold value, and taking the image segmentation result as an environment image segmentation result.
Further, the point cloud data distribution module 14 is specifically configured to:
performing distribution density calculation on the laser point cloud data to obtain a distribution density calculation result;
removing point cloud data smaller than a preset distribution density threshold value in the distribution density calculation result to obtain density denoising point cloud data;
and performing point cloud filtering smoothing on the density denoising point cloud data to obtain denoising laser point cloud data.
Further, the vehicle data extraction module 15 is specifically configured to:
performing closed curve fitting on the three-dimensional point cloud scene data, and performing region division on the three-dimensional point cloud scene data according to a closed curve fitting result to obtain a plurality of region division results;
obtaining a plurality of region division profiles based on the plurality of region division results;
and carrying out similarity comparison on the plurality of regional division outlines according to the vehicle outline characteristics, and extracting point cloud data with similarity comparison results larger than a preset similarity threshold value to obtain vehicle point cloud data.
Further, the vehicle data extraction module 15 is also configured to:
extracting vehicle motion characteristics from the vehicle point cloud data to obtain vehicle motion characteristics, wherein the vehicle motion characteristics comprise vehicle position coordinates, running speed, front wheel motion direction and front wheel deviation angle;
acquiring a driving environment image of the vehicle point cloud data based on the environment image segmentation result, and acquiring driving environment characteristics, wherein the driving environment characteristics comprise lane driving lines and road signs;
constructing a vehicle running track prediction model;
and inputting the vehicle position coordinates, the running speed, the front wheel movement direction, the front wheel deviation angle, the lane running line and the road sign into the vehicle running track prediction model, and outputting the vehicle running prediction track.
Further, the vehicle running track prediction model used by the vehicle data extraction module 15 is a vehicle driving expert system based on a combination of artificial intelligence and a database.
Any of the method steps described above may be stored as computer instructions or a program in a computer memory and invoked by a computer processor to carry out any of the methods of the embodiments of the present application; no unnecessary limitation is intended by this description.
Further, terms such as "first" and "second" may represent not only a sequential relationship but also particular concepts, and may refer to a single element or to several elements taken individually or as a whole. It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from its scope; if such modifications and variations fall within the scope of the claims of the present application and their equivalents, the present application is intended to cover them.

Claims (8)

1. A vehicle feature recognition method for point cloud data, characterized in that the method is applied to a vehicle feature recognition system, the system being communicatively connected with a vehicle-mounted laser radar and a CCD (charge coupled device) image sensor, and the method comprising:
image acquisition is carried out on the surrounding environment of the target vehicle through a plurality of CCD image sensors, and an environment image acquisition result is obtained;
image segmentation is carried out on the environmental image acquisition result to obtain an environmental image segmentation result;
performing three-dimensional scene restoration according to the environmental image segmentation result to obtain a three-dimensional environmental scene of the target vehicle;
acquiring laser point cloud data of the target vehicle through the vehicle-mounted laser radar, and performing three-dimensional distribution on the laser point cloud data based on the three-dimensional environment scene to construct three-dimensional point cloud scene data;
extracting vehicle point cloud data in the three-dimensional point cloud scene data, and generating a vehicle running prediction track based on the vehicle point cloud data;
and carrying out automatic driving control on the target vehicle according to the vehicle running prediction track.
2. The method of claim 1, wherein the obtaining environmental image acquisition results further comprises:
image acquisition is carried out on the surrounding environment of the target vehicle through a plurality of CCD image sensors, so that a plurality of image acquisition results are obtained, wherein the CCD image sensors have position marks;
image denoising is carried out on a plurality of image acquisition results by using a mean value filtering algorithm, and a plurality of image denoising results are obtained;
performing image enhancement on a plurality of image denoising results through gray level transformation to obtain a plurality of image enhancement results;
and performing image stitching on a plurality of image enhancement results according to the position identification of the image sensor to obtain the environment image acquisition result.
3. The method of claim 1, wherein the image segmentation of the environmental image acquisition result further comprises:
presetting the number of pixel clusters, and determining N clustering center points in the environmental image acquisition result according to the number of pixel clusters, wherein N is the number of pixel clusters, and N is an integer greater than 1;
determining N clustering neighborhoods according to the N clustering center points, and calculating gradient values of pixel points in the N clustering neighborhoods;
transferring N clustering center points to positions with minimum gradient values of pixel points in N clustering neighborhoods according to the gradient value calculation result to obtain N first clustering center points;
searching the pixel points in N clustering neighborhoods according to preset searching distances based on N first clustering center points, carrying out average distance calculation on the searched pixel points, and determining N second clustering center points according to an average distance calculation result;
and continuously performing iterative clustering, and obtaining an image segmentation result when the iterative clustering times meet a preset iterative times threshold value, and taking the image segmentation result as an environment image segmentation result.
4. The method of claim 1, wherein prior to three-dimensionally distributing the laser point cloud data based on the three-dimensional environmental scene, further comprising:
performing distribution density calculation on the laser point cloud data to obtain a distribution density calculation result;
removing point cloud data smaller than a preset distribution density threshold value in the distribution density calculation result to obtain density denoising point cloud data;
and performing point cloud filtering smoothing on the density denoising point cloud data to obtain denoising laser point cloud data.
5. The method of claim 1, wherein the extracting vehicle point cloud data in the three-dimensional point cloud scene data further comprises:
performing closed curve fitting on the three-dimensional point cloud scene data, and performing region division on the three-dimensional point cloud scene data according to a closed curve fitting result to obtain a plurality of region division results;
obtaining a plurality of region division profiles based on the plurality of region division results;
and carrying out similarity comparison on the plurality of regional division outlines according to the vehicle outline characteristics, and extracting point cloud data with similarity comparison results larger than a preset similarity threshold value to obtain vehicle point cloud data.
6. The method of claim 5, wherein generating a vehicle travel prediction trajectory based on vehicle point cloud data further comprises:
extracting vehicle motion characteristics from the vehicle point cloud data to obtain vehicle motion characteristics, wherein the vehicle motion characteristics comprise vehicle position coordinates, running speed, front wheel motion direction and front wheel deviation angle;
acquiring a driving environment image of the vehicle point cloud data based on the environment image segmentation result, and acquiring driving environment characteristics, wherein the driving environment characteristics comprise lane driving lines and road signs;
constructing a vehicle running track prediction model;
and inputting the vehicle position coordinates, the running speed, the front wheel movement direction, the front wheel deviation angle, the lane running line and the road sign into the vehicle running track prediction model, and outputting the vehicle running prediction track.
7. The method of claim 6, wherein the vehicle running track prediction model is a vehicle driving expert system based on a combination of artificial intelligence and a database.
8. A vehicle feature recognition system for point cloud data, configured to implement the vehicle feature recognition method for point cloud data according to any one of claims 1 to 7, the system being communicatively connected with a vehicle-mounted laser radar and a CCD image sensor, the system comprising:
the environment image acquisition module is used for acquiring images of the surrounding environment of the target vehicle through a plurality of CCD image sensors to obtain an environment image acquisition result;
the environment image segmentation module is used for carrying out image segmentation on the environment image acquisition result to obtain an environment image segmentation result;
the three-dimensional scene restoration module is used for carrying out three-dimensional scene restoration according to the environmental image segmentation result to obtain a three-dimensional environmental scene of the target vehicle;
the point cloud data distribution module is used for acquiring laser point cloud data of the target vehicle through the vehicle-mounted laser radar, and carrying out three-dimensional distribution on the laser point cloud data based on the three-dimensional environment scene to construct three-dimensional point cloud scene data;
the vehicle data extraction module is used for extracting vehicle point cloud data in the three-dimensional point cloud scene data and generating a vehicle running prediction track based on the vehicle point cloud data;
and the automatic driving control module is used for carrying out automatic driving control on the target vehicle according to the vehicle running prediction track.
CN202311119497.4A · Filed 2023-09-01 · Priority 2023-09-01 · Vehicle feature recognition method and system for point cloud data · Status: Pending · Publication: CN117284320A (en)

Priority Applications (1)

Application Number: CN202311119497.4A · Priority Date: 2023-09-01 · Filing Date: 2023-09-01 · Title: Vehicle feature recognition method and system for point cloud data

Publications (1)

Publication Number: CN117284320A · Publication Date: 2023-12-26

Family ID: 89250817

Family Applications (1)

Application Number: CN202311119497.4A · Title: Vehicle feature recognition method and system for point cloud data · Priority Date: 2023-09-01 · Filing Date: 2023-09-01

Country Status (1)

Country: CN · Publication: CN117284320A (en)

Cited By (1)

CN117872907A * · Priority date: 2024-01-19 · Publication date: 2024-04-12 · Assignee: 广州华夏职业学院 · Title: Shopping cart control method and system based on unmanned technology

* Cited by examiner, † Cited by third party

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination