CN111626288A - Data processing method, data processing device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111626288A
CN111626288A
Authority
CN
China
Prior art keywords
point cloud
data
cloud data
target detection
detection result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910150131.0A
Other languages
Chinese (zh)
Other versions
CN111626288B (en)
Inventor
徐棨森
Current Assignee
Suteng Innovation Technology Co Ltd
Original Assignee
Suteng Innovation Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Suteng Innovation Technology Co Ltd filed Critical Suteng Innovation Technology Co Ltd
Priority to CN201910150131.0A priority Critical patent/CN111626288B/en
Publication of CN111626288A publication Critical patent/CN111626288A/en
Application granted granted Critical
Publication of CN111626288B publication Critical patent/CN111626288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08Detecting or categorising vehicles
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application relates to a data processing method, a data processing device, a computer device and a storage medium. The method comprises the following steps: inputting acquired first point cloud data into a neural network model to obtain a first target detection result; acquiring operation data generated by an operation on the first point cloud data; processing the first point cloud data according to the operation data to obtain second point cloud data; inputting the second point cloud data into the neural network model to obtain a second target detection result; and acquiring difference data between the second target detection result and the first target detection result and determining, according to the difference data, the degree of influence of the operation data on target detection. Because the degree of influence of the operation data on target detection can be determined from the difference data, the neural network model can be adjusted according to that degree of influence when the computer device performs target detection with the model, thereby improving the accuracy of the model's output.

Description

Data processing method, data processing device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, models are increasingly applied in people's life and work. Most models can be used for data processing, and during data processing a model depends heavily on its input characteristic parameters: with a traditional model, different input characteristic parameters yield different data processing results.
However, when a traditional model processes the input characteristic parameters to produce a data processing result, the result obtained is often inaccurate.
Disclosure of Invention
In view of the above, it is necessary to provide a data processing method, an apparatus, a computer device and a storage medium for improving the accuracy of the output result of the model.
A data processing method, the method comprising:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
obtaining operation data generated by operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
In one embodiment, the method further comprises:
collecting a road scene in the driving process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene;
and preprocessing the laser radar point cloud data to obtain the first point cloud data.
In one embodiment, the inputting the acquired first point cloud data into the neural network model to obtain a first target detection result includes:
acquiring first point cloud characteristics extracted from the first point cloud data;
inputting the first point cloud feature into the neural network model to obtain the first target detection result.
In one embodiment, the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data includes:
when the operation instruction is a modification instruction, modifying the first point cloud feature in the first point cloud data according to modification data contained in the operation instruction to obtain a second point cloud feature;
and obtaining the second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data includes:
when the operation instruction is an adding instruction, obtaining adding point cloud data contained in the operation instruction; and obtaining the second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the method further comprises:
acquiring a reference target detection result corresponding to the first point cloud data;
comparing the second target detection result with the reference target detection result to obtain a difference index;
the determining the influence degree of the operation data on target detection according to the difference data comprises:
and determining the influence degree of the operation data on target detection according to the difference data and the difference index.
A data processing apparatus, the apparatus comprising:
the first target detection result acquisition module is used for inputting the acquired first point cloud data into the neural network model to obtain a first target detection result;
the operation data acquisition module is used for acquiring operation data generated by the operation on the first point cloud data;
the second point cloud data acquisition module is used for processing the first point cloud data according to the operation data to obtain second point cloud data;
the second target detection result acquisition module is used for inputting the second point cloud data into the neural network model to obtain a second target detection result;
and the difference data acquisition module is used for acquiring difference data between the second target detection result and the first target detection result and determining the influence degree of the operation data on target detection according to the difference data.
In one embodiment, the apparatus further comprises:
the laser radar point cloud data acquisition module is used for acquiring a road scene in the driving process of a vehicle through a laser radar and acquiring laser radar point cloud data corresponding to the road scene;
and the first point cloud data acquisition module is used for preprocessing the laser radar point cloud data to obtain the first point cloud data.
In one embodiment, the first target detection result obtaining module is further configured to obtain a first point cloud feature extracted from the first point cloud data; inputting the first point cloud feature into the neural network model to obtain the first target detection result.
In one embodiment, the second point cloud data obtaining module is further configured to modify the first point cloud feature in the first point cloud data according to modification data included in the operation instruction to obtain a second point cloud feature when the operation instruction is a modification instruction; and obtaining the second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the second point cloud data acquisition module is further configured to acquire, when the operation instruction is an adding instruction, adding point cloud data included in the operation instruction; and obtaining the second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the difference data acquiring module is further configured to acquire a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on target detection according to the difference data and the difference index.
A computer device comprising a memory and a processor, the memory storing a computer program, the processor implementing the following steps when executing the computer program:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
obtaining operation data generated by operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
obtaining operation data generated by operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
According to the data processing method, the data processing device, the computer device and the storage medium described above, the acquired first point cloud data is input into the neural network model to obtain a first target detection result; operation data generated by an operation on the first point cloud data is acquired; the first point cloud data is processed according to the operation data to obtain second point cloud data; the second point cloud data is input into the neural network model to obtain a second target detection result; and difference data between the second target detection result and the first target detection result is acquired, the degree of influence of the operation data on target detection being determined from the difference data. Because the second target detection result is obtained from the second point cloud data, and the second point cloud data is obtained by processing the first point cloud data according to the operation data, the degree of influence of the operation data on target detection can be determined from the difference data between the two detection results. When the computer device then performs target detection with the model, the neural network model can be adjusted according to this degree of influence, improving the accuracy of the model's output.
Drawings
FIG. 1 is a diagram of an application environment of a data processing method in one embodiment;
FIG. 2 is a flow diagram illustrating a data processing method according to one embodiment;
FIG. 3 is a schematic diagram of modifying a first point cloud feature in one embodiment;
FIG. 4 is a schematic flow chart illustrating the addition of point cloud data in one embodiment;
FIG. 5 is a block diagram showing the structure of a data processing apparatus according to an embodiment;
FIG. 6 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, the first point cloud data may be referred to as second point cloud data, and similarly, the second point cloud data may be referred to as first point cloud data, without departing from the scope of the present application. The first point cloud data and the second point cloud data are both point cloud data, but they are not the same point cloud data.
The data processing method provided by the embodiments of the present application can be applied to the application environment shown in FIG. 1. As shown in FIG. 1, the application environment may include a computer device 110. The computer device 110 may input the acquired first point cloud data into the neural network model to obtain a first target detection result. The computer device 110 may obtain operation data generated by an operation on the first point cloud data, and process the first point cloud data according to the operation data to obtain second point cloud data. The computer device 110 may input the second point cloud data into the neural network model to obtain a second target detection result. The computer device 110 may obtain difference data between the second target detection result and the first target detection result, and determine the degree of influence of the operation data on target detection according to the difference data. The computer device 110 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, and the like.
In one embodiment, as shown in fig. 2, there is provided a data processing method including the steps of:
step 202, inputting the acquired first point cloud data into a neural network model to obtain a first target detection result.
The point cloud data may be a large amount of data acquired by a three-dimensional scanner, and the data may be recorded in the form of points, and each point may include three-dimensional coordinate information, color information, reflection intensity information, and the like. The color information can be obtained by a camera to obtain a color image, and then the color information of the pixel at the corresponding position is given to the corresponding point in the point cloud data; the reflection intensity information may be echo intensity information collected by the laser scanner receiving device.
A neural network model is a computational model built from mathematical models of neurons. Neural network models include convolutional neural network models, deep neural network models, and the like; a convolutional neural network model, for example, may be used for target detection. The first target detection result may be a detection result such as a vehicle, a pedestrian, a flower, or an animal.
After the computer device acquires the first point cloud data, it can input the acquired first point cloud data into the neural network model; after the neural network model performs target detection on the first point cloud data, it can output a first target detection result. That is, the computer device may obtain the first target detection result output by the neural network model.
Step 204, obtaining operation data generated by the operation on the first point cloud data.
The operation on the first point cloud data may be a modify point cloud operation, an add point cloud operation, or the like. The operation on the first point cloud data may be used to process the first point cloud data. The operation data may include modifying the point cloud data, adding the point cloud data, and the like, which are not limited herein. The operation data may be generated by a user operating software on the computer device, for example, the user may operate software in the computer device for editing point cloud data to generate operation data, and the computer device may obtain the operation data generated by operating the first point cloud data.
And step 206, processing the first point cloud data according to the operation data to obtain second point cloud data.
The computer device may process the first point cloud data according to the operation data. For example, when the operation data acquired by the computer device is modified point cloud data, the computer device may modify the first point cloud data; when the operation data acquired by the computer device is added point cloud data, the computer device may add the point cloud data on the basis of the first point cloud data. After the computer device processes the first point cloud data, second point cloud data can be obtained. For example, after the computer device modifies the first point cloud data, the modified first point cloud data is the second point cloud data.
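Step 206 can be sketched in code as follows. This is a minimal, illustrative sketch only: the dictionary layout, field names, and centimeter units are assumptions for the example, not details taken from the patent.

```python
# Illustrative sketch: producing second point cloud data by applying
# operation data (a modify or add operation) to the first point cloud data.

def apply_operation(points, operation):
    """points: list of (x, y, z) tuples with z in centimeters.
    operation: dict describing either a 'modify' or an 'add' operation."""
    if operation["type"] == "modify":
        dz = operation["height_offset_cm"]
        # Modify operation: shift every point's height by the given offset.
        return [(x, y, z + dz) for (x, y, z) in points]
    if operation["type"] == "add":
        # Add operation: append the added point cloud to the first one.
        return points + operation["added_points"]
    raise ValueError("unknown operation type")

first_cloud = [(0, 0, 2), (1, 0, 2)]
second_cloud = apply_operation(first_cloud, {"type": "modify", "height_offset_cm": 3})
# Every point is now 3 cm higher: [(0, 0, 5), (1, 0, 5)]
```

The same dispatch could be extended with further operation types (deletion, rotation, and so on) without changing the callers.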
And 208, inputting the second point cloud data into the neural network model to obtain a second target detection result.
The second target detection result may be a detection result of a vehicle, a pedestrian, a flower, an animal, or the like.
After the computer device obtains the second point cloud data, the obtained second point cloud data may be input into the neural network model. It is understood that the neural network model may output a second target detection result after performing the target detection on the second point cloud data. That is, the computer device may obtain the second target detection result output from the neural network model.
And step 210, acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on the target detection according to the difference data.
The difference data may be used to represent the difference between the first target detection result and the second target detection result. Specifically, the difference data may be a specific value, for example 2%, 5% or 19%. The larger the difference data, the larger the difference between the first target detection result and the second target detection result. For example, when the first target detection result is a small car and the second target detection result is a medium car, the computer device may obtain difference data of 19% between the small car and the medium car.
The computer device may compare the acquired first target detection result with the second target detection result, and calculate difference data between the first target detection result and the second target detection result. The computer device may determine a degree of influence of the operational data on the target detection based on the difference data. Specifically, the larger the difference data obtained by the computer device is, the larger the influence degree of the operation data on the target detection is. The second point cloud data is obtained by processing the first point cloud data by using the operation data, and the second target detection result is obtained according to the second point cloud data, so that the influence degree of the operation data on target detection can be determined by the difference data obtained by the computer equipment.
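One simple way to compute such difference data can be sketched as follows. Representing each detection result as a list of class labels, and using the fraction of differing labels as the difference value, are assumptions made for illustration; the patent does not fix a particular metric.

```python
# Illustrative sketch: difference data between two target detection results,
# expressed as the percentage of detections whose predicted class differs.

def difference_data(first_result, second_result):
    """first_result, second_result: equal-length lists of class labels."""
    differing = sum(a != b for a, b in zip(first_result, second_result))
    return 100.0 * differing / len(first_result)

first = ["car", "pedestrian", "car", "tree"]
second = ["car", "pedestrian", "truck", "tree"]
impact = difference_data(first, second)  # larger value = larger influence
```

Under this sketch, `impact` directly serves as the degree of influence of the operation data on target detection: the larger the value, the more the operation changed the model's output.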
In this embodiment, the computer device obtains a first target detection result by inputting the obtained first point cloud data into the neural network model, obtains operation data generated by an operation on the first point cloud data, processes the first point cloud data according to the operation data to obtain second point cloud data, inputs the second point cloud data into the neural network model to obtain a second target detection result, obtains difference data between the second target detection result and the first target detection result, and determines an influence degree of the operation data on target detection according to the difference data. The second target detection result is obtained according to the second point cloud data, and the second point cloud data is obtained by processing the first point cloud data according to the operation data by the computer equipment, so that the influence degree of the operation data on the target detection can be determined according to the difference data between the first target detection result and the second target detection result, and when the computer equipment uses the model to perform the target detection, the neural network model can be adjusted according to the influence degree of the operation data on the target detection, so that the accuracy of the model output result is improved.
In an embodiment, the provided data processing method may further include a process of obtaining the first point cloud data, where the specific process includes: collecting a road scene in the driving process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene; and preprocessing the laser radar point cloud data to obtain first point cloud data.
The point cloud data can be collected by a lidar, and the computer device can use the lidar to collect the road scene during the driving of the vehicle. Specifically, the computer device may scan the road scene during driving with the lidar, thereby obtaining lidar point cloud data corresponding to the road scene. For example, the computer device may use the lidar to scan the vehicles, pedestrians, flowers, buildings, and the like that appear during driving, obtaining lidar point cloud data corresponding to each of them. The computer device can also use a camera, for example a binocular camera, to capture the road scene during driving and obtain the corresponding point cloud data.
Preprocessing the lidar point cloud data may include point cloud filtering: the collected lidar point cloud data may contain a large number of stray points and isolated points, and the computer device can filter these out during preprocessing. Preprocessing may also include three-dimensional registration of the lidar point cloud data: because the lidar's scanning beam can be occluded by objects, the point cloud of a whole object cannot be obtained in a single scan, so the object must be scanned from different positions and angles and the point clouds from adjacent scans stitched together.
After the computer device preprocesses the laser radar point cloud data, the first point cloud data can be obtained.
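The isolated-point filtering described above can be sketched as a simple radius-based outlier filter. The brute-force neighbor search and the chosen radius/threshold are illustrative assumptions; production pipelines would use a spatial index (e.g. a k-d tree) instead.

```python
# Illustrative sketch: drop isolated points that have too few neighbors
# within a given radius. O(n^2) brute force, for clarity only.

def remove_isolated_points(points, radius=1.0, min_neighbors=1):
    """Keep points having at least `min_neighbors` other points within `radius`."""
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2

    kept = []
    for i, p in enumerate(points):
        neighbors = sum(1 for j, q in enumerate(points) if j != i and close(p, q))
        if neighbors >= min_neighbors:
            kept.append(p)
    return kept

cloud = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (100.0, 100.0, 100.0)]  # last point is isolated
filtered = remove_isolated_points(cloud)
```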
In this embodiment, the computer device collects a road scene in a vehicle driving process through the laser radar, acquires laser radar point cloud data corresponding to the road scene, and preprocesses the laser radar point cloud data to obtain first point cloud data. The computer equipment can enable the acquired first point cloud data to be more accurate by preprocessing the acquired laser radar point cloud data.
In an embodiment, the provided data processing method may further include a process of obtaining a first target detection result, where the specific process includes: acquiring first point cloud characteristics extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
The first point cloud data may include first point cloud features, where the first point cloud features may be features such as a point cloud height, a point cloud density, a point cloud reflection intensity, and a point cloud normal vector, and are not limited herein.
The computer device may extract a first point cloud feature from the first point cloud data and input the extracted first point cloud feature into the neural network model to obtain the first target detection result.
In this embodiment, the computer device obtains a first target detection result by acquiring a first point cloud feature extracted from the first point cloud data and inputting the first point cloud feature into the neural network model. The computer equipment inputs the extracted first point cloud characteristics into the neural network model, so that the obtained first target detection result is more accurate.
In an embodiment, the provided data processing method may further include a process of obtaining second point cloud data, where the specific process includes: when the operation instruction is a modification instruction, modifying the first point cloud characteristics in the first point cloud data according to modification data contained in the operation instruction to obtain second point cloud characteristics; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
After the computer device acquires the operation instruction for the first point cloud data, it can determine whether the acquired operation instruction is a modification instruction. When the computer device determines that the acquired operation instruction is a modification instruction, it may extract the modification data contained in the modification instruction. The computer device can modify the first point cloud feature in the first point cloud data according to the modification data to obtain a second point cloud feature. The modification may be a rotation, scaling down, translation, flipping, scaling up, or similar transformation of the point cloud, and is not limited herein. For example, suppose the operation instruction is determined to be a modification instruction whose modification data is to increase the point cloud height by 3 centimeters; the computer device may then increase the point cloud height in the first point cloud data by 3 centimeters, and the increased point cloud height is the second point cloud feature.
When the computer device modifies the first point cloud feature in the first point cloud data according to the modification data, various modification modes can be provided. For example, the computer device may modify all the first point cloud features in the first point cloud data, may also find the obstacle point cloud features from the first point cloud features, and modify the obstacle point cloud features, and the like.
The computer equipment can obtain second point cloud data according to the obtained second point cloud characteristics and the first point cloud data. Specifically, the computer device may separate the modified first point cloud feature from the first point cloud data to obtain separated first point cloud data, and the computer device may obtain the second point cloud data according to the obtained second point cloud feature and the separated first point cloud data. For example, the first point cloud feature included in the first point cloud data obtained by the computer device includes a point cloud height, a point cloud density, and a point cloud reflection intensity, the computer device modifies the point cloud height in the first point cloud feature to obtain a modified point cloud height, the modified point cloud height is the second point cloud feature, and the computer device can obtain the second point cloud data according to the modified point cloud height, the point cloud density in the first point cloud feature, and the point cloud reflection intensity in the first point cloud feature.
In this embodiment, when the operation instruction is a modification instruction, the computer device modifies the first point cloud feature in the first point cloud data according to modification data included in the operation instruction to obtain a second point cloud feature, and obtains second point cloud data according to the second point cloud feature and the first point cloud data. The computer equipment can modify the first point cloud characteristics according to the modification data to obtain second point cloud characteristics, so that the second point cloud data is obtained.
In one embodiment, as shown in FIG. 3, a schematic diagram of modifying a first point cloud feature is provided. When the computer device determines that the operation instruction is a modification instruction, the computer device may modify the first point cloud feature in the first point cloud data according to modification data included in the operation instruction to obtain a second point cloud feature. Taking the first point cloud feature as the point cloud height as an example, the point cloud height 312 of the first point cloud feature in the first point cloud data 310 acquired by the computer device is 2 cm, and the modification data acquired by the computer device indicates increasing the point cloud height 312 by 3 cm. The computer device increases the point cloud height 312 by 3 cm, thereby obtaining a second point cloud feature with a point cloud height 322 of 5 cm, from which the computer device may obtain the second point cloud data 320.
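The modification flow in the figure can be sketched in Python. The feature names and the dictionary-of-lists data layout below are illustrative assumptions, not taken from the patent:

```python
def modify_feature(first_point_cloud, feature_name, delta):
    """Apply modification data (a delta) to one feature of the first
    point cloud data, yielding second point cloud data. All other
    features are carried over unchanged, mirroring the step of
    recombining the modified feature with the remaining features."""
    second = dict(first_point_cloud)  # keeps density, intensity, ...
    second[feature_name] = [v + delta for v in first_point_cloud[feature_name]]
    return second

# Example from the figure: a 2 cm point cloud height raised by 3 cm.
first = {"height_cm": [2.0, 2.0], "density": [0.8, 0.9], "intensity": [0.5, 0.4]}
second = modify_feature(first, "height_cm", 3.0)
print(second["height_cm"])  # [5.0, 5.0]
```

The original first point cloud data is left untouched, so both versions remain available for the two detection passes.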
In another embodiment, the provided data processing method may further include a process of obtaining second point cloud data, where the specific process includes: when the operation instruction is an adding instruction, obtaining adding point cloud data contained in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
After the computer device acquires the operation instruction for the first point cloud data, it may determine whether the acquired operation instruction is an adding instruction. When the computer device determines that the acquired operation instruction is an adding instruction, the computer device may acquire the added point cloud data included in the operation instruction. The added point cloud data may be supplied by the user through operating software on the computer device. Specifically, the added point cloud data may be point cloud data corresponding to a specific object added by the user through the operating software on the computer device. For example, when the user adds a pedestrian through the operating software, the added point cloud data may be the point cloud data corresponding to the pedestrian.
The computer device can obtain second point cloud data according to the added point cloud data and the first point cloud data. Specifically, the computer device may superimpose the obtained added point cloud data on the first point cloud data to obtain the second point cloud data. For example, if the first point cloud data obtained by the computer device is point cloud data corresponding to a vehicle and the user adds a pedestrian through operating software on the computer device, the added point cloud data is point cloud data corresponding to the pedestrian, and the second point cloud data obtained by the computer device may be point cloud data corresponding to both the vehicle and the pedestrian.
In this embodiment, when the operation instruction is an adding instruction, the computer device obtains the added point cloud data included in the operation instruction and obtains second point cloud data according to the added point cloud data and the first point cloud data. The computer device can superimpose the added point cloud data on the first point cloud data to obtain the second point cloud data without modifying the first point cloud data itself, so that the second point cloud data can be generated more conveniently.
As shown in FIG. 4, in one embodiment, a schematic diagram of adding point cloud data is provided. When the computer device determines that the operation instruction is an adding instruction, the computer device may acquire the added point cloud data and obtain second point cloud data according to the added point cloud data and the first point cloud data. Taking the added point cloud data as point cloud data corresponding to a pedestrian as an example, the first point cloud data 410 acquired by the computer device is point cloud data corresponding to a vehicle, and the computer device may add the added point cloud data 420, that is, the point cloud data corresponding to the pedestrian, to the first point cloud data 410 to obtain the second point cloud data 430.
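Superimposing added point cloud data onto the first point cloud data amounts to concatenating the two point sets. A minimal sketch, with points assumed to be (x, y, z) tuples:

```python
def add_point_cloud(first_points, added_points):
    """Superimpose added point cloud data (e.g. a pedestrian) onto the
    first point cloud data (e.g. a vehicle) to form the second point
    cloud data, as in the add-instruction branch."""
    return first_points + added_points  # simple concatenation of points

vehicle = [(1.0, 0.0, 0.5), (1.1, 0.2, 0.6)]  # first point cloud data
pedestrian = [(4.0, 1.0, 1.7)]                # added point cloud data
second = add_point_cloud(vehicle, pedestrian)
print(len(second))  # 3
```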
In an embodiment, the provided data processing method may further include a process of obtaining an influence degree of the operation data on the target detection, where the specific process includes: acquiring a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
The reference target detection result may be used to represent the standard detection result expected when the first point cloud data is input into the neural network model. The reference target detection result may be input by a user through the computer device according to the first point cloud data, and may be a target detection result of a vehicle, a pedestrian, flowers, a building, or the like. For example, the user may input a reference target detection result for a pedestrian through the computer device.
The difference index may be used to quantify the difference between the second target detection result and the reference target detection result. The difference index may be an index such as a result recall rate and may take a specific numerical value, where the result recall rate may indicate the proportion of the reference target detection result that is matched by the second target detection result. For example, the difference index obtained by the computer device may be 20%. The computer device may compare the obtained reference target detection result with the second target detection result, thereby obtaining the difference index.
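The recall-style difference index can be sketched as follows. Representing detections as hashable labels is a simplifying assumption (a real system would match detections by bounding-box overlap):

```python
def difference_index(second_result, reference_result):
    """Recall-style difference index: the fraction of reference
    detections that also appear in the second target detection
    result."""
    reference = set(reference_result)
    if not reference:
        return 1.0
    matched = reference & set(second_result)
    return len(matched) / len(reference)

ref = ["vehicle", "pedestrian", "building", "tree", "sign"]
second = ["vehicle"]                  # detector recovered one of five targets
print(difference_index(second, ref))  # 0.2, i.e. 20%
```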
The computer device may determine the influence degree of the operation data on target detection according to both the difference data between the first target detection result and the second target detection result and the difference index between the second target detection result and the reference target detection result. When determining the influence degree, the difference data between the first target detection result and the second target detection result has a higher priority than the difference index between the second target detection result and the reference target detection result.
In this embodiment, the computer device obtains a reference target detection result corresponding to the first point cloud data, compares the second target detection result with the reference target detection result to obtain a difference index, and determines the influence degree of the operation data on target detection according to the difference data and the difference index. Determining the influence degree from the difference data together with the difference index can improve the accuracy of the determined influence degree.
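One way to combine the two signals while respecting the stated priority is a weighted sum. The weights below are hypothetical, chosen only to illustrate that the difference data outweighs the difference index; the patent does not prescribe a formula:

```python
def influence_degree(difference_data, difference_index, w_data=0.7, w_index=0.3):
    """Weighted combination of the two difference signals. Both inputs
    are assumed normalized to [0, 1]; the heavier weight on the
    difference data (first vs. second detection result) encodes its
    higher priority over the difference index (second vs. reference)."""
    return w_data * difference_data + w_index * difference_index

# Difference data of 0.5 and difference index of 0.2:
print(round(influence_degree(0.5, 0.2), 2))  # 0.41
```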
It should be understood that, although the steps in the flowchart of FIG. 2 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed sequentially; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 5, there is provided a data processing apparatus including: a first target detection result obtaining module 510, an operation data obtaining module 520, a second point cloud data obtaining module 530, a second target detection result obtaining module 540, and a difference data obtaining module 550, wherein:
The first target detection result obtaining module 510 is configured to input the obtained first point cloud data into the neural network model to obtain a first target detection result.
The operation data obtaining module 520 is configured to obtain operation data generated by an operation on the first point cloud data.
The second point cloud data obtaining module 530 is configured to process the first point cloud data according to the operation data to obtain second point cloud data.
The second target detection result obtaining module 540 is configured to input the second point cloud data into the neural network model to obtain a second target detection result.
The difference data acquiring module 550 is configured to acquire difference data between the second target detection result and the first target detection result, and determine an influence degree of the operation data on target detection according to the difference data.
In one embodiment, a data processing apparatus is provided that may further include: laser radar point cloud data acquisition module and first point cloud data acquisition module, wherein:
The laser radar point cloud data acquisition module is used for collecting a road scene during driving of a vehicle through a laser radar and acquiring laser radar point cloud data corresponding to the road scene.
The first point cloud data acquisition module is used for preprocessing the point cloud data of the laser radar to obtain first point cloud data.
In one embodiment, the first target detection result obtaining module 510 is further configured to obtain a first point cloud feature extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
In an embodiment, the second point cloud data obtaining module 530 is further configured to modify the first point cloud feature in the first point cloud data according to modification data included in the operation instruction to obtain a second point cloud feature when the operation instruction is a modification instruction; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the second point cloud data obtaining module 530 is further configured to, when the operation instruction is an add instruction, obtain add point cloud data included in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the difference data acquiring module 550 is further configured to acquire a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
For specific limitations of the data processing apparatus, reference may be made to the limitations of the data processing method above, which are not repeated here. Each module in the data processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware in, or be independent of, a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor can invoke and perform the operations corresponding to the module.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a data processing method. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad arranged on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in FIG. 6 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
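The five steps above can be sketched end to end in Python. The `model` and `operation` below are toy stand-ins (a real trained neural network model and real operation data would take their places); this is a sketch of the flow, not the patented implementation:

```python
def evaluate_operation(first_point_cloud, operation, model):
    """Run the pipeline: detect on the first point cloud, apply the
    operation to get the second point cloud, detect again, and report
    the difference data as the influence degree."""
    first_result = model(first_point_cloud)            # step 1
    second_point_cloud = operation(first_point_cloud)  # steps 2-3
    second_result = model(second_point_cloud)          # step 4
    return abs(second_result - first_result)           # step 5 (toy metric)

# Toy stand-ins: the "model" counts points higher than 1 m as detections,
# and the operation raises every point by 2 m.
model = lambda pts: sum(1 for (_, _, z) in pts if z > 1.0)
raise_by_two = lambda pts: [(x, y, z + 2.0) for (x, y, z) in pts]

cloud = [(0.0, 0.0, 0.5), (1.0, 1.0, 1.5)]
print(evaluate_operation(cloud, raise_by_two, model))  # 1
```

A larger returned value indicates that the operation perturbs the detection result more strongly.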
In one embodiment, the processor, when executing the computer program, further performs the steps of: collecting a road scene in the driving process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene; and preprocessing the laser radar point cloud data to obtain first point cloud data.
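The patent does not fix a specific preprocessing of the laser radar point cloud; a common choice is cropping points beyond the sensor's useful range and below the road surface. The thresholds below are assumptions for illustration only:

```python
def preprocess(lidar_points, max_range=100.0, min_z=-2.0):
    """Sketch of preprocessing laser radar point cloud data into first
    point cloud data: drop points outside a horizontal range limit and
    points below an assumed ground height."""
    first = []
    for (x, y, z) in lidar_points:
        if (x * x + y * y) ** 0.5 <= max_range and z >= min_z:
            first.append((x, y, z))
    return first

raw = [(5.0, 0.0, 0.2), (300.0, 0.0, 1.0), (2.0, 1.0, -5.0)]
print(preprocess(raw))  # [(5.0, 0.0, 0.2)]
```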
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring first point cloud characteristics extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the operation instruction is a modification instruction, modifying the first point cloud characteristics in the first point cloud data according to modification data contained in the operation instruction to obtain second point cloud characteristics; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the operation instruction is an adding instruction, obtaining adding point cloud data contained in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
In one embodiment, the computer program when executed by the processor further performs the steps of: collecting a road scene in the driving process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene; and preprocessing the laser radar point cloud data to obtain first point cloud data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring first point cloud characteristics extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the operation instruction is a modification instruction, modifying the first point cloud characteristics in the first point cloud data according to modification data contained in the operation instruction to obtain second point cloud characteristics; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the operation instruction is an adding instruction, obtaining adding point cloud data contained in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, the combination should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the invention. It should be noted that several variations and modifications can be made by those of ordinary skill in the art without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of data processing, the method comprising:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
obtaining operation data generated by operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
2. The method of claim 1, further comprising:
collecting a road scene in the driving process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene;
and preprocessing the laser radar point cloud data to obtain the first point cloud data.
3. The method of claim 1, wherein inputting the acquired first point cloud data into a neural network model to obtain a first target detection result comprises:
acquiring first point cloud characteristics extracted from the first point cloud data;
inputting the first point cloud feature into the neural network model to obtain the first target detection result.
4. The method according to claim 3, wherein the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data comprises:
when the operation instruction is a modification instruction, modifying the first point cloud feature in the first point cloud data according to modification data contained in the operation instruction to obtain a second point cloud feature;
and obtaining the second point cloud data according to the second point cloud characteristics and the first point cloud data.
5. The method according to claim 1, wherein the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data comprises:
when the operation instruction is an adding instruction, obtaining adding point cloud data contained in the operation instruction;
and obtaining the second point cloud data according to the added point cloud data and the first point cloud data.
6. The method of claim 1, further comprising:
acquiring a reference target detection result corresponding to the first point cloud data;
comparing the second target detection result with the reference target detection result to obtain a difference index;
the determining the influence degree of the operation data on target detection according to the difference data comprises:
and determining the influence degree of the operation data on target detection according to the difference data and the difference index.
7. A data processing apparatus, characterized in that the apparatus comprises:
the first target detection result acquisition module is used for inputting the acquired first point cloud data into the neural network model to obtain a first target detection result;
the operation data acquisition module is used for acquiring operation data generated by the operation on the first point cloud data;
the second point cloud data acquisition module is used for processing the first point cloud data according to the operation data to obtain second point cloud data;
the second target detection result acquisition module is used for inputting the second point cloud data into the neural network model to obtain a second target detection result;
and the difference data acquisition module is used for acquiring difference data between the second target detection result and the first target detection result and determining the influence degree of the operation data on target detection according to the difference data.
8. The apparatus of claim 7, further comprising:
the laser radar point cloud data acquisition module is used for acquiring a road scene in the driving process of a vehicle through a laser radar and acquiring laser radar point cloud data corresponding to the road scene;
and the first point cloud data acquisition module is used for preprocessing the laser radar point cloud data to obtain the first point cloud data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 6.
CN201910150131.0A 2019-02-28 2019-02-28 Data processing method, device, computer equipment and storage medium Active CN111626288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910150131.0A CN111626288B (en) 2019-02-28 2019-02-28 Data processing method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111626288A true CN111626288A (en) 2020-09-04
CN111626288B CN111626288B (en) 2023-12-01

Family

ID=72270721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910150131.0A Active CN111626288B (en) 2019-02-28 2019-02-28 Data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111626288B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020466A1 (en) * 2015-08-04 2017-02-09 百度在线网络技术(北京)有限公司 Urban road recognition method, apparatus, storage medium and device based on laser point cloud
US20170300059A1 (en) * 2017-05-03 2017-10-19 GM Global Technology Operations LLC Methods and systems for lidar point cloud anomalies
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN108198145A (en) * 2017-12-29 2018-06-22 百度在线网络技术(北京)有限公司 For the method and apparatus of point cloud data reparation
CN108399609A (en) * 2018-03-06 2018-08-14 北京因时机器人科技有限公司 A kind of method for repairing and mending of three dimensional point cloud, device and robot
CN109100741A (en) * 2018-06-11 2018-12-28 长安大学 A kind of object detection method based on 3D laser radar and image data
CN109360239A (en) * 2018-10-24 2019-02-19 长沙智能驾驶研究院有限公司 Obstacle detection method, device, computer equipment and storage medium


Also Published As

Publication number Publication date
CN111626288B (en) 2023-12-01


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant