CN111626288B - Data processing method, device, computer equipment and storage medium - Google Patents

Data processing method, device, computer equipment and storage medium

Info

Publication number: CN111626288B
Application number: CN201910150131.0A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN111626288A (application publication)
Prior art keywords: point cloud data, target detection, detection result
Inventor: 徐棨森
Assignee (original and current): Suteng Innovation Technology Co Ltd
Legal status: Active (granted)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08: Detecting or categorising vehicles
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The application relates to a data processing method, a data processing apparatus, a computer device, and a storage medium. The method comprises the following steps: inputting acquired first point cloud data into a neural network model to obtain a first target detection result; acquiring operation data generated by an operation on the first point cloud data; processing the first point cloud data according to the operation data to obtain second point cloud data; inputting the second point cloud data into the neural network model to obtain a second target detection result; and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data. Because the influence degree of the operation data on target detection can be determined from the difference data, when the computer device performs target detection with the model, it can adjust the neural network model according to that influence degree, thereby improving the accuracy of the model's output result.

Description

Data processing method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method, apparatus, computer device, and storage medium.
Background
With the development of computer technology, more and more models are being applied in people's daily life and work. Most models can be used for data processing, and such models depend heavily on their input characteristic parameters: when a conventional model is used for data processing, different input characteristic parameters yield different data processing results.
However, when a conventional method uses a model to process the input characteristic parameters, the resulting data processing result is often inaccurate.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a data processing method, apparatus, computer device, and storage medium that can improve the accuracy of a model's output result.
A data processing method, the method comprising:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by the operation of the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
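As a non-limiting illustration, the claimed steps can be sketched in a few lines of Python. Here `detect` and `raise_by_3cm` are hypothetical stand-ins for the neural network model and for the operation performed on the point cloud, neither of which the claims pin down; the difference metric is likewise an assumption.

```python
import numpy as np

def influence_of_operation(first_cloud, operation, detect):
    """Sketch of the claimed steps: run detection on the first point cloud,
    apply the operation to obtain the second point cloud, run detection
    again, and return the difference data between the two results."""
    first_result = detect(first_cloud)       # first target detection result
    second_cloud = operation(first_cloud)    # process per the operation data
    second_result = detect(second_cloud)     # second target detection result
    # A larger difference means a larger influence of the operation data
    # on target detection; the concrete metric here is an assumption.
    return abs(second_result - first_result)

# Toy stand-in for the neural network model: "score" = mean point height.
detect = lambda cloud: float(cloud[:, 2].mean())
# Toy operation data: raise every point by 3 cm.
raise_by_3cm = lambda cloud: cloud + np.array([0.0, 0.0, 0.03])

cloud = np.zeros((100, 3))  # a flat 100-point first point cloud
print(influence_of_operation(cloud, raise_by_3cm, detect))  # ~0.03
```

The toy `detect` makes the influence of a 3 cm height change directly visible; a real detector would return richer results, but the control flow is the same.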
In one embodiment, the method further comprises:
acquiring a road scene in the running process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene;
and preprocessing the laser radar point cloud data to obtain the first point cloud data.
In one embodiment, the inputting the obtained first point cloud data into the neural network model to obtain a first target detection result includes:
acquiring first point cloud features extracted from the first point cloud data;
and inputting the first point cloud characteristics into the neural network model to obtain the first target detection result.
In one embodiment, the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data includes:
when the operation instruction is a modification instruction, modifying the first point cloud characteristic in the first point cloud data according to modification data contained in the operation instruction to obtain a second point cloud characteristic;
and obtaining the second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data includes:
when the operation instruction is an addition instruction, acquiring addition point cloud data contained in the operation instruction; and obtaining the second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the method further comprises:
acquiring a reference target detection result corresponding to the first point cloud data;
comparing the second target detection result with the reference target detection result to obtain a difference index;
the determining the influence degree of the operation data on the target detection according to the difference data comprises the following steps:
and determining the influence degree of the operation data on target detection according to the difference data and the difference index.
A data processing apparatus, the apparatus comprising:
the first target detection result acquisition module is used for inputting the acquired first point cloud data into the neural network model to obtain a first target detection result;
the operation data acquisition module is used for acquiring operation data generated by the operation on the first point cloud data;
the second point cloud data acquisition module is used for processing the first point cloud data according to the operation data to obtain second point cloud data;
the second target detection result acquisition module is used for inputting the second point cloud data into the neural network model to obtain a second target detection result;
the difference data acquisition module is used for acquiring difference data between the second target detection result and the first target detection result and determining the influence degree of the operation data on target detection according to the difference data.
In one embodiment, the apparatus further comprises:
the laser radar point cloud data acquisition module is used for acquiring road scenes in the running process of the vehicle through a laser radar and acquiring laser radar point cloud data corresponding to the road scenes;
and the first point cloud data acquisition module is used for preprocessing the laser radar point cloud data to obtain the first point cloud data.
In one embodiment, the first target detection result obtaining module is further configured to obtain a first point cloud feature extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain the first target detection result.
In one embodiment, when the operation instruction is a modification instruction, the second point cloud data acquisition module is further configured to modify the first point cloud feature in the first point cloud data according to modification data included in the operation instruction, so as to obtain a second point cloud feature; and obtaining the second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the second point cloud data obtaining module is further configured to obtain, when the operation instruction is an addition instruction, addition point cloud data included in the operation instruction; and obtaining the second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the difference data obtaining module is further configured to obtain a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on target detection according to the difference data and the difference index.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the following steps:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by the operation of the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the following steps:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by the operation of the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
According to the data processing method, apparatus, computer device, and storage medium above, the acquired first point cloud data is input into the neural network model to obtain a first target detection result; operation data generated by the operation on the first point cloud data is acquired; the first point cloud data is processed according to the operation data to obtain second point cloud data; the second point cloud data is input into the neural network model to obtain a second target detection result; and difference data between the second target detection result and the first target detection result is acquired, with the influence degree of the operation data on target detection determined according to the difference data. Because the second target detection result is obtained from the second point cloud data, and the second point cloud data is obtained by the computer device processing the first point cloud data according to the operation data, the influence degree of the operation data on target detection can be determined from the difference data between the first target detection result and the second target detection result. When the computer device uses the model for target detection, it can therefore adjust the neural network model according to that influence degree, thereby improving the accuracy of the model's output result.
Drawings
FIG. 1 is a diagram of an application environment for a data processing method in one embodiment;
FIG. 2 is a flow diagram of a data processing method in one embodiment;
FIG. 3 is a schematic diagram of modifying a first point cloud feature in one embodiment;
FIG. 4 is a schematic flow chart of adding point cloud data in one embodiment;
FIG. 5 is a block diagram of a data processing apparatus in one embodiment;
FIG. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
It will be understood that the terms first, second, etc. as used herein may be used to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element. For example, the first point cloud data may be referred to as second point cloud data, and similarly, the second point cloud data may be referred to as first point cloud data, without departing from the scope of the application. Both the first point cloud data and the second point cloud data are point cloud data, but they are not the same point cloud data.
The data processing method provided by the embodiments of the application can be applied in the application environment shown in FIG. 1. As shown in FIG. 1, the application environment may include a computer device 110. The computer device 110 may input the obtained first point cloud data into the neural network model to obtain a first target detection result. The computer device 110 may obtain operation data generated by the operation on the first point cloud data, and process the first point cloud data according to the operation data to obtain second point cloud data. The computer device 110 may input the second point cloud data into the neural network model to obtain a second target detection result. The computer device 110 may obtain difference data between the second target detection result and the first target detection result, and determine the influence degree of the operation data on target detection according to the difference data. The computer device 110 may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer, or the like.
In one embodiment, as shown in fig. 2, there is provided a data processing method including the steps of:
step 202, inputting the acquired first point cloud data into a neural network model to obtain a first target detection result.
The point cloud data may be a large amount of data acquired by a three-dimensional scanner. The data may be recorded in the form of points, each of which may contain three-dimensional coordinate information, color information, reflection intensity information, and the like. The color information may be obtained by capturing a color image with a camera and assigning the color of the pixel at each corresponding position to the corresponding point in the point cloud data; the reflection intensity information may be the echo intensity acquired by the receiving device of a laser scanner.
A neural network model is built from mathematical models of neurons. Neural network models include convolutional neural network models, deep neural network models, and the like, and convolutional neural network models in particular may be used for target detection. The first target detection result may be a detection result of a vehicle, a pedestrian, flowers, plants, animals, or the like.
After the computer device acquires the first point cloud data, it can input the acquired first point cloud data into the neural network model, and the neural network model can output a first target detection result after performing target detection on the first point cloud data. That is, the computer device may obtain the first target detection result output by the neural network model.
Step 204, obtaining operation data generated by the operation on the first point cloud data.
The operation on the first point cloud data may be a modify-point-cloud operation, an add-point-cloud operation, or the like, and may be used to process the first point cloud data. The operation data may include modified point cloud data, added point cloud data, and the like, without limitation. The operation data may be generated by a user operating software on the computer device. For example, the user may operate point-cloud-editing software on the computer device to generate operation data, and the computer device may then acquire the operation data generated by the operation on the first point cloud data.
Step 206, processing the first point cloud data according to the operation data to obtain second point cloud data.
The computer device may process the first point cloud data according to the operational data. For example, when the operation data acquired by the computer device is modified point cloud data, the computer device may modify the first point cloud data; when the operation data acquired by the computer device is the added point cloud data, the computer device may add the point cloud data on the basis of the first point cloud data. After the computer device processes the first point cloud data, second point cloud data can be obtained. For example, after the computer device modifies the first point cloud data, the modified first point cloud data is the second point cloud data.
Step 208, inputting the second point cloud data into the neural network model to obtain a second target detection result.
The second target detection result may be a detection result of a vehicle, a pedestrian, flowers, plants, animals, or the like.
After the computer device obtains the second point cloud data, the obtained second point cloud data may be input into the neural network model. It can be appreciated that the neural network model may output the second target detection result after performing target detection on the second point cloud data. That is, the computer device may obtain the second target detection result output from the neural network model.
Step 210, acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
The difference data may be used to represent the difference between the first target detection result and the second target detection result. In particular, the difference data may be a specific value, for example 2%, 5%, or 19%. The larger the difference data, the larger the difference between the first target detection result and the second target detection result. For example, when the first target detection result is a small car and the second target detection result is a medium car, the computer device may obtain difference data of 19% between the small car and the medium car.
The computer device may compare the obtained first target detection result with the second target detection result and calculate the difference data between them. The computer device may then determine the influence degree of the operation data on target detection based on the difference data; specifically, the larger the difference data obtained by the computer device, the greater the influence of the operation data on target detection. Because the second point cloud data is obtained by processing the first point cloud data with the operation data, and the second target detection result is obtained from the second point cloud data, the difference data obtained by the computer device can be used to determine the influence degree of the operation data on target detection.
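The text leaves the concrete form of the difference data open, saying only that it may be a value such as 2%, 5%, or 19%. One possible realization, purely as an assumed illustration, compares class-confidence vectors from the two detection results and reports their mean absolute difference as a percentage:

```python
import numpy as np

def difference_data(first_result, second_result):
    """Mean absolute difference between two class-confidence vectors,
    expressed as a percentage. This metric is an assumption; the patent
    only says the difference data may be a value such as 19%."""
    diff = np.abs(np.asarray(first_result) - np.asarray(second_result))
    return float(diff.mean() * 100.0)

# Hypothetical confidences over {small car, medium car, other}.
first = [0.80, 0.15, 0.05]   # first target detection result
second = [0.42, 0.50, 0.08]  # second target detection result
print(f"{difference_data(first, second):.1f}%")  # 25.3%
```

A larger percentage corresponds to a larger influence degree of the operation data on target detection.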
In this embodiment, the computer device obtains a first target detection result by inputting the obtained first point cloud data into the neural network model, acquires operation data generated by the operation on the first point cloud data, processes the first point cloud data according to the operation data to obtain second point cloud data, inputs the second point cloud data into the neural network model to obtain a second target detection result, acquires difference data between the second target detection result and the first target detection result, and determines the influence degree of the operation data on target detection according to the difference data. Because the second target detection result is obtained from the second point cloud data, and the second point cloud data is obtained by the computer device processing the first point cloud data according to the operation data, the influence degree of the operation data on target detection can be determined from the difference data between the first target detection result and the second target detection result. When the computer device uses the model for target detection, it can therefore adjust the neural network model according to that influence degree, thereby improving the accuracy of the model's output result.
In one embodiment, the data processing method may further include a process of obtaining the first point cloud data, and the specific process includes: acquiring a road scene in the running process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene; and preprocessing the laser radar point cloud data to obtain first point cloud data.
Point cloud data may be acquired through a laser radar: the computer device may capture a road scene during the travel of the vehicle using the laser radar. Specifically, the computer device may scan the road scene with the laser radar while the vehicle is travelling, so as to obtain laser radar point cloud data corresponding to the road scene. For example, the computer device may scan the vehicles, pedestrians, flowers and plants, buildings, and so on that appear while the vehicle is travelling, obtaining laser radar point cloud data corresponding to each of them. The computer device may also capture the road scene during the travel of the vehicle with a camera, such as a binocular camera, so as to obtain point cloud data corresponding to the road scene.
Preprocessing the laser radar point cloud data may include point cloud filtering: the collected laser radar point cloud data may contain a large number of noise points and isolated points, and the computer device can filter these out by preprocessing. Preprocessing the laser radar point cloud data may also include three-dimensional registration: because the laser radar's scanning beam is blocked by objects, complete point cloud data for an object cannot be acquired in a single scan, so the object needs to be scanned from different positions and angles and the point cloud data from adjacent scans stitched together.
After preprocessing the laser radar point cloud data, the computer device can obtain first point cloud data.
In this embodiment, the computer device acquires a road scene in the running process of the vehicle through the laser radar, acquires laser radar point cloud data corresponding to the road scene, and performs preprocessing on the laser radar point cloud data to obtain first point cloud data. The computer equipment can enable the obtained first point cloud data to be more accurate by preprocessing the collected laser radar point cloud data.
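As an illustration of the filtering part of the preprocessing, the sketch below removes isolated points by a naive radius-based neighbor count. The thresholds are assumptions, and the O(n²) pairwise-distance computation is for clarity only; a real pipeline would use a spatial index such as `scipy.spatial.cKDTree`.

```python
import numpy as np

def filter_isolated_points(cloud, radius=0.5, min_neighbors=3):
    """Drop points that have fewer than `min_neighbors` other points
    within `radius`. Pairwise distances are computed densely here for
    illustration; production code would use a KD-tree instead."""
    dists = np.linalg.norm(cloud[:, None, :] - cloud[None, :, :], axis=-1)
    neighbor_counts = (dists < radius).sum(axis=1) - 1  # exclude self
    return cloud[neighbor_counts >= min_neighbors]

rng = np.random.default_rng(0)
dense = rng.normal(0.0, 0.1, size=(50, 3))      # a tight cluster of points
outlier = np.array([[10.0, 10.0, 10.0]])        # one isolated point
cleaned = filter_isolated_points(np.vstack([dense, outlier]))
print(cleaned.shape)  # (50, 3): only the isolated point is removed
```

Registration of scans from different positions is a separate, larger topic (e.g. iterative closest point) and is not sketched here.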
In one embodiment, the data processing method may further include a process of obtaining a first target detection result, and the specific process includes: acquiring first point cloud features extracted from first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
The first point cloud data may include a first point cloud feature, where the first point cloud feature may be a feature such as a point cloud height, a point cloud density, a point cloud reflection intensity, a normal vector of the point cloud, and the like, which is not limited herein.
The computer device may extract a first point cloud feature from the first point cloud data and input the extracted first point cloud feature into the neural network model to obtain a first target detection result.
In this embodiment, the computer device obtains the first target detection result by acquiring the first point cloud feature extracted from the first point cloud data, and inputting the first point cloud feature into the neural network model. The computer equipment inputs the extracted first point cloud characteristics into the neural network model, so that the obtained first target detection result is more accurate.
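A minimal sketch of extracting the kinds of first point cloud features named above (point cloud height, density, and reflection intensity; normal vectors omitted for brevity) might look as follows. The exact feature definitions are assumptions, not taken from the patent.

```python
import numpy as np

def extract_point_cloud_features(points, intensities, voxel=1.0):
    """Per-cloud versions of the feature types named in the text:
    point cloud height, point cloud density, and reflection intensity.
    The concrete definitions here are illustrative assumptions."""
    height = float(points[:, 2].max() - points[:, 2].min())  # vertical extent
    occupied = np.unique(np.floor(points / voxel), axis=0)   # occupied voxels
    density = len(points) / len(occupied)                    # points per voxel
    intensity = float(np.mean(intensities))                  # mean echo strength
    return {"height": height, "density": density, "intensity": intensity}

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 1.0], [0.0, 0.1, 2.0]])
feats = extract_point_cloud_features(pts, np.array([0.2, 0.4, 0.6]))
print(feats)  # height 2.0, density 1.0, intensity ~0.4
```

In practice such features would be computed per voxel or per region before being fed to the model, but the per-cloud form keeps the example short.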
In one embodiment, the data processing method may further include a process of obtaining second point cloud data, and the specific process includes: when the operation instruction is a modification instruction, modifying the first point cloud characteristics in the first point cloud data according to modification data contained in the operation instruction to obtain second point cloud characteristics; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
After the computer device obtains the operation instruction for the first point cloud data, it can judge whether the obtained operation instruction is a modification instruction. When the computer device determines that the acquired operation instruction is a modification instruction, it may extract the modification data included in the modification instruction and modify the first point cloud feature in the first point cloud data according to the modification data to obtain a second point cloud feature. The modification may be a rotation, reduction, translation, doubling, enlargement, or the like, which is not limited herein. For example, if the computer device determines that the operation instruction is a modification instruction and the modification data extracted from the instruction is to raise the point cloud height by 3 centimeters, the computer device can raise the point cloud height in the first point cloud data by 3 centimeters; the point cloud height raised by 3 centimeters is then the second point cloud feature.
When the computer device modifies the first point cloud feature in the first point cloud data according to the modification data, a plurality of modification modes can be adopted. For example, the computer device may modify all the first point cloud features in the first point cloud data, may also find obstacle point cloud features from the first point cloud features, modify the obstacle point cloud features, and so on.
The computer device may obtain the second point cloud data according to the obtained second point cloud feature and the first point cloud data. Specifically, the computer device may separate the modified first point cloud feature from the first point cloud data to obtain separated first point cloud data, and then obtain the second point cloud data according to the obtained second point cloud feature and the separated first point cloud data. For example, suppose the first point cloud features in the first point cloud data obtained by the computer device include the point cloud height, the point cloud density, and the point cloud reflection intensity. The computer device modifies the point cloud height among the first point cloud features to obtain a modified point cloud height, which is the second point cloud feature, and the computer device may then obtain the second point cloud data according to the modified point cloud height together with the point cloud density and the point cloud reflection intensity from the first point cloud features.
In this embodiment, when the operation instruction is a modification instruction, the computer device modifies the first point cloud feature in the first point cloud data according to modification data included in the operation instruction to obtain the second point cloud feature, and obtains the second point cloud data according to the second point cloud feature and the first point cloud data. The computer equipment can modify the first point cloud characteristics according to the modification data to obtain second point cloud characteristics so as to obtain second point cloud data, and the second point cloud characteristics are obtained by directly modifying the first point cloud characteristics, so that the generation of the second point cloud data is more convenient.
In one embodiment, as shown in FIG. 3, a schematic diagram of modifying a first point cloud feature is provided. When the computer device determines that the operation instruction is a modification instruction, it may modify the first point cloud feature in the first point cloud data according to the modification data contained in the instruction to obtain the second point cloud feature. Taking the point cloud height as the first point cloud feature: suppose the point cloud height 312 of the first point cloud data 310 acquired by the computer device is 2 centimeters and the modification data specifies raising the point cloud height 312 by 3 centimeters. The computer device raises the point cloud height by 3 centimeters, yielding a second point cloud feature with a point cloud height 322 of 5 centimeters, from which it may obtain the second point cloud data 320.
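As a concrete illustration of the modification path above, the following sketch (our own assumption, not code from the patent) represents a point cloud as an N×4 NumPy array whose columns are x, y, height z, and reflection intensity, and raises the height feature as in FIG. 3:

```python
import numpy as np

def modify_height(first_point_cloud, height_offset):
    """Raise the height feature of every point by height_offset,
    leaving the remaining features (x, y, reflection intensity) intact."""
    second_point_cloud = first_point_cloud.copy()
    second_point_cloud[:, 2] += height_offset  # column 2 is assumed to hold the height
    return second_point_cloud

# Mirroring FIG. 3: a point at a height of 2 cm is raised by 3 cm.
first = np.array([[0.0, 0.0, 0.02, 0.5]])  # x, y, z (metres), intensity
second = modify_height(first, 0.03)
```

Modifying only the height column while copying the rest corresponds to separating the modified feature from the unmodified density and reflection-intensity features described above.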
In another embodiment, the data processing method may further include a process of obtaining the second point cloud data, and the specific process includes: when the operation instruction is an addition instruction, acquiring addition point cloud data contained in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
After the computer device obtains the operation instruction for the first point cloud data, it may determine whether the instruction is an addition instruction. If so, the computer device may acquire the added point cloud data contained in the instruction. The added point cloud data may be supplied by a user through operation software on the computer device; specifically, it may be point cloud data corresponding to a specific object the user adds. For example, if the user adds a pedestrian through the operation software, the added point cloud data is the point cloud data corresponding to that pedestrian.
The computer device may obtain the second point cloud data from the added point cloud data and the first point cloud data. Specifically, the computer device may superimpose the added point cloud data on the first point cloud data to obtain the second point cloud data. For example, if the first point cloud data is point cloud data corresponding to a vehicle and the user adds a pedestrian through the operation software, so that the added point cloud data is point cloud data corresponding to the pedestrian, then the second point cloud data obtained by the computer device contains both the vehicle point cloud data and the pedestrian point cloud data.
In this embodiment, when the operation instruction is an addition instruction, the computer device acquires the added point cloud data contained in the instruction and obtains the second point cloud data from the added point cloud data and the first point cloud data. Because the computer device can simply superimpose the added point cloud data on the first point cloud data, the second point cloud data can be generated conveniently without modifying the first point cloud data itself.
As shown in FIG. 4, in one embodiment, a schematic diagram of adding point cloud data is provided. When the computer device determines that the operation instruction is an addition instruction, it may acquire the added point cloud data and obtain the second point cloud data from the added point cloud data and the first point cloud data. Taking point cloud data corresponding to a pedestrian as the added data: the first point cloud data 410 acquired by the computer device is point cloud data corresponding to a vehicle, and the computer device may add the pedestrian point cloud data 420 to the first point cloud data 410 to obtain the second point cloud data 430.
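A minimal sketch of the addition path (again assuming the N×4 array layout used above; the patent does not prescribe a data format) simply stacks the added points onto the first point cloud:

```python
import numpy as np

def superimpose(first_point_cloud, added_point_cloud):
    """Obtain second point cloud data by stacking the added point cloud
    (e.g. a pedestrian) onto the first point cloud (e.g. a vehicle)."""
    return np.vstack([first_point_cloud, added_point_cloud])

vehicle_points = np.zeros((100, 4))    # stand-in for first point cloud data 410
pedestrian_points = np.ones((20, 4))   # stand-in for added point cloud data 420
second_point_cloud = superimpose(vehicle_points, pedestrian_points)
```

Because the first point cloud is left untouched and merely extended, this matches the embodiment's point that no modification of the first point cloud data is required.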
In one embodiment, the data processing method may further include a process of obtaining a degree of influence of the operation data on the target detection, and the specific process includes: acquiring a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
The reference target detection result represents the standard (expected) detection result for the first point cloud data when it is input into the neural network model. It may be a target detection result for vehicles, pedestrians, vegetation, buildings, and the like, entered by a user through the computer device based on the first point cloud data. For example, a user may input a reference target detection result for a pedestrian through the computer device.
The difference index represents the difference between the second target detection result and the reference target detection result. The difference index may be a result recall rate and may take a specific value; the result recall rate indicates the proportion of the reference target detection result that is recovered in the second target detection result. For example, the computer device may obtain a difference index of 20%. The computer device may compare the reference target detection result with the second target detection result to obtain the difference index.
The computer device may determine the degree of influence of the operation data on target detection based on both the difference data between the first and second target detection results and the difference index between the second target detection result and the reference target detection result. When doing so, the computer device may give priority to the difference data between the first and second target detection results; that is, the difference data takes a higher priority than the difference index.
In this embodiment, the computer device acquires the reference target detection result corresponding to the first point cloud data, compares the second target detection result with it to obtain a difference index, and determines the degree of influence of the operation data on target detection from both the difference data and the difference index. Using the difference index in addition to the difference data improves the accuracy of the determined degree of influence.
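One way to realize the difference index and the prioritised combination described above is sketched below. Both the recall definition and the priority weighting are our assumptions, since the patent fixes neither:

```python
def result_recall(second_result, reference_result):
    """Difference index: the proportion of reference targets that also
    appear in the second target detection result."""
    if not reference_result:
        return 1.0  # nothing to recover, so nothing is missed
    hits = sum(1 for target in reference_result if target in second_result)
    return hits / len(reference_result)

def influence_degree(difference_data, difference_index, data_weight=0.7):
    """Weighted combination giving the first-vs-second difference data a
    higher priority (larger weight) than the second-vs-reference index."""
    return data_weight * difference_data + (1.0 - data_weight) * (1.0 - difference_index)

reference = ["vehicle", "pedestrian", "building", "tree", "sign"]
second = ["vehicle", "pedestrian", "building", "tree"]
recall = result_recall(second, reference)  # 4 of 5 reference targets recovered
```

Any monotone combination in which the difference data dominates would satisfy the priority rule in the text; the 0.7/0.3 split is only illustrative.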
It should be understood that although the steps in the flowchart of FIG. 2 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in FIG. 2 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages need not be performed sequentially and may be performed in turn or in alternation with at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 5, there is provided a data processing apparatus comprising: a first target detection result acquisition module 510, an operation data acquisition module 520, a second point cloud data acquisition module 530, a second target detection result acquisition module 540, and a difference data acquisition module 550, wherein:
the first target detection result acquisition module 510 is configured to input the acquired first point cloud data into the neural network model to obtain a first target detection result.
The operation data acquisition module 520 is configured to acquire operation data generated by an operation on the first point cloud data.
The second point cloud data acquisition module 530 is configured to process the first point cloud data according to the operation data to obtain second point cloud data.
The second target detection result acquisition module 540 is configured to input the second point cloud data into the neural network model to obtain a second target detection result.
The difference data acquisition module 550 is configured to acquire difference data between the second target detection result and the first target detection result, and to determine the degree of influence of the operation data on target detection according to the difference data.
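The five modules can be read as one evaluation loop. The toy sketch below shows how they fit together; the "model" and the operation are placeholders of our own, whereas the patent assumes a real neural network detector:

```python
import numpy as np

def evaluate_operation(model, first_point_cloud, operate):
    """Run the pipeline of modules 510-550: detect on the first point cloud,
    apply the operation, detect again, and measure the change."""
    first_result = model(first_point_cloud)              # module 510
    second_point_cloud = operate(first_point_cloud)      # modules 520/530
    second_result = model(second_point_cloud)            # module 540
    difference_data = abs(second_result - first_result)  # module 550
    return difference_data

# Toy stand-ins: the "model" counts points higher than 1 m as detections.
model = lambda pc: int((pc[:, 2] > 1.0).sum())
cloud = np.array([[0.0, 0.0, 0.5, 1.0], [0.0, 0.0, 1.5, 1.0]])
raise_all = lambda pc: pc + np.array([0.0, 0.0, 1.0, 0.0])
change = evaluate_operation(model, cloud, raise_all)  # detections go from 1 to 2
```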
In one embodiment, a data processing apparatus may further include: the laser radar point cloud data acquisition module and the first point cloud data acquisition module, wherein:
the laser radar point cloud data acquisition module is configured to capture a road scene through a laser radar while the vehicle is traveling, and to acquire laser radar point cloud data corresponding to the road scene.
The first point cloud data acquisition module is configured to preprocess the laser radar point cloud data to obtain the first point cloud data.
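The patent leaves the preprocessing step unspecified; a plausible sketch (range cropping plus a crude ground cut, both assumptions on our part) would be:

```python
import numpy as np

def preprocess(lidar_points, max_range=50.0, min_height=-0.2):
    """Illustrative preprocessing of laser radar point cloud data: keep only
    points within max_range metres of the sensor and above an assumed
    ground height, yielding the first point cloud data."""
    distance = np.linalg.norm(lidar_points[:, :2], axis=1)  # horizontal range
    keep = (distance <= max_range) & (lidar_points[:, 2] > min_height)
    return lidar_points[keep]

raw = np.array([[1.0, 0.0, 0.5, 0.9],    # in range and above ground: kept
                [80.0, 0.0, 0.5, 0.9],   # beyond range: dropped
                [1.0, 0.0, -1.0, 0.9]])  # below ground cut: dropped
first_point_cloud = preprocess(raw)      # one point survives
```

Real systems typically also de-noise and motion-compensate the sweep; the thresholds here are purely illustrative.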
In one embodiment, the first target detection result obtaining module 510 is further configured to obtain a first point cloud feature extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
In one embodiment, the second point cloud data obtaining module 530 is further configured to modify, when the operation instruction is a modification instruction, the first point cloud feature in the first point cloud data according to modification data included in the operation instruction, so as to obtain a second point cloud feature; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the second point cloud data obtaining module 530 is further configured to obtain, when the operation instruction is an add instruction, add point cloud data included in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the difference data obtaining module 550 is further configured to obtain a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
For specific limitations of the data processing apparatus, reference may be made to the limitations of the data processing method above, which are not repeated here. Each module in the data processing apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in hardware in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a terminal whose internal structure may be as shown in FIG. 6. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory; the non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a data processing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
It will be appreciated by those skilled in the art that the structure shown in FIG. 6 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory and a processor, the memory having stored therein a computer program, the processor when executing the computer program performing the steps of:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by the operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring a road scene in the running process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene; and preprocessing the laser radar point cloud data to obtain first point cloud data.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring first point cloud features extracted from first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
In one embodiment, the processor when executing the computer program further performs the steps of: when the operation instruction is a modification instruction, modifying the first point cloud characteristics in the first point cloud data according to modification data contained in the operation instruction to obtain second point cloud characteristics; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the processor when executing the computer program further performs the steps of: when the operation instruction is an addition instruction, acquiring addition point cloud data contained in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the processor when executing the computer program further performs the steps of: acquiring a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by the operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
and acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a road scene in the running process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene; and preprocessing the laser radar point cloud data to obtain first point cloud data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring first point cloud features extracted from first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain a first target detection result.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the operation instruction is a modification instruction, modifying the first point cloud characteristics in the first point cloud data according to modification data contained in the operation instruction to obtain second point cloud characteristics; and obtaining second point cloud data according to the second point cloud characteristics and the first point cloud data.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the operation instruction is an addition instruction, acquiring addition point cloud data contained in the operation instruction; and obtaining second point cloud data according to the added point cloud data and the first point cloud data.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring a reference target detection result corresponding to the first point cloud data; comparing the second target detection result with the reference target detection result to obtain a difference index; and determining the influence degree of the operation data on the target detection according to the difference data and the difference index.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may perform the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination that contains no contradiction should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the application and are described in detail, but they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within its scope. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (10)

1. A method of data processing, the method comprising:
inputting the acquired first point cloud data into a neural network model to obtain a first target detection result;
acquiring operation data generated by the operation on the first point cloud data;
processing the first point cloud data according to the operation data to obtain second point cloud data;
inputting the second point cloud data into the neural network model to obtain a second target detection result;
Obtaining a reference target detection result corresponding to the first point cloud data, and comparing the second target detection result with the reference target detection result to obtain a difference index;
acquiring difference data between the second target detection result and the first target detection result, and determining the influence degree of the operation data on target detection according to the difference data and the difference index;
and adjusting the neural network model according to the influence degree of the operation data on target detection.
2. The method according to claim 1, wherein the method further comprises:
acquiring a road scene in the running process of a vehicle through a laser radar, and acquiring laser radar point cloud data corresponding to the road scene;
and preprocessing the laser radar point cloud data to obtain the first point cloud data.
3. The method of claim 1, wherein inputting the acquired first point cloud data into the neural network model to obtain a first target detection result comprises:
acquiring first point cloud features extracted from the first point cloud data;
and inputting the first point cloud characteristics into the neural network model to obtain the first target detection result.
4. The method of claim 3, wherein the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data includes:
when the operation instruction is a modification instruction, modifying the first point cloud characteristic in the first point cloud data according to modification data contained in the operation instruction to obtain a second point cloud characteristic;
and obtaining the second point cloud data according to the second point cloud characteristics and the first point cloud data.
5. The method according to claim 1, wherein the processing the first point cloud data according to the operation data included in the operation instruction to obtain second point cloud data includes:
when the operation instruction is an addition instruction, acquiring addition point cloud data contained in the operation instruction;
and obtaining the second point cloud data according to the added point cloud data and the first point cloud data.
6. A data processing apparatus, the apparatus comprising:
the first target detection result acquisition module is used for inputting the acquired first point cloud data into the neural network model to obtain a first target detection result;
An operation data acquisition module, configured to acquire operation data generated by an operation on the first point cloud data;
the second point cloud data acquisition module is used for processing the first point cloud data according to the operation data to obtain second point cloud data;
the second target detection result acquisition module is used for inputting the second point cloud data into the neural network model to obtain a second target detection result;
the difference index acquisition module is used for acquiring a reference target detection result corresponding to the first point cloud data, and comparing the second target detection result with the reference target detection result to obtain a difference index;
the difference data acquisition module is used for acquiring difference data between the second target detection result and the first target detection result and determining the influence degree of the operation data on target detection according to the difference data and the difference index;
and the adjusting module is used for adjusting the neural network model according to the influence degree of the operation data on target detection.
7. The apparatus of claim 6, wherein the first target detection result acquisition module is configured to acquire a first point cloud feature extracted from the first point cloud data; and inputting the first point cloud characteristics into the neural network model to obtain the first target detection result.
8. The apparatus of claim 7, wherein the apparatus further comprises:
the laser radar point cloud data acquisition module is used for acquiring road scenes in the running process of the vehicle through a laser radar and acquiring laser radar point cloud data corresponding to the road scenes;
and the first point cloud data acquisition module is used for preprocessing the laser radar point cloud data to obtain the first point cloud data.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
CN201910150131.0A 2019-02-28 2019-02-28 Data processing method, device, computer equipment and storage medium Active CN111626288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910150131.0A CN111626288B (en) 2019-02-28 2019-02-28 Data processing method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111626288A CN111626288A (en) 2020-09-04
CN111626288B true CN111626288B (en) 2023-12-01

Family

ID=72270721

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910150131.0A Active CN111626288B (en) 2019-02-28 2019-02-28 Data processing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111626288B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017020466A1 (en) * 2015-08-04 2017-02-09 百度在线网络技术(北京)有限公司 Urban road recognition method, apparatus, storage medium and device based on laser point cloud
CN108090960A (en) * 2017-12-25 2018-05-29 北京航空航天大学 A kind of Object reconstruction method based on geometrical constraint
CN108198145A (en) * 2017-12-29 2018-06-22 百度在线网络技术(北京)有限公司 For the method and apparatus of point cloud data reparation
CN108399609A (en) * 2018-03-06 2018-08-14 北京因时机器人科技有限公司 A kind of method for repairing and mending of three dimensional point cloud, device and robot
CN109100741A (en) * 2018-06-11 2018-12-28 长安大学 A kind of object detection method based on 3D laser radar and image data
CN109360239A (en) * 2018-10-24 2019-02-19 长沙智能驾驶研究院有限公司 Obstacle detection method, device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10401866B2 (en) * 2017-05-03 2019-09-03 GM Global Technology Operations LLC Methods and systems for lidar point cloud anomalies



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant