CN114398158A - Driving task processing method and device, electronic equipment and storage medium - Google Patents

Driving task processing method and device, electronic equipment and storage medium

Info

Publication number
CN114398158A
Authority
CN
China
Prior art keywords
task
image
perception
processing
judgment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111450552.9A
Other languages
Chinese (zh)
Inventor
安建平
何卓宇
王向韬
牟晓凡
李越
孙杰
吴向阳
杨潇睿
李紫微
陆强
郝雨萌
程新景
杨睿刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Network Technology Shanghai Co Ltd
Original Assignee
International Network Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Network Technology Shanghai Co Ltd filed Critical International Network Technology Shanghai Co Ltd
Priority to CN202111450552.9A priority Critical patent/CN114398158A/en
Publication of CN114398158A publication Critical patent/CN114398158A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention relates to the technical field of image recognition, and provides a driving task processing method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring image characteristics of an image to be detected; and inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes, output by the processing model, of the task target in the image to be detected, wherein the task attributes represent the perception results of the task target under different perception tasks. The processing model is a model for judging task attributes in an image, obtained by machine learning training with the image characteristics in image samples as input and the perception results of the task targets in the image samples under each perception task as output. According to the driving task processing method and device, the electronic device, and the storage medium, the image to be detected is processed along multiple branches so that the perception tasks share one processing model, which reduces the building of separate models for different perception tasks, lowers the overall memory occupation of the system, and reduces the workload.

Description

Driving task processing method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of intelligent vehicle driving, and in particular to a driving task processing method and device, an electronic device, and a storage medium.
Background
Perception tasks in an automatic driving scene are complex and varied, and include 2D/3D target detection, general obstacle detection, lane line detection, traffic sign detection, road surface sign detection, key point detection, drivable area segmentation, monocular depth estimation, optical flow estimation, Re-ID, and the like. In existing approaches, a separate model is typically built and deployed for each perception task. As a result, the number of models is large, the memory occupation is high, the computation consumed is heavy, and the deployment workload is substantial.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a driving task processing method and device, an electronic device, and a storage medium.
In a first aspect, the present invention provides a driving task processing method, including:
acquiring image characteristics of an image to be detected;
inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
In one embodiment, inputting the image features of the image to be detected into a processing model, and obtaining the task attributes of the task target in the image to be detected output by the processing model, includes:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
inputting the judgment parameters of the task target corresponding to each perception task type into the processing layer, determining the judgment rules corresponding to each perception task type by the processing layer according to a plurality of prestored perception task types, and judging according to the plurality of determined judgment rules and the judgment parameters corresponding to each judgment rule to obtain the task attribute of the task target.
In one embodiment, the method further comprises: acquiring the relevance among perception task types;
correspondingly, inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model, wherein the task attributes comprise:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
the method comprises the steps of inputting judgment parameters of task targets corresponding to all perception task types into a processing layer, determining a first perception task type and a second perception task type by the processing layer according to relevance among the perception task types, determining corresponding judgment rules according to the first perception task type and the second perception task type, dividing the judgment parameters of the task targets corresponding to all perception task types to obtain new judgment parameters, and judging according to the determined judgment rules and the new judgment parameters corresponding to the judgment rules to obtain task attributes of the task targets.
In one embodiment, the processing layer comprises a division layer and a plurality of sub-processing layers;
correspondingly, the dividing layer determines a first type of perception task type and a second type of perception task type according to the relevance between the perception task types, determines respective corresponding judgment rules according to the first type of perception task type and the second type of perception task type, and divides the judgment parameters of the task target corresponding to each perception task type to obtain new judgment parameters;
and judging by the sub-processing layer according to the distributed judgment rule and the corresponding new judgment parameter to obtain the task attribute of the task target.
In one embodiment, the perception tasks include 2D/3D object detection, obstacle detection, lane line detection, traffic sign detection, pavement marker detection, keypoint detection, drivable region segmentation, monocular depth estimation, and optical flow estimation.
In a second aspect, the present invention provides a driving task processing device including:
the acquisition module is used for acquiring the image characteristics of the image to be detected;
the processing module is used for inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
In a third aspect, the present invention provides an electronic device comprising a processor and a memory storing a computer program, wherein the processor implements the steps of the driving task processing method according to the first aspect when executing the program.
In a fourth aspect, the present invention provides a processor-readable storage medium storing a computer program for causing a processor to execute the steps of the driving task processing method of the first aspect.
According to the driving task processing method and device, the electronic device, and the storage medium provided by the invention, the image to be detected is processed along multiple branches so that the perception tasks share one processing model. This reduces the building of separate models for different perception tasks, lowers the overall memory occupation of the system, reduces the workload, and achieves strong extensibility and ease of maintenance.
Drawings
In order to more clearly illustrate the technical solutions of the present invention or the prior art, the drawings needed for the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a schematic flow chart of a driving task processing method provided by the present invention;
FIG. 2 is a schematic structural diagram of a driving task processing device provided by the present invention;
fig. 3 is a schematic structural diagram of an electronic device provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The driving task processing method, apparatus, electronic device, and storage medium of the present invention are described below with reference to fig. 1 to 3.
Fig. 1 shows a flow chart of a driving task processing method of the invention, which, with reference to fig. 1, comprises:
11. acquiring image characteristics of an image to be detected;
12. inputting the image characteristics of the image to be detected into the processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents the perception result of the task target under different perception tasks; the processing model is a model which is obtained by machine learning training and used for judging task attributes in an image, wherein the processing model takes image features in an image sample as input, and sensing results of task targets in the image sample under various sensing tasks as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
In steps 11 and 12, during driving, an intelligent vehicle with an automatic driving function needs to perceive the driving conditions ahead of and around it to obtain perception results, so that the vehicle can then be driven and controlled according to these results and safety is enhanced. Therefore, during automatic driving, multiple perception tasks need to be performed to obtain a more comprehensive perception result. The perception tasks may include, but are not limited to, 2D/3D object detection, obstacle detection, lane line detection, traffic sign detection, road surface sign detection, key point detection, drivable area segmentation, monocular depth estimation, and optical flow estimation.
In the invention, for the above perception tasks, images in front of the intelligent vehicle can be acquired and then analyzed to obtain the perception result under each perception task.
First, the acquired image is analyzed to obtain its image characteristics; for example, the characteristics may be extracted from the pixels of the image.
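As an illustration of this feature-extraction step, the sketch below turns the acquired image into a feature map with a pretrained CNN backbone. The invention does not prescribe a particular extractor; the choice of ResNet-50, the input resolution, and the normalization constants are assumptions made purely for illustration.

```python
# Hypothetical feature extraction for the image to be detected (not the patent's
# concrete extractor): a pretrained ResNet-50 backbone with its classification
# head removed, producing a feature map for the downstream processing model.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
backbone = torch.nn.Sequential(*list(models.resnet50(weights=weights).children())[:-2])
backbone.eval()

preprocess = T.Compose([
    T.Resize((384, 640)),  # assumed input resolution of the front-camera image
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_image_features(image_path: str) -> torch.Tensor:
    """Return a C x H x W feature map (the 'image characteristics') for one image."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        features = backbone(preprocess(image).unsqueeze(0))
    return features.squeeze(0)
```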
In the invention, the image characteristics of the image to be detected are input into a processing model, and the task attribute of the task target in the image to be detected output by the processing model is obtained; and the task attribute represents a perception result of the task target under different perception tasks.
The perception results of all perception tasks are obtained by analyzing the same image to be detected, so the task targets under different perception tasks may be different.
For example, the perception task is obstacle detection, and the task target is an obstacle such as another vehicle in front of the vehicle, a pedestrian, a bicycle, a collapsed tree, or the like.
For example, the perception task is lane line detection, and the task target is a lane line, such as a single line or a double line on a road.
For example, the perception task is traffic sign detection, the task target of which is a traffic sign, such as a traffic sign standing in front of a vehicle.
Therefore, in the invention, the image characteristics of the image to be detected are input into the processing model, the processing model carries out multi-path analysis on the image characteristics of the image to be detected according to the types of different perception tasks, each path corresponds to one perception task, and the detection of each perception task can be carried out in parallel so as to obtain the perception result corresponding to each perception task.
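This multi-path analysis can be pictured as a single shared model with one branch per perception task. The following is a minimal sketch of such a structure, in which a detection layer turns the shared image characteristics into judgment parameters and a set of per-task heads in the processing layer produces the perception results in parallel; the module types, channel counts, and task names are illustrative assumptions, not the patent's concrete architecture.

```python
# Sketch of a shared processing model: one detection layer feeding several
# per-task branches of the processing layer. All shapes and head designs are
# assumptions for illustration only.
import torch
import torch.nn as nn

class ProcessingModel(nn.Module):
    def __init__(self, feature_channels: int, task_heads: dict[str, nn.Module]):
        super().__init__()
        # Detection layer: derives judgment parameters from the shared image features.
        self.detection_layer = nn.Conv2d(feature_channels, 256, kernel_size=3, padding=1)
        # Processing layer: one sub-head per perception task type.
        self.processing_layer = nn.ModuleDict(task_heads)

    def forward(self, image_features: torch.Tensor) -> dict[str, torch.Tensor]:
        judgment_params = torch.relu(self.detection_layer(image_features))
        # Each perception task branch is evaluated on the same judgment parameters.
        return {task: head(judgment_params) for task, head in self.processing_layer.items()}

# Two example branches (channel counts are placeholders).
heads = {
    "obstacle_detection": nn.Conv2d(256, 6, kernel_size=1),
    "lane_line_detection": nn.Conv2d(256, 2, kernel_size=1),
}
model = ProcessingModel(feature_channels=2048, task_heads=heads)
outputs = model(torch.randn(1, 2048, 12, 20))  # dict: task name -> perception result tensor
```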
In the invention, the processing model is obtained by taking the image characteristics in the image sample as input, taking the perception result of the task target in the image sample under each perception task as output and training through machine learning and is used for judging the task attribute in the image.
It should be noted that driving conditions in front of different vehicles can be captured with a camera device to obtain a large number of image samples. The task targets under each perception task are labeled in the image samples, the image characteristics of each task target are acquired, and an initial neural network model is trained with the image characteristics as input and the perception results corresponding to the task targets as output, yielding the processing model.
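A minimal training-step sketch for such a model is shown below, assuming each labelled image sample carries one ground-truth perception result per perception task; the loss functions and the equal weighting of tasks are assumptions for illustration and are not specified by the patent.

```python
# Hypothetical multi-task training step: image features in, one supervised loss
# term per perception task, summed into a single update of the shared model.
import torch

def train_step(model, optimizer, image_features, labels, loss_fns):
    """labels / loss_fns: dicts keyed by perception task type."""
    optimizer.zero_grad()
    outputs = model(image_features)                        # task type -> prediction
    losses = [loss_fns[task](outputs[task], labels[task])  # one loss per perception task
              for task in outputs]
    total_loss = torch.stack(losses).sum()                 # equal task weighting (assumed)
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```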
In the invention, the processing model can be updated and upgraded.
With this method, the task targets of all perception tasks are identified simultaneously by the model trained on the image characteristics, which facilitates rapid labeling of the task attributes in the image.
The processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining the judgment parameters of the task target in the image to be detected according to the image characteristics of the image to be detected, and the processing layer is used for determining the task attributes of the task target according to the allocated judgment parameters.
In this regard, it should be noted that, in the present invention, the same image to be detected needs to be analyzed to acquire the perception result under each perception task; therefore, the task targets under different perception tasks may differ, and so may the judgment parameters of those task targets. The image characteristics of the image to be detected are therefore input into the detection layer, which analyzes them under the constraints of the different perception tasks to obtain the corresponding judgment parameters.
For example, if the perception task is whether the vehicle changes lanes, the judgment parameters generated for this task may be the positions of the vehicle's landing point and the lane line.
After the judgment parameters of the task targets under each perception task are obtained, the judgment parameters are input into the processing layer, and the processing layer judges the judgment parameters corresponding to each perception task respectively to determine the task attributes of the task targets.
For example, the positional relationship and/or distance between the vehicle's landing point and the lane line is evaluated to determine one or more of: whether the vehicle has a lane-change tendency, whether it is changing lanes, and whether it has pressed the line. Here, these three outcomes are the task attributes of the vehicle.
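As a concrete illustration of this judgment, the small rule below maps the two judgment parameters (landing-point and lane-line positions) to the three task attributes; the lateral-distance thresholds and units are purely illustrative assumptions, not values from the patent.

```python
# Hypothetical judgment rule for the lane-change perception task. The thresholds
# and the assumption that both positions are lateral coordinates in metres are
# illustrative only.
def judge_lane_change(landing_point_x: float, lane_line_x: float,
                      tendency_threshold: float = 0.5,
                      press_threshold: float = 0.05) -> dict:
    """Return the task attributes derived from the judgment parameters."""
    distance = abs(landing_point_x - lane_line_x)
    pressing_line = distance <= press_threshold
    return {
        "lane_change_tendency": distance <= tendency_threshold,
        "changing_lane": pressing_line,  # crossing the line is read as a change in progress
        "pressing_line": pressing_line,
    }
```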
According to the driving task processing method provided by the invention, the image to be detected is processed along multiple branches so that the perception tasks share one processing model, which reduces the building of separate models for different perception tasks, lowers the overall memory occupation of the system, reduces the workload, and achieves strong extensibility and ease of maintenance.
In the further explanation of the above method, the process of inputting the image characteristics of the image to be detected into the processing model and obtaining the task attributes of the task target in the image to be detected output by the processing model is mainly explained as follows:
inputting the image characteristics of the image to be detected into a detection layer, determining detection rules corresponding to the sensing task types by the detection layer according to a plurality of pre-stored sensing task types, and determining judgment parameters of task targets corresponding to the sensing task types according to the plurality of determined detection rules and the image characteristics of the image to be detected;
and inputting the judgment parameters of the task target corresponding to each perception task type into the processing layer, determining the judgment rules corresponding to each perception task type by the processing layer according to the plurality of prestored perception task types, and judging according to the plurality of determined judgment rules and the judgment parameters corresponding to each judgment rule to obtain the task attribute of the task target.
In this regard, it should be noted that, in the present invention, the same image to be detected needs to be analyzed to acquire the perception results under each perception task. Therefore, the task targets under different perception tasks may differ, the judgment parameters of those task targets may differ, and the rules used to analyze and detect the image characteristics when acquiring the judgment parameters may also differ. The image characteristics of the image to be detected are therefore input into the detection layer and analyzed under the constraints of the different perception task types. Different perception task types correspond to different detection rules, and each detection rule defines how the image characteristics are analyzed and computed.
For example, if the perception task is whether the vehicle changes lanes, the corresponding detection rules are a lane-line recognition rule and a vehicle landing-point determination rule, for instance based on the pixel features of white areas on the road surface and the lowest point where the vehicle meets the ground.
In the invention, the detection rules corresponding to different perception task types are pre-configured in the model and obtained based on the corresponding relation between the perception task types and the detection rules.
And determining the judgment parameters of the task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected.
And after the judgment parameters of the task target corresponding to each perception task type are obtained, the judgment parameters of the task target corresponding to each perception task type are input into the processing layer. Different perception task types can also correspond to different judgment rules.
For example, the sensing task is whether the vehicle changes lanes, and the judging parameters are the positions of the landing points and lane lines of the vehicle. In this case, the determination rule may be a determination condition regarding a distance, an angle, or the like between the landing point of the vehicle and the lane line.
In the invention, the processing layer determines a judgment rule corresponding to each sensing task type according to a plurality of pre-stored sensing task types, and judges according to the plurality of determined judgment rules and judgment parameters corresponding to the plurality of judgment rules to obtain the task attribute of the task target.
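To make the pre-stored correspondence between perception task types and their detection and judgment rules concrete, the sketch below keeps two registries keyed by task type; the decorator-based registration, the task name, and the placeholder parameter values are assumptions for illustration only.

```python
# Hypothetical rule registries: detection rules turn image features into judgment
# parameters, judgment rules turn judgment parameters into task attributes.
DETECTION_RULES = {}   # perception task type -> detection rule
JUDGMENT_RULES = {}    # perception task type -> judgment rule

def detection_rule(task_type):
    def register(fn):
        DETECTION_RULES[task_type] = fn
        return fn
    return register

def judgment_rule(task_type):
    def register(fn):
        JUDGMENT_RULES[task_type] = fn
        return fn
    return register

@detection_rule("lane_change")
def detect_lane_change(image_features):
    # Would locate the lane line and the vehicle's landing point; placeholder values here.
    return {"landing_point_x": 1.2, "lane_line_x": 1.5}

@judgment_rule("lane_change")
def judge_lane_change_rule(params):
    distance = abs(params["landing_point_x"] - params["lane_line_x"])
    return {"lane_change_tendency": distance <= 0.5, "pressing_line": distance <= 0.05}

def process_image(image_features):
    # Detection layer pass: judgment parameters per perception task type.
    params = {t: rule(image_features) for t, rule in DETECTION_RULES.items()}
    # Processing layer pass: task attributes per perception task type.
    return {t: JUDGMENT_RULES[t](params[t]) for t in params}
```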
According to the driving task processing method provided by the invention, the detection layer and the processing layer are adopted to carry out work division processing cooperation on the image characteristics of the image to be detected, so that the processing speed can be increased.
In the further description of the above method, there may be associations between some of the perception tasks. That is, in judging the task attribute of one perception task, the judgment parameters of another perception task may be needed. It is therefore necessary to obtain the associations between the perception task types.
Inputting the image characteristics of the image to be detected into a detection layer, determining detection rules corresponding to the sensing task types by the detection layer according to a plurality of pre-stored sensing task types, and determining judgment parameters of task targets corresponding to the sensing task types according to the plurality of determined detection rules and the image characteristics of the image to be detected.
The judgment parameters of the task targets corresponding to the perception task types are input into the processing layer, and the processing layer determines a first class of perception task types and a second class of perception task types according to the associations between the perception task types, where the first class consists of perception task types with associations and the second class of perception task types without. For example, if the task attribute of perception task A requires a judgment parameter of perception task B during judgment, then perception task A belongs to the first class and perception task B to the second class; other perception task types without associations also belong to the second class.
The judgment rules corresponding to the first class and the second class of perception task types are then determined. That is, because a first-class perception task type needs the judgment parameters of another perception task for its judgment, it requires a rule that can judge with the judgment parameters of both tasks combined; a second-class perception task type does not need the judgment parameters of other perception tasks, so its rule judges only its own judgment parameters.
Because of the existence of the associated perception tasks, the judgment parameters of the task targets corresponding to the perception task types are divided to obtain new judgment parameters. Here, the new judgment parameter may be a parameter set combining two or more judgment parameters.
And judging according to the plurality of determined judgment rules and the new judgment parameters corresponding to the plurality of judgment rules respectively to obtain the task attributes of the task target.
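A minimal sketch of this regrouping is given below: the judgment parameters of an associated pair of tasks are merged into one new parameter set so that the first-class task's rule can judge with both; the dictionary layout and key prefixing are illustrative assumptions.

```python
# Hypothetical regrouping of judgment parameters for associated perception tasks.
def regroup_parameters(task_params: dict, associations: dict) -> dict:
    """task_params: task type -> its judgment parameters;
    associations: first-class task type -> the task type whose parameters it needs."""
    new_params = dict(task_params)              # unassociated tasks keep their own parameters
    for task, depends_on in associations.items():
        merged = dict(task_params[task])
        merged.update({f"{depends_on}.{key}": value     # prefix keys to avoid collisions
                       for key, value in task_params[depends_on].items()})
        new_params[task] = merged               # the first-class task gets the combined set
    return new_params

# Example: perception task A needs perception task B's judgment parameters.
params = {"A": {"landing_point_x": 1.2}, "B": {"lane_line_x": 1.5}}
print(regroup_parameters(params, {"A": "B"}))
# {'A': {'landing_point_x': 1.2, 'B.lane_line_x': 1.5}, 'B': {'lane_line_x': 1.5}}
```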
In a further illustration, the processing layer includes a division layer and a plurality of sub-processing layers;
correspondingly, the dividing layer determines a first type of perception task type and a second type of perception task type according to the relevance between the perception task types, determines respective corresponding judgment rules according to the first type of perception task type and the second type of perception task type, and divides the judgment parameters of the task target corresponding to each perception task type to obtain new judgment parameters;
and judging by the sub-processing layer according to the distributed judgment rule and the corresponding new judgment parameter to obtain the task attribute of the task target.
In this regard, it should be noted that the judgment rules, the perception task types, and the repartitioning of the judgment parameters in the present application all rely on accurate division and configuration by the division layer.
In the present application, the image to be detected needs to be processed along multiple branches. Therefore, a plurality of sub-processing layers can be provided so that the judgment parameters corresponding to the perception tasks are processed separately and in parallel, as sketched below. In this way, the perception tasks share one processing model, the building of separate models for different perception tasks is reduced, the overall memory occupation of the system is lowered, the workload is reduced, and strong extensibility and ease of maintenance are achieved.
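The sketch below illustrates the parallel sub-processing layers, assuming the division layer has already produced one judgment rule and one new parameter set per perception task type; running the branches on a thread pool is an implementation assumption, since the patent only requires that they be processed separately and in parallel.

```python
# Hypothetical parallel execution of the sub-processing layers, one branch per
# perception task type.
from concurrent.futures import ThreadPoolExecutor

def run_sub_processing_layers(judgment_rules: dict, new_params: dict) -> dict:
    """judgment_rules / new_params: perception task type -> rule / parameter set."""
    with ThreadPoolExecutor() as pool:
        futures = {task: pool.submit(rule, new_params[task])
                   for task, rule in judgment_rules.items()}
        # Collect each branch's task attributes once its judgment has finished.
        return {task: future.result() for task, future in futures.items()}
```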
The following describes the driving task processing device provided by the present invention, and the driving task processing device described below and the driving task processing method described above may be referred to in correspondence with each other.
Fig. 2 shows a schematic structural diagram of a driving task processing device provided by the present invention, referring to fig. 2, the device includes an acquisition module 21 and a processing module 22, wherein:
the acquisition module 21 is used for acquiring image characteristics of an image to be detected;
the processing module 22 is configured to input the image characteristics of the image to be detected into a processing model, and obtain a task attribute of a task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
In a further description of the above apparatus, the processing module is specifically configured to:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
inputting the judgment parameters of the task target corresponding to each perception task type into the processing layer, determining the judgment rules corresponding to each perception task type by the processing layer according to a plurality of prestored perception task types, and judging according to the plurality of determined judgment rules and the judgment parameters corresponding to each judgment rule to obtain the task attribute of the task target.
In a further description of the above apparatus, the acquisition module is further configured to: acquiring the relevance among perception task types;
correspondingly, the processing module is specifically configured to:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
the method comprises the steps of inputting judgment parameters of task targets corresponding to all perception task types into a processing layer, determining a first perception task type and a second perception task type by the processing layer according to relevance among the perception task types, determining corresponding judgment rules according to the first perception task type and the second perception task type, dividing the judgment parameters of the task targets corresponding to all perception task types to obtain new judgment parameters, and judging according to the determined judgment rules and the new judgment parameters corresponding to the judgment rules to obtain task attributes of the task targets.
In a further description of the above apparatus, the processing layer comprises a division layer and a plurality of sub-processing layers;
correspondingly, the dividing layer determines a first type of perception task type and a second type of perception task type according to the relevance between the perception task types, determines respective corresponding judgment rules according to the first type of perception task type and the second type of perception task type, and divides the judgment parameters of the task target corresponding to each perception task type to obtain new judgment parameters;
and judging by the sub-processing layer according to the distributed judgment rule and the corresponding new judgment parameter to obtain the task attribute of the task target.
Since the principle of the apparatus according to the embodiment of the present invention is the same as that of the method according to the above embodiment, further details are not described herein.
It should be noted that, in the embodiment of the present invention, the relevant functional module may be implemented by a hardware processor (hardware processor).
The driving task processing device provided by the invention processes the image to be detected along multiple branches so that the perception tasks share one processing model, which reduces the building of separate models for different perception tasks, lowers the overall memory occupation of the system, reduces the workload, and achieves strong extensibility and ease of maintenance.
Fig. 3 illustrates a physical structure diagram of an electronic device, which may include, as shown in fig. 3: a processor (processor) 31, a communication interface (Communication Interface) 32, a memory (memory) 33 and a communication bus 34, wherein the processor 31, the communication interface 32 and the memory 33 communicate with each other via the communication bus 34. The processor 31 may call the computer program in the memory 33 to perform the steps of the driving task processing method, for example including: acquiring image characteristics of an image to be detected; inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
In addition, the logic instructions in the memory 33 may be implemented in the form of software functional units and stored in a computer readable storage medium when the software functional units are sold or used as independent products. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-only memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform a driving task processing method provided by the above methods, the method comprising: acquiring image characteristics of an image to be detected; inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
On the other hand, an embodiment of the present application further provides a processor-readable storage medium, where the processor-readable storage medium stores a computer program, where the computer program is configured to enable the processor to execute the driving task processing method provided in each of the above embodiments, for example, the method includes: acquiring image characteristics of an image to be detected; inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
The processor-readable storage medium can be any available medium or data storage device that can be accessed by a processor, including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A driving task processing method, characterized by comprising:
acquiring image characteristics of an image to be detected;
inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
2. The driving task processing method according to claim 1, wherein inputting the image characteristics of the image to be detected into a processing model, and obtaining task attributes of a task target in the image to be detected output by the processing model, includes:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
inputting the judgment parameters of the task target corresponding to each perception task type into the processing layer, determining the judgment rules corresponding to each perception task type by the processing layer according to a plurality of prestored perception task types, and judging according to the plurality of determined judgment rules and the judgment parameters corresponding to each judgment rule to obtain the task attribute of the task target.
3. The driving task processing method according to claim 1, characterized by further comprising: acquiring the relevance among perception task types;
correspondingly, inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model, wherein the task attributes comprise:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
the method comprises the steps of inputting judgment parameters of task targets corresponding to all perception task types into a processing layer, determining a first perception task type and a second perception task type by the processing layer according to relevance among the perception task types, determining corresponding judgment rules according to the first perception task type and the second perception task type, dividing the judgment parameters of the task targets corresponding to all perception task types to obtain new judgment parameters, and judging according to the determined judgment rules and the new judgment parameters corresponding to the judgment rules to obtain task attributes of the task targets.
4. The driving task processing method according to claim 3, wherein the processing layer includes one division layer and a plurality of sub-processing layers;
correspondingly, the dividing layer determines a first type of perception task type and a second type of perception task type according to the relevance between the perception task types, determines respective corresponding judgment rules according to the first type of perception task type and the second type of perception task type, and divides the judgment parameters of the task target corresponding to each perception task type to obtain new judgment parameters;
and judging by the sub-processing layer according to the distributed judgment rule and the corresponding new judgment parameter to obtain the task attribute of the task target.
5. The driving task processing method according to any one of claims 1 to 4, wherein the perception task includes 2D/3D object detection, obstacle detection, lane line detection, traffic sign detection, road surface sign detection, key point detection, drivable region segmentation, monocular depth estimation, and optical flow estimation.
6. A driving task processing device characterized by comprising:
the acquisition module is used for acquiring the image characteristics of the image to be detected;
the processing module is used for inputting the image characteristics of the image to be detected into a processing model to obtain the task attributes of the task target in the image to be detected output by the processing model; the task attribute represents a perception result of the task target under different perception tasks;
the processing model is a model which is obtained by machine learning training and used for judging task attributes in the image, wherein the processing model takes the image characteristics in the image sample as input, and the perception results of the task target in the image sample under each perception task as output;
the processing model comprises a detection layer and a processing layer, wherein the detection layer is used for determining a judgment parameter of a task target in the image to be detected according to the image characteristics of the image to be detected; and the processing layer is used for determining the task attribute of the task target according to the allocated judgment parameter.
7. The driving task processing device according to claim 6, wherein the processing module is specifically configured to:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
inputting the judgment parameters of the task target corresponding to each perception task type into the processing layer, determining the judgment rules corresponding to each perception task type by the processing layer according to a plurality of prestored perception task types, and judging according to the plurality of determined judgment rules and the judgment parameters corresponding to each judgment rule to obtain the task attribute of the task target.
8. The driving task processing device of claim 6, wherein the acquisition module is further configured to: acquiring the relevance among perception task types;
correspondingly, the processing module is specifically configured to:
inputting the image characteristics of an image to be detected into a detection layer, determining a detection rule corresponding to each perception task type by the detection layer according to a plurality of prestored perception task types, and determining a judgment parameter of a task target corresponding to each perception task type according to the plurality of determined detection rules and the image characteristics of the image to be detected;
the method comprises the steps of inputting judgment parameters of task targets corresponding to all perception task types into a processing layer, determining a first perception task type and a second perception task type by the processing layer according to relevance among the perception task types, determining corresponding judgment rules according to the first perception task type and the second perception task type, dividing the judgment parameters of the task targets corresponding to all perception task types to obtain new judgment parameters, and judging according to the determined judgment rules and the new judgment parameters corresponding to the judgment rules to obtain task attributes of the task targets.
9. An electronic device comprising a processor and a memory storing a computer program, wherein the steps of the driving task processing method according to any one of claims 1 to 5 are implemented when the processor executes the computer program.
10. A processor-readable storage medium, characterized in that the processor-readable storage medium stores a computer program for causing a processor to execute the steps of the driving task processing method according to any one of claims 1 to 5.
CN202111450552.9A 2021-11-30 2021-11-30 Driving task processing method and device, electronic equipment and storage medium Pending CN114398158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111450552.9A CN114398158A (en) 2021-11-30 2021-11-30 Driving task processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111450552.9A CN114398158A (en) 2021-11-30 2021-11-30 Driving task processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114398158A true CN114398158A (en) 2022-04-26

Family

ID=81226040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111450552.9A Pending CN114398158A (en) 2021-11-30 2021-11-30 Driving task processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114398158A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228834A (en) * 2022-12-20 2023-06-06 阿波罗智联(北京)科技有限公司 Image depth acquisition method and device, electronic equipment and storage medium
CN116228834B (en) * 2022-12-20 2023-11-03 阿波罗智联(北京)科技有限公司 Image depth acquisition method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
JP6565967B2 (en) Road obstacle detection device, method, and program
US10133941B2 (en) Method, apparatus and device for detecting lane boundary
CN109284674B (en) Method and device for determining lane line
JP6519262B2 (en) Three-dimensional object detection device, three-dimensional object detection method, three-dimensional object detection program, and mobile device control system
CN110674705B (en) Small-sized obstacle detection method and device based on multi-line laser radar
EP3620945A1 (en) Obstacle distribution simulation method, device and terminal based on multiple models
CN115066708A (en) Point cloud data motion segmentation method and device, computer equipment and storage medium
CN113383283B (en) Perceptual information processing method, apparatus, computer device, and storage medium
CN111046973A (en) Multitask detection method and device and storage medium
CN112825192A (en) Object identification system and method based on machine learning
CN114398158A (en) Driving task processing method and device, electronic equipment and storage medium
CN113888568A (en) Multilane optimization method, multilane optimization device, electronic apparatus, storage medium, and program
CN113255444A (en) Training method of image recognition model, image recognition method and device
CN114120266A (en) Vehicle lane change detection method and device, electronic equipment and storage medium
CN113887306A (en) Driving scene perception detection method and device, electronic equipment and storage medium
CN113297939B (en) Obstacle detection method, obstacle detection system, terminal device and storage medium
WO2021199584A1 (en) Detecting debris in a vehicle path
EP3832525A1 (en) Vehicles, systems, and methods for determining an entry of an occupancy map of a vicinity of a vehicle
CN117197796A (en) Vehicle shielding recognition method and related device
CN116863124A (en) Vehicle attitude determination method, controller and storage medium
CN116343085A (en) Method, system, storage medium and terminal for detecting obstacle on highway
WO2016207749A1 (en) A device and method of detecting potholes
CN115311522A (en) Target detection method and device for automatic driving, electronic equipment and medium
JP6918425B2 (en) Devices, methods, and programs for detecting boundaries contained in captured images
CN113435350A (en) Traffic marking detection method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination