CN114523985B - Unmanned vehicle motion decision method and device based on sensing result of sensor - Google Patents

Unmanned vehicle motion decision method and device based on sensing result of sensor

Info

Publication number
CN114523985B
CN114523985B (application CN202210432891.2A)
Authority
CN
China
Prior art keywords
target
quality evaluation
model
fusion result
result
Prior art date
Legal status
Active
Application number
CN202210432891.2A
Other languages
Chinese (zh)
Other versions
CN114523985A (en)
Inventor
张馨元
Current Assignee
Neolix Technologies Co Ltd
Original Assignee
Neolix Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Neolix Technologies Co Ltd
Priority to CN202210432891.2A
Publication of CN114523985A
Application granted
Publication of CN114523985B

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/0097 Predicting future conditions
    • B60W50/0098 Details of control systems ensuring comfort, safety or stability not otherwise provided for
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W60/0027 Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W2050/0001 Details of the control system
    • B60W2050/0043 Signal treatments, identification of variables or parameters, parameter estimation or state estimation
    • B60W2050/0062 Adapting control system settings
    • B60W2050/0075 Automatic parameter input, automatic initialising or calibrating means

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The disclosure relates to the technical field of unmanned driving, and provides an unmanned vehicle motion decision method and device based on the sensing results of sensors. The method comprises the following steps: acquiring multiple sensing results of the same target at the same moment through multiple sensors; fusing the multiple sensing results to obtain a target fusion result; inputting the sensing result obtained by the camera and the target fusion result into a quality supervision model, and outputting a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results; when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, making the motion decision with the target fusion result; and when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, making the motion decision with reference to the fusion result of the previous frame.

Description

Unmanned vehicle motion decision method and device based on sensing result of sensor
Technical Field
The disclosure relates to the technical field of unmanned driving, in particular to a method and a device for unmanned vehicle motion decision based on a sensing result of a sensor.
Background
At present, when an unmanned vehicle drives autonomously, it must be controlled according to the information sensed by the sensors on the vehicle. For example, if a sensor on the unmanned vehicle senses a pedestrian ahead, the unmanned vehicle should slow down or stop. To make more accurate motion decisions, the information sensed by the sensors should be verified so that potential safety hazards caused by missing or erroneous information are avoided; however, the related art provides no method for verifying the information sensed by the sensors.
In the process of implementing the disclosed concept, the inventors found that at least the following technical problem exists in the related art: the information sensed by the sensors on the unmanned vehicle cannot be verified.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a method and an apparatus for unmanned vehicle motion decision-making based on a sensing result of a sensor, an electronic device, and a computer-readable storage medium, so as to solve a problem in the prior art that information sensed by the sensor on the unmanned vehicle cannot be verified.
In a first aspect of the embodiments of the present disclosure, there is provided an unmanned vehicle motion decision method based on the sensing results of sensors, including: acquiring multiple sensing results of the same target at the same moment through multiple sensors, wherein the multiple sensors are arranged on the unmanned vehicle and include: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar; fusing the multiple sensing results to obtain a target fusion result, wherein the multiple sensing results include the sensing result obtained by the camera; inputting the sensing result obtained by the camera and the target fusion result into a quality supervision model, and outputting a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results; when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, making the motion decision with the target fusion result; and when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, making the motion decision with reference to the fusion result of the previous frame.
In a second aspect of the embodiments of the present disclosure, there is provided an unmanned vehicle motion decision device based on the sensing results of sensors, including: an acquisition module configured to acquire multiple sensing results of the same target at the same moment through multiple sensors, wherein the multiple sensors are arranged on the unmanned vehicle and include: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar; a fusion module configured to fuse the multiple sensing results to obtain a target fusion result, wherein the multiple sensing results include the sensing result obtained by the camera; a model module configured to input the sensing result obtained by the camera and the target fusion result into a quality supervision model and output a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, the quality supervision model being a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results; a first decision module configured to make the motion decision with the target fusion result when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold; and a second decision module configured to make the motion decision with reference to the fusion result of the previous frame when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold.
In a third aspect of the embodiments of the present disclosure, an electronic device is provided, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor, implements the steps of the above-mentioned method.
Compared with the prior art, the embodiments of the present disclosure have the following beneficial effects: acquiring multiple sensing results of the same target at the same moment through multiple sensors, wherein the multiple sensors are arranged on the unmanned vehicle and include: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar; fusing the multiple sensing results to obtain a target fusion result, wherein the multiple sensing results include the sensing result obtained by the camera; inputting the sensing result obtained by the camera and the target fusion result into a quality supervision model, and outputting a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results; when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, making the motion decision with the target fusion result; and when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, making the motion decision with reference to the fusion result of the previous frame. By adopting these technical means, the problem in the prior art that the information sensed by the sensors on the unmanned vehicle cannot be verified can be solved, thereby achieving more accurate control of the unmanned vehicle.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings needed for the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings can be obtained by those skilled in the art without inventive efforts.
FIG. 1 is a scenario diagram of an application scenario of an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of an unmanned vehicle motion decision method based on a sensing result of a sensor according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an unmanned vehicle motion decision device based on a sensing result of a sensor according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
An unmanned vehicle motion decision method and device based on sensing results of sensors according to an embodiment of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 is a scene schematic diagram of an application scenario of an embodiment of the present disclosure. The application scenario may include terminal devices 1 and 3, unmanned vehicle 2, server 4, and network 5.
The terminal devices 1 and 3 may be hardware or software. When the terminal devices 1 and 3 are hardware, they may be various electronic devices having a display screen and supporting communication with the server 4, including but not limited to smartphones, tablet computers, laptop computers, desktop computers, and the like; when the terminal devices 1 and 3 are software, they may be installed in the electronic devices above. The terminal devices 1 and 3 may be implemented as a plurality of software or software modules, or as a single software or software module, which is not limited by the embodiments of the present disclosure. Further, various applications, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, etc., may be installed on the terminal devices 1 and 3.
The server 4 may be a server that provides various services, for example, a backend server that receives a request sent by a terminal device that establishes a communication connection with the server, and the backend server may receive and analyze the request sent by the terminal device, and generate a processing result. The server 4 may be one server, may also be a server cluster composed of a plurality of servers, or may also be a cloud computing service center, which is not limited in this disclosure.
The server 4 may be hardware or software. When the server 4 is hardware, it may be various electronic devices that provide various services to the terminal devices 1 and 3, and the unmanned vehicle 2. When the server 4 is software, it may be a plurality of software or software modules that provide various services for the terminal devices 1 and 3 and the unmanned vehicle 2, or may be a single software or software module that provides various services for the terminal devices 1 and 3 and the unmanned vehicle 2, which is not limited by the embodiment of the present disclosure.
The network 5 may be a wired network connected by coaxial cable, twisted pair or optical fiber, or a wireless network that can interconnect various communication devices without wiring, for example Bluetooth, Near Field Communication (NFC), Infrared, and the like, which is not limited by the embodiments of the present disclosure.
The user can establish a communication connection with the server 4 via the terminal devices 1 and 3, and the unmanned vehicle 2 via the network 5 to receive or transmit information or the like. It should be noted that specific types, numbers, and combinations of the terminal devices 1 and 3, the unmanned vehicle 2, the server 4, and the network 5 may be adjusted according to actual requirements of an application scenario, which is not limited in the embodiment of the present disclosure.
Fig. 2 is a schematic flow chart of an unmanned vehicle motion decision method based on a sensing result of a sensor according to an embodiment of the present disclosure. The unmanned vehicle motion decision method based on the sensing result of the sensor of fig. 2 may be performed by the terminal device of fig. 1, or the unmanned vehicle or the server. As shown in fig. 2, the unmanned vehicle motion decision method based on the sensing result of the sensor includes:
S201, acquiring multiple sensing results of the same target at the same moment through multiple sensors, wherein the multiple sensors are arranged on the unmanned vehicle and include: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar;
S202, fusing multiple sensing results to obtain a target fusion result, wherein the multiple sensing results comprise sensing results obtained by a camera;
S203, inputting the sensing result obtained by the camera and the target fusion result into a quality supervision model, and outputting a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results;
S204, when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, making the motion decision with the target fusion result;
and S205, when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, making the motion decision with reference to the fusion result of the previous frame.
The quality supervision model and the labeling model referred to below may be any common neural network model, such as a convolutional neural network (CNN) or a fully-connected network (FCN); the quality supervision model and the labeling model may be one model or two separate models. The multiple sensing results may include: the laser radar sensing result obtained by the laser radar, the sensing result obtained by the camera (which is a picture), the millimeter wave radar sensing result obtained by the millimeter wave radar, and the ultrasonic radar sensing result obtained by the ultrasonic radar. The target fusion result includes the information expressed by the various sensing results; for example, it includes the type of the target object acquired by the camera (such as pedestrian, motor vehicle, or non-motor vehicle) and the distance and speed between the target object and the unmanned vehicle acquired by the laser radar, the millimeter wave radar, and the ultrasonic radar.
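As an illustration of the data involved, a minimal sketch of how such a target fusion result might be represented is given below, assuming one record per target per frame; the field names, types and units are assumptions made for readability and are not defined by this disclosure.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the fields combine the camera-derived object type
# with the distance and speed derived from the radar-type sensors, as described
# above. Names and units are assumptions, not part of the disclosure.
@dataclass
class TargetFusionResult:
    timestamp: float                     # common acquisition time of all sensing results
    object_type: str                     # from the camera, e.g. "pedestrian", "motor_vehicle"
    camera_image: bytes                  # the picture, later fed to the quality supervision model
    distance_m: Optional[float] = None   # target-to-vehicle distance from lidar / radar sensors
    speed_mps: Optional[float] = None    # target speed relative to the unmanned vehicle
```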
The sensing result obtained by the camera and the fusion result serve as the input of the quality supervision model, and the quality evaluation and the confidence serve as its output. A labeled picture can be understood as a labeled sensing result obtained by the camera. The quality evaluation can be understood as a quality score, and the target quality evaluation meeting the preset condition can be understood as the target quality score being greater than a certain value. In the present disclosure, the word "target" in the target fusion result, the target quality evaluation, the target confidence and the like only distinguishes them from the fusion results, quality evaluations and confidences involved in model training, and has no other meaning. For example, suppose the target quality score is 9 (out of a full score of 10, with the certain value set to 8) and the target confidence is 95% (with the preset threshold at 90%); the target confidence is greater than the preset threshold, and the target quality score of 9 is also greater than 8. In this case, the motion decision is made with the target fusion result. When the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, the motion decision is made with reference to the fusion result of the previous frame. The fusion result of the previous frame is the fusion result obtained in the frame immediately before the target fusion result. The motion decision can be understood as planning the driving path of the unmanned vehicle; for example, if the target fusion result shows that a school is ahead on the left of the unmanned vehicle and an obstacle is ahead on the right, the motion decision is to control the unmanned vehicle to decelerate in front of the school and avoid the obstacle.
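To make this gating concrete, the following sketch shows one possible way to implement steps S204/S205, assuming the quality supervision model returns a quality score out of 10 and a confidence in [0, 1]; the threshold values simply echo the example above (score greater than 8, confidence greater than 90%) and are assumptions rather than values fixed by the disclosure.

```python
# Hypothetical gating logic for S204/S205. Thresholds and return types are
# illustrative assumptions; they only mirror the example given in the text.
def choose_fusion_result(quality_score, confidence, current_fusion, previous_fusion,
                         score_threshold=8.0, confidence_threshold=0.9):
    """Return the fusion result the motion planner should consume for this frame."""
    if quality_score > score_threshold or confidence > confidence_threshold:
        return current_fusion      # target fusion result judged reliable: use it directly
    return previous_fusion         # otherwise fall back to the previous frame's fusion result
```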
According to the technical solution provided by the embodiments of the present disclosure, multiple sensing results of the same target at the same moment are acquired through multiple sensors, wherein the multiple sensors are arranged on the unmanned vehicle and include: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar; the multiple sensing results are fused to obtain a target fusion result, wherein the multiple sensing results include the sensing result obtained by the camera; the sensing result obtained by the camera and the target fusion result are input into a quality supervision model, and a target quality evaluation and/or a target confidence corresponding to the target quality evaluation is output, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results; when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, the motion decision is made with the target fusion result; and when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, the motion decision is made with reference to the fusion result of the previous frame. By adopting these technical means, the problem in the prior art that the information sensed by the sensors on the unmanned vehicle cannot be verified can be solved, thereby achieving more accurate control of the unmanned vehicle.
Before step S203 is executed, that is, before the sensing result obtained by the camera and the target fusion result are input into the quality supervision model and the target quality evaluation and/or the target confidence corresponding to the target quality evaluation are output, the method further includes: obtaining a training data set, wherein the training data set includes a plurality of pictures under various scenes and the fusion result corresponding to each picture, labeled with the corresponding quality evaluations and confidences; and training the quality supervision model with a semi-supervised learning training method based on the training data set.
The semi-supervised learning training method may be a self-training algorithm, and other common semi-supervised learning methods may also be adopted. In practice only one picture is needed per scene, so the embodiments of the present disclosure assume one picture per scene by default, although several pictures may also be obtained in one scene. A scene is a situation the unmanned vehicle may encounter, for example the unmanned vehicle driving straight, passing through an intersection, passing a school, and the like. A fusion result actually includes the corresponding picture, but since the picture contains more information than the other sensing results, the picture is treated separately as one piece of information (the picture being the sensing result acquired by the camera).
Before the training data set is acquired, the method further comprises: acquiring multiple sensing results under multiple scenes, wherein the multiple sensing results under each scene comprise pictures corresponding to each scene; fusing various perception results under each scene to obtain a fusion result corresponding to each scene; and generating a training data set based on the plurality of pictures and the fusion result corresponding to each picture.
Each scene has multiple perception results, wherein the perception results comprise pictures of the scene, each scene corresponds to one fusion result, and the pictures of one scene and the fusion results corresponding to the scene also have a corresponding relation.
Generating a training data set based on a plurality of pictures and a fusion result corresponding to each picture, including: performing timestamp alignment operation on a plurality of pictures and a plurality of fusion results to obtain a corresponding relation between each picture and each fusion result, wherein the plurality of sensing results in each scene and the fusion result corresponding to each scene all carry corresponding timestamps; and generating a training data set based on the plurality of pictures and the fusion result corresponding to each picture.
Because the multiple sensing results in each scene all carry corresponding timestamps, the fusion result corresponding to each scene also carries a corresponding timestamp. The multiple sensing results in each scene may be acquired at the same moment, so the multiple sensing results in each scene and the fusion result corresponding to that scene may carry the same timestamp. The multiple sensing results in the multiple scenes may be obtained by the unmanned vehicle at different moments while driving, with one moment corresponding to one scene (because the unmanned vehicle is moving, the scenes it faces at different moments differ). The timestamp alignment operation is performed on the pictures and the fusion results to match one picture with one fusion result for each scene.
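A minimal sketch of such a timestamp alignment step is given below, assuming every picture and every fusion result carries a numeric timestamp; the nearest-neighbour matching and the tolerance value are illustrative choices, not details specified by the disclosure.

```python
# Illustrative timestamp alignment: pair each picture with the fusion result
# closest in time, so that each scene contributes one (picture, fusion result)
# training sample. The 50 ms tolerance is an assumed value.
def align_by_timestamp(pictures, fusion_results, tolerance=0.05):
    pairs = []
    for pic_ts, picture in pictures:                          # (timestamp, image) tuples
        closest = min(fusion_results, key=lambda fr: abs(fr["timestamp"] - pic_ts))
        if abs(closest["timestamp"] - pic_ts) <= tolerance:   # same scene / same moment
            pairs.append((picture, closest))
    return pairs
```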
Based on a training data set, training a quality supervision model by using a semi-supervised learning training method, comprising the following steps of: training a labeling model by utilizing a plurality of pictures labeled with corresponding quality evaluation and confidence degrees in a training data set and a fusion result corresponding to each picture; marking a plurality of pictures which are not marked with corresponding quality evaluation and confidence in the training data set and a fusion result corresponding to each picture by using the trained marking model; and training the quality supervision model by using the training data set after the labeling processing.
Each picture together with its corresponding fusion result can be regarded as a sample, and the quality evaluation and confidence corresponding to that picture and fusion result can be regarded as the label of the sample. Each picture and corresponding fusion result in the training data set that is labeled with a quality evaluation and confidence can be regarded as a labeled sample, and each picture and corresponding fusion result that is not labeled can be regarded as an unlabeled sample. Labeling, with the trained labeling model, each unlabeled picture and its corresponding fusion result can be understood as obtaining pseudo labels for the unlabeled samples from the trained labeling model. The quality supervision model is then trained using the labeled samples and the pseudo-labeled samples. The number of pictures (with their corresponding fusion results) that are labeled with quality evaluations and confidences is a preset proportion of the total number of samples in the training data set.
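A compact sketch of this self-training procedure is shown below; the fit/predict interfaces and the sample attributes are placeholders standing in for whatever labeling network and quality supervision network are actually used, so every name here is an assumption.

```python
# Minimal self-training sketch: the labeling model learns from the small labeled
# subset, produces pseudo labels (quality score and confidence) for the unlabeled
# samples, and the quality supervision model is then trained on both sets.
def semi_supervised_training(labeled, unlabeled, labeling_model, quality_model):
    # 1. Supervised step on the limited labeled samples.
    labeling_model.fit([s.features for s in labeled], [s.label for s in labeled])

    # 2. Pseudo-label the samples that carry no quality evaluation / confidence.
    pseudo = [(s, labeling_model.predict(s.features)) for s in unlabeled]

    # 3. Train the quality supervision model on labeled + pseudo-labeled data.
    features = [s.features for s in labeled] + [s.features for s, _ in pseudo]
    labels = [s.label for s in labeled] + [p for _, p in pseudo]
    quality_model.fit(features, labels)
    return quality_model
```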
Before step S203, that is, before the sensing result obtained by the camera and the target fusion result are input into the quality supervision model and the target quality evaluation and/or the target confidence corresponding to the target quality evaluation are output, the method further includes: if the quality supervision model is deployed on the cloud, performing model distillation processing on the quality supervision model to obtain a distillation model, and downloading the distillation model to the unmanned vehicle.
In order to make the service provided by the quality supervision model more stable and to avoid service interruption caused by network disconnection, the embodiments of the present disclosure obtain, through model distillation processing, a distillation model of smaller scale corresponding to the quality supervision model, download this smaller distillation model to the unmanned vehicle, and use it to provide the service on the vehicle.
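One rough way to obtain such a smaller on-vehicle model is a standard knowledge-distillation loop; the sketch below assumes PyTorch-style modules and a mean-squared-error objective on the score/confidence outputs, all of which are assumptions rather than settings taken from this disclosure.

```python
import torch
import torch.nn.functional as F

# Illustrative distillation step: the small on-vehicle student mimics the
# cloud-side teacher (the full quality supervision model). Loss weighting,
# optimizer and batch layout are assumed for the sake of the example.
def distill_step(teacher, student, batch, optimizer, alpha=0.5):
    with torch.no_grad():
        teacher_out = teacher(batch["inputs"])        # quality score / confidence from teacher
    student_out = student(batch["inputs"])
    loss = (alpha * F.mse_loss(student_out, teacher_out)                 # match the teacher
            + (1.0 - alpha) * F.mse_loss(student_out, batch["labels"]))  # match ground-truth labels
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```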
Before step S203, that is, before the sensing result obtained by the camera and the target fusion result are input into the quality supervision model and the target quality evaluation and/or the target confidence corresponding to the target quality evaluation are output, the method further includes: if the quality supervision model is deployed on the cloud, loading a model calling interface corresponding to the cloud-side quality supervision model onto the unmanned vehicle, and calling the quality supervision model through the model calling interface.
In the embodiment of the disclosure, the service provided by the quality supervision model on the cloud can be obtained by the unmanned vehicle by calling the model calling interface corresponding to the quality supervision model.
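A hypothetical wrapper for such a model calling interface is sketched below; the endpoint URL, request fields and timeout are placeholders invented for illustration and do not come from the patent.

```python
import requests

# Illustrative remote call to a cloud-hosted quality supervision model.
# Endpoint, payload keys and response keys are assumptions.
def call_quality_model(camera_image_b64, fusion_result,
                       endpoint="https://example.com/quality-model"):
    payload = {"image": camera_image_b64, "fusion_result": fusion_result}
    response = requests.post(endpoint, json=payload, timeout=1.0)
    response.raise_for_status()
    result = response.json()
    return result["quality_score"], result["confidence"]
```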
In an alternative embodiment, suppose the target fusion result is determined to be reliable from the target quality score and the target confidence (that is, the target quality evaluation meets the preset condition or the target confidence is greater than the preset threshold), and the target fusion result indicates that a pedestrian is walking along the road edge ahead on the left of the unmanned vehicle, a motor vehicle is parked at the right road edge, and another motor vehicle is approaching ahead on the right at a speed of 60 km/h. The motion decision should then ensure that the unmanned vehicle avoids the motor vehicle parked at the right road edge and the pedestrian ahead on the left, and safely passes the oncoming motor vehicle ahead on the right.
All the above optional technical solutions may be combined arbitrarily to form optional embodiments of the present application, and are not described herein again.
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods. For details not disclosed in the embodiments of the apparatus of the present disclosure, refer to the embodiments of the method of the present disclosure.
Fig. 3 is a schematic diagram of an unmanned vehicle motion decision device based on a sensing result of a sensor according to an embodiment of the present disclosure. As shown in fig. 3, the unmanned vehicle motion decision apparatus based on the sensing result of the sensor includes:
an obtaining module 301 configured to obtain multiple sensing results of the same target at the same time through multiple sensors, where the multiple sensors are disposed on an unmanned vehicle, and the multiple sensors include: laser radar, camera, millimeter wave radar and/or ultrasonic radar;
the fusion module 302 is configured to fuse multiple sensing results to obtain a target fusion result, where the multiple sensing results include a sensing result obtained by a camera;
a model module 303 configured to input the sensing result and the target fusion result obtained by the camera into a quality supervision model, and output a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervision model, and is trained by using limited labeled pictures and the quality evaluation and confidence of the sensing result;
a first decision module 304 configured to perform a motion decision with the target fusion result when the target quality evaluation satisfies a preset condition or the target confidence is greater than a preset threshold;
and a second decision module 305 configured to perform motion decision with reference to the fusion result of the previous frame when the target quality evaluation does not satisfy the preset condition or the target confidence is less than or equal to a preset threshold.
The quality supervision model and the labeling model referred to here may be any common neural network model, such as a convolutional neural network (CNN) or a fully-connected network (FCN); the quality supervision model and the labeling model may be one model or two separate models. The multiple sensing results may include: the laser radar sensing result obtained by the laser radar, the sensing result obtained by the camera (which is a picture), the millimeter wave radar sensing result obtained by the millimeter wave radar, and the ultrasonic radar sensing result obtained by the ultrasonic radar. The target fusion result includes the information expressed by the various sensing results; for example, it includes the type of the target object acquired by the camera (such as pedestrian, motor vehicle, or non-motor vehicle) and the distance and speed between the target object and the unmanned vehicle acquired by the laser radar, the millimeter wave radar, and the ultrasonic radar.
The sensing result obtained by the camera and the fusion result serve as the input of the quality supervision model, and the quality evaluation and the confidence serve as its output. A labeled picture can be understood as a labeled sensing result obtained by the camera. The quality evaluation can be understood as a quality score, and the target quality evaluation meeting the preset condition can be understood as the target quality score being greater than a certain value. In the present disclosure, the word "target" in the target fusion result, the target quality evaluation, the target confidence and the like only distinguishes them from the fusion results, quality evaluations and confidences involved in model training, and has no other meaning. For example, suppose the target quality score is 9 (out of a full score of 10, with the certain value set to 8) and the target confidence is 95% (with the preset threshold at 90%); the target confidence is greater than the preset threshold, and the target quality score of 9 is also greater than 8. In this case, the motion decision is made with the target fusion result. When the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, the motion decision is made with reference to the fusion result of the previous frame. The fusion result of the previous frame is the fusion result obtained in the frame immediately before the target fusion result. The motion decision can be understood as planning the driving path of the unmanned vehicle; for example, if the target fusion result shows that a school is ahead on the left of the unmanned vehicle and an obstacle is ahead on the right, the motion decision is to control the unmanned vehicle to decelerate in front of the school and avoid the obstacle.
According to the technical solution provided by the embodiments of the present disclosure, multiple sensing results of the same target at the same moment are acquired through multiple sensors, wherein the multiple sensors are arranged on the unmanned vehicle and include: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar; the multiple sensing results are fused to obtain a target fusion result, wherein the multiple sensing results include the sensing result obtained by the camera; the sensing result obtained by the camera and the target fusion result are input into a quality supervision model, and a target quality evaluation and/or a target confidence corresponding to the target quality evaluation is output, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluations and confidences of the sensing results; when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, the motion decision is made with the target fusion result; and when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, the motion decision is made with reference to the fusion result of the previous frame. By adopting these technical means, the problem in the prior art that the information sensed by the sensors on the unmanned vehicle cannot be verified can be solved, thereby achieving more accurate control of the unmanned vehicle.
Optionally, the model module 303 is further configured to obtain a training data set, wherein the training data set includes a plurality of pictures under various scenes and the fusion result corresponding to each picture, labeled with the corresponding quality evaluations and confidences; and to train the quality supervision model with a semi-supervised learning training method based on the training data set.
The semi-supervised learning training method may be a self-training algorithm, and other common semi-supervised learning methods may also be adopted. In practice only one picture is needed per scene, so the embodiments of the present disclosure assume one picture per scene by default, although several pictures may also be obtained in one scene. A scene is a situation the unmanned vehicle may encounter, for example the unmanned vehicle driving straight, passing through an intersection, passing a school, and the like. A fusion result actually includes the corresponding picture, but since the picture contains more information than the other sensing results, the picture is treated separately as one piece of information (the picture being the sensing result obtained by the camera). Before the training data set is acquired, the method further includes: acquiring multiple sensing results under multiple scenes, wherein the multiple sensing results under each scene include the picture corresponding to that scene; fusing the multiple sensing results under each scene to obtain the fusion result corresponding to that scene; and generating the training data set based on the plurality of pictures and the fusion result corresponding to each picture.
Each scene has multiple perception results, wherein the perception results comprise pictures of the scene, each scene corresponds to one fusion result, and the pictures of one scene and the fusion results corresponding to the scene also have a corresponding relation.
Optionally, the model module 303 is further configured to perform timestamp alignment operation on the multiple pictures and the multiple fusion results to obtain a corresponding relationship between each picture and each fusion result, where the multiple sensing results in each scene and the fusion result corresponding to each scene all carry corresponding timestamps; and generating a training data set based on the plurality of pictures and the fusion result corresponding to each picture.
Because the multiple sensing results in each scene all carry corresponding timestamps, the fusion result corresponding to each scene also carries a corresponding timestamp. The multiple sensing results in each scene may be acquired at the same moment, so the multiple sensing results in each scene and the fusion result corresponding to that scene may carry the same timestamp. The multiple sensing results in the multiple scenes may be obtained by the unmanned vehicle at different moments while driving, with one moment corresponding to one scene (because the unmanned vehicle is moving, the scenes it faces at different moments differ). The timestamp alignment operation is performed on the pictures and the fusion results to match one picture with one fusion result for each scene.
Optionally, the model module 303 is further configured to train a labeling model by using a plurality of pictures labeled with corresponding quality evaluations and confidence degrees in the training data set and a fusion result corresponding to each picture; marking a plurality of pictures which are not marked with corresponding quality evaluation and confidence in the training data set and a fusion result corresponding to each picture by using the trained marking model; and training the quality supervision model by using the labeled training data set.
Each picture together with its corresponding fusion result can be regarded as a sample, and the quality evaluation and confidence corresponding to that picture and fusion result can be regarded as the label of the sample. Each picture and corresponding fusion result in the training data set that is labeled with a quality evaluation and confidence can be regarded as a labeled sample, and each picture and corresponding fusion result that is not labeled can be regarded as an unlabeled sample. Labeling, with the trained labeling model, each unlabeled picture and its corresponding fusion result can be understood as obtaining pseudo labels for the unlabeled samples from the trained labeling model. The quality supervision model is then trained using the labeled samples and the pseudo-labeled samples. The number of pictures (with their corresponding fusion results) that are labeled with quality evaluations and confidences is a preset proportion of the total number of samples in the training data set.
Optionally, the model module 303 is further configured to, if the quality supervision model is set on the cloud: carrying out model distillation treatment on the quality supervision model to obtain a distillation model; the distillation model was downloaded to an unmanned vehicle.
In order to make the service provided by the quality supervision model more stable and to avoid service interruption caused by network disconnection, the embodiments of the present disclosure obtain, through model distillation processing, a distillation model of smaller scale corresponding to the quality supervision model, download this smaller distillation model to the unmanned vehicle, and use it to provide the service on the vehicle.
Optionally, the model module 303 is further configured to, if the quality supervision model is set on the cloud: loading a model calling interface corresponding to the quality supervision model on the cloud side to the unmanned vehicle; and calling the quality supervision model by using the model calling interface.
In the embodiment of the disclosure, the unmanned vehicle can obtain the service provided by the quality supervision model on the cloud terminal by calling the model calling interface corresponding to the quality supervision model.
In an alternative embodiment, suppose the target fusion result is determined to be reliable from the target quality score and the target confidence (that is, the target quality evaluation meets the preset condition or the target confidence is greater than the preset threshold), and the target fusion result indicates that a pedestrian is walking along the road edge ahead on the left of the unmanned vehicle, a motor vehicle is parked at the right road edge, and another motor vehicle is approaching ahead on the right at a speed of 60 km/h. The motion decision should then ensure that the unmanned vehicle avoids the motor vehicle parked at the right road edge and the pedestrian ahead on the left, and safely passes the oncoming motor vehicle ahead on the right.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 4 is a schematic diagram of an electronic device 4 provided by the embodiment of the present disclosure. As shown in fig. 4, the electronic apparatus 4 of this embodiment includes: a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and operable on the processor 401. The steps in the various method embodiments described above are implemented when the processor 401 executes the computer program 403. Alternatively, the processor 401 implements the functions of the respective modules/units in the above-described respective apparatus embodiments when executing the computer program 403.
Illustratively, the computer program 403 may be partitioned into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to accomplish the present disclosure. One or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 403 in the electronic device 4.
The electronic device 4 may be a desktop computer, a notebook, a palm computer, a cloud server, or other electronic devices. The electronic device 4 may include, but is not limited to, a processor 401 and a memory 402. Those skilled in the art will appreciate that fig. 4 is merely an example of the electronic device 4, and does not constitute a limitation of the electronic device 4, and may include more or fewer components than shown, or some of the components may be combined, or different components, e.g., the electronic device may also include an input-output device, a network access device, a bus, etc.
The processor 401 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may be an internal storage unit of the electronic device 4, for example a hard disk or memory of the electronic device 4. The memory 402 may also be an external storage device of the electronic device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, or the like provided on the electronic device 4. Further, the memory 402 may also include both an internal storage unit of the electronic device 4 and an external storage device. The memory 402 is used for storing the computer program and other programs and data required by the electronic device. The memory 402 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules, so as to perform all or part of the functions described above. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other ways. For example, the above-described apparatus/electronic device embodiments are merely illustrative, and for example, a module or a unit may be divided into only one logical function, and may be implemented in other ways, and multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, the present disclosure may realize all or part of the flow of the above method embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when the computer program is executed by a processor, the steps of the above method embodiments may be realized. The computer program may comprise computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer readable media do not include electrical carrier signals or telecommunications signals, in accordance with legislation and patent practice.
The above examples are only intended to illustrate the technical solutions of the present disclosure, not to limit them; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and they should be construed as being included in the scope of the present disclosure.

Claims (9)

1. An unmanned vehicle motion decision method based on a sensing result of a sensor is characterized by comprising the following steps:
obtaining multiple sensing results of a same target at a same moment through multiple sensors, wherein the multiple sensors are arranged on an unmanned vehicle and comprise: a laser radar, a camera, a millimeter wave radar and/or an ultrasonic radar;
fusing the multiple sensing results to obtain a target fusion result, wherein the multiple sensing results comprise a sensing result obtained by the camera and sensing results obtained by the laser radar, the millimeter wave radar and/or the ultrasonic radar, the sensing result obtained by the camera is type information of a target object, and the sensing results obtained by the laser radar, the millimeter wave radar and/or the ultrasonic radar are the distance and the speed between the target object and the unmanned vehicle;
inputting the sensing result obtained by the camera and the target fusion result into a quality supervision model, and outputting a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluation and confidence of the sensing result;
when the target quality evaluation meets a preset condition or the target confidence is greater than a preset threshold, making a motion decision by using the target fusion result;
when the target quality evaluation does not meet the preset condition or the target confidence is less than or equal to the preset threshold, making the motion decision by referring to the fusion result of the previous frame;
before the sensing result obtained by the camera and the target fusion result are input into the quality supervision model and the target quality evaluation and/or the target confidence corresponding to the target quality evaluation is output, the method further comprises:
obtaining a training data set, wherein the training data set comprises: a plurality of pictures under various scenes and the fusion result corresponding to each picture, labeled with corresponding quality evaluation and confidence;
and training the quality supervision model by utilizing a semi-supervised learning training method based on the training data set.
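As a minimal, non-limiting Python sketch of the decision gating recited in claim 1; the data shapes, helper names and the 0.8 confidence threshold are assumptions introduced purely for illustration and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Fusion:
    obj_type: str        # from the camera: type information of the target object
    distance_m: float    # from laser / millimeter wave / ultrasonic radar
    speed_mps: float

def fuse(obj_type: str, distance_m: float, speed_mps: float) -> Fusion:
    # Trivial placeholder fusion: combine the complementary sensing fields.
    return Fusion(obj_type, distance_m, speed_mps)

def decide(quality_ok: bool, confidence: float, current: Fusion,
           previous: Fusion, conf_threshold: float = 0.8) -> Fusion:
    """Gate of claim 1: trust the current target fusion result only when the
    quality evaluation passes or the confidence exceeds the threshold."""
    if quality_ok or confidence > conf_threshold:
        return current   # make the motion decision on the current frame
    return previous      # otherwise refer to the previous frame's fusion result

# Example: failed quality check and low confidence -> previous frame is used.
cur = fuse("pedestrian", 12.4, 1.3)
prev = fuse("pedestrian", 12.9, 1.2)
print(decide(quality_ok=False, confidence=0.42, current=cur, previous=prev))
```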
2. The method of claim 1, wherein prior to said obtaining a training data set, the method further comprises:
acquiring multiple sensing results under multiple scenes, wherein the multiple sensing results under each scene comprise pictures corresponding to each scene;
fusing various perception results under each scene to obtain a fusion result corresponding to each scene;
and generating the training data set based on the plurality of pictures and the fusion result corresponding to each picture.
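A minimal sketch, under assumed data layouts, of how the per-scene sensing results in claim 2 could be fused and paired into (picture, fusion result) training entries; the scene dictionaries and field names are hypothetical stand-ins for whatever the perception stack actually emits.

```python
# Hypothetical per-scene records: a camera picture plus ranging measurements.
scenes = [
    {"picture": "scene_001.png", "type": "cyclist",
     "ranging": {"distance_m": 8.2, "speed_mps": 3.1}},
    {"picture": "scene_002.png", "type": "car",
     "ranging": {"distance_m": 21.5, "speed_mps": 9.8}},
]

def build_training_set(scenes):
    """Fuse each scene's sensing results and pair the fusion with its picture."""
    dataset = []
    for scene in scenes:
        fusion = {"type": scene["type"], **scene["ranging"]}  # per-scene fusion result
        dataset.append({"picture": scene["picture"], "fusion": fusion})
    return dataset

print(build_training_set(scenes))
```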
3. The method according to claim 2, wherein the generating the training data set based on the plurality of pictures and the fusion result corresponding to each picture comprises:
performing a timestamp alignment operation on the plurality of pictures and the plurality of fusion results to obtain a correspondence between each picture and each fusion result, wherein the plurality of sensing results under each scene and the fusion result corresponding to each scene all carry corresponding timestamps;
and generating the training data set based on the plurality of pictures and the fusion result corresponding to each picture.
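The timestamp alignment operation of claim 3 could plausibly be realized as nearest-timestamp matching; in the sketch below, the (timestamp, payload) tuples and the 0.05 s tolerance are assumptions for illustration.

```python
def align_by_timestamp(pictures, fusions, tol_s=0.05):
    """pictures and fusions are lists of (timestamp, payload) tuples;
    returns (picture, fusion) pairs matched by nearest timestamp."""
    pairs = []
    for ts, pic in sorted(pictures, key=lambda p: p[0]):
        best_ts, best_fusion = min(fusions, key=lambda f: abs(f[0] - ts))
        if abs(best_ts - ts) <= tol_s:      # accept only close-enough matches
            pairs.append((pic, best_fusion))
    return pairs

pictures = [(0.00, "img_0"), (0.10, "img_1")]
fusions = [(0.01, "fusion_0"), (0.09, "fusion_1")]
print(align_by_timestamp(pictures, fusions))  # [('img_0', 'fusion_0'), ('img_1', 'fusion_1')]
```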
4. The method of claim 1, wherein training the quality supervision model using a semi-supervised learning training method based on the training data set comprises:
training a labeling model by utilizing a plurality of pictures labeled with corresponding quality evaluation and confidence degrees in the training data set and a fusion result corresponding to each picture;
labeling, by using the trained labeling model, a plurality of pictures in the training data set that are not labeled with corresponding quality evaluation and confidence, together with the fusion result corresponding to each such picture;
and training the quality supervision model by using the training data set subjected to the labeling processing.
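Claim 4's three-step semi-supervised procedure (train a labeling model on the labeled subset, pseudo-label the rest, then train the quality supervision model on the whole set) can be sketched with scikit-learn stand-ins; the feature dimensions, model classes and random data below are assumptions, not the disclosed architecture.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(100, 8))    # stand-in features from picture + fusion result
y_labeled = rng.uniform(size=100)        # annotated quality / confidence scores
X_unlabeled = rng.normal(size=(400, 8))  # samples without annotations

# 1) Train a labeling model on the labeled subset of the training data set.
labeling_model = RandomForestRegressor(n_estimators=50, random_state=0)
labeling_model.fit(X_labeled, y_labeled)

# 2) Use it to pseudo-label the unlabeled pictures and fusion results.
y_pseudo = labeling_model.predict(X_unlabeled)

# 3) Train the quality supervision model on labeled + pseudo-labeled data.
X_all = np.vstack([X_labeled, X_unlabeled])
y_all = np.concatenate([y_labeled, y_pseudo])
quality_model = RandomForestRegressor(n_estimators=100, random_state=0)
quality_model.fit(X_all, y_all)
```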
5. The method according to claim 1, wherein before the sensing result obtained by the camera and the target fusion result are input into the quality supervision model and the target quality evaluation and/or the target confidence corresponding to the target quality evaluation is output, the method further comprises:
if the quality supervision model is deployed on the cloud: performing model distillation on the quality supervision model to obtain a distilled model;
downloading the distilled model onto the unmanned vehicle.
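A minimal sketch, assuming a response-based distillation setup in PyTorch, of how the cloud-side quality supervision model of claim 5 might be compressed into a lightweight student model for download to the vehicle; layer sizes, the MSE objective and the saved filename are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Large cloud-side teacher and lightweight on-vehicle student (sizes are illustrative).
teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))  # -> quality, confidence
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(100):                      # distillation loop on unlabeled inputs
    x = torch.randn(64, 128)              # stand-in features (picture + fusion result)
    with torch.no_grad():
        target = teacher(x)               # teacher's soft outputs
    optimizer.zero_grad()
    loss = loss_fn(student(x), target)    # student learns to mimic the teacher
    loss.backward()
    optimizer.step()

# The distilled (student) model is what would be downloaded onto the unmanned vehicle.
torch.save(student.state_dict(), "distilled_quality_model.pt")
```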
6. The method according to claim 1, wherein before the sensing result obtained by the camera and the target fusion result are input into the quality supervision model and the target quality evaluation and/or the target confidence corresponding to the target quality evaluation is output, the method further comprises:
if the quality supervision model is deployed on the cloud:
loading a model calling interface corresponding to the quality supervision model on the cloud onto the unmanned vehicle;
and calling the quality supervision model by using the model calling interface.
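Claim 6's alternative (keeping only a model calling interface on the vehicle) might look like a thin HTTP client; the endpoint URL, payload fields, response keys and the 0.2 s timeout below are hypothetical.

```python
import requests

def call_quality_model(camera_result: dict, target_fusion: dict,
                       endpoint: str = "https://cloud.example.com/quality-model") -> dict:
    """Send the camera sensing result and the target fusion result to the
    cloud-hosted quality supervision model and return its evaluation."""
    payload = {"camera_result": camera_result, "target_fusion": target_fusion}
    resp = requests.post(endpoint, json=payload, timeout=0.2)  # tight budget for on-vehicle use
    resp.raise_for_status()
    return resp.json()  # e.g. {"quality_evaluation": "pass", "confidence": 0.93}
```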
7. An unmanned vehicle motion decision device based on a sensing result of a sensor, comprising:
the acquisition module is configured to acquire multiple sensing results of the same target at the same time through multiple sensors, wherein the multiple sensors are arranged on an unmanned vehicle, and the multiple sensors comprise: laser radar, camera, millimeter wave radar and/or ultrasonic radar;
the fusion module is configured to fuse the multiple sensing results to obtain a target fusion result, wherein the multiple sensing results comprise a sensing result obtained by the camera and sensing results obtained by the laser radar, the millimeter wave radar and/or the ultrasonic radar, the sensing result obtained by the camera is type information of a target object, and the sensing results obtained by the laser radar, the millimeter wave radar and/or the ultrasonic radar are the distance and the speed between the target object and the unmanned vehicle;
the model module is configured to input the sensing result obtained by the camera and the target fusion result into a quality supervision model and output a target quality evaluation and/or a target confidence corresponding to the target quality evaluation, wherein the quality supervision model is a semi-supervised model trained with a limited number of labeled pictures together with the quality evaluation and confidence of the sensing result;
a first decision module configured to perform a motion decision with the target fusion result when the target quality evaluation satisfies a preset condition or the target confidence is greater than a preset threshold;
a second decision module configured to refer to a fusion result of a previous frame for the motion decision when the target quality evaluation does not satisfy the preset condition or the target confidence is less than or equal to the preset threshold;
the model module is further configured to: obtain a training data set, wherein the training data set comprises: a plurality of pictures under various scenes and the fusion result corresponding to each picture, labeled with corresponding quality evaluation and confidence; and train the quality supervision model by using a semi-supervised learning training method based on the training data set.
8. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
CN202210432891.2A 2022-04-24 2022-04-24 Unmanned vehicle motion decision method and device based on sensing result of sensor Active CN114523985B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210432891.2A CN114523985B (en) 2022-04-24 2022-04-24 Unmanned vehicle motion decision method and device based on sensing result of sensor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210432891.2A CN114523985B (en) 2022-04-24 2022-04-24 Unmanned vehicle motion decision method and device based on sensing result of sensor

Publications (2)

Publication Number Publication Date
CN114523985A CN114523985A (en) 2022-05-24
CN114523985B true CN114523985B (en) 2023-01-06

Family

ID=81627852

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210432891.2A Active CN114523985B (en) 2022-04-24 2022-04-24 Unmanned vehicle motion decision method and device based on sensing result of sensor

Country Status (1)

Country Link
CN (1) CN114523985B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10535138B2 (en) * 2017-11-21 2020-01-14 Zoox, Inc. Sensor data segmentation
CN109633621A (en) * 2018-12-26 2019-04-16 杭州奥腾电子股份有限公司 A kind of vehicle environment sensory perceptual system data processing method
CN110018470A (en) * 2019-03-01 2019-07-16 北京纵目安驰智能科技有限公司 Based on example mask method, model, terminal and the storage medium merged before multisensor
US11436743B2 (en) * 2019-07-06 2022-09-06 Toyota Research Institute, Inc. Systems and methods for semi-supervised depth estimation according to an arbitrary camera
CN111753874A (en) * 2020-05-15 2020-10-09 江苏大学 Image scene classification method and system combined with semi-supervised clustering

Also Published As

Publication number Publication date
CN114523985A (en) 2022-05-24

Similar Documents

Publication Publication Date Title
JP7075366B2 (en) Methods, devices, equipment and media for classifying driving scene data
CN110942629A (en) Road traffic accident management method and device and terminal equipment
WO2022078077A1 (en) Driving risk early warning method and apparatus, and computing device and storage medium
Nieto et al. On creating vision‐based advanced driver assistance systems
CN115240157B (en) Method, apparatus, device and computer readable medium for persistence of road scene data
CN115616937B (en) Automatic driving simulation test method, device, equipment and computer readable medium
CN111626219A (en) Trajectory prediction model generation method and device, readable storage medium and electronic equipment
CN113326826A (en) Network model training method and device, electronic equipment and storage medium
CN110287817B (en) Target recognition and target recognition model training method and device and electronic equipment
CN116686028A (en) Driving assistance method and related equipment
CN114523985B (en) Unmanned vehicle motion decision method and device based on sensing result of sensor
CN115631482B (en) Driving perception information acquisition method and device, electronic equipment and readable medium
CN115576990A (en) Method, device, equipment and medium for evaluating visual truth value data and perception data
CN115061386A (en) Intelligent driving automatic simulation test system and related equipment
CN111310858B (en) Method and device for generating information
CN112216133B (en) Information pushing method, device, equipment and medium
CN114550143A (en) Scene recognition method and device during driving of unmanned vehicle
CN110334763B (en) Model data file generation method, model data file generation device, model data file identification device, model data file generation apparatus, model data file identification apparatus, and model data file identification medium
CN113222050A (en) Image classification method and device, readable medium and electronic equipment
CN112560324B (en) Method and device for outputting information
CN114572252B (en) Unmanned vehicle control method and device based on driving authority authentication
CN112668371A (en) Method and apparatus for outputting information
CN114596707B (en) Traffic control method, traffic control device, traffic control equipment, traffic control system and traffic control medium
CN114511044B (en) Unmanned vehicle passing control method and device
CN114565197B (en) Method and device for generating operation path of unmanned vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant