CN115631482A - Driving perception information acquisition method and device, electronic equipment and readable medium

Driving perception information acquisition method and device, electronic equipment and readable medium

Info

Publication number
CN115631482A
CN115631482A (application CN202211517967.8A)
Authority
CN
China
Prior art keywords
driving
information
perception
data
obstacle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211517967.8A
Other languages
Chinese (zh)
Other versions
CN115631482B (en)
Inventor
龙文
李敏
洪炽杰
秦明博
王倩
艾永军
刘智睿
陶武康
申苗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Aion New Energy Automobile Co Ltd
Original Assignee
GAC Aion New Energy Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Aion New Energy Automobile Co Ltd filed Critical GAC Aion New Energy Automobile Co Ltd
Priority to CN202211517967.8A priority Critical patent/CN115631482B/en
Publication of CN115631482A publication Critical patent/CN115631482A/en
Application granted granted Critical
Publication of CN115631482B publication Critical patent/CN115631482B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C5/00 Registering or indicating the working of vehicles
    • G07C5/08 Registering or indicating performance data other than driving, working, idle, or waiting time, with or without registering driving, working, idle or waiting time
    • G07C5/0808 Diagnosing performance data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

The embodiments of the disclosure disclose a driving perception information acquisition method and apparatus, an electronic device, and a readable medium. One embodiment of the method comprises: collecting a driving perception information sequence of a target vehicle; for each piece of driving perception information, inputting at least one item of driving behavior data in the driving behavior information included in the driving perception information into a pre-trained driving behavior data analysis model; inputting the driving data frame image included in the driving perception information into a pre-trained driving scene analysis model; combining at least one driving scene label included in the resulting driving data information with the driving behavior label to generate a driving annotation text file, and combining at least one driving obstacle label included in the driving data information into a driving obstacle text file; inputting the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model; and collecting a driving perception information sequence accordingly. This embodiment reduces the time needed to determine whether the obstacles identified by the driving perception data recognition model are correct.

Description

Driving perception information acquisition method and device, electronic equipment and readable medium
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a driving perception information acquisition method and apparatus, an electronic device, and a readable medium.
Background
During automatic driving, a driving perception data recognition model is generally required to recognize obstacles around the vehicle so as to ensure driving safety. At present, a common way to determine whether the obstacles identified by the driving perception data recognition model are correct is as follows: workers annotate the collected driving perception data, and the annotated results are compared with the results output by the driving perception data recognition model to determine whether the identified obstacles are correct.
However, when determining whether the obstacles identified by the driving perception data recognition model are correct in the above manner, the following technical problems often arise:
First, when a large amount of driving perception data has been collected, having workers annotate it takes a long time, so determining whether the obstacles identified by the driving perception data recognition model are correct also takes a long time.
Second, when the result output by the driving perception data recognition model is determined to be abnormal (incorrect), the model needs further training; but because the driving scene and driving behavior that produced the abnormal output cannot be determined, driving perception data corresponding to all driving scenes and driving behaviors must be collected, so training the driving perception data recognition model takes a long time.
Third, when a single pre-trained classifier model is used to annotate the driving scenes and obstacles in the driving perception data, the limited annotation capability of a single classifier model is not taken into account, so not all driving scenes and obstacles in the data can be annotated. As a result, cases in which the driving perception data recognition model fails to recognize pedestrians around the vehicle cannot be corrected in time, leading to traffic accidents and casualties.
Disclosure of Invention
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose driving perception information collecting methods, apparatuses, electronic devices, and computer readable media to solve one or more of the technical problems set forth in the background section above.
In a first aspect, some embodiments of the present disclosure provide a driving perception information collection method, comprising: collecting a driving perception information sequence of a target vehicle, wherein each piece of driving perception information in the sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one item of driving behavior data; for each piece of driving perception information in the sequence, executing the following processing steps: inputting at least one item of driving behavior data in the driving behavior information included in the driving perception information into a pre-trained driving behavior data analysis model to obtain a driving behavior label; inputting the driving data frame image included in the driving perception information into a pre-trained driving scene analysis model to obtain driving data information, wherein the driving data information includes at least one driving scene label and at least one driving obstacle label; combining the at least one driving scene label included in the driving data information with the driving behavior label to generate a driving annotation text file, and combining the at least one driving obstacle label included in the driving data information into a driving obstacle text file; inputting the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model to obtain driving perception identification information; and collecting a driving perception information sequence of the target vehicle according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files.
In a second aspect, some embodiments of the present disclosure provide a driving perception information collecting apparatus, comprising: a first collecting unit configured to collect a driving perception information sequence of a target vehicle, wherein each piece of driving perception information in the sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one item of driving behavior data; a processing unit configured to execute, for each piece of driving perception information in the sequence, the following processing steps: inputting at least one item of driving behavior data in the driving behavior information included in the driving perception information into a pre-trained driving behavior data analysis model to obtain a driving behavior label; inputting the driving data frame image included in the driving perception information into a pre-trained driving scene analysis model to obtain driving data information, wherein the driving data information includes at least one driving scene label and at least one driving obstacle label; combining the at least one driving scene label included in the driving data information with the driving behavior label to generate a driving annotation text file, and combining the at least one driving obstacle label included in the driving data information into a driving obstacle text file; and inputting the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model to obtain driving perception identification information; and a second collecting unit configured to collect a driving perception information sequence of the target vehicle according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files.
In a third aspect, some embodiments of the present disclosure provide an electronic device, comprising: one or more processors; a storage device having one or more programs stored thereon, which when executed by one or more processors, cause the one or more processors to implement the method described in any of the implementations of the first aspect.
In a fourth aspect, some embodiments of the present disclosure provide a computer readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method described in any of the implementations of the first aspect.
The above embodiments of the present disclosure have the following beneficial effect: the driving perception information collection method of some embodiments of the present disclosure can reduce the time needed to determine whether the obstacles identified by the driving perception data recognition model are correct. Specifically, this determination takes a long time because, when a large amount of driving perception data has been collected, having workers annotate it takes a long time, so verifying the identified obstacles also takes a long time. Based on this, the method of some embodiments of the present disclosure first collects a driving perception information sequence of a target vehicle, thereby obtaining the driving perception data of the target vehicle over a period of time. Next, for each piece of driving perception information in the sequence, the following processing steps are executed. First, at least one item of driving behavior data in the driving behavior information included in the driving perception information is input into a pre-trained driving behavior data analysis model to obtain a driving behavior label, so that the current driving behavior of the target vehicle can be determined. Second, the driving data frame image included in the driving perception information is input into a pre-trained driving scene analysis model to obtain driving data information, so that the obstacles in the image can be annotated with the pre-trained driving scene analysis model. Thus, when there is a large amount of driving perception data, the driving scene analysis model annotates the obstacles in it, reducing the time needed to verify the obstacles identified by the driving perception data recognition model. Then, the at least one driving scene label included in the driving data information is combined with the driving behavior label to generate a driving annotation text file, and the at least one driving obstacle label included in the driving data information is combined into a driving obstacle text file; text files for checking whether the obstacles identified by the recognition model are correct can thereby be generated. Next, the driving data frame image included in the driving perception information is input into a pre-trained driving perception recognition model to obtain driving perception identification information, so that the obstacles in the driving perception data can be determined with the pre-trained driving perception recognition model. Finally, a driving perception information sequence of the target vehicle is collected according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files. Whether the obstacles identified by the driving perception data recognition model are correct can therefore be determined by comparing the output of the driving perception recognition model with the generated text files, reducing the time this determination takes.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a flow diagram of some embodiments of a driving perception information collection method according to the present disclosure;
Fig. 2 is a schematic structural diagram of some embodiments of a driving perception information collecting apparatus according to the present disclosure;
Fig. 3 is a schematic block diagram of an electronic device suitable for implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a" or "an" in this disclosure are illustrative rather than limiting, and those skilled in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of a driving perception information collection method according to the present disclosure. The driving perception information collection method comprises the following steps:
step 101, collecting a driving perception information sequence of a target vehicle.
In some embodiments, the execution body of the driving perception information collection method (e.g., the computing device 101 shown in Fig. 1) may collect the driving perception information sequence of the target vehicle through an associated perception data acquisition device. Each piece of driving perception information in the sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one item of driving behavior data. The target vehicle may be a vehicle for which driving perception information is to be collected. An item of driving behavior data may include, but is not limited to, one of: steering wheel angle, vehicle travel speed. The driving data frame image may be an image captured by a perception data acquisition device associated with the target vehicle. The perception data acquisition device may be a capturing device connected to the target vehicle by wire or wirelessly, for example a camera or a video camera.
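For illustration only, here is a minimal sketch of how the data described above might be represented; all class and field names are hypothetical and not taken from this disclosure.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class DrivingBehaviorInfo:
    """Driving behavior information: at least one item of driving behavior
    data, e.g. steering wheel angle (degrees) and travel speed (km/h)."""
    steering_wheel_angle: float
    vehicle_speed: float


@dataclass
class DrivingPerceptionInfo:
    """One element of the driving perception information sequence:
    driving behavior information plus one driving data frame image."""
    behavior: DrivingBehaviorInfo
    frame_image: np.ndarray  # H x W x 3 image from the acquisition device


# The driving perception information sequence is then simply a list:
perception_sequence: List[DrivingPerceptionInfo] = []
```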
Step 102, for each piece of driving perception information in the driving perception information sequence, executing the following processing steps:
step 1021, inputting at least one driving behavior data in the driving behavior information included in the driving perception information into a driving behavior data analysis model trained in advance to obtain a driving behavior label.
In some embodiments, the execution body may input at least one item of driving behavior data in the driving behavior information included in the driving perception information into a pre-trained driving behavior data analysis model to obtain a driving behavior label. The driving behavior data analysis model may be a pre-trained neural network model that takes at least one item of driving behavior data as input and outputs a driving behavior label; for example, it may be a convolutional neural network model. The driving behavior label may represent the driving state of the target vehicle and may include, but is not limited to, one of: cut-in, cut-out, ramp entry, overtaking, lane change.
In some optional implementations of some embodiments, the driving behavior data analysis model may comprise a preset driving behavior data correspondence table. The correspondence table may be compiled by those skilled in the art based on the correspondence between a large number of sets of at least one item of driving behavior data and driving behavior labels. The at least one item of driving behavior data is compared in turn with the entries of the correspondence table; if an entry in the table is the same as or similar to the driving behavior data in the driving behavior information, the driving behavior label corresponding to that entry is taken as the driving behavior label indicated by the driving behavior data.
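A minimal sketch of the correspondence-table variant just described, assuming two items of driving behavior data (steering wheel angle and speed); the table entries and the similarity tolerances are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical correspondence table: (steering angle in degrees,
# speed in km/h) -> driving behavior label. Entries are illustrative.
BEHAVIOR_TABLE = [
    ((2.0, 110.0), "lane keeping"),
    ((25.0, 60.0), "lane change"),
    ((40.0, 30.0), "ramp entry"),
]


def lookup_behavior_label(angle: float, speed: float,
                          angle_tol: float = 5.0,
                          speed_tol: float = 10.0) -> str:
    """Compare the observed driving behavior data with each table entry in
    turn; if an entry is the same or sufficiently close, return the driving
    behavior label of that entry."""
    for (ref_angle, ref_speed), label in BEHAVIOR_TABLE:
        if (abs(angle - ref_angle) <= angle_tol
                and abs(speed - ref_speed) <= speed_tol):
            return label
    return "unknown"
```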
Step 1022, inputting the driving data frame image included in the driving perception information into a driving scene analysis model trained in advance to obtain the driving data information.
In some embodiments, the execution body may input the driving data frame image included in the driving perception information into a pre-trained driving scene analysis model to obtain the driving data information. The driving data information includes at least one driving scene label and at least one driving obstacle label. The driving scene analysis model may be a pre-trained neural network model that takes the driving data frame image as input and outputs at least one driving scene label and at least one driving obstacle label; for example, it may be a pre-trained convolutional neural network model or a deep learning network model. A driving scene label may include, but is not limited to, one of: weather, time, illumination, road surface material. A driving obstacle label may include, but is not limited to, one of: car, pedestrian, bicycle, tricycle, water-filled barrier, anti-collision bollard.
In some optional implementations of some embodiments, the driving scene analysis model may be a custom model on a driving scene analysis chip deployed at the vehicle end. The driving scene analysis chip is communicatively connected to an illumination sensor and a distance sensor. The illumination sensor senses the lumen value of the environment in which the vehicle is located; the distance sensor measures the distance value between the target vehicle and an obstacle. The driving scene analysis model may comprise a weather submodel, an illumination submodel, a road surface material submodel, a background congestion state submodel, and an obstacle recognition submodel.
The weather submodel may comprise a neural layer, a feature extraction layer, pooling layers, and an output layer. The neural layer uses dropout with a rate of 0.4. The feature extraction layer extracts weather features from the driving data frame image and may comprise three convolutional layers. The pooling layers fuse identical extracted weather features and reduce the feature dimensions; their number may be preset, for example 3. The output layer outputs the weather condition in the driving data frame image.
The illumination submodel may comprise an illumination intensity relationship table, which may be compiled by those skilled in the art based on the correspondence between a large number of lumen values and illumination intensity values. The sensed lumen value is compared in turn with the lumen values in the table; if a lumen value in the table is the same as or close to the sensed value, the illumination intensity value corresponding to that entry is taken as the illumination intensity value indicated by the sensed lumen value.
The road surface material submodel comprises a feature extraction layer, a feature recognition layer, and an output layer. The feature extraction layer extracts road surface material features from the driving data frame image and may comprise, for example, four convolutional layers with 3×3 convolution kernels. The feature recognition layer performs feature recognition on the feature map output by the feature extraction layer and may likewise comprise four convolutional layers with 3×3 kernels. The output layer outputs the road surface material of the driving data frame image.
The background congestion state submodel may comprise a correspondence table between the number of obstacles within a preset distance value and the background congestion state. The preset distance value may be a preset distance between the target vehicle and an obstacle; in practice, the distance may be determined by the distance sensor communicatively connected to the driving scene analysis chip, for example a lidar.
This correspondence table may be compiled by those skilled in the art based on the correspondence between a large number of obstacle counts within the preset distance and background congestion states. The observed number of obstacles is compared in turn with the counts in the table; if a count in the table equals the observed number, the background congestion state corresponding to that entry is taken as the state indicated by the observed number. The background congestion state may be one of: background congested, background not congested.
The obstacle recognition submodel may be a pre-trained model for recognizing obstacles in the driving data frame image. It may comprise a feature extraction layer, a pooling layer, and an output layer. The feature extraction layer extracts obstacle features from the driving data frame image and outputs an obstacle feature map; it may comprise 16 convolutional layers. The pooling layer fuses identical extracted obstacle features and reduces the feature dimensions; it may be a 2×2 pooling layer. The output layer outputs at least one obstacle label.
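As one hedged reading of the weather submodel description above (a neural layer with dropout rate 0.4, a feature extraction layer of three convolutions, pooling that fuses features and reduces dimensions, and an output layer), a PyTorch sketch might look as follows; channel counts, kernel sizes, the single adaptive pooling stage, and the number of weather classes are assumptions not fixed by the text.

```python
import torch
import torch.nn as nn


class WeatherSubmodel(nn.Module):
    """Sketch of the weather submodel: a dropout ("neural") layer, a
    three-convolution feature extraction layer, pooling to fuse features
    and reduce dimensions, and an output layer for the weather condition."""

    def __init__(self, num_weather_classes: int = 4):  # class count assumed
        super().__init__()
        self.dropout = nn.Dropout(p=0.4)  # dropout rate 0.4 as described
        self.features = nn.Sequential(    # three convolutional layers
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # fuse features, reduce dims
        self.output = nn.Linear(64, num_weather_classes)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        x = self.dropout(frame)           # frame: (N, 3, H, W)
        x = self.features(x)
        x = self.pool(x).flatten(1)       # (N, 64)
        return self.output(x)             # weather condition logits
```

The other submodels would follow the same pattern with their own layer counts (e.g. 16 convolutional layers and a 2×2 pooling layer for the obstacle recognition submodel), while the illumination and background congestion submodels are simple table lookups.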
The optional technical content in step 1022 constitutes an inventive point of the present disclosure and solves the third technical problem mentioned in the background: when a single pre-trained classifier model is used to annotate the driving scenes and obstacles in driving perception data, the limited annotation capability of a single classifier model is not taken into account, so not all driving scenes and obstacles in the data can be annotated; as a result, cases in which the driving perception data recognition model fails to recognize pedestrians around the vehicle cannot be corrected in time, leading to traffic accidents and casualties. This limited annotation capability is precisely the factor behind such accidents and casualties, and addressing it can avoid them. To that end, the present disclosure annotates with multiple models in combination: the driving scenes and obstacles in the driving perception data are annotated by different models, so that every driving scene and obstacle can be annotated accurately. Cases in which the driving perception data recognition model fails to recognize pedestrians can thus be discovered in time, avoiding traffic accidents and casualties while the not-yet-perfected driving perception data recognition model is in use.
Step 1023, combining the at least one driving scene label and the driving behavior label included in the driving data information to generate a driving annotation text file, and combining the at least one driving obstacle label included in the driving data information into a driving obstacle text file.
In some embodiments, the execution body may combine the at least one driving scene label included in the driving data information with the driving behavior label to generate a driving annotation text file, and combine the at least one driving obstacle label included in the driving data information into a driving obstacle text file. The combination may consist of writing the at least one driving scene label and the driving behavior label into a pre-generated empty text file to generate the driving annotation text file, and likewise writing the at least one driving obstacle label into another pre-generated empty text file to generate the driving obstacle text file.
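A minimal sketch of this combination step, writing the labels into pre-generated empty text files; the file names and the one-label-per-line layout are assumptions.

```python
from pathlib import Path
from typing import List


def write_label_files(scene_labels: List[str], behavior_label: str,
                      obstacle_labels: List[str],
                      frame_id: str, out_dir: str = "labels") -> None:
    """Combine the scene labels with the behavior label into the driving
    annotation text file, and the obstacle labels into the driving
    obstacle text file."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # Driving annotation text file: scene labels plus the behavior label.
    (out / f"{frame_id}_annotation.txt").write_text(
        "\n".join(scene_labels + [behavior_label]), encoding="utf-8")
    # Driving obstacle text file: the obstacle labels on their own.
    (out / f"{frame_id}_obstacles.txt").write_text(
        "\n".join(obstacle_labels), encoding="utf-8")
```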
Step 1024, inputting the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model to obtain the driving perception identification information.
In some embodiments, the execution body may input the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model to obtain the driving perception identification information. The driving perception recognition model may be a pre-trained neural network model that takes the driving data frame image as input and outputs driving perception identification information; for example, it may be a pre-trained convolutional neural network model or a deep learning network model. The driving perception identification information may represent each driving obstacle contained in the driving data frame image.
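The architecture of the driving perception recognition model is not fixed by the text; one hedged way to picture its output, the driving perception identification information, is as a set of per-obstacle detections, with all fields below being illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class ObstacleDetection:
    """One obstacle reported by the driving perception recognition model
    (fields are illustrative, not specified in the disclosure)."""
    label: str                              # e.g. "pedestrian", "car"
    box: Tuple[float, float, float, float]  # x1, y1, x2, y2 in pixels
    score: float                            # model confidence


# Driving perception identification information for one frame:
PerceptionIdentificationInfo = List[ObstacleDetection]
```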
Optionally, the driving perception recognition model may be trained through the following steps:
In a first step, a sample set is obtained.
In some embodiments, the execution body may obtain a sample set. Each sample in the sample set comprises a sample driving data frame image and the sample driving perception identification information corresponding to that image.
In a second step, a sample is selected from the sample set. In some embodiments, the execution body may select a sample from the sample set at random.
In a third step, the sample is input into an initial network model to obtain the driving perception identification information corresponding to the sample.
In some embodiments, the execution body may input the sample into the initial network model to obtain the driving perception identification information corresponding to the sample. The initial network model may be any of various neural networks capable of deriving driving perception identification information from a driving data frame image, for example a convolutional neural network or a deep neural network.
In a fourth step, a loss value between the obtained driving perception identification information and the sample driving perception identification information included in the sample is determined based on a preset loss function.
In some embodiments, the execution body may determine this loss value based on a preset loss function, for example a cross-entropy loss function.
In a fifth step, in response to the loss value being greater than or equal to a preset threshold, the network parameters of the initial network model are adjusted.
In some embodiments, the execution body may adjust the network parameters of the initial network model in response to the loss value being greater than or equal to the preset threshold; the setting of the preset threshold is not limited here. For example, the difference between the loss value and the preset threshold may be taken to obtain a loss difference, and on this basis the error is propagated from the last layer of the model toward the front, using methods such as back-propagation and stochastic gradient descent, to adjust the parameters of each layer. If required, a network-freezing approach may also be adopted, keeping the network parameters of some layers unchanged; this is not limited in any way.
Optionally, in response to the loss value being smaller than the preset threshold, the initial network model is determined to be the driving perception recognition model.
In some embodiments, the execution body may determine the initial network model to be the driving perception recognition model in response to the loss value being less than the preset threshold.
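A hedged PyTorch sketch of the training steps above, treating the identification task as classification for simplicity: random sample selection, a forward pass through the initial network model, a preset cross-entropy loss, and parameter adjustment by back-propagation with stochastic gradient descent until the loss falls below the preset threshold. The model, the sample set format, and the threshold value are placeholders.

```python
import random

import torch
import torch.nn as nn


def train_recognition_model(model: nn.Module, sample_set,
                            loss_threshold: float = 0.05,
                            lr: float = 0.01,
                            max_steps: int = 10000) -> nn.Module:
    """sample_set: a sequence of (frame_tensor, target) pairs, where
    frame_tensor is a CHW image tensor and target a 0-dim long tensor."""
    criterion = nn.CrossEntropyLoss()                     # preset loss function
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)  # SGD as described
    for _ in range(max_steps):
        frame, target = random.choice(sample_set)          # step 2: random sample
        prediction = model(frame.unsqueeze(0))             # step 3: forward pass
        loss = criterion(prediction, target.unsqueeze(0))  # step 4: loss value
        if loss.item() < loss_threshold:                   # below threshold: done
            break
        optimizer.zero_grad()                              # step 5: adjust params
        loss.backward()                                    # back-propagation
        optimizer.step()
    return model  # determined as the driving perception recognition model
```

In practice the driving perception identification information is richer than a single class label, so a detection-style loss would replace the plain cross-entropy here.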
Step 103, collecting a driving perception information sequence of the target vehicle according to the obtained driving perception identification information, the obtained driving annotation text files, and the obtained driving obstacle text files.
In some embodiments, the execution body collects the driving perception information sequence of the target vehicle according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files.
In practice, the execution body may collect the driving perception information sequence of the target vehicle through the following steps:
firstly, analyzing each piece of driving perception identification information in each piece of driving perception identification information to generate analysis information, and obtaining an analysis information set. The analysis process may be analysis of each driving obstacle included in the driving sensation identification information.
Second, for each piece of analysis information in the analysis information set, the following selection steps are executed:
A first selection step: select, from the obtained driving obstacle text files, the driving obstacle text file corresponding to the analysis information as the target driving obstacle text file.
A second selection step: determine, from the target driving obstacle text file and the analysis information, whether the analysis information is abnormal analysis information. In practice, in response to the target driving obstacle text file containing a driving obstacle label representing a pedestrian while no driving obstacle representing a pedestrian exists in the analysis information, the analysis information is determined to be abnormal analysis information. Likewise, in response to no driving obstacle label representing a pedestrian existing in the target driving obstacle text file while a driving obstacle representing a pedestrian exists in the analysis information, the analysis information is determined to be abnormal analysis information.
A third selection step: in response to determining that the analysis information is abnormal analysis information, generate an abnormality flag corresponding to the analysis information. The abnormality flag may mark the analysis information as abnormal analysis information; for example, the abnormality flag may be 1.
Third, the perception abnormality rate is determined from the generated abnormality flags. In practice, the execution body may determine, as the perception abnormality rate, the ratio of the number of generated abnormality flags to the number of driving perception information items in the driving perception information sequence.
Fourth, in response to the perception abnormality rate being less than or equal to a preset perception abnormality rate, the associated perception data acquisition device is controlled to stop collecting driving perception information.
Fifth, in response to the perception abnormality rate being greater than the preset perception abnormality rate, the driving behavior label corresponding to each piece of determined abnormal analysis information is determined as an abnormal behavior label, yielding an abnormal behavior label set, and each driving scene label corresponding to each piece of determined abnormal analysis information is determined as an abnormal scene label, yielding an abnormal scene label set.
Sixth, in response to detecting that the current behavior label of the target vehicle is the same as any abnormal behavior label in the abnormal behavior label set, and/or that the current driving scene label of the target vehicle is the same as any abnormal scene label in the abnormal scene label set, the associated perception data acquisition device is controlled to collect a driving perception information sequence of the target vehicle.
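A minimal sketch, under assumed data shapes, of the selection and control logic in the steps above: a frame's analysis information is abnormal when it and the target driving obstacle text file disagree on the presence of a pedestrian; the perception abnormality rate is the ratio of abnormality flags to sequence length; and collection resumes only when the rate exceeds the preset rate. The label strings and the preset rate are assumptions.

```python
from typing import List, Set


def is_abnormal(obstacle_file_labels: Set[str],
                recognized_labels: Set[str]) -> bool:
    """Second selection step: abnormal iff exactly one of the two sources
    contains a pedestrian label."""
    return (("pedestrian" in obstacle_file_labels)
            != ("pedestrian" in recognized_labels))


def perception_abnormality_rate(flags: List[bool]) -> float:
    """Third step: ratio of the number of abnormality flags to the number
    of driving perception information items in the sequence."""
    return sum(flags) / len(flags)


def should_recollect(rate: float, preset_rate: float = 0.1) -> bool:
    """Fourth and fifth steps: stop collection when the rate is at or below
    the preset rate; otherwise gather the abnormal behavior/scene labels and
    re-collect when the vehicle matches one of them again."""
    return rate > preset_rate
```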
The technical content of the fifth and sixth steps constitutes an inventive point of the present disclosure and solves the second technical problem mentioned in the background: when the result output by the driving perception data recognition model is determined to be abnormal (incorrect), the model needs further training, but because the driving scene and driving behavior that produced the abnormal output cannot be determined, driving perception data corresponding to all driving scenes and driving behaviors must be collected, so training the driving perception data recognition model takes a long time. Addressing this factor reduces the time spent training the model. To achieve this effect, first, in response to the perception abnormality rate being greater than the preset perception abnormality rate, the driving behavior label corresponding to each piece of determined abnormal analysis information is determined as an abnormal behavior label to obtain an abnormal behavior label set, and each driving scene label corresponding to each piece of determined abnormal analysis information is collected into an abnormal scene label set. The driving scene and driving behavior that produced the abnormal output can thereby be determined. Second, in response to detecting that the current behavior label of the target vehicle matches any abnormal behavior label in the set, and/or that the current driving scene label matches any abnormal scene label in the set, the associated perception data acquisition device is controlled to collect the driving perception information sequence of the target vehicle. Driving perception data can thus be collected again whenever the target vehicle is in an abnormal driving scene and/or exhibits abnormal driving behavior. As a result, only driving perception data from abnormal driving scenes and/or abnormal driving behaviors needs to be collected to train the driving perception data recognition model, reducing the training time.
The above embodiments of the present disclosure have the following beneficial effect: the driving perception information collection method of some embodiments of the present disclosure can reduce the time needed to determine whether the obstacles identified by the driving perception data recognition model are correct. Specifically, this determination takes a long time because, when a large amount of driving perception data has been collected, having workers annotate it takes a long time, so verifying the identified obstacles also takes a long time. Based on this, the method of some embodiments of the present disclosure first collects a driving perception information sequence of a target vehicle, thereby obtaining the driving perception data of the target vehicle over a period of time. Next, for each piece of driving perception information in the sequence, the following processing steps are executed. First, at least one item of driving behavior data in the driving behavior information included in the driving perception information is input into a pre-trained driving behavior data analysis model to obtain a driving behavior label, so that the current driving behavior of the target vehicle can be determined. Second, the driving data frame image included in the driving perception information is input into a pre-trained driving scene analysis model to obtain driving data information, so that the obstacles in the image can be annotated with the pre-trained driving scene analysis model. Thus, when there is a large amount of driving perception data, the driving scene analysis model annotates the obstacles in it, reducing the time needed to verify the obstacles identified by the driving perception data recognition model. Then, the at least one driving scene label included in the driving data information is combined with the driving behavior label to generate a driving annotation text file, and the at least one driving obstacle label included in the driving data information is combined into a driving obstacle text file; text files for checking whether the obstacles identified by the recognition model are correct can thereby be generated. Next, the driving data frame image included in the driving perception information is input into a pre-trained driving perception recognition model to obtain driving perception identification information, so that the obstacles in the driving perception data can be determined with the pre-trained driving perception recognition model. Finally, a driving perception information sequence of the target vehicle is collected according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files. Whether the obstacles identified by the driving perception data recognition model are correct can therefore be determined by comparing the output of the driving perception recognition model with the generated text files, reducing the time this determination takes.
With further reference to Fig. 2, as an implementation of the method illustrated in the above figure, the present disclosure provides some embodiments of a driving perception information collecting apparatus. These apparatus embodiments correspond to the method embodiments illustrated in Fig. 1, and the apparatus may be applied in various electronic devices.
As shown in Fig. 2, the driving perception information collecting apparatus 200 of some embodiments includes a first collecting unit 201, a processing unit 202, and a second collecting unit 203. The first collecting unit 201 is configured to collect a driving perception information sequence of a target vehicle, wherein each piece of driving perception information in the sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one item of driving behavior data. The processing unit 202 is configured to execute, for each piece of driving perception information in the sequence, the following processing steps: inputting at least one item of driving behavior data in the driving behavior information included in the driving perception information into a pre-trained driving behavior data analysis model to obtain a driving behavior label; inputting the driving data frame image included in the driving perception information into a pre-trained driving scene analysis model to obtain driving data information, wherein the driving data information includes at least one driving scene label and at least one driving obstacle label; combining the at least one driving scene label included in the driving data information with the driving behavior label to generate a driving annotation text file, and combining the at least one driving obstacle label included in the driving data information into a driving obstacle text file; and inputting the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model to obtain driving perception identification information. The second collecting unit 203 is configured to collect a driving perception information sequence of the target vehicle according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files.
It is to be understood that the units recorded in the driving perception information collecting apparatus 200 correspond to the respective steps of the method described with reference to Fig. 1. Thus, the operations, features, and advantages described above for the method also apply to the apparatus 200 and the units it comprises, and are not repeated here.
Referring now to FIG. 3, a block diagram of an electronic device 300 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 3 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 3, the electronic device 300 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 301 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 302 or a program loaded from a storage means 308 into a Random Access Memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic apparatus 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, and gyroscope; output devices 307 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 308 including, for example, magnetic tape and hard disk; and a communication device 309. The communication device 309 may allow the electronic device 300 to communicate with other devices, wirelessly or by wire, to exchange data. While Fig. 3 illustrates an electronic device 300 having various devices, it should be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may alternatively be implemented or provided. Each block shown in Fig. 3 may represent one device or multiple devices, as needed.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. The computer program, when executed by the processing apparatus 301, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be included in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: collect a driving perception information sequence of a target vehicle, wherein each piece of driving perception information in the sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one item of driving behavior data; for each piece of driving perception information in the sequence, execute the following processing steps: inputting at least one item of driving behavior data in the driving behavior information included in the driving perception information into a pre-trained driving behavior data analysis model to obtain a driving behavior label; inputting the driving data frame image included in the driving perception information into a pre-trained driving scene analysis model to obtain driving data information, wherein the driving data information includes at least one driving scene label and at least one driving obstacle label; combining the at least one driving scene label included in the driving data information with the driving behavior label to generate a driving annotation text file, and combining the at least one driving obstacle label included in the driving data information into a driving obstacle text file; and inputting the driving data frame image included in the driving perception information into a pre-trained driving perception recognition model to obtain driving perception identification information; and collect a driving perception information sequence of the target vehicle according to the obtained driving perception identification information, driving annotation text files, and driving obstacle text files.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described, for example, as: a processor comprising a first acquisition unit, a processing unit, and a second acquisition unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the first acquisition unit may also be described as "a unit that acquires a driving perception information sequence of the target vehicle".
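Read purely as illustrative software, that three-unit decomposition might take the following shape; every name here is hypothetical:

class DrivingPerceptionCollector:
    # Illustrative decomposition into the three units named above; each unit
    # is supplied as a callable implementing the corresponding method step.
    def __init__(self, first_acquisition_unit, processing_unit, second_acquisition_unit):
        self.first_acquisition_unit = first_acquisition_unit
        self.processing_unit = processing_unit
        self.second_acquisition_unit = second_acquisition_unit

    def run(self, target_vehicle):
        sequence = self.first_acquisition_unit(target_vehicle)
        recognition_infos, annotation_files, obstacle_files = self.processing_unit(sequence)
        return self.second_acquisition_unit(recognition_infos, annotation_files, obstacle_files)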
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, and without limitation, exemplary types of hardware logic components that may be used include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (7)

1. A driving perception information collection method, comprising:
acquiring a driving perception information sequence of a target vehicle, wherein each piece of driving perception information in the driving perception information sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one piece of driving behavior data;
for each piece of driving perception information in the driving perception information sequence, performing the following processing steps:
inputting at least one piece of driving behavior data in the driving behavior information included in the driving perception information into a driving behavior data analysis model trained in advance to obtain a driving behavior label;
inputting a driving data frame image included in the driving perception information into a driving scene analysis model trained in advance to obtain driving data information, wherein the driving data information includes at least one driving scene label and at least one driving obstacle label;
combining the at least one driving scene label included in the driving data information with the driving behavior label to generate a driving annotation text file, and combining the at least one driving obstacle label included in the driving data information into a driving obstacle text file;
inputting the driving data frame image included in the driving perception information into a driving perception recognition model trained in advance to obtain driving perception recognition information;
and collecting the driving perception information sequence of the target vehicle according to the obtained driving perception recognition information, the obtained driving annotation text files, and the obtained driving obstacle text files.
2. The method of claim 1, wherein the collecting of the driving perception information sequence of the target vehicle according to the obtained driving perception recognition information, the obtained driving annotation text files, and the obtained driving obstacle text files comprises:
analyzing each piece of the obtained driving perception recognition information to generate analysis information, so as to obtain an analysis information set;
for each piece of analysis information in the analysis information set, performing the following selection steps:
selecting a driving obstacle text file corresponding to the analysis information from the obtained driving obstacle text files as a target driving obstacle text file;
determining, according to the target driving obstacle text file and the analysis information, whether the analysis information is abnormal analysis information;
in response to determining that the analysis information is abnormal analysis information, generating an abnormal identifier corresponding to the analysis information;
determining a perception abnormal rate according to each generated abnormal identifier;
and controlling an associated perception data collection device to stop collecting driving perception information in response to the perception abnormal rate being less than or equal to a preset perception abnormal rate.
3. The method of claim 1, wherein the driving perception recognition model is trained by:
acquiring a sample set, wherein samples in the sample set comprise sample driving data frame images and sample driving perception recognition information corresponding to the sample driving data frame images;
selecting a sample from the set of samples;
inputting the sample into an initial network model to obtain driving perception recognition information corresponding to the sample;
determining, based on a preset loss function, a loss value between the obtained driving perception recognition information and the sample driving perception recognition information included in the sample;
and adjusting the network parameters of the initial network model in response to the loss value being greater than or equal to a preset threshold value.
4. The method of claim 3, wherein the method further comprises:
and determining the initial network model as a driving perception recognition model in response to the loss value being smaller than the preset threshold value.
5. A driving perception information collection apparatus, comprising:
the system comprises a first acquisition unit, a second acquisition unit and a third acquisition unit, wherein the first acquisition unit is configured to acquire a driving perception information sequence of a target vehicle, wherein the driving perception information in the driving perception information sequence comprises driving behavior information and a driving data frame image, and the driving behavior information comprises at least one driving behavior data;
a processing unit configured to perform, for each driving sensation information in the driving sensation information sequence, the following processing steps: inputting at least one piece of driving behavior data in the driving behavior information included in the driving perception information into a driving behavior data analysis model trained in advance to obtain a driving behavior label; inputting a driving data frame image included in the driving perception information into a driving scene analysis model trained in advance to obtain driving data information, wherein the driving data information includes at least one driving scene label and at least one driving obstacle label; combining at least one driving scene label and the driving behavior label included in the driving data information to generate a driving label text file, and combining at least one driving obstacle label included in the driving data information into a driving obstacle text file; inputting the driving data frame image included in the driving perception information into a driving perception recognition model trained in advance to obtain the driving perception recognition information;
and the second acquisition unit is configured to acquire the driving perception information sequence of the target vehicle according to the obtained driving perception identification information, the obtained driving annotation text files and the obtained driving obstacle text files.
6. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
7. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
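By way of non-limiting illustration of the selection steps in claim 2 above, one possible Python sketch follows. The parse and is_abnormal callables, and the positional pairing of analysis information with driving obstacle text files, are assumptions, since the claims do not fix their concrete form:

def perception_abnormal_rate(recognition_infos, obstacle_files, parse, is_abnormal):
    # Parse each piece of driving perception recognition information into
    # analysis information, pair it with its target driving obstacle text
    # file, and generate an abnormal identifier where the test fires.
    abnormal_flags = []
    for info, obstacle_file in zip(recognition_infos, obstacle_files):
        analysis = parse(info)
        abnormal_flags.append(bool(is_abnormal(analysis, obstacle_file)))
    # Perception abnormal rate over all generated abnormal identifiers.
    return sum(abnormal_flags) / len(abnormal_flags) if abnormal_flags else 0.0

Per claim 2, the associated collection device would then be stopped once this rate falls to or below the preset perception abnormal rate.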
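Similarly, the training procedure of claims 3 and 4 above might be sketched in PyTorch-style Python as follows; the optimizer choice, learning rate, and sample layout are assumptions, while the threshold test mirrors the claims:

import torch

def train_perception_model(model, sample_set, loss_fn, preset_threshold, lr=1e-3):
    # Train the initial network model until the loss value drops below the
    # preset threshold; the surviving model is the recognition model.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for frame_image, sample_recognition_info in sample_set:
        prediction = model(frame_image)
        loss = loss_fn(prediction, sample_recognition_info)
        if loss.item() < preset_threshold:
            return model  # claim 4: loss below threshold, model finalized
        optimizer.zero_grad()  # claim 3: otherwise adjust network parameters
        loss.backward()
        optimizer.step()
    return model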
CN202211517967.8A 2022-11-30 2022-11-30 Driving perception information acquisition method and device, electronic equipment and readable medium Active CN115631482B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211517967.8A CN115631482B (en) 2022-11-30 2022-11-30 Driving perception information acquisition method and device, electronic equipment and readable medium

Publications (2)

Publication Number Publication Date
CN115631482A true CN115631482A (en) 2023-01-20
CN115631482B CN115631482B (en) 2023-04-04

Family

ID=84909920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211517967.8A Active CN115631482B (en) 2022-11-30 2022-11-30 Driving perception information acquisition method and device, electronic equipment and readable medium

Country Status (1)

Country Link
CN (1) CN115631482B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921013A (en) * 2018-05-16 2018-11-30 浙江零跑科技有限公司 A kind of visual scene identifying system and method based on deep neural network
CN109520744A (en) * 2018-11-12 2019-03-26 百度在线网络技术(北京)有限公司 The driving performance test method and device of automatic driving vehicle
CN109887373A (en) * 2019-01-30 2019-06-14 北京津发科技股份有限公司 Driving behavior collecting method, assessment method and device based on vehicle drive
CN112650825A (en) * 2020-12-30 2021-04-13 北京嘀嘀无限科技发展有限公司 Method and device for determining abnormal drive receiving behavior, storage medium and electronic equipment
CN114445803A (en) * 2022-02-07 2022-05-06 苏州挚途科技有限公司 Driving data processing method and device and electronic equipment
CN114756505A (en) * 2022-04-22 2022-07-15 重庆长安汽车股份有限公司 Automatic driving scene self-recognition method and storage medium
CN115170863A (en) * 2022-06-08 2022-10-11 东软睿驰汽车技术(沈阳)有限公司 Method, device and equipment for estimating risk based on multi-source data and storage medium
CN114756700A (en) * 2022-06-17 2022-07-15 小米汽车科技有限公司 Scene library establishing method and device, vehicle, storage medium and chip

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116894225A (en) * 2023-09-08 2023-10-17 国汽(北京)智能网联汽车研究院有限公司 Driving behavior abnormality analysis method, device, equipment and medium thereof
CN116894225B (en) * 2023-09-08 2024-03-01 国汽(北京)智能网联汽车研究院有限公司 Driving behavior abnormality analysis method, device, equipment and medium thereof

Also Published As

Publication number Publication date
CN115631482B (en) 2023-04-04

Similar Documents

Publication Publication Date Title
JP7262503B2 (en) Method and apparatus, electronic device, computer readable storage medium and computer program for detecting small targets
EP4152204A1 (en) Lane line detection method, and related apparatus
CN108388834A Object detection using recurrent neural networks and cascaded feature mapping
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
CN112307978B (en) Target detection method and device, electronic equipment and readable storage medium
CN114117740A (en) Simulation test scene generation method and device based on automatic driving
CN115631482B (en) Driving perception information acquisition method and device, electronic equipment and readable medium
CN112200142A (en) Method, device, equipment and storage medium for identifying lane line
CN115240157B (en) Method, apparatus, device and computer readable medium for persistence of road scene data
CN115339453B (en) Vehicle lane change decision information generation method, device, equipment and computer medium
CN113392793A (en) Method, device, equipment, storage medium and unmanned vehicle for identifying lane line
CN113205088A (en) Obstacle image presentation method, electronic device, and computer-readable medium
CN114492022A (en) Road condition sensing data processing method, device, equipment, program and storage medium
CN111951548A (en) Vehicle driving risk determination method, device, system and medium
CN115761702A (en) Vehicle track generation method and device, electronic equipment and computer readable medium
CN116680601A (en) Edge traffic object prediction method, device, equipment and storage medium
CN115165398A (en) Vehicle driving function test method and device, computing equipment and medium
CN111191607A (en) Method, apparatus, and storage medium for determining steering information of vehicle
US20210374598A1 (en) A System and Method for Using Knowledge Gathered by a Vehicle
CN111881121B (en) Automatic driving data filling method and device
CN113409393B (en) Method and device for identifying traffic sign
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
CN113609956B (en) Training method, recognition device, electronic equipment and storage medium
CN115782919A (en) Information sensing method and device and electronic equipment
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant