CN113743456B - Scene positioning method and system based on unsupervised learning - Google Patents


Info

Publication number
CN113743456B
Authority
CN
China
Prior art keywords
scene
data
vehicle
unsupervised learning
reference sampling
Prior art date
Legal status
Active
Application number
CN202110855228.9A
Other languages
Chinese (zh)
Other versions
CN113743456A (en)
Inventor
程德心
周风明
郝江波
谢赤天
Current Assignee
Wuhan Kotei Informatics Co Ltd
Original Assignee
Wuhan Kotei Informatics Co Ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Kotei Informatics Co Ltd
Priority to CN202110855228.9A
Publication of CN113743456A
Application granted
Publication of CN113743456B
Legal status: Active

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, with a fixed number of clusters, e.g. K-means clustering
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06N 3/088 Non-supervised learning, e.g. competitive learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a scene positioning method and system based on unsupervised learning. The method comprises: fusing data generated by a plurality of sensors of the ego vehicle and by the CAN bus; selecting, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle; automatically extracting features from the feature data with an autoencoder; and dividing the automatically extracted results into a plurality of scene clusters with a clustering algorithm. The invention uses an autoencoder to automatically extract hidden features from sensor data, classifies scenes according to the extracted features with the k-means clustering algorithm to form scene clusters, and then manually defines the scenes of each cluster to complete scene extraction. Compared with traditional methods, this reduces the expertise required of operators and improves the accuracy and efficiency of scene positioning.

Description

Scene positioning method and system based on unsupervised learning
Technical Field
The invention belongs to the technical field of automatic driving of vehicles, and particularly relates to a scene positioning method and system based on unsupervised learning.
Background
With the rapid development of intelligent connected vehicles, the volume of autonomous-driving data is growing enormously, and mining autonomous-driving scenes from large amounts of data for data reuse is an inevitable trend. In the traditional scene positioning method, scene-associated features are summarized from experience, feature combination operations are performed on them, and operation parameters are tuned repeatedly until a rule is formed that completes scene extraction. This requires deep knowledge of both the scenes and the sensor data, demands highly specialized operators, and the operation rules and feature thresholds must be re-tuned repeatedly for each data source, so the manual workload is large and the precision is low.
Disclosure of Invention
In order to reduce the dependence on manual experience in scene positioning or extraction and to improve extraction efficiency, a first aspect of the invention provides a scene positioning method based on unsupervised learning, comprising the following steps: fusing data generated by a plurality of sensors of the ego vehicle and by the CAN bus; selecting, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle; automatically extracting features from the feature data with an autoencoder; and dividing the automatically extracted results into a plurality of scene clusters with a clustering algorithm.
In some embodiments of the invention, fusing the data generated by the plurality of sensors of the ego vehicle and by the CAN bus comprises the following steps:
determining a reference sampling sensor according to the sampling frequencies of the plurality of sensors; and
performing time fusion of the remaining sensors against the reference sampling sensor.
Further, time fusion of the remaining sensors against the reference sampling sensor comprises the following steps: traversing the data frames of each non-reference sensor; time-matching the data frames of each non-reference sensor against the timestamps of the reference sampling sensor; and selecting, for each reference timestamp, the data frame whose time offset from that timestamp is below a threshold as the non-reference sensor data at that timestamp, and fusing it with the reference sampling sensor's data at the same timestamp.
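As a minimal sketch of this timestamp-matching step (assuming each sensor's stream is a time-sorted list of (timestamp, payload) frames; the function name, data layout, and the 50 ms threshold are illustrative assumptions, not taken from the patent), the fusion could look like:

```python
from bisect import bisect_left

def fuse_by_timestamp(reference_frames, other_sensors, threshold_s=0.05):
    """Align each non-reference sensor to the reference sensor's timestamps.

    reference_frames: list of (t, data) from the reference sampling sensor.
    other_sensors:    dict mapping sensor name -> time-sorted list of (t, data).
    Frames farther than threshold_s from a reference timestamp are dropped.
    """
    fused = []
    for t_ref, ref_data in reference_frames:
        sample = {"t": t_ref, "reference": ref_data}
        for name, frames in other_sensors.items():
            if not frames:
                continue
            times = [t for t, _ in frames]
            # Binary-search for the frame nearest in time to t_ref.
            i = bisect_left(times, t_ref)
            j = min((k for k in (i - 1, i) if 0 <= k < len(frames)),
                    key=lambda k: abs(times[k] - t_ref))
            if abs(times[j] - t_ref) < threshold_s:
                sample[name] = frames[j][1]
        fused.append(sample)
    return fused
```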
In some embodiments of the invention, the autoencoder comprises at least one input layer, two hidden layers, and one output layer.
In some embodiments of the invention, the clustering algorithm is the k-means algorithm.
In the above embodiments, the method further comprises dividing the plurality of scene clusters into different scene sets, thereby completing the scene positioning of the data.
A second aspect of the invention provides a scene positioning system based on unsupervised learning, comprising a fusion module, a selection module, an extraction module, and a division module, wherein
the fusion module is used to fuse data generated by a plurality of sensors of the ego vehicle and by the CAN bus;
the selection module is used to select, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle;
the extraction module is used to automatically extract features from the feature data with an autoencoder; and
the division module is used to divide the automatically extracted results into a plurality of scene clusters with a clustering algorithm.
Further, the fusion module comprises a determination unit and a fusion unit, wherein
the determination unit determines a reference sampling sensor according to the sampling frequencies of the plurality of sensors, and
the fusion unit performs time fusion of the remaining sensors against the reference sampling sensor.
In a third aspect, the invention provides an electronic apparatus comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the unsupervised-learning-based scene positioning method provided in the first aspect of the invention.
In a fourth aspect, the invention provides a computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the unsupervised-learning-based scene positioning method provided in the first aspect of the invention.
The beneficial effects of the invention are as follows:
1. The invention addresses the high expertise requirements, large manual workload, and low positioning precision of traditional autonomous-driving scene positioning methods. Using unsupervised learning, it automatically extracts feature values from large amounts of sensor data and clusters the scenes, improving the precision and efficiency of scene positioning while reducing labor cost and expertise requirements.
2. The invention uses an autoencoder to automatically extract hidden features from sensor data, classifies scenes according to the extracted features with the k-means clustering algorithm to form scene clusters, and then manually defines the scenes of each cluster to complete scene extraction. Compared with traditional methods, this reduces the expertise required of operators and improves the accuracy and efficiency of scene positioning.
Drawings
FIG. 1 is a basic flow diagram of the unsupervised-learning-based scene positioning method in some embodiments of the invention;
FIG. 2 is a schematic flow diagram of the unsupervised-learning-based scene positioning method in some embodiments of the invention;
FIG. 3 is a schematic structural diagram of the unsupervised-learning-based scene positioning system in some embodiments of the invention;
FIG. 4 is a schematic structural diagram of an electronic device in some embodiments of the invention.
Detailed Description
The principles and features of the present invention are described below with reference to the drawings; the examples are provided to illustrate the invention and are not to be construed as limiting its scope.
Referring to FIG. 1, a first aspect of the invention provides a scene positioning method based on unsupervised learning, comprising: S100, fusing data generated by a plurality of sensors of the ego vehicle and by the CAN bus; S200, selecting, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle; S300, automatically extracting features from the feature data with an autoencoder; S400, dividing the automatically extracted results into a plurality of scene clusters with a clustering algorithm.
It is understood that the data generated by the plurality of sensors of the ego vehicle and by the CAN (Controller Area Network) bus include the outputs of sensors such as an intelligent camera, a lidar, a millimeter-wave radar, and an IMU inertial navigation system, as well as the data output by the vehicle's sensors or instruments and transmitted over the CAN bus.
In S100 of some embodiments of the invention, fusing the data generated by the plurality of sensors of the ego vehicle and by the CAN bus comprises the following steps: S101, determining a reference sampling sensor according to the sampling frequencies of the plurality of sensors; S102, performing time fusion of the remaining sensors against the reference sampling sensor.
Further, in step S102, time fusion of the remaining sensors against the reference sampling sensor comprises the following steps: traversing the data frames of each non-reference sensor; time-matching the data frames of each non-reference sensor against the timestamps of the reference sampling sensor; and selecting, for each reference timestamp, the data frame whose time offset from that timestamp is below a threshold as the non-reference sensor data at that timestamp, and fusing it with the reference sampling sensor's data at the same timestamp.
Optionally, the above fusion takes the sensor with the highest sampling frequency among the plurality of sensors as the reference sensor for time fusion; similarly, the measurements of the multiple sensors may be converted into a common coordinate system to achieve spatial synchronization of the multiple sensors.
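The patent does not specify the spatial-synchronization step further; as a hedged sketch, converting a sensor's measurements into a common ego-vehicle frame typically applies a rigid-body transform using that sensor's extrinsic calibration (here assumed known from offline calibration):

```python
import numpy as np

def to_ego_frame(points_sensor, R, t):
    """Transform an (N, 3) array of points from a sensor's own frame into the
    ego-vehicle frame, given the sensor's extrinsic rotation R (3x3) and
    translation t (3,)."""
    return points_sensor @ R.T + t
```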
In some embodiments of the invention, the autoencoder (AE) comprises at least one input layer, two hidden layers, and one output layer. Optionally, the autoencoder comprises at least one of a stacked autoencoder, an undercomplete autoencoder, a sparse autoencoder, or a denoising autoencoder.
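A minimal sketch of such an autoencoder, using PyTorch for illustration (the patent names no framework; the layer widths, 8-dimensional code, and training loop are assumptions), with one input layer, two hidden layers, and one output layer:

```python
import torch
import torch.nn as nn

class SceneAutoencoder(nn.Module):
    """Input -> hidden 1 -> hidden 2 (code) -> output, per the patent's layer
    count; the bottleneck activations serve as the extracted hidden features."""
    def __init__(self, n_features, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),  # hidden layer 1
            nn.Linear(32, code_dim), nn.ReLU(),    # hidden layer 2 (code)
        )
        self.decoder = nn.Linear(code_dim, n_features)  # output layer

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Train on reconstruction loss; x stands in for an (N, 3) tensor of
# normalized feature data (ego speed, relative speed, steering angle).
x = torch.randn(1000, 3)  # placeholder for real fused feature data
model = SceneAutoencoder(n_features=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```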
In some embodiments of the invention, the clustering algorithm comprises at least one of hierarchical clustering, partition-based clustering, grid-based clustering, model-based clustering, fuzzy clustering, or constraint-based clustering. Preferably, the clustering algorithm is the k-means algorithm.
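For the preferred k-means choice, a sketch using scikit-learn (the cluster count is an assumption; the patent does not fix it):

```python
import numpy as np
from sklearn.cluster import KMeans

# codes: (N, code_dim) bottleneck features from the autoencoder;
# random values stand in here for the real extracted features.
codes = np.random.rand(1000, 8)
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0)  # cluster count is illustrative
cluster_ids = kmeans.fit_predict(codes)  # one scene-cluster id per sample
```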
Referring to FIG. 2, in the above embodiments the method further comprises dividing the plurality of scene clusters into different scene sets to complete the scene positioning of the data. The specific steps are as follows:
1. Acquire the intelligent-camera, lidar, millimeter-wave-radar, GPS/IMU inertial-navigation, and vehicle-body CAN data; take the sensor with the highest sampling frequency as the reference sampling sensor according to each sensor's sampling frequency; traverse the data frames of the other sensors, time-match them against the reference sampling sensor's timestamps, take the data frame closest in time as the sensor data at the current timestamp, and fuse it with the reference sampling sensor's data at the current timestamp;
2. Select from the time-fused data the feature-value data related to the behavior of the ego vehicle and the target vehicle, such as the ego-vehicle speed, the relative speed of the target vehicle, and the steering angle of the ego vehicle; this greatly reduces the feature dimension of the data and improves feature-extraction efficiency;
3. Input the feature data extracted in step 2 into the autoencoder to automatically extract hidden features; the autoencoder network is configured with one input layer, two hidden layers, and one output layer;
4. Generate a plurality of scene clusters with the k-means clustering algorithm based on the feature-extraction results of step 3;
5. Manually divide the scene clusters from step 4 into scene sets to complete the scene positioning operation (a labeling sketch follows this list).
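As a sketch of step 5, the manual division can be expressed as a mapping from cluster IDs to human-defined scene labels. The scene names below are hypothetical examples, since the patent leaves the scene definitions to the operator; fused_frames and cluster_ids refer to the outputs of the fusion and clustering sketches above.

```python
# Hypothetical labels, assigned after inspecting representative samples
# of each cluster; any cluster not yet reviewed falls into "unlabeled".
scene_labels = {0: "car-following", 1: "cut-in", 2: "lane-change",
                3: "emergency-braking", 4: "free-driving", 5: "stop-and-go"}

scene_sets = {}
for frame, cid in zip(fused_frames, cluster_ids):
    scene_sets.setdefault(scene_labels.get(cid, "unlabeled"), []).append(frame)
```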
Example 2
Referring to FIG. 3, a second aspect of the invention provides a scene positioning system 1 based on unsupervised learning, comprising a fusion module 11, a selection module 12, an extraction module 13, and a division module 14, wherein the fusion module 11 is configured to fuse data generated by a plurality of sensors of the ego vehicle and by the CAN bus; the selection module 12 is configured to select, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle; the extraction module 13 is configured to automatically extract features from the feature data with an autoencoder; and the division module 14 is configured to divide the automatically extracted results into a plurality of scene clusters with a clustering algorithm.
Further, the fusion module 11 comprises a determination unit and a fusion unit, wherein the determination unit determines a reference sampling sensor according to the sampling frequencies of the plurality of sensors, and the fusion unit performs time fusion of the remaining sensors against the reference sampling sensor.
Example 3
In a third aspect of the present invention, there is provided an electronic apparatus comprising: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method provided by the first aspect of the present invention.
Referring to FIG. 4, an electronic device 500 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 501, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage device 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the electronic device 500. The processing device 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
In general, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 508 including, for example, a hard disk; and communication devices 509. The communication devices 509 may allow the electronic device 500 to communicate wirelessly or by wire with other devices to exchange data. While FIG. 4 shows an electronic device 500 having various devices, it is to be understood that not all of the illustrated devices are required to be implemented or provided; more or fewer devices may be implemented or provided instead. Each block shown in FIG. 4 may represent one device or multiple devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or from the storage means 508, or from the ROM 502. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 501. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer-readable medium may be contained in the electronic device, or may exist separately without being incorporated into the electronic device. The computer-readable medium carries one or more computer programs which, when executed by the electronic device, cause the electronic device to perform the scene positioning method described above.
computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, smalltalk, C ++, python and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description covers only preferred embodiments of the invention and is not intended to limit it; any modifications, equivalents, or improvements made within the spirit and principles of the invention shall be included within its scope of protection.

Claims (9)

1. A scene positioning method based on unsupervised learning, comprising:
fusing data generated by a plurality of sensors of the ego vehicle and by the CAN bus;
selecting, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle, wherein the feature data comprise the speed of the ego vehicle, the relative speed of the target vehicle, and the steering-wheel angle of the ego vehicle;
automatically extracting features from the feature data with an autoencoder;
dividing the automatically extracted results into a plurality of scene clusters with a clustering algorithm; and dividing the plurality of scene clusters into different scene sets to complete the scene positioning of the data.
2. The unsupervised-learning-based scene positioning method according to claim 1, wherein fusing the data generated by the plurality of sensors of the ego vehicle and by the CAN bus comprises the following steps:
determining a reference sampling sensor according to the sampling frequencies of the plurality of sensors;
performing time fusion of the remaining sensors against the reference sampling sensor.
3. The unsupervised-learning-based scene positioning method according to claim 2, wherein time fusion of the remaining sensors against the reference sampling sensor comprises the following steps:
traversing the data frames of each non-reference sensor;
time-matching the data frames of each non-reference sensor against the timestamps of the reference sampling sensor;
selecting, for each reference timestamp, the data frame whose time offset from that timestamp is below a threshold as the non-reference sensor data at that timestamp, and fusing it with the reference sampling sensor's data at the same timestamp.
4. The unsupervised-learning-based scene positioning method according to claim 1, wherein the autoencoder comprises at least one input layer, two hidden layers, and one output layer.
5. The unsupervised-learning-based scene positioning method according to claim 1, wherein the clustering algorithm is the k-means algorithm.
6. A scene positioning system based on unsupervised learning, characterized by comprising a fusion module, a selection module, an extraction module, and a division module, wherein
the fusion module is used to fuse data generated by a plurality of sensors of the ego vehicle and by the CAN bus;
the selection module is used to select, from the fused data, feature data related to the behavior of the ego vehicle and the target vehicle, wherein the feature data comprise the speed of the ego vehicle, the relative speed of the target vehicle, and the steering-wheel angle of the ego vehicle;
the extraction module is used to automatically extract features from the feature data with an autoencoder;
the division module is used to divide the automatically extracted results into a plurality of scene clusters with a clustering algorithm, the scene clusters being manually divided into different scene sets to complete the scene positioning of the data.
7. The unsupervised-learning-based scene positioning system according to claim 6, wherein the fusion module comprises a determination unit and a fusion unit,
the determination unit determining a reference sampling sensor according to the sampling frequencies of the plurality of sensors,
and the fusion unit performing time fusion of the remaining sensors against the reference sampling sensor.
8. An electronic device, comprising: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the unsupervised-learning-based scene positioning method of any one of claims 1 to 5.
9. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the unsupervised-learning-based scene positioning method according to any one of claims 1 to 5.
CN202110855228.9A 2021-07-27 2021-07-27 Scene positioning method and system based on unsupervised learning Active CN113743456B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110855228.9A CN113743456B (en) 2021-07-27 2021-07-27 Scene positioning method and system based on unsupervised learning

Publications (2)

Publication Number Publication Date
CN113743456A CN113743456A (en) 2021-12-03
CN113743456B true CN113743456B (en) 2024-05-10

Family

ID=78729299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110855228.9A Active CN113743456B (en) 2021-07-27 2021-07-27 Scene positioning method and system based on unsupervised learning

Country Status (1)

Country Link
CN (1) CN113743456B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112036297A (en) * 2020-08-28 2020-12-04 长安大学 Typical and extreme scene division and extraction method based on internet vehicle driving data
CN112465065A (en) * 2020-12-11 2021-03-09 中国第一汽车股份有限公司 Sensor data association method, device, equipment and storage medium
CN112541527A (en) * 2020-11-26 2021-03-23 深兰科技(上海)有限公司 Multi-sensor synchronization method and device, electronic equipment and storage medium
CN112560253A (en) * 2020-12-08 2021-03-26 中国第一汽车股份有限公司 Method, device and equipment for reconstructing driving scene and storage medium
DE102019217951A1 (en) * 2019-11-21 2021-05-27 Volkswagen Aktiengesellschaft Method and apparatus for determining a domain distance between at least two data domains
CN113112061A (en) * 2021-04-06 2021-07-13 深圳市汉德网络科技有限公司 Method and device for predicting vehicle oil consumption

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11199839B2 (en) * 2018-07-23 2021-12-14 Hrl Laboratories, Llc Method of real time vehicle recognition with neuromorphic computing network for autonomous driving
US11554785B2 (en) * 2019-05-07 2023-01-17 Foresight Ai Inc. Driving scenario machine learning network and driving environment simulation
US11203348B2 (en) * 2019-10-28 2021-12-21 Denso International America, Inc. System and method for predicting and interpreting driving behavior
CN111144015A (en) * 2019-12-30 2020-05-12 吉林大学 Method for constructing virtual scene library of automatic driving automobile

Also Published As

Publication number Publication date
CN113743456A (en) 2021-12-03

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant