CN112329254A - Automatic driving method for docking a simulation environment image with a real environment image - Google Patents

Automatic driving method for docking a simulation environment image with a real environment image

Info

Publication number
CN112329254A
CN112329254A (application CN202011266665.9A)
Authority
CN
China
Prior art keywords
image
vehicle
semantic segmentation
segmentation network
automatic driving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011266665.9A
Other languages
Chinese (zh)
Inventor
董舒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dilu Technology Co Ltd
Original Assignee
Dilu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dilu Technology Co Ltd filed Critical Dilu Technology Co Ltd
Priority to CN202011266665.9A priority Critical patent/CN112329254A/en
Publication of CN112329254A publication Critical patent/CN112329254A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 - Computer-aided design [CAD]
    • G06F 30/20 - Design optimisation, verification or simulation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30252 - Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Hardware Design (AREA)
  • Evolutionary Computation (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic driving method for docking a simulation environment image with a real environment image. First, vehicle camera images are collected in a simulation platform and used to train deep learning semantic segmentation network 1. Second, the trained network 1 segments images from the simulation platform and feature values are extracted. Then, the extracted feature values are used to train a deep reinforcement learning model so that the vehicle drives along a designated lane in the simulation environment. Finally, vehicle camera images are collected in a real environment to train deep learning semantic segmentation network 2, realizing automatic driving of the vehicle on an actual lane. The invention significantly reduces the difficulty of transferring an algorithm model trained in the simulation environment to the real environment, so that the reinforcement learning algorithm trained in simulation can be used directly on a real vehicle without additional reinforcement learning training.

Description

Automatic driving method for docking a simulation environment image with a real environment image
Technical Field
The invention belongs to the field of automatic driving, and in particular relates to an automatic driving method that applies a deep reinforcement learning algorithm.
Background
With the continuous development of artificial intelligence, applying AI technology in the automobile industry makes solving the automatic driving problem increasingly realistic. At present, automatic driving research follows two main directions. One decomposes the automatic driving task and solves each sub-problem with a different algorithm, for example using deep learning to detect and recognize road information. The other realizes automatic driving directly with an end-to-end algorithm, without decomposing the task, for example reinforcement learning or imitation learning. In theory, an end-to-end algorithm simplifies the design of automatic driving: it goes straight from the sensing end to the control end, reducing the amount of work compared with decomposing the problem. It also matches how humans drive, obtaining information from perception and acting through control, without needing precise knowledge of the surroundings. Many enterprises and researchers are therefore engaged in end-to-end automatic driving research. Reinforcement learning is one such end-to-end automatic driving algorithm; its principle also matches how humans learn, obtaining rewards and penalties from interaction with the environment and acquiring knowledge from them. Combining reinforcement learning with deep learning yields deep reinforcement learning, with markedly improved performance.
Although a deep reinforcement learning algorithm can learn new knowledge from the environment, the current state of the art is still insufficient for it to learn the desired knowledge from complex raw on-board camera images. The algorithm also depends on a manually designed reward function, and the design of this function limits the learning effect of reinforcement learning. Meanwhile, deep reinforcement learning suffers from low sample efficiency, requiring a large amount of repeated training to learn the desired knowledge. In a real environment, using a real vehicle for deep reinforcement learning training carries high time and vehicle costs and significant safety risks.
To reduce the training cost and risk of reinforcement learning, researchers generally train in a simulation environment and migrate the trained model to the real environment afterwards. In the field of automatic driving this has an obvious problem: images in the simulation environment differ markedly from those in the real environment, so results obtained by training a deep reinforcement learning algorithm on raw simulation images are difficult to transfer, and the algorithm must be retrained.
At present, researchers reduce the training gap between simulation and reality mainly by building highly realistic simulation environments or by designing algorithms that convert between simulation and real images. These methods can narrow the gap between the two environments, but only to a limited degree, and all require a large amount of time.
Disclosure of Invention
To solve the technical problems mentioned in the background, the invention provides an automatic driving method for docking a simulation environment image with a real environment image, which significantly reduces the difficulty of transferring an algorithm model trained in the simulation environment to the real environment.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
an automatic driving method for docking a simulation environment image and a real environment image is characterized by comprising the following steps:
(1) acquiring a vehicle camera image in a simulation platform, labeling a region capable of driving in the image, and training a deep learning semantic segmentation network 1 by adopting a labeled image set;
(2) segmenting a real-time image acquired by a vehicle camera in a simulation platform by adopting the deep learning semantic segmentation network 1 trained in the step (1), post-processing the segmented image, and extracting a characteristic value;
(3) training a deep reinforcement learning algorithm model by adopting the characteristic values extracted in the step (2) to realize that the vehicle runs according to a specified lane in a simulation environment;
(4) and (3) acquiring a vehicle camera image in a real environment, training the deep learning semantic segmentation network 2, and realizing automatic driving of the vehicle on a practical lane by adopting the characteristic value of the image in the real environment extracted by the deep learning semantic segmentation network 2 and the deep reinforcement learning algorithm model trained in the step (3).
Further, the specific process of step (1) is as follows:
(1-1) in the simulation platform, the vehicle drives along the correct lane while a vehicle-mounted camera collects forward-view images;
(1-2) labeling the drivable areas in the images collected in step (1-1) and saving them as a data set;
(1-3) selecting a suitable deep learning semantic segmentation network and training network 1 with the data set from step (1-2) until its accuracy meets the requirement.
Further, in step (1-2), labeling is performed manually.
Further, in step (1-3), the deep learning semantic segmentation network is selected according to its real-time performance.
Further, in step (1-3), the selected deep learning semantic segmentation network includes, but is not limited to, Bisenet.
Further, in step (1-3), the deep learning semantic segmentation network is trained until its accuracy exceeds a set threshold.
Further, the specific process of step (2) is as follows:
(2-1) inputting real-time images collected by the vehicle camera in the simulation platform into the trained network 1, which outputs segmented semantic images;
(2-2) post-processing the segmented semantic images to obtain feature values.
Further, in step (2-2), the post-processing includes down-sampling to reduce the network's computational load, filtering of low-value feature information, and image cropping to remove low-value and invalid information. Low-value information has no significant effect on driving on the road and includes green belts, sidewalks, and objects on sidewalks. Invalid information is entirely useless for driving on the road and comprises background information and interference information: background information includes the sky and the buildings and trees outside the road, while interference information includes fallen leaves on the road surface and lighting differences.
Further, the specific process of step (3) is as follows:
(3-1) connecting the deep reinforcement learning model with semantic segmentation network 1 and the simulation platform, and training the model with the feature values extracted in step (2);
(3-2) continuously adjusting and optimizing the model until the vehicle can drive along the designated lane;
(3-3) the deep reinforcement learning algorithm outputs automatic driving control signals to the simulation platform, and the vehicle in the simulation platform drives according to the received control signals.
Further, the specific process of step (4) is as follows:
(4-1) collecting vehicle camera images in a real environment, labeling them in the same way as in step (1), and training deep learning semantic segmentation network 2 with the labeled image set;
(4-2) extracting feature values from real-environment images in the same way as in step (2), based on network 2 trained in step (4-1);
(4-3) realizing automatic driving on the actual lane using the deep reinforcement learning model trained in step (3) and the feature values extracted in step (4-2).
The above technical scheme brings the following beneficial effects:
The key points of the method are the docking of the simulation environment with the real environment, and the training of the deep reinforcement learning algorithm on feature values extracted by a semantic segmentation network; together these improve the efficiency with which deep reinforcement learning realizes automatic driving. By redesigning the images used in algorithm training, the invention greatly reduces the difficulty of transferring a model trained in simulation to the real environment: the raw camera images conventionally used in deep reinforcement learning training are replaced with semantic segmentation images, shielding the invalid and low-value image information from the training and keeping only the road information the vehicle needs. Processing the images in this way markedly reduces the transfer difficulty, avoids large-scale retraining of the deep reinforcement learning algorithm on an actual vehicle, and saves a large amount of time and money. Actual tests show that the algorithm trained in the simulation environment can realize automatic driving in the real environment without retraining.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a schematic diagram of the feature extraction results for a straight lane in the simulation environment and the real environment;
FIG. 3 is a schematic diagram of the feature extraction results for a turning lane in the simulation environment and the real environment.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary, serve only to illustrate the technical idea of the present invention, and should not limit its scope; any modification made to the technical solution based on the technical idea of the present invention falls within the scope of the present invention.
As used herein, the singular forms "a", "an", and "the" may include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The invention provides an automatic driving method for docking a simulation environment image with a real environment image, comprising the following steps, as shown in figure 1:
step 1: acquiring a vehicle camera image in a simulation platform, labeling a region capable of driving in the image, and training a deep learning semantic segmentation network 1 by adopting a labeled image set;
step 2: adopting the deep learning semantic segmentation network 1 trained in the step 1 to segment real-time images acquired by a vehicle camera in a simulation platform, carrying out post-processing on the segmented images, and extracting characteristic values;
and step 3: training a deep reinforcement learning algorithm model by adopting the characteristic values extracted in the step (2) to realize that the vehicle runs according to an appointed lane in a simulation environment;
and 4, step 4: and (3) acquiring a vehicle camera image in a real environment, training the deep learning semantic segmentation network 2, and realizing automatic driving of the vehicle on a practical lane by adopting the characteristic value of the image in the real environment extracted by the deep learning semantic segmentation network 2 and the deep reinforcement learning algorithm model trained in the step 3.
In this embodiment, step 1 can be implemented by the following preferred scheme:
Step 1-1: in the simulation platform, the vehicle drives along the correct lane while a vehicle-mounted camera collects forward-view images;
Step 1-2: label the drivable areas in the images collected in step 1-1 and save them as a data set;
Step 1-3: select a suitable deep learning semantic segmentation network 1 and train it with the data set from step 1-2 until its accuracy meets the requirement.
Further, in step 1-2, labeling is performed manually.
Further, in step 1-3, a deep learning semantic segmentation network, such as Bisenet, is selected according to its real-time performance.
At present, more than ten publicly available deep learning semantic segmentation algorithms can be deployed quickly. The criterion for selecting a network is good real-time performance, for example: without using TensorRT, the acceleration tool provided officially by NVIDIA, processing a single 1024 × 1024 color image on an NVIDIA GeForce RTX 2080 takes no more than 30 ms; segmentation accuracy is no lower than that of current mainstream algorithms; and the computational cost of the algorithm is low.
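The 30 ms real-time criterion above can be checked with a simple wall-clock benchmark. The following is a minimal sketch, not part of the patent: `measure_latency` and `meets_realtime_budget` are illustrative names, and `infer_fn` stands in for whichever segmentation network is under evaluation.

```python
import time

def measure_latency(infer_fn, image, n_runs=10):
    """Average wall-clock latency of one segmentation forward pass."""
    infer_fn(image)  # warm-up run; GPU frameworks typically need one before timing
    t0 = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(image)
    return (time.perf_counter() - t0) / n_runs

def meets_realtime_budget(latency_s, budget_s=0.030):
    """Selection criterion from the text: one 1024 x 1024 frame in at most 30 ms."""
    return latency_s <= budget_s
```

In practice `infer_fn` would wrap the candidate network's inference call, and the measurement would be repeated on representative camera frames.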
Further, in step 1-3, the deep learning semantic segmentation network is trained until its accuracy exceeds a set threshold, which is adjusted according to the actual situation.
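The stopping rule of step 1-3 (train until accuracy exceeds a set threshold) can be sketched as follows. This is an illustrative outline under assumptions: `train_step` and `eval_fn` are hypothetical callables supplied by whatever segmentation framework is used, and pixel accuracy is one common choice of metric, not mandated by the patent.

```python
def pixel_accuracy(pred_mask, gt_mask):
    """Fraction of pixels where the predicted drivable-area mask
    matches the ground-truth annotation (one possible accuracy metric)."""
    hits = sum(1 for p, g in zip(pred_mask, gt_mask) if p == g)
    return hits / len(pred_mask)

def train_until_threshold(train_step, eval_fn, threshold=0.95, max_epochs=100):
    """Run training epochs until validation accuracy exceeds the threshold."""
    for epoch in range(max_epochs):
        train_step()        # one epoch over the labeled data set
        acc = eval_fn()     # validation accuracy after this epoch
        if acc > threshold:
            return epoch, acc  # requirement met: stop training
    return max_epochs, acc     # budget exhausted without meeting it
```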
In this embodiment, step 2 can be implemented by the following preferred scheme:
Step 2-1: input the real-time images collected by the vehicle camera in the simulation platform into the trained network 1, which outputs segmented semantic images;
Step 2-2: post-process the segmented semantic images to obtain feature values.
Further, in step 2-2, the post-processing includes down-sampling to reduce the network's computational load, filtering of low-value feature information, and image cropping to remove low-value and invalid information. Low-value information has no significant effect on driving on the road and includes green belts, sidewalks, and objects on sidewalks. Invalid information is entirely useless for driving on the road and comprises background information and interference information: background information includes the sky and the buildings and trees outside the road, while interference information includes fallen leaves on the road surface and lighting differences.
The purpose of down-sampling is as follows. The segmented image is usually larger than 1024 × 1024, which means it contains a large amount of repeated information. This repeated information increases the network's computation, and it may also have adverse effects: the deep reinforcement learning algorithm may waste exploration on whether the repeated information carries special meaning. After down-sampling, the image is smaller than 100 × 100, reducing the amount of data by more than a factor of 100 and significantly reducing training complexity and time.
After down-sampling and filtering of low-value and invalid information, the remaining feature values are exactly the information useful for deep reinforcement learning training. On one hand, the reduced quantity lowers the training and computation load, improving efficiency; on the other hand, with less low-value and invalid information, the learning and exploration space during training shrinks greatly, which also improves training efficiency.
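The post-processing chain described above (crop away background rows, keep only the drivable-area class, down-sample below 100 × 100) could be sketched as follows. This is an assumption-laden illustration: the class id, the crop fraction, and the 96 × 96 target grid are hypothetical choices, not values specified in the patent.

```python
import numpy as np

DRIVABLE = 1  # hypothetical class id assigned to the drivable region

def postprocess(semantic_img, target=(96, 96), crop_top_frac=0.4):
    """Turn a segmented semantic image into a compact feature map:
    crop away the top rows (sky, buildings: invalid information),
    keep only the drivable-area class (filters low-value classes),
    then down-sample to a grid smaller than 100 x 100."""
    h = semantic_img.shape[0]
    cropped = semantic_img[int(h * crop_top_frac):, :]        # image cropping
    binary = (cropped == DRIVABLE).astype(np.uint8)           # class filtering
    ys = np.linspace(0, binary.shape[0] - 1, target[0]).astype(int)
    xs = np.linspace(0, binary.shape[1] - 1, target[1]).astype(int)
    return binary[np.ix_(ys, xs)]                             # nearest-neighbour down-sampling
```

A 1024 × 1024 input mapped to 96 × 96 is a data reduction of roughly 114×, consistent with the "more than 100 times" figure in the text.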
In this embodiment, step 3 can be implemented by the following preferred scheme:
Step 3-1: connect the deep reinforcement learning model with semantic segmentation network 1 and the simulation platform, and train the model with the feature values extracted in step 2;
Step 3-2: continuously adjust and optimize the deep reinforcement learning model until the vehicle can drive along the designated lane;
Step 3-3: the deep reinforcement learning algorithm outputs automatic driving control signals to the simulation platform, and the vehicle in the simulation platform drives according to the received control signals.
In this embodiment, step 4 can be implemented by the following preferred scheme:
Step 4-1: collect vehicle camera images in a real environment, label them in the same way as in step 1, and train deep learning semantic segmentation network 2 with the labeled image set;
Step 4-2: extract feature values from real-environment images in the same way as in step 2, based on network 2 trained in step 4-1;
Step 4-3: realize automatic driving on the actual lane using the deep reinforcement learning model trained in step 3 and the feature values extracted in step 4-2.
Fig. 2 and fig. 3 show the feature extraction results for the straight lane and the turning lane, respectively. In each figure, (a) is the lane feature extraction result in the simulation environment and (b) is the result in the real environment; the white area is the extracted lane feature information and the black area is information unrelated to the lane. The lane feature values extracted in the simulation environment are almost identical to those extracted in the real environment. In other words, the deep reinforcement learning model trained on image feature values in the simulation environment can directly use the feature values extracted from real-environment images, completing the docking of images between the two environments and thereby realizing automatic driving.
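The sim/real similarity illustrated in figures 2 and 3 can also be quantified. Intersection-over-union between the two binary feature maps is a standard metric for this; the sketch below is our illustrative addition, not a step of the patent.

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection-over-union between two binary feature maps, e.g. a
    simulation-environment lane mask and a real-environment lane mask.
    1.0 means identical masks; values near 1.0 support direct transfer."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0
```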
Those of skill in the art will appreciate that the various operations, methods, steps in the processes, acts, or solutions discussed in the present application can be interchanged, modified, combined, or eliminated. Further, various operations, methods, steps in the flows, which have been discussed in the present application, may be interchanged, modified, rearranged, decomposed, combined, or eliminated. Further, steps, measures, schemes in the various operations, methods, procedures disclosed in the prior art and the present invention can also be alternated, changed, rearranged, decomposed, combined, or deleted.

Claims (10)

1. An automatic driving method for docking a simulation environment image with a real environment image, characterized by comprising the following steps:
(1) collecting vehicle camera images in a simulation platform, labeling the drivable regions in the images, and training deep learning semantic segmentation network 1 with the labeled image set;
(2) segmenting real-time images collected by the vehicle camera in the simulation platform with the network 1 trained in step (1), post-processing the segmented images, and extracting feature values;
(3) training a deep reinforcement learning model with the feature values extracted in step (2), so that the vehicle drives along a designated lane in the simulation environment;
(4) collecting vehicle camera images in a real environment, training deep learning semantic segmentation network 2, and realizing automatic driving on an actual lane using the feature values extracted from real-environment images by network 2 together with the deep reinforcement learning model trained in step (3).
2. The automatic driving method for docking a simulation environment image with a real environment image according to claim 1, wherein the specific process of step (1) is as follows:
(1-1) in the simulation platform, the vehicle drives along the correct lane while a vehicle-mounted camera collects forward-view images;
(1-2) labeling the drivable areas in the images collected in step (1-1) and saving them as a data set;
(1-3) selecting a suitable deep learning semantic segmentation network and training network 1 with the data set from step (1-2) until its accuracy meets the requirement.
3. The automatic driving method for docking a simulation environment image with a real environment image according to claim 2, wherein in step (1-2) labeling is performed manually.
4. The automatic driving method for docking a simulation environment image with a real environment image according to claim 2, wherein in step (1-3) the deep learning semantic segmentation network is selected according to its real-time performance.
5. The automatic driving method for docking a simulation environment image with a real environment image according to claim 4, wherein in step (1-3) the selected deep learning semantic segmentation network includes, but is not limited to, Bisenet.
6. The automatic driving method for docking a simulation environment image with a real environment image according to claim 2, wherein in step (1-3) the deep learning semantic segmentation network is trained until its accuracy exceeds a set threshold.
7. The automatic driving method for docking a simulation environment image with a real environment image according to claim 2, wherein the specific process of step (2) is as follows:
(2-1) inputting real-time images collected by the vehicle camera in the simulation platform into the trained network 1, which outputs segmented semantic images;
(2-2) post-processing the segmented semantic images to obtain feature values.
8. The automatic driving method for docking a simulation environment image with a real environment image according to claim 7, wherein in step (2-2) the post-processing includes down-sampling to reduce the network's computational load, filtering of low-value feature information, and image cropping to remove low-value and invalid information; the low-value information has no significant effect on driving on the road and includes green belts, sidewalks, and objects on sidewalks; the invalid information is entirely useless for driving on the road and comprises background information and interference information, wherein the background information includes the sky and the buildings and trees outside the road, and the interference information includes fallen leaves on the road surface and lighting differences.
9. The automatic driving method for docking a simulation environment image with a real environment image according to claim 2, wherein the specific process of step (3) is as follows:
(3-1) docking the deep reinforcement learning algorithm model with the deep learning semantic segmentation network 1 and the simulation platform, and training the deep reinforcement learning algorithm model with the feature values extracted in step (2);
(3-2) continuously adjusting and optimizing the model until the trained vehicle can drive along the specified lane; and
(3-3) the deep reinforcement learning algorithm outputs automatic-driving control signals for the vehicle to the simulation platform, and the vehicle in the simulation platform drives according to the received control signals.
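The interaction in claim 9 between the reinforcement-learning agent, segmentation network 1, and the simulator can be pictured as the loop below. The `sim`, `seg_net`, and `agent` interfaces are stand-ins assumed for illustration; the patent does not specify an API or a particular RL algorithm.

```python
# Hedged sketch of the step-(3) loop: the RL model receives feature values
# from segmentation network 1 and sends control signals back to the simulator.
def training_loop(sim, seg_net, agent, episodes=100):
    for _ in range(episodes):
        image = sim.reset()                        # start an episode in the simulator
        done = False
        while not done:
            features = seg_net(image)              # step (2): feature extraction
            action = agent.act(features)           # control signal (e.g. steering)
            image, reward, done = sim.step(action) # vehicle drives on the signal
            agent.learn(features, action, reward)  # adjust and optimize the model
```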
10. The automatic driving method for docking a simulation environment image with a real environment image according to claim 2, wherein the specific process of step (4) is as follows:
(4-1) collecting vehicle camera images in the real environment, labeling the images in the same way as in step (1), and training deep learning semantic segmentation network 2 with the labeled image set;
(4-2) extracting feature values from the real-environment images in the same way as in step (2), based on the deep learning semantic segmentation network 2 trained in step (4-1); and
(4-3) using the deep reinforcement learning algorithm model trained in step (3) together with the feature values extracted in step (4-2) to realize automatic driving of the vehicle on an actual lane.
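The sim-to-real transfer in claim 10 amounts to swapping the perception front end (network 1 for network 2, retrained on real images) while reusing the policy trained in simulation. A minimal sketch, with `seg_net_2` and `rl_policy` as assumed callables:

```python
# Illustrative deployment step for claim 10: only the segmentation network
# changes between simulation and reality; the RL policy is reused unchanged.
def drive_real(camera_frame, seg_net_2, rl_policy):
    features = seg_net_2(camera_frame)  # step (4-2): features from the real image
    return rl_policy(features)          # step (4-3): control signal for the vehicle
```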
CN202011266665.9A 2020-11-13 2020-11-13 Automatic driving method for butting simulation environment image and real environment image Pending CN112329254A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011266665.9A CN112329254A (en) 2020-11-13 2020-11-13 Automatic driving method for butting simulation environment image and real environment image

Publications (1)

Publication Number Publication Date
CN112329254A true CN112329254A (en) 2021-02-05

Family

ID=74317695

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011266665.9A Pending CN112329254A (en) 2020-11-13 2020-11-13 Automatic driving method for butting simulation environment image and real environment image

Country Status (1)

Country Link
CN (1) CN112329254A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109101907A (en) * 2018-07-28 2018-12-28 Huazhong University of Science and Technology A vehicle-mounted image semantic segmentation system based on a bilateral segmentation network
CN110647839A (en) * 2019-09-18 2020-01-03 深圳信息职业技术学院 Method and device for generating automatic driving strategy and computer readable storage medium
CN110795821A (en) * 2019-09-25 2020-02-14 的卢技术有限公司 Deep reinforcement learning training method and system based on scene differentiation
CN110837811A (en) * 2019-11-12 2020-02-25 腾讯科技(深圳)有限公司 Method, device and equipment for generating semantic segmentation network structure and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHOU Su; YI Ran; ZHENG Miao: "Research on a vehicle drivable-area segmentation algorithm based on knowledge distillation", Automobile Technology (汽车技术), no. 01 *

Similar Documents

Publication Publication Date Title
CN110210551B (en) Visual target tracking method based on adaptive subject sensitivity
CN109800736A A road extraction method based on remote sensing images and deep learning
CN109635744A A lane line detection method based on a deep segmentation network
CN105069779B A quality detection method for surface decoration patterns of architectural ceramics
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN112837344B Target tracking method based on a conditional adversarial generative Siamese network
CN104392212A (en) Method for detecting road information and identifying forward vehicles based on vision
CN110009095A Efficient road drivable-area segmentation method based on a deep-feature-compression convolutional network
CN113902915A (en) Semantic segmentation method and system based on low-illumination complex road scene
CN112488025B (en) Double-temporal remote sensing image semantic change detection method based on multi-modal feature fusion
CN111414954B (en) Rock image retrieval method and system
CN108304786A A pedestrian detection method based on binarized convolutional neural networks
CN110009648A Roadside image vehicle segmentation method based on a deep feature fusion convolutional neural network
CN112950780B (en) Intelligent network map generation method and system based on remote sensing image
CN110503613A Single-image rain removal method based on cascaded dilated convolutional neural networks
CN111476285B (en) Training method of image classification model, image classification method and storage medium
CN114048822A (en) Attention mechanism feature fusion segmentation method for image
CN109766823A A high-resolution remote sensing ship detection method based on deep convolutional neural networks
CN110276378A An improved instance segmentation method based on unmanned-driving technology
CN110490189A A salient object detection method based on a bidirectional information-link convolutional network
CN112419333A (en) Remote sensing image self-adaptive feature selection segmentation method and system
CN111046723B (en) Lane line detection method based on deep learning
CN112598684A (en) Open-pit area ground feature segmentation method based on semantic segmentation technology
CN108961270B (en) Bridge crack image segmentation model based on semantic segmentation
CN105893941A Facial expression recognition method based on regional images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination