CN112070064A - Image recognition system based on convolutional network - Google Patents

Image recognition system based on convolutional network

Info

Publication number
CN112070064A
CN112070064A CN202011062850.6A
Authority
CN
China
Prior art keywords
neural network
image
convolutional neural
network model
deep convolutional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011062850.6A
Other languages
Chinese (zh)
Inventor
张欢
刘茂金
何灏
王明亮
贺龙钊
马康
辛育
王磊
邹鲁
郭富强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Landau Zhitong Technology Co ltd
Original Assignee
Shenzhen Landau Zhitong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Landau Zhitong Technology Co ltd filed Critical Shenzhen Landau Zhitong Technology Co ltd
Priority to CN202011062850.6A
Publication of CN112070064A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses an image recognition system based on a convolutional network, relating to the technical field of unmanned driving. The system comprises an acquisition module, a recognition module and a presentation module, wherein the recognition module comprises an extraction unit and a deep convolutional neural network model; the acquisition module is connected with the extraction unit, the extraction unit is connected with the deep convolutional neural network model, and the deep convolutional neural network model is connected with the presentation module. The invention predicts the image semantic data of the corresponding visual scene and reconstructs the environmental semantic information around the vehicle; it reduces the loss of local information while enlarging the receptive field so that more contextual pixel information is retained, improves the accuracy of ambient-illumination recognition and analysis, effectively helps the vehicle grasp its surroundings when the driver is dazzled by high-beam glare or when road illumination is poor, effectively avoids traffic accidents, and has high adaptability.

Description

Image recognition system based on convolutional network
Technical Field
The invention relates to the technical field of unmanned driving, in particular to an image recognition system based on a convolutional network.
Background
The unmanned automobile is an intelligent vehicle that senses the road environment through an on-board sensing system, automatically plans a driving route, and controls the vehicle to reach a preset target. On-board sensors perceive the surroundings of the vehicle, and the steering and speed are controlled according to the sensed road, vehicle position and obstacle information, so that the vehicle can travel on the road safely and reliably. The system integrates technologies such as automatic control, system architecture, artificial intelligence and visual computing; it is a product of the advanced development of computer science, pattern recognition and intelligent control, an important indicator of a nation's research strength and industrial level, and has broad application prospects in national defense and the national economy.
Automotive lidar has begun to be used in unmanned-driving systems, primarily to sense obstacles in the surrounding environment. However, when a collision with the surroundings is unavoidable, the lidar data carries no semantic information, so the unmanned vehicle often collides indiscriminately.
The retrieved Chinese invention patent CN201510292794.8 discloses an image recognition system based on infrared imaging, consisting of an infrared imaging system and an image recognition system connected to it. The infrared imaging system consists of an infrared light source, an optical system connected to the light source, a scanning mechanism connected to the optical system, an infrared detector connected to the scanning mechanism, and an image acquisition module connected to the detector. It strongly suppresses the visible components of natural light and is suitable for use at night, in daytime, and under side, back and front lighting, which greatly broadens its range of applications. Meanwhile, because the target is captured on the infrared imaging principle, it is not easily disturbed by external factors, the information obtained is rich, and the accuracy of the image recognition system is improved to a great extent. However, it cannot perceive obstacles in the surrounding environment or improve the accuracy of ambient-illumination recognition and analysis, and therefore cannot help the vehicle grasp its surroundings when the driver is dazzled by high-beam glare or when road illumination is poor, nor effectively avoid traffic accidents.
The retrieved Chinese invention patent CN201880002314.1 discloses an image recognition system comprising a multi-view imaging module with a micro-lens array: light from the imaged object is refracted by each micro-lens into a different photosensitive area of the photosensitive element, so that a single exposure yields a plurality of small sub-images of the object from different views together with depth information. By repeatedly acquiring such single-view image information, the multi-view imaging module obtains image information of the identified object at a plurality of different angles; this is used as sample training data to train a convolutional neural network model into a target model for recognizing the object to be identified. Because several sub-images of the same object from different viewing angles are used for identification, the accuracy of object recognition is greatly improved. However, it likewise cannot perceive obstacles in the surrounding environment or improve the accuracy of ambient-illumination recognition and analysis, and therefore cannot help the vehicle grasp its surroundings when the driver is dazzled by high-beam glare or when road illumination is poor, nor effectively avoid traffic accidents.
An effective solution to the problems in the related art has not been proposed yet.
Disclosure of Invention
Aiming at the problems in the related art, the invention provides an image recognition system based on a convolutional network. By predicting the image semantic data of the corresponding visual scene, it reconstructs the environmental semantic information around the vehicle, reduces the loss of local information while enlarging the receptive field so that more contextual pixel information is retained, and improves the accuracy of ambient-illumination recognition and analysis. It can effectively help the vehicle grasp its surroundings when the driver is dazzled by high-beam glare or when road illumination is poor, and effectively avoids traffic accidents, thereby overcoming the technical problems in the prior art.
The technical scheme of the invention is realized as follows:
an image recognition system based on a convolutional network comprises an acquisition module, a recognition module and a presentation module, wherein the recognition module comprises an extraction unit and a deep convolutional neural network model; the acquisition module is connected with the extraction unit, the extraction unit is connected with the deep convolutional neural network model, and the deep convolutional neural network model is connected with the presentation module;
the acquisition module is used for acquiring street view image data and transmitting the street view image data to the extraction unit;
the extraction unit is used for extracting a semantic segmentation map of an image from the obtained street view image data and taking the semantic segmentation map as the input of the deep convolutional neural network model;
the deep convolutional neural network model is built and used for identifying and analyzing the semantic segmentation map of the image, which comprises the following steps:
calibrating a deep convolutional neural network model and training to determine a loss function of the deep convolutional neural network;
obtaining a glow feature f_G output from the convolutional layer and a glow mask GM representing the glow location, both through small-scale convolution filtering;
convolving the obtained GM with f_G to obtain a glow intensity estimate S;
concatenating the obtained features and performing convolution to output a glow-free image H;
acquiring the transmittance and ambient illumination of the image H as the recognition image and outputting it;
and the presentation module is used for presenting the acquired identification image.
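The acquisition → extraction → recognition → presentation wiring described above can be sketched as follows. The class and method names are illustrative assumptions, and the segmenter and model are stand-in callables rather than the patent's deep network:

```python
# Hypothetical sketch of the module wiring (names are illustrative,
# not from the patent).

class AcquisitionModule:
    """Collects street-view image data (stubbed here as a list of frames)."""
    def __init__(self, frames):
        self.frames = frames
    def acquire(self):
        for frame in self.frames:
            yield frame

class ExtractionUnit:
    """Extracts a semantic segmentation map from a street-view frame."""
    def __init__(self, segmenter):
        self.segmenter = segmenter  # any callable: frame -> segmentation map
    def extract(self, frame):
        return self.segmenter(frame)

class RecognitionModule:
    """Runs the deep model on the segmentation map to produce the
    recognition image (stubbed as a pass-through model)."""
    def __init__(self, model):
        self.model = model
    def recognise(self, seg_map):
        return self.model(seg_map)

class PresentationModule:
    """Collects recognition images for display."""
    def __init__(self):
        self.shown = []
    def present(self, image):
        self.shown.append(image)

def run_pipeline(frames, segmenter, model):
    acq = AcquisitionModule(frames)
    ext = ExtractionUnit(segmenter)
    rec = RecognitionModule(model)
    pres = PresentationModule()
    for frame in acq.acquire():
        pres.present(rec.recognise(ext.extract(frame)))
    return pres.shown

# Toy run: "segmentation" labels pixels > 0, "model" passes data through.
out = run_pipeline(
    frames=[[0, 3, 0], [5, 0, 1]],
    segmenter=lambda f: [1 if p > 0 else 0 for p in f],
    model=lambda m: m,
)
```

Swapping the two lambdas for a real segmentation network and the trained deep convolutional model would leave the wiring unchanged.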
Further, the deep convolutional neural network model further includes the following steps:
acquiring a training database set;
setting hyper-parameters and parameters of a deep convolutional neural network;
and obtaining a well-trained deep convolutional neural network model through training over multiple epochs and batches.
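A minimal epoch/batch training skeleton consistent with the steps above might look like this. The hyper-parameter values and the one-weight toy "model" are assumptions for illustration, not the patent's configuration:

```python
# Toy epoch/batch training loop (hyper-parameters and the single-weight
# model are illustrative assumptions).
import random

def iterate_batches(dataset, batch_size):
    """Yield successive mini-batches from the training database set."""
    for start in range(0, len(dataset), batch_size):
        yield dataset[start:start + batch_size]

def train(dataset, epochs=6, batch_size=4, lr=0.005):
    """Fit a single weight w minimising (w*x - y)^2 over the data."""
    w = 0.0
    for epoch in range(epochs):                         # multiple epochs
        random.Random(epoch).shuffle(dataset)           # reshuffle per epoch
        for batch in iterate_batches(dataset, batch_size):  # batch training
            grad = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
            w -= lr * grad                              # gradient step
    return w

data = [(x, 2.0 * x) for x in range(1, 9)]  # targets follow y = 2x
w_trained = train(list(data))               # w should approach 2.0
```

A real run would replace the scalar update with back-propagation through the network under the chosen loss function.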
Further, the deep convolutional neural network model comprises a plurality of convolutional layers, activation layers, pooling layers, deconvolution (transposed-convolution) layers, 1 × 1 convolutional layers and softmax layers, wherein the kernel size of the convolutional layers is a 3 × 3 matrix, the kernel of the deconvolution layers is 4 × 4, the pooling field size is 2 × 2, and the pooling stride is 2.
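Under the stated kernel sizes, the spatial dimensions through such a network can be checked arithmetically. The padding values and the particular layer ordering below are assumptions, since the patent specifies kernels and pooling but not padding:

```python
# Shape walk-through for the stated layer settings (3x3 conv, 2x2
# pooling with stride 2, 4x4 deconvolution); padding is an assumption.

def conv_out(size, kernel=3, stride=1, pad=1):
    """Output size of a convolution (pad=1 keeps 3x3 convs size-preserving)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Output size of 2x2 pooling with stride 2 (halves the size)."""
    return (size - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    """Output size of a 4x4 transposed convolution (stride 2 doubles it)."""
    return (size - 1) * stride - 2 * pad + kernel

w, h = 616, 184                       # input resolution from the description
w, h = conv_out(w), conv_out(h)       # 3x3 conv: size preserved
w, h = pool_out(w), pool_out(h)       # pooling halves: 308 x 92
w, h = conv_out(w), conv_out(h)       # 3x3 conv again
w, h = pool_out(w), pool_out(h)       # 154 x 46
w, h = deconv_out(w), deconv_out(h)   # 4x4/2 deconv doubles: 308 x 92
w, h = deconv_out(w), deconv_out(h)   # back to the input size
```

With these (assumed) paddings, two pooling stages and two deconvolution stages return the feature map to the original 616 × 184 resolution, as a segmentation output requires.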
Further, the glow-location mask GM and the glow component decomposition are expressed as:
I(x) = H(x) + G(x),
wherein H(x) = J_c(x)t(x) + A_c(x)(1 − t(x)) is expressed as the semantic segmentation map of the street-view image data, and
G(x) = Σ_k S_k(x) · GM(x),
wherein I is the street-view image data, G is the glow image, S_k is the k-th representative glow-image shape and illumination, and GM is a binary glow mask of light-source and non-light-source regions.
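The decomposition I(x) = H(x) + G(x) with a binary mask GM can be illustrated numerically. The 1-D "image" and the single glow shape S_1 below are invented for the demonstration:

```python
# Minimal numeric illustration of I = H + G with a binary glow mask GM
# (toy 1-D "image"; the glow shape S_1 is invented for the demo).

def compose(H, S_list, GM):
    """I = H + G, where G sums the representative glow shapes S_k over
    the light-source region selected by the binary mask GM."""
    G = [gm * sum(s[i] for s in S_list) for i, gm in enumerate(GM)]
    I = [h + g for h, g in zip(H, G)]
    return I, G

def remove_glow(I, G):
    """Recover the glow-free image H = I - G."""
    return [i - g for i, g in zip(I, G)]

H  = [0.2, 0.4, 0.6, 0.4]   # haze-model image J*t + A*(1 - t)
S1 = [0.0, 0.3, 0.3, 0.0]   # one representative glow shape S_k
GM = [0, 1, 1, 0]           # binary mask: light-source pixels only
I, G = compose(H, [S1], GM)
H_rec = remove_glow(I, G)   # glow removal recovers H
```

In the patent's pipeline the network estimates G (via f_G, GM and S) rather than being given it, but the arithmetic relation between I, H and G is the one shown.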
Further, the transmittance and ambient illumination of the image H are obtained through guided filtering, based on the local linear model
q(i) = a_k H(i) + b_k, i ∈ ω_k,
wherein ω_k is a filtering window; the following are obtained:
a_k = σ_k² / (σ_k² + ε),
b_k = μ_k − a_k μ_k,
wherein μ_k and σ_k² are respectively the mean and variance of the pixels in the window ω_k, |ω| is the number of pixels in ω_k, and ε is a regularization parameter; each pixel of the output image is the mean of all the linear functions covering that point, expressed as:
q(i) = (1/|ω|) Σ_{k: i ∈ ω_k} (a_k H(i) + b_k).
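The filtering equations above are those of a self-guided filter. A 1-D sketch under that reading follows, with ε as the usual regularization constant, an assumption here since the text does not give its value:

```python
# Self-guided filter sketch in 1-D (the patent applies it to the 2-D
# image H; epsilon's value is an assumption).

def box_windows(n, radius):
    """Indices of window ω_k centred at k, clipped to the signal ends."""
    return [list(range(max(0, k - radius), min(n, k + radius + 1)))
            for k in range(n)]

def guided_filter_1d(H, radius=1, eps=1e-3):
    n = len(H)
    windows = box_windows(n, radius)
    a, b = [0.0] * n, [0.0] * n
    for k, w in enumerate(windows):
        mu = sum(H[i] for i in w) / len(w)                # μ_k
        var = sum((H[i] - mu) ** 2 for i in w) / len(w)   # σ_k²
        a[k] = var / (var + eps)                          # a_k = σ²/(σ² + ε)
        b[k] = mu - a[k] * mu                             # b_k = μ_k − a_k μ_k
    # each output pixel averages all linear models a_k H(i) + b_k covering i
    out = [0.0] * n
    for i in range(n):
        covering = [k for k, w in enumerate(windows) if i in w]
        out[i] = sum(a[k] * H[i] + b[k] for k in covering) / len(covering)
    return out

flat = guided_filter_1d([0.5] * 8)              # constant signal is preserved
edge = guided_filter_1d([0.0] * 4 + [1.0] * 4)  # strong edge survives
```

In flat regions σ_k² ≈ 0, so a_k → 0 and the output is the local mean; near strong edges σ_k² dominates ε, so a_k → 1 and the edge passes through, which is what makes the filter edge-preserving.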
the invention has the beneficial effects that:
the image recognition system based on the convolutional network integrates the acquisition module, the recognition module, the presentation module, the extraction unit and the deep convolutional neural network model, acquires street view image data and transmits the street view image data to the extraction unit, extracts a semantic segmentation map of an image from the acquired street view image data and uses the semantic segmentation map as the input of the deep convolutional neural network model, and presents the semantic segmentation map of the image by recognition and analysis, thereby not only predicting the image semantic data of a corresponding visual scene and reconstructing the environmental semantic information around a vehicle, but also reducing the loss of local information, increasing the feeling and containing more front and back pixel information, improving the accuracy of the environmental illumination recognition and analysis, effectively helping the vehicle master the surrounding environment under the condition of blindness caused by far-distance light irradiation or poor road illumination, and effectively avoiding the occurrence of traffic accidents, the adaptability is high.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
FIG. 1 is a functional block diagram of a convolutional network-based image recognition system according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a deep convolutional neural network model of a convolutional network-based image recognition system according to an embodiment of the present invention.
In the figure:
1. an acquisition module; 2. an identification module; 3. a presentation module; 4. an extraction unit; 5. a deep convolutional neural network model.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present invention.
According to an embodiment of the present invention, there is provided a convolutional network-based image recognition system.
As shown in fig. 1-2, the image recognition system based on the convolutional network according to the embodiment of the present invention includes an acquisition module 1, a recognition module 2 and a presentation module 3, where the recognition module 2 includes an extraction unit 4 and a deep convolutional neural network model 5, where the acquisition module 1 is connected to the extraction unit 4, the extraction unit 4 is connected to the deep convolutional neural network model 5, and the deep convolutional neural network model 5 is connected to the presentation module 3;
the acquisition module 1 is used for acquiring street view image data and transmitting the street view image data to the extraction unit;
the extraction unit 4 is used for extracting a semantic segmentation map of an image from the obtained street view image data and taking the semantic segmentation map as the input of the deep convolutional neural network model 5;
the deep convolutional neural network model 5 is built and used for identifying and analyzing the semantic segmentation map of the image, which comprises the following steps:
calibrating a deep convolutional neural network model and training to determine a loss function of the deep convolutional neural network;
obtaining a glow feature f_G output from the convolutional layer and a glow mask GM representing the glow location, both through small-scale convolution filtering;
convolving the obtained GM with f_G to obtain a glow intensity estimate S;
concatenating the obtained features and performing convolution to output a glow-free image H;
acquiring the transmittance and ambient illumination of the image H as the recognition image and outputting it;
and the presentation module 3 is used for presenting the acquired identification image.
The deep convolutional neural network model 5 further includes the following steps:
acquiring a training database set;
setting hyper-parameters and parameters of a deep convolutional neural network;
and obtaining a well-trained deep convolutional neural network model through training over multiple epochs and batches.
The deep convolutional neural network model comprises a plurality of convolutional layers, activation layers, pooling layers, deconvolution (transposed-convolution) layers, 1 × 1 convolutional layers and softmax layers, wherein the kernel size of the convolutional layers is a 3 × 3 matrix, the kernel of the deconvolution layers is 4 × 4, the pooling field size is 2 × 2, and the pooling stride is 2.
Wherein the glow-location mask GM and the glow component decomposition are expressed as:
I(x) = H(x) + G(x),
wherein H(x) = J_c(x)t(x) + A_c(x)(1 − t(x)) is expressed as the semantic segmentation map of the street-view image data, and
G(x) = Σ_k S_k(x) · GM(x),
wherein I is the street-view image data, G is the glow image, S_k is the k-th representative glow-image shape and illumination, and GM is a binary glow mask of light-source and non-light-source regions.
Wherein the transmittance and ambient illumination of the image H are obtained through guided filtering, based on the local linear model
q(i) = a_k H(i) + b_k, i ∈ ω_k,
wherein ω_k is a filtering window; the following are obtained:
a_k = σ_k² / (σ_k² + ε),
b_k = μ_k − a_k μ_k,
wherein μ_k and σ_k² are respectively the mean and variance of the pixels in the window ω_k, |ω| is the number of pixels in ω_k, and ε is a regularization parameter; each pixel of the output image is the mean of all the linear functions covering that point, expressed as:
q(i) = (1/|ω|) Σ_{k: i ∈ ω_k} (a_k H(i) + b_k).
by means of the technical scheme, street view image data are collected and transmitted to the extraction unit, the semantic segmentation graph of the image is extracted from the obtained street view image data and is used as the input of the deep convolutional neural network model, in addition, the semantic segmentation graph of the image is identified and analyzed to be displayed, the image semantic data of the corresponding visual scene is predicted, the environment semantic information around the vehicle is reconstructed, the perception is increased while the loss of local information is reduced, more front and back pixel information can be contained, the accuracy of environment illumination identification and analysis is improved, the vehicle can be effectively helped to master the surrounding environment under the condition that the vehicle is blindly irradiated by a far light or the illumination of a road surface is poor, the occurrence of traffic accidents is effectively avoided, and the adaptability is high.
In addition, specifically, for the above-mentioned training database set, hyper-parameters and parameters of the deep convolutional neural network are set, and a well-trained deep convolutional neural network model is obtained through training over multiple epochs and batches. For this training, twenty thousand images with a resolution of 616 × 184 are collected in advance. These images provide the semantic labels used to train the deep convolutional neural network: each image is converted into image semantic data, either by manual annotation or by an image semantic segmentation algorithm, to serve as its training label.
In addition, the image semantics comprise pedestrians, bicycles, electric vehicles, cars, trucks, sky, pavements, roadside enclosures, roadside trees, roadside buildings, roadside greenbelts, median barriers, and traffic lights.
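For training, the listed semantic categories must be mapped to integer labels. The mapping below is a hypothetical example; the patent does not assign any index values:

```python
# Hypothetical label mapping for the image-semantic categories listed
# above (index values are illustrative; the patent assigns none).

SEMANTIC_CLASSES = [
    "pedestrian", "bicycle", "electric vehicle", "car", "truck", "sky",
    "pavement", "roadside enclosure", "roadside tree", "roadside building",
    "roadside greenbelt", "median barrier", "traffic light",
]
CLASS_TO_ID = {name: i for i, name in enumerate(SEMANTIC_CLASSES)}

def encode_labels(names):
    """Turn per-pixel class names into integer labels for network training."""
    return [CLASS_TO_ID[n] for n in names]

ids = encode_labels(["sky", "car", "pavement"])
```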
In summary, by integrating the acquisition module, the recognition module, the presentation module, the extraction unit and the deep convolutional neural network model, acquiring street-view image data and transmitting it to the extraction unit, extracting the semantic segmentation map of the image from the acquired data as the input of the deep convolutional neural network model, and identifying, analysing and presenting that map, the technical scheme of the invention not only predicts the image semantic data of the corresponding visual scene and reconstructs the environmental semantic information around the vehicle, but also reduces the loss of local information while enlarging the receptive field so that more contextual pixel information is retained, improves the accuracy of ambient-illumination recognition and analysis, effectively helps the vehicle grasp its surroundings when the driver is dazzled by high-beam glare or when road illumination is poor, effectively avoids traffic accidents, and has high adaptability.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (5)

1. An image recognition system based on a convolutional network, characterized by comprising an acquisition module (1), a recognition module (2) and a presentation module (3), wherein the recognition module (2) comprises an extraction unit (4) and a deep convolutional neural network model (5); the acquisition module (1) is connected with the extraction unit (4), the extraction unit (4) is connected with the deep convolutional neural network model (5), and the deep convolutional neural network model (5) is connected with the presentation module (3), wherein:
the acquisition module (1) is used for acquiring street view image data and transmitting the street view image data to the extraction unit;
the extraction unit (4) is used for extracting a semantic segmentation map of an image from the obtained street view image data and taking the semantic segmentation map as the input of the deep convolutional neural network model (5);
the deep convolutional neural network model (5) is built and used for identifying and analyzing the semantic segmentation map of the image, which comprises the following steps:
calibrating a deep convolutional neural network model and training to determine a loss function of the deep convolutional neural network;
obtaining a glow feature f_G output from the convolutional layer and a glow mask GM representing the glow location, both through small-scale convolution filtering;
convolving the obtained GM with f_G to obtain a glow intensity estimate S;
concatenating the obtained features and performing convolution to output a glow-free image H;
acquiring the transmittance and ambient illumination of the image H as the recognition image and outputting it;
the presentation module (3) is used for presenting the acquired identification image.
2. The convolutional network based image recognition system of claim 1 wherein the deep convolutional neural network model (5) further comprises the steps of:
acquiring a training database set;
setting hyper-parameters and parameters of a deep convolutional neural network;
and obtaining a well-trained deep convolutional neural network model through training over multiple epochs and batches.
3. The convolutional network based image recognition system of claim 2, wherein the deep convolutional neural network model comprises a plurality of convolutional layers, activation layers, pooling layers, deconvolution (transposed-convolution) layers, 1 × 1 convolutional layers and softmax layers, wherein the kernel size of the convolutional layers is a 3 × 3 matrix, the kernel of the deconvolution layers is 4 × 4, the pooling field size is 2 × 2, and the pooling stride is 2.
4. The convolutional network based image recognition system of claim 1, wherein the glow-location mask GM and the glow component decomposition are expressed as:
I(x) = H(x) + G(x),
wherein H(x) = J_c(x)t(x) + A_c(x)(1 − t(x)) is expressed as the semantic segmentation map of the street-view image data, and
G(x) = Σ_k S_k(x) · GM(x),
wherein I is the street-view image data, G is the glow image, S_k is the k-th representative glow-image shape and illumination, and GM is a binary glow mask of light-source and non-light-source regions.
5. The convolutional network based image recognition system of claim 4, wherein the transmittance and ambient illumination of the image H are obtained through guided filtering, based on the local linear model
q(i) = a_k H(i) + b_k, i ∈ ω_k,
wherein ω_k is a filtering window; the following are obtained:
a_k = σ_k² / (σ_k² + ε),
b_k = μ_k − a_k μ_k,
wherein μ_k and σ_k² are respectively the mean and variance of the pixels in the window ω_k, |ω| is the number of pixels in ω_k, and ε is a regularization parameter; each pixel of the output image is the mean of all the linear functions covering that point, expressed as:
q(i) = (1/|ω|) Σ_{k: i ∈ ω_k} (a_k H(i) + b_k).
CN202011062850.6A 2020-09-30 2020-09-30 Image recognition system based on convolutional network Pending CN112070064A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062850.6A CN112070064A (en) 2020-09-30 2020-09-30 Image recognition system based on convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011062850.6A CN112070064A (en) 2020-09-30 2020-09-30 Image recognition system based on convolutional network

Publications (1)

Publication Number Publication Date
CN112070064A true CN112070064A (en) 2020-12-11

Family

ID=73683445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011062850.6A Pending CN112070064A (en) 2020-09-30 2020-09-30 Image recognition system based on convolutional network

Country Status (1)

Country Link
CN (1) CN112070064A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223240A (en) * 2019-05-05 2019-09-10 北京理工大学珠海学院 Image defogging method, system and storage medium based on color decaying priori
CN110647839A (en) * 2019-09-18 2020-01-03 深圳信息职业技术学院 Method and device for generating automatic driving strategy and computer readable storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Shiba Kuanar et al., "Night Time Haze and Glow Removal using Deep Dilated Convolutional Network", arXiv:1902.00855 *

Similar Documents

Publication Publication Date Title
CN110356325B (en) Urban traffic passenger vehicle blind area early warning system
CN109389046B (en) All-weather object identification and lane line detection method for automatic driving
Hirabayashi et al. Traffic light recognition using high-definition map features
Kurihata et al. Rainy weather recognition from in-vehicle camera images for driver assistance
Pavlic et al. Classification of images in fog and fog-free scenes for use in vehicles
CN113313154A (en) Integrated multi-sensor integrated automatic driving intelligent sensing device
DE112019001657T5 (en) SIGNAL PROCESSING DEVICE AND SIGNAL PROCESSING METHOD, PROGRAM AND MOBILE BODY
Gavrila et al. Real time vision for intelligent vehicles
Tung et al. The raincouver scene parsing benchmark for self-driving in adverse weather and at night
CN111221342A (en) Environment sensing system for automatic driving automobile
CN114556249A (en) System and method for predicting vehicle trajectory
CN112215306A (en) Target detection method based on fusion of monocular vision and millimeter wave radar
Cheng et al. Modeling weather and illuminations in driving views based on big-video mining
CN111323027A (en) Method and device for manufacturing high-precision map based on fusion of laser radar and panoramic camera
Wang et al. Road edge detection in all weather and illumination via driving video mining
CN212009589U (en) Video identification driving vehicle track acquisition device based on deep learning
CN114155720B (en) Vehicle detection and track prediction method for roadside laser radar
CN113903012A (en) Collision early warning method and device, vehicle-mounted equipment and storage medium
CN112001272A (en) Laser radar environment sensing method and system based on deep learning
CN114419603A (en) Automatic driving vehicle control method and system and automatic driving vehicle
CN112070064A (en) Image recognition system based on convolutional network
CN111325811A (en) Processing method and processing device for lane line data
Nayak et al. Reference Test System for Machine Vision Used for ADAS Functions
CN113183868B (en) Intelligent matrix LED headlamp control system based on image recognition technology
Miman et al. Lane departure system design using with IR camera for night-time road conditions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201211