CN108537764A - Human-machine hybrid intelligent driving system - Google Patents

Human-machine hybrid intelligent driving system

Info

Publication number
CN108537764A
CN108537764A (application CN201810183771.7A)
Authority
CN
China
Prior art keywords
image
fusion
subsystem
module
pseudo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810183771.7A
Other languages
Chinese (zh)
Inventor
QIU Yanxin (邱炎新)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ming Automatic Control Technology Co Ltd
Original Assignee
Shenzhen Ming Automatic Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ming Automatic Control Technology Co Ltd filed Critical Shenzhen Ming Automatic Control Technology Co Ltd
Priority to CN201810183771.7A priority Critical patent/CN108537764A/en
Publication of CN108537764A publication Critical patent/CN108537764A/en
Withdrawn legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 - Interaction between the driver and the control system
    • B60W 50/10 - Interpretation of driver requests or demands
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W 50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W 50/08 - Interaction between the driver and the control system
    • B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 - Control of position or course in two dimensions
    • G05D 1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 - Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30268 - Vehicle interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a human-machine hybrid intelligent driving system comprising a sensing subsystem, a behavior recognition subsystem, a command subsystem and a control subsystem. The sensing subsystem uses sensors to acquire vehicle running-state information and current road-condition information and sends both to the control subsystem; the behavior recognition subsystem recognizes the driver's body behavior; the command subsystem converts the recognized behavior into driving commands and sends them to the control subsystem; and the control subsystem controls the vehicle according to the running-state information, the road-condition information and the driving commands. Beneficial effects of the present invention: a human-machine hybrid intelligent driving system is provided that overcomes the lack of trust in driverless systems, strengthens the driver's role in decision-making, and significantly improves the driving experience.

Description

Human-machine hybrid intelligent driving system
Technical field
The present invention relates to the field of intelligent driving, and in particular to a human-machine hybrid intelligent driving system.
Background technology
With the development of artificial intelligence technology, unmanned technology also develops rapidly, Unmanned Systems and manned System is compared, be by people exclude except control loop, still, Unmanned Systems showed when in face of complex road condition compared with Difference.
Human bodys' response is an emerging research direction in artificial intelligence field, be with a wide range of applications with it is non- The economic value of Chang Keguan, the application field being related to include mainly:Video monitoring, medical diagnosis and monitoring, motion analysis, intelligence Human-computer interaction, virtual reality etc..The corresponding groundwork flow of Human bodys' response is:Various kinds of sensors is selected to obtain human body row For data information, and the behavioral trait of sensor characteristics and people is combined to establish rational behavior model, on this basis from original Extracted in gathered data to behavior type have stronger descriptive power feature, and using suitable method to these features into Row training, and then realize the pattern-recognition to human body behavior.The image preprocessing of high quality is the key that Activity recognition research, existing The ineffective very big reason of somebody's body Activity recognition is not obtain the image of high quality.
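The workflow described above (acquire sensor data, extract descriptive features, train a recognizer) can be sketched as follows. This is a minimal illustration only, not the method claimed in this patent: the statistical features and the nearest-centroid classifier are assumptions chosen for brevity.

```python
import numpy as np

def extract_features(window):
    """Simple statistical features of a 1-D sensor window (illustrative choice)."""
    return np.array([window.mean(), window.std(), np.abs(np.diff(window)).mean()])

def fit_centroids(windows, labels):
    """Nearest-centroid behavior model: one mean feature vector per class."""
    feats = np.array([extract_features(w) for w in windows])
    return {int(c): feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(window, centroids):
    """Assign the window to the class with the nearest feature centroid."""
    f = extract_features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Synthetic training data: class 0 = "hands still" (low variance),
# class 1 = "gesturing" (high variance).
rng = np.random.default_rng(0)
windows = np.array([rng.normal(0.0, 0.1 if c == 0 else 1.0, 100)
                    for c in [0] * 20 + [1] * 20])
labels = np.array([0] * 20 + [1] * 20)
centroids = fit_centroids(windows, labels)
```

Any real system would replace the synthetic windows with actual sensor streams and the centroid model with a trained recognizer; the sketch only shows where feature quality enters the pipeline.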
Summary of the invention
In view of the above problems, the present invention aims to provide a human-machine hybrid intelligent driving system.
The purpose of the present invention is achieved by the following technical solution:
A human-machine hybrid intelligent driving system is provided, comprising a sensing subsystem, a behavior recognition subsystem, a command subsystem and a control subsystem. The sensing subsystem uses sensors to acquire vehicle running-state information and current road-condition information and sends both to the control subsystem; the behavior recognition subsystem recognizes the driver's body behavior; the command subsystem converts the recognized behavior into driving commands and sends them to the control subsystem; and the control subsystem controls the vehicle according to the running-state information, the road-condition information and the driving commands.
Beneficial effects of the present invention: a human-machine hybrid intelligent driving system is provided that overcomes the lack of trust in driverless systems, strengthens the driver's role in decision-making, and significantly improves the driving experience.
Description of the drawings
The invention is further described below with reference to the accompanying drawing. The embodiment shown in the drawing does not limit the invention in any way; those of ordinary skill in the art can obtain other drawings from it without creative effort.
Fig. 1 is a structural schematic diagram of the present invention.
Reference numerals:
sensing subsystem 1, behavior recognition subsystem 2, command subsystem 3, control subsystem 4.
Detailed description of the embodiments
The invention is further described with reference to the following embodiments.
Referring to Fig. 1, the human-machine hybrid intelligent driving system of this embodiment comprises a sensing subsystem 1, a behavior recognition subsystem 2, a command subsystem 3 and a control subsystem 4. The sensing subsystem 1 uses sensors to acquire vehicle running-state information and current road-condition information and sends both to the control subsystem 4; the behavior recognition subsystem 2 recognizes the driver's body behavior; the command subsystem 3 converts the recognized behavior into driving commands and sends them to the control subsystem 4; and the control subsystem 4 controls the vehicle according to the running-state information, the road-condition information and the driving commands.
This embodiment provides a human-machine hybrid intelligent driving system that overcomes the lack of trust in driverless systems, strengthens the driver's decision-making role, and significantly improves the driving experience.
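The data flow among the four subsystems can be sketched as below. All type and function names are illustrative placeholders, and the behavior-to-command mapping is an assumed example rather than anything specified in this patent.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:          # produced by the sensing subsystem (1)
    speed_kmh: float
    road_condition: str      # e.g. "dry", "wet"

@dataclass
class DriveCommand:          # produced by the command subsystem (3)
    action: str

def recognize_behavior(frame: dict) -> str:
    """Placeholder for the behavior recognition subsystem (2): map an image
    frame (here a dict of hypothetical pose measurements) to a behavior label."""
    return "hand_raised" if frame.get("hand_y", 0.0) > 0.5 else "neutral"

def behavior_to_command(behavior: str) -> DriveCommand:
    """Command subsystem (3): convert a recognized behavior into a driving command."""
    table = {"hand_raised": "slow_down", "neutral": "keep"}
    return DriveCommand(table.get(behavior, "keep"))

def control(state: VehicleState, cmd: DriveCommand) -> str:
    """Control subsystem (4): combine vehicle state, road conditions and the
    driver's command; road conditions may override the driver's request."""
    if state.road_condition == "wet" and cmd.action == "keep":
        return "reduce_speed"
    return cmd.action
```

The key design point the embodiment makes is visible here: the driver's recognized behavior enters the control decision alongside the sensed vehicle and road state, rather than the controller acting on sensor data alone.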
Preferably, the behavior recognition subsystem 2 comprises an image acquisition module, an image fusion module, a feature extraction module and a behavior recognition module. The image acquisition module acquires human body images with a visible-light and infrared multi-spectral imaging system; the image fusion module fuses the visible-light and infrared images to obtain a color fusion image; the feature extraction module extracts the human body contour from the color fusion image; and the behavior recognition module recognizes human behavior from the extracted contour.
In this preferred embodiment, the behavior recognition subsystem acquires human body images simultaneously in the visible and infrared bands and fuses them, yielding high-quality images that improve subsequent human detection and tracking; the color fusion output also matches human visual characteristics better.
Preferably, the image fusion module comprises a first fusion module, a second fusion module and a third fusion module. The first fusion module fuses the visible-light image and the infrared image in the non-subsampled contourlet transform (NSCT) domain to obtain a grayscale fusion image; the second fusion module derives a pseudo-color fusion image from the grayscale fusion image; and the third fusion module derives the color fusion image from the pseudo-color fusion image.
The first fusion module fuses the visible-light image and the infrared image in the NSCT domain as follows:
Apply the non-subsampled contourlet decomposition to the visible-light image P and the infrared image Q to obtain the sub-band coefficients {L_P, D_P^(j,k)} and {L_Q, D_Q^(j,k)}, where L_P and L_Q denote the low-frequency sub-band coefficients of the visible-light and infrared images, and D_P^(j,k) and D_Q^(j,k) denote the k-th directional sub-band coefficients in the j-th high-frequency scale of the visible-light and infrared images, respectively.
The low-frequency sub-bands are fused using the following formula:
where L_R(x, y) denotes the low-frequency sub-band coefficient of the grayscale fusion image R, p denotes the average gray value of the infrared image, and H(x, y) denotes the gray value of pixel (x, y) in the infrared image.
The high-frequency sub-bands are fused using the following formula:
where D_R^(j,k)(x, y) denotes the directional sub-band coefficient of the grayscale fusion image R, v_P(x, y) denotes the variance of the visible-light image's directional sub-band coefficients in an n × n window centered on pixel (x, y), and v_Q(x, y) denotes the corresponding variance for the infrared image.
The grayscale fusion image R is reconstructed from its low-frequency and high-frequency sub-band coefficients.
Fusion images produced by traditional methods often suffer from low target-to-background contrast and blurring. This preferred embodiment improves fusion quality by fusing the visible-light and infrared images in a multi-scale, multi-directional manner; with a suitable fusion rule it combines information from the different wavebands more effectively, and the resulting grayscale fusion image has richer detail and texture.
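The fusion rules of this embodiment can be illustrated on precomputed sub-band arrays. Because the patent's formula images are not reproduced in this text and an NSCT implementation is out of scope, the sketch below assumes a plausible reading of the variable definitions: an infrared-intensity-driven weight for the low-frequency sub-band, and per-pixel max-local-variance selection for the high-frequency sub-bands. Both rules are assumptions, not the patent's exact formulas.

```python
import numpy as np

def local_variance(img, n=3):
    """Variance of each pixel's n x n neighbourhood (reflect-padded at edges)."""
    pad = n // 2
    p = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(p, (n, n))
    return windows.var(axis=(2, 3))

def fuse_lowpass(L_P, L_Q, H_Q):
    """Low-frequency fusion: weight the infrared band by how bright each
    infrared pixel H(x, y) is relative to the infrared mean p (an ASSUMED
    form of the rule; the original formula is not reproduced in the text)."""
    p_bar = H_Q.mean()
    w = np.clip(H_Q / (2.0 * p_bar + 1e-9), 0.0, 1.0)  # bright IR -> more IR weight
    return (1.0 - w) * L_P + w * L_Q

def fuse_highpass(D_P, D_Q, n=3):
    """High-frequency fusion: per pixel, keep the coefficient from the image
    with the larger local variance v_P vs. v_Q (max-variance selection, as
    the variable definitions in the text suggest)."""
    vP, vQ = local_variance(D_P, n), local_variance(D_Q, n)
    return np.where(vP >= vQ, D_P, D_Q)
```

In a full pipeline these rules would be applied to NSCT sub-bands and followed by the inverse transform to reconstruct the grayscale fusion image R.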
Preferably, the second fusion module derives the pseudo-color fusion image from the grayscale fusion image as follows:
The pseudo-color fusion image is obtained in the YUV color space using the following formula:
where Y(x, y), U(x, y) and V(x, y) denote the YUV components of the pseudo-color fusion image, R(x, y) denotes the grayscale fusion image of the visible-light and infrared images, P(x, y) denotes the visible-light image, and Q(x, y) denotes the infrared image.
The third fusion module derives the color fusion image from the pseudo-color fusion image as follows:
A color visible-light image captured under natural daylight is taken as the reference image and converted to the YUV color space. According to the gray mean and variance of the reference image in each YUV channel, the corresponding YUV components of the pseudo-color fusion image are adjusted, using the following formula:
where S and W denote the reference image and the pseudo-color fusion image respectively, Y1(x, y), U1(x, y) and V1(x, y) denote the YUV components of the adjusted pseudo-color fusion image, and μ and σ denote the gray mean and variance of each channel in the YUV color space.
The adjusted pseudo-color fusion image is then converted from the YUV color space to the RGB color space to obtain the color fusion image.
This preferred embodiment organically combines the complementary information of the different wavebands and enriches image detail, enhancing the human target and thereby improving the accuracy and robustness of target detection and tracking. The fusion image also provides higher-quality source images for computer-vision analysis. In addition, the color-adjusted fusion image has a natural color appearance, which improves the observer's perception of the scene and reduces viewing fatigue; this matters for behavior recognition applications in which human observers participate.
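The second and third fusion steps (YUV pseudo-coloring, then mean/variance matching against a daylight reference) can be sketched as below. The exact component formulas are not reproduced in this text, so the pseudo-color assignment (luminance from R, chrominance from the P/Q difference signals) is an assumption in the style of common visible/infrared fusion schemes; the statistics matching follows the per-channel mean-and-variance adjustment the text does describe.

```python
import numpy as np

def pseudo_color_yuv(R, P, Q):
    """ASSUMED pseudo-color assignment in YUV: luminance from the grayscale
    fusion R, chrominance from the visible/infrared difference signals."""
    Y = R
    U = 0.5 * (Q - P)
    V = 0.5 * (P - Q)
    return Y, U, V

def match_statistics(W, S):
    """Shift and scale channel W so its mean and standard deviation match
    reference channel S (the per-channel adjustment described in the text)."""
    return (W - W.mean()) * (S.std() / (W.std() + 1e-9)) + S.mean()

def color_transfer_yuv(fused_yuv, ref_yuv):
    """Apply statistics matching channel-by-channel (Y, U, V)."""
    return tuple(match_statistics(w, s) for w, s in zip(fused_yuv, ref_yuv))
```

After this adjustment the image would be converted from YUV back to RGB (e.g. with the BT.601 conversion matrix) to obtain the final color fusion image.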
A vehicle equipped with the human-machine hybrid intelligent driving system of the present invention was driven from a selected departure point to five selected destinations (destination 1 to destination 5). Driving time and driver satisfaction were recorded and compared with a driverless system; the beneficial effects are shown in the table below:

Destination    Driving time reduced    Driver satisfaction improved
1              29%                     27%
2              27%                     26%
3              26%                     26%
4              25%                     24%
5              24%                     22%
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit its scope of protection. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that the technical solution may be modified or equivalently replaced without departing from its essence and scope.

Claims (6)

1. A human-machine hybrid intelligent driving system, characterized by comprising a sensing subsystem, a behavior recognition subsystem, a command subsystem and a control subsystem, wherein the sensing subsystem uses sensors to acquire vehicle running-state information and current road-condition information and sends both to the control subsystem; the behavior recognition subsystem recognizes the driver's body behavior; the command subsystem converts the recognized behavior into driving commands and sends them to the control subsystem; and the control subsystem controls the vehicle according to the running-state information, the road-condition information and the driving commands.
2. The human-machine hybrid intelligent driving system according to claim 1, characterized in that the behavior recognition subsystem comprises an image acquisition module, an image fusion module, a feature extraction module and a behavior recognition module, wherein the image acquisition module acquires human body images with a visible-light and infrared multi-spectral imaging system; the image fusion module fuses the visible-light and infrared images to obtain a color fusion image; the feature extraction module extracts the human body contour from the color fusion image; and the behavior recognition module recognizes human behavior from the extracted contour.
3. The human-machine hybrid intelligent driving system according to claim 2, characterized in that the image fusion module comprises a first fusion module, a second fusion module and a third fusion module, wherein the first fusion module fuses the visible-light image and the infrared image in the non-subsampled contourlet transform domain to obtain a grayscale fusion image; the second fusion module derives a pseudo-color fusion image from the grayscale fusion image; and the third fusion module derives the color fusion image from the pseudo-color fusion image.
4. The human-machine hybrid intelligent driving system according to claim 3, characterized in that the first fusion module fuses the visible-light image and the infrared image in the non-subsampled contourlet transform domain as follows:
apply the non-subsampled contourlet decomposition to the visible-light image P and the infrared image Q to obtain the sub-band coefficients {L_P, D_P^(j,k)} and {L_Q, D_Q^(j,k)}, where L_P and L_Q denote the low-frequency sub-band coefficients of the visible-light and infrared images, and D_P^(j,k) and D_Q^(j,k) denote the k-th directional sub-band coefficients in the j-th high-frequency scale of the visible-light and infrared images, respectively;
fuse the low-frequency sub-bands using the following formula:
where L_R(x, y) denotes the low-frequency sub-band coefficient of the grayscale fusion image R, p denotes the average gray value of the infrared image, and H(x, y) denotes the gray value of pixel (x, y) in the infrared image;
fuse the high-frequency sub-bands using the following formula:
where D_R^(j,k)(x, y) denotes the directional sub-band coefficient of the grayscale fusion image R, v_P(x, y) denotes the variance of the visible-light image's directional sub-band coefficients in an n × n window centered on pixel (x, y), and v_Q(x, y) denotes the corresponding variance for the infrared image;
reconstruct the grayscale fusion image R from its low-frequency and high-frequency sub-band coefficients.
5. The human-machine hybrid intelligent driving system according to claim 4, characterized in that the second fusion module derives the pseudo-color fusion image from the grayscale fusion image as follows:
the pseudo-color fusion image is obtained in the YUV color space using the following formula:
where Y(x, y), U(x, y) and V(x, y) denote the YUV components of the pseudo-color fusion image, R(x, y) denotes the grayscale fusion image of the visible-light and infrared images, P(x, y) denotes the visible-light image, and Q(x, y) denotes the infrared image.
6. The human-machine hybrid intelligent driving system according to claim 5, characterized in that the third fusion module derives the color fusion image from the pseudo-color fusion image as follows:
a color visible-light image captured under natural daylight is taken as the reference image and converted to the YUV color space; according to the gray mean and variance of the reference image in each YUV channel, the corresponding YUV components of the pseudo-color fusion image are adjusted, using the following formula:
where S and W denote the reference image and the pseudo-color fusion image respectively, Y1(x, y), U1(x, y) and V1(x, y) denote the YUV components of the adjusted pseudo-color fusion image, and μ and σ denote the gray mean and variance of each channel in the YUV color space;
the adjusted pseudo-color fusion image is converted from the YUV color space to the RGB color space to obtain the color fusion image.
CN201810183771.7A 2018-03-06 2018-03-06 Human-machine hybrid intelligent driving system Withdrawn CN108537764A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810183771.7A CN108537764A (en) 2018-03-06 2018-03-06 Human-machine hybrid intelligent driving system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810183771.7A CN108537764A (en) 2018-03-06 2018-03-06 Human-machine hybrid intelligent driving system

Publications (1)

Publication Number Publication Date
CN108537764A true CN108537764A (en) 2018-09-14

Family

ID=63486768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810183771.7A Withdrawn CN108537764A (en) 2018-03-06 2018-03-06 Human-machine hybrid intelligent driving system

Country Status (1)

Country Link
CN (1) CN108537764A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548644A * 2016-11-30 2017-03-29 Shenzhen Mingchuang Automatic Control Technology Co., Ltd. Automated driving system
CN107253485A * 2017-05-16 2017-10-17 Beijing Jiaotong University Foreign object intrusion detection method and foreign object intrusion detection device
CN107719376A * 2017-09-18 2018-02-23 Tsinghua University Human-machine hybrid enhanced intelligent driving system and electric vehicle

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WU Yanyan et al.: "Color night vision method combining NSST and color contrast enhancement", Opto-Electronic Engineering (《光电工程》) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255774A (en) * 2018-09-28 2019-01-22 中国科学院长春光学精密机械与物理研究所 A kind of image interfusion method, device and its equipment
CN109255774B (en) * 2018-09-28 2022-03-25 中国科学院长春光学精密机械与物理研究所 Image fusion method, device and equipment

Similar Documents

Publication Publication Date Title
CN105069746B (en) Video real-time face replacement method and its system based on local affine invariant and color transfer technology
CN101795400B (en) Method for actively tracking and monitoring infants and realization system thereof
CN109241830B (en) Classroom lecture listening abnormity detection method based on illumination generation countermeasure network
CN107103277B (en) Gait recognition method based on depth camera and 3D convolutional neural network
CN104992189B (en) Shoal of fish abnormal behaviour recognition methods based on deep learning network model
CN108830150A (en) One kind being based on 3 D human body Attitude estimation method and device
CN106256606A (en) A kind of lane departure warning method based on vehicle-mounted binocular camera
CN104083258A (en) Intelligent wheel chair control method based on brain-computer interface and automatic driving technology
CN110728241A (en) Driver fatigue detection method based on deep learning multi-feature fusion
CN102982518A (en) Fusion method of infrared image and visible light dynamic image and fusion device of infrared image and visible light dynamic image
CN109949593A (en) A kind of traffic lights recognition methods and system based on crossing priori knowledge
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN106874884A (en) Human body recognition methods again based on position segmentation
Dong et al. Infrared image colorization using a s-shape network
CN106815826A (en) Night vision image Color Fusion based on scene Recognition
CN110516633A (en) A kind of method for detecting lane lines and system based on deep learning
CN105069745A (en) face-changing system based on common image sensor and enhanced augmented reality technology and method
CN110378234A (en) Convolutional neural networks thermal imagery face identification method and system based on TensorFlow building
CN109583349A (en) A kind of method and system for being identified in color of the true environment to target vehicle
CN114426069B (en) Indoor rescue vehicle based on real-time semantic segmentation and image semantic segmentation method
CN105930793A (en) Human body detection method based on SAE characteristic visual learning
CN104253994B (en) A kind of night monitoring video real time enhancing method merged based on sparse coding
CN108537764A (en) A kind of man-machine hybrid intelligent control loop
CN108009512A (en) A kind of recognition methods again of the personage based on convolutional neural networks feature learning
Sugirtha et al. Semantic segmentation using modified u-net for autonomous driving

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20180914)