CN111951301A - Method for reducing interference degree of vehicle vision system

Method for reducing interference degree of vehicle vision system

Info

Publication number
CN111951301A
CN111951301A (application CN202010683492.4A)
Authority
CN
China
Prior art keywords
image
target
brightness
infrared
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010683492.4A
Other languages
Chinese (zh)
Inventor
罗映
李丙洋
王淑超
罗全巧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Promote Electromechanical Technology Co ltd
Original Assignee
Shandong Promote Electromechanical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Promote Electromechanical Technology Co ltd
Priority to CN202010683492.4A
Publication of CN111951301A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

A method for reducing the interference degree of a vehicle vision system mainly comprises the following steps: S1: identifying and marking a target image; S2: when the target image appears in or disappears from the infrared image, a computer controller sends a screen brightness adjustment instruction to adjust the brightness of the infrared image display screen. By generating a mask for the target image, the method clearly distinguishes the target image from the background image, so that the driver immediately notices pedestrians, vehicles and the like in the infrared image and can judge the surrounding environment from the information in the infrared image. Meanwhile, the brightness of the infrared image display screen is adjusted as the mask image is generated and disappears, which reduces the influence of the constantly changing image on the display screen of the vehicle vision system on the driver while still locating, identifying and alerting when vehicles, pedestrians and the like appear, greatly improving driving safety.

Description

Method for reducing interference degree of vehicle vision system
Technical Field
The invention relates to the technical field of vehicle monitoring, in particular to a method for reducing the interference degree of a vehicle vision system.
Background
A vehicle vision system uses a camera that is more sensitive than the human eye to capture the scene in front of the vehicle and display it in real time on a screen in the cab, helping the driver drive safely at night or under conditions of reduced visibility such as rain, snow and fog. Most existing vehicle vision systems use a far-infrared camera, which produces a clearer picture than an ordinary camera at night or in low-visibility environments. In practice, however, road conditions are complicated and changeable, and it is difficult for a driver to instantly pick out people or vehicles in the far-infrared picture; watching the display screen while driving also greatly disperses the driver's attention. Moreover, once the vehicle vision system is started it displays images continuously, and the images keep changing as the vehicle advances; for the driver, the changing images easily distract attention and affect driving safety. The only way to stop the display is to deactivate the system, but then the driver no longer benefits from its prompts, which likewise reduces driving safety. It is therefore necessary to provide a method that helps the driver notice road conditions in time without distracting the driver.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a method for reducing the interference degree of a vehicle vision system, which can locate persons and vehicles in the image in time and adjust the screen brightness according to the positioning result.
The invention provides a method for reducing the interference degree of a vehicle vision system, which mainly comprises the following steps:
S1: identifying and marking a target image;
S2: when the target image appears in or disappears from the infrared image, the computer controller sends a screen brightness adjustment instruction to adjust the brightness of the infrared image display screen.
Specifically, S1 includes the steps of:
S101: establishing a target model.
Before analyzing and positioning the infrared image, the computer controller preprocesses a large amount of acquired image information of pedestrians, vehicles and the like, extracts their characteristic information and related data, and establishes a target image model. The objects to be located include, but are not limited to, pedestrians, vehicles and animals.
S102: and acquiring an infrared image of the road condition in front of the vehicle.
According to the method, the infrared image of the road condition in front of the vehicle is an image acquired by a far infrared camera, and the purpose is to position pedestrians, vehicles and the like appearing in the acquired infrared image and to perform labeling and reminding.
S103: and positioning and identifying the target image through the target model to obtain coordinate information corresponding to the target in the infrared image.
And carrying out target image positioning and identification through the target model, namely matching the shot infrared image information with the target model, and carrying out image analysis on the infrared image so as to detect whether a corresponding target exists in the shot image. And (3) coordinate information corresponding to the target image represents the position of the target in the whole infrared image, the coordinate information can be specifically the coordinates of a central pixel point of the target and the width and height of the target, such as (x, y, w, h), and a rectangular surrounding frame corresponding to the target in the infrared image is determined by utilizing the coordinate information, wherein the rectangular surrounding frame is the minimum rectangular frame containing the target image.
S104: a target area mask image is formed.
The target region mask image may be represented by a category (belonging to a target or a background) of each pixel point in the bounding box, and may specifically be represented by a matrix, for example, a value of each element on the matrix may be represented by 1 or 0, where 1 represents that the pixel point at the corresponding position belongs to the target, and 0 represents that the pixel point at the corresponding position belongs to the background. The mask image corresponding to the target substantially reflects the positions of the pixel points constituting the target body in the bounding box corresponding to the target image. The mask image is used for circling out the target image, so that the marking is realized and the attention of a driver is attracted.
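To illustrate the 1/0 matrix representation described above, a minimal sketch follows. It is not taken from the patent: the small example mask, the gain value and the highlight helper are purely illustrative assumptions.

```python
# Illustrative sketch of the mask-matrix idea: within the bounding box,
# 1 marks a pixel belonging to the target and 0 a background pixel.
import numpy as np

mask = np.array([[0, 0, 1, 0, 0],
                 [0, 1, 1, 1, 0],
                 [1, 1, 1, 1, 1],
                 [0, 1, 1, 1, 0]], dtype=np.uint8)

def highlight(region, mask, gain=80):
    # Brighten only the pixels that the mask marks as target, so the target
    # stands out against the background inside its bounding box.
    region = region.astype(np.int32)
    region[mask == 1] += gain
    return np.clip(region, 0, 255).astype(np.uint8)
```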
Specifically, S2 includes the steps of:
S201: when the mask image is generated, the computer controller sends a screen brightness adjustment instruction, and the processor acquires the brightness adjustment data and outputs the duty cycle of a pulse-width modulation (PWM) signal.
S202: the screen brightness is adjusted using the pulse-width modulation signal.
The screen brightness is adjusted by the pulse-width modulation signal according to its duty cycle. In some embodiments, the screen whose brightness is adjusted is a low-voltage differential signaling (LVDS) screen.
The invention has the following beneficial effects: 1. By generating a mask for the target image, the method clearly distinguishes the target image from the background image, so that the driver immediately notices pedestrians, vehicles and the like in the infrared image and can judge the surrounding environment from the information in the infrared image.
2. The method can also predict the coordinate position of the target image on the display screen, that is, the movement track of the target, so that the driver can plan a route in advance according to the predicted track.
3. The brightness of the infrared image display screen is adjusted as the mask image is generated and disappears: when the mask image is generated, the computer controller sends a screen brightness adjustment instruction to brighten the screen and attract the driver's attention; when the mask image disappears, the computer controller sends a screen brightness adjustment instruction to dim the screen, reducing its visual prominence and the interference to the driver. The method thus reduces the influence of the constantly changing image on the display screen of the vehicle vision system on the driver, while still locating, identifying and alerting when vehicles, pedestrians and the like appear, greatly improving driving safety.
Drawings
FIG. 1 is a schematic diagram of the process steps of the present invention.
Detailed Description
The technical solutions of the present invention will be described in detail below; all other embodiments obtained by those skilled in the art without inventive effort shall fall within the scope of the present invention.
The vehicle vision system in the technical scheme of the invention senses the surrounding environment through sensors such as a laser radar, an ultrasonic radar, a millimeter-wave radar and a camera, controls each module, and exchanges information with a server through a vehicle-mounted terminal. The vehicle vision system comprises at least a camera, a display terminal (display screen) and a computer controller.
The first embodiment is as follows:
The invention provides a method for reducing the interference degree of a vehicle vision system, which mainly comprises the following steps:
S1: identifying and marking a target image;
S2: when the target image appears in or disappears from the infrared image, the computer controller sends a screen brightness adjustment instruction to adjust the brightness of the infrared image display screen.
Specifically, S1 includes the steps of:
S101: establishing a target model.
Before analyzing and positioning the infrared image, the computer controller preprocesses a large amount of acquired image information of pedestrians, vehicles and the like, extracts their characteristic information and related data, and establishes a target image model. The objects to be located include, but are not limited to, pedestrians, vehicles and animals.
The preset target image model is learned from target image samples, which gives it the ability to recognize target images. The preset target image positioning model can be obtained by training a deep neural network; it is therefore a machine learning model that has been trained in advance and has both target image recognition and positioning capabilities.
Specifically, the computer controller may preset a model structure for the target image, construct an initial positioning model, and train it on a large number of target image samples to obtain trained model parameters. When an infrared image captured by the camera needs target image positioning, the computer controller loads the trained parameters into the initial positioning model, continuously refining the structure of the target image model and obtaining a more accurate preset image positioning model.
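As an illustration of this train-then-load workflow, the following sketch assumes a PyTorch/torchvision Mask R-CNN as the initial positioning model; the class count, checkpoint path and function names are hypothetical, since the patent does not name a specific framework or network.

```python
# Sketch only: build an initial positioning model and import trained parameters.
import torch
import torchvision

def build_positioning_model(num_classes=4):
    # num_classes is a placeholder: background + pedestrian + vehicle + animal.
    return torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=num_classes)

def load_trained_parameters(model, checkpoint_path):
    # "Import the trained model parameters into the initial positioning model".
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    model.load_state_dict(state_dict)
    model.eval()
    return model

# Usage (path is a placeholder):
# model = load_trained_parameters(build_positioning_model(), "target_model.pth")
```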
S102: and acquiring an infrared image of the road condition in front of the vehicle.
According to the method, the infrared image of the road condition in front of the vehicle is an image acquired by a far infrared camera, pedestrians, vehicles and the like in the acquired infrared image are positioned and marked for reminding. The method specifically comprises the following steps:
Image acquisition: the camera captures an image of the current-frame road condition and transmits the current-frame image information, including the current-frame image time, to the computer controller;
Image correction: using a fisheye image correction algorithm, the computer controller corrects the distorted image captured by the camera into an undistorted image that matches human visual habits.
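A minimal sketch of such a correction step, assuming an OpenCV fisheye camera model with pre-calibrated intrinsics K and distortion coefficients D; the placeholder calibration values are assumptions, since the patent does not specify the correction algorithm.

```python
# Sketch: undistort a far-infrared frame using OpenCV's fisheye model.
import cv2
import numpy as np

def undistort_fisheye(frame, K, D):
    h, w = frame.shape[:2]
    # Build the remapping tables for the fisheye model, then remap the frame.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)

# Placeholder calibration (in practice obtained by calibrating the camera):
# K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
# D = np.zeros((4, 1))
```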
S103: and positioning and identifying the target image through the target model to obtain coordinate information corresponding to the target in the infrared image.
And carrying out target image positioning and identification through the target model, namely matching the shot infrared image information with the target model, and carrying out image analysis on the infrared image so as to detect whether a corresponding target exists in the shot image. And (3) coordinate information corresponding to the target image represents the position of the target in the whole infrared image, the coordinate information can be specifically the coordinates of a central pixel point of the target and the width and height of the target, such as (x, y, w, h), and a rectangular surrounding frame corresponding to the target in the infrared image is determined by utilizing the coordinate information, wherein the rectangular surrounding frame is the minimum rectangular frame containing the target image.
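A small sketch of how the (x, y, w, h) coordinate information could be converted into the smallest enclosing rectangle and drawn on the infrared image; the helper name and drawing parameters are illustrative assumptions.

```python
# Sketch: turn centre coordinates plus width/height into a drawn bounding box.
import cv2

def draw_bounding_box(image, x, y, w, h):
    # Corner points of the smallest rectangle containing the target.
    top_left = (int(x - w / 2), int(y - h / 2))
    bottom_right = (int(x + w / 2), int(y + h / 2))
    cv2.rectangle(image, top_left, bottom_right, color=(255, 255, 255), thickness=2)
    return top_left, bottom_right
```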
It should be noted that an infrared image may contain several targets of different types; the computer controller locates each target image in turn through the preset image positioning model.
S104: a target area mask image is formed.
The target region mask image may be represented by a category (belonging to a target or a background) of each pixel point in the bounding box, and may specifically be represented by a matrix, for example, a value of each element on the matrix may be represented by 1 or 0, where 1 represents that the pixel point at the corresponding position belongs to the target, and 0 represents that the pixel point at the corresponding position belongs to the background. The mask image corresponding to the target substantially reflects the positions of the pixel points constituting the target body in the bounding box corresponding to the target image.
After the current frame image is binarized, a Gaussian mixture model is used to model the background in the image and extract the image foreground, and median filtering is used to remove image noise. A Mask R-CNN framework may be used to realize this segmentation, where R-CNN stands for Region-based Convolutional Neural Network and the mask refers to the mask information of an object obtained by classifying each pixel in the image. The positioning network can therefore obtain the coordinate information of the positioning target and the category of each pixel within the target's rectangular bounding box, yielding a mask image corresponding to the object; this mask image effectively improves the accuracy of extracting target image features from the infrared image. Finally, holes (blobs) in the denoised mask are filled by closing and opening operations.
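A minimal sketch of the classical part of this pipeline (Gaussian-mixture background modelling, median filtering, and closing/opening); the Mask R-CNN branch is omitted, and the history length and kernel size are illustrative assumptions.

```python
# Sketch: extract and clean a binary foreground mask for the current frame.
import cv2
import numpy as np

# Gaussian-mixture background model (MOG2), shared across frames.
bg_model = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
kernel = np.ones((5, 5), np.uint8)

def foreground_mask(frame_gray):
    fg = bg_model.apply(frame_gray)                     # foreground vs modelled background
    fg = cv2.medianBlur(fg, 5)                          # remove salt-and-pepper noise
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)  # fill holes in the target body
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)   # remove small specks
    return fg                                           # 255 = target pixel, 0 = background
```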
The mask image is used to outline the target image, thereby marking it and attracting the driver's attention.
Specifically, S2 includes the steps of:
S201: when a mask image is generated, the computer controller sends a screen brightness adjustment instruction, and the processor acquires the brightness adjustment data and outputs the duty cycle of a pulse-width modulation signal;
Controlling the processor to output the duty cycle of the pulse-width modulation signal according to the screen brightness adjustment data comprises:
the processor parses the adjustment command and the brightness value from the screen brightness adjustment data;
the processor controls the duty cycle of the pulse-width modulation signal according to the adjustment command and the brightness value.
Optionally, the host may send the same screen brightness adjustment data to multiple screens to set them to the same brightness, or send different screen brightness adjustment data to each screen to set them to different brightness levels.
Specifically, the corresponding screen brightness adjustment parameters can be preset according to information such as the vehicle controller and the driving requirements, so that when the target image appears or disappears the corresponding screen brightness parameter is retrieved and the screen is adjusted to the corresponding brightness. When the mask image is generated, the computer controller sends a screen brightness adjustment instruction to brighten the screen and attract the driver's attention; when the mask image disappears, the computer controller sends a screen brightness adjustment instruction to dim the screen, reducing its visual prominence and the interference to the driver.
S202: the screen brightness is adjusted using the pulse width adjustment signal.
The screen brightness is adjusted by the pulse-width modulation signal according to its duty cycle. In some embodiments, the screen whose brightness is adjusted is a low-voltage differential signaling (LVDS) screen.
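A minimal sketch of the appear/brighten, disappear/dim strategy expressed as a PWM duty cycle; the duty-cycle values and the backlight callback are placeholders, since the patent gives no concrete numbers or driver interface.

```python
# Sketch: map target appearance/disappearance to a backlight PWM duty cycle.
BRIGHT_DUTY = 90   # duty cycle (%) while a mask image (target) is on screen
DIM_DUTY = 30      # duty cycle (%) once the mask image has disappeared

def duty_cycle_for_target(target_present):
    return BRIGHT_DUTY if target_present else DIM_DUTY

def on_mask_event(target_present, set_backlight_duty):
    # set_backlight_duty is a hypothetical callback into the display driver,
    # e.g. writing the duty-cycle register of the LVDS backlight controller.
    set_backlight_duty(duty_cycle_for_target(target_present))
```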
The second embodiment is as follows:
On the basis of the first embodiment, the method can also predict the movement track of the target in the infrared image, better reminding the driver to plan the driving route in advance.
Specifically, the relative speed of the target in the current frame is obtained by dividing the difference between the target image coordinates in the current frame i and in the previous frame i-1 by the difference between the acquisition times of those two frames. Then, based on the coordinates and relative speed of the target image in frames i-4 to i-1 in the camera coordinate system, a Kalman filtering method is used to predict the coordinates of the target image in frames i+1, i+2, i+3 and subsequent frames in the camera image coordinate system, thereby predicting the track of the target.
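A minimal sketch of such a prediction using a constant-velocity Kalman filter from OpenCV; the noise covariances, the frame interval and the helper name are illustrative assumptions, and in practice the state would be initialized from the first detection.

```python
# Sketch: constant-velocity Kalman filter predicting the target centre (x, y).
import cv2
import numpy as np

dt = 1.0  # time step between frames (placeholder, in frame units)

kf = cv2.KalmanFilter(4, 2)  # state [x, y, vx, vy], measurement [x, y]
kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                [0, 1, 0, dt],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def update_and_predict(measured_xy, steps_ahead=3):
    """Feed the measured centre for frame i, then extrapolate frames i+1..i+steps_ahead."""
    kf.predict()
    kf.correct(np.array(measured_xy, dtype=np.float32).reshape(2, 1))
    state = kf.statePost.copy()
    predictions = []
    for _ in range(steps_ahead):
        state = kf.transitionMatrix @ state   # propagate without new measurements
        predictions.append((float(state[0, 0]), float(state[1, 0])))
    return predictions
```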
By generating a mask for the target image, the method clearly distinguishes the target image from the background image, so that the driver immediately notices pedestrians, vehicles and the like in the infrared image and can judge the surrounding environment from the information in the infrared image. Meanwhile, the brightness of the infrared image display screen is adjusted as the mask image is generated and disappears: when the mask image is generated, the computer controller sends a screen brightness adjustment instruction to brighten the screen and attract the driver's attention; when the mask image disappears, the computer controller sends a screen brightness adjustment instruction to dim the screen, reducing its visual prominence and the interference to the driver. The method reduces the influence of the constantly changing image on the display screen of the vehicle vision system on the driver, while still locating, identifying and alerting when vehicles, pedestrians and the like appear, greatly improving driving safety.

Claims (7)

1. A method for reducing the interference degree of a vehicle vision system, characterized in that it mainly comprises the following steps:
S1: identifying and marking a target image;
S2: when the target image appears in or disappears from the infrared image, a computer controller sends a screen brightness adjustment instruction to adjust the brightness of the infrared image display screen.
2. The method for reducing the interference degree of a vehicle vision system as claimed in claim 1, wherein said step S1 comprises the steps of:
S101: establishing a target model;
preprocessing, by the computer controller, a large amount of acquired image information of pedestrians, vehicles and the like, extracting their characteristic information and related data, and establishing a target image model; wherein the objects to be located include, but are not limited to, pedestrians, vehicles and animals;
S102: acquiring an infrared image of the road condition in front of the vehicle;
S103: positioning and identifying the target image through the target model to obtain the coordinate information corresponding to the target in the infrared image;
matching the captured infrared image information with the target model, performing image analysis on the infrared image, and detecting whether a corresponding target exists in the captured image; determining a rectangular bounding box corresponding to the target in the infrared image by using the coordinate information, wherein the rectangular bounding box is the smallest rectangle containing the target image;
S104: forming a target area mask image;
the target area mask image being represented, in the form of a matrix, by the category of each pixel within the bounding box.
3. The method for reducing the interference degree of a vehicle vision system as claimed in claim 1, wherein said step S2 comprises the steps of:
S201: when a mask image is generated, the computer controller sends a screen brightness adjustment instruction, and the processor acquires the brightness adjustment data and outputs the duty cycle of a pulse-width modulation signal;
S202: adjusting the screen brightness by using the pulse-width modulation signal according to the duty cycle.
4. The method for reducing the interference degree of a vehicle vision system according to claim 2, wherein said step S102 comprises:
image acquisition: the camera captures an image of the current-frame road condition and transmits the current-frame image information, including the current-frame image time, to the computer controller;
image correction: the computer controller corrects, through a fisheye image correction algorithm, the distorted image captured by the camera into an undistorted image that matches human visual habits.
5. The method for reducing the interference degree of a vehicle vision system according to claim 2, wherein said step S103 comprises:
dividing the difference between the coordinates of the target image in the current frame i and in the previous frame i-1 by the difference between the acquisition times of the target image in those two frames, to obtain the relative movement speed of the target in the current frame image;
and predicting, by a Kalman filtering method, the coordinates of the target image in frames i+1, i+2, i+3 and subsequent frames in the camera image coordinate system, based on the coordinates and relative movement speed of the target image in frames i-4 to i-1 in the camera coordinate system, thereby predicting the track of the target.
6. The method for reducing the interference degree of a vehicle vision system as claimed in claim 1 or claim 3, wherein the screen whose brightness is adjusted is a low-voltage differential signaling (LVDS) screen.
7. The method of claim 1, wherein adjusting the brightness of the infrared image display screen in step S2 comprises the following strategy: when the target image appears in the infrared image, the brightness of the display screen is increased; when the target image disappears from the infrared image, the brightness of the display screen is reduced.
CN202010683492.4A 2020-07-16 2020-07-16 Method for reducing interference degree of vehicle vision system Pending CN111951301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010683492.4A CN111951301A (en) 2020-07-16 2020-07-16 Method for reducing interference degree of vehicle vision system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010683492.4A CN111951301A (en) 2020-07-16 2020-07-16 Method for reducing interference degree of vehicle vision system

Publications (1)

Publication Number Publication Date
CN111951301A (en) 2020-11-17

Family

ID=73341321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010683492.4A Pending CN111951301A (en) 2020-07-16 2020-07-16 Method for reducing interference degree of vehicle vision system

Country Status (1)

Country Link
CN (1) CN111951301A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050039313A (en) * 2003-10-24 2005-04-29 현대자동차주식회사 Night vision system of vehicle and method for controlling display of head-up display thereof
JP2008292753A (en) * 2007-05-24 2008-12-04 Denso Corp Information display device
CN105224272A (en) * 2015-09-24 2016-01-06 宇龙计算机通信科技(深圳)有限公司 A kind of method for displaying image and automotive display
CN107240072A (en) * 2017-04-27 2017-10-10 努比亚技术有限公司 A kind of screen luminance adjustment method, terminal and computer-readable recording medium
CN107817963A (en) * 2017-10-27 2018-03-20 维沃移动通信有限公司 A kind of method for displaying image, mobile terminal and computer-readable recording medium
CN108877269A (en) * 2018-08-20 2018-11-23 清华大学 A kind of detection of intersection vehicle-state and V2X broadcasting method
CN108909624A (en) * 2018-05-13 2018-11-30 西北工业大学 A kind of real-time detection of obstacles and localization method based on monocular vision
CN109712173A (en) * 2018-12-05 2019-05-03 北京空间机电研究所 A kind of picture position method for estimating based on Kalman filter
CN110263721A (en) * 2019-06-21 2019-09-20 北京字节跳动网络技术有限公司 Car light setting method and equipment
CN110599982A (en) * 2019-09-19 2019-12-20 广州小鹏汽车科技有限公司 Screen brightness adjusting method and system of vehicle-mounted terminal, vehicle-mounted terminal and vehicle

Similar Documents

Publication Publication Date Title
CN107506760B (en) Traffic signal detection method and system based on GPS positioning and visual image processing
Cheng et al. Lane detection with moving vehicles in the traffic scenes
US6985172B1 (en) Model-based incident detection system with motion classification
CN101656023B (en) Management method of indoor car park in video monitor mode
WO2014084218A1 (en) Subject detection device
CN112800860B (en) High-speed object scattering detection method and system with coordination of event camera and visual camera
CN109409186B (en) Driver assistance system and method for object detection and notification
CN102792314A (en) Cross traffic collision alert system
KR20040053344A (en) Method and system for improving car safety using image-enhancement
US11436839B2 (en) Systems and methods of detecting moving obstacles
CN109703460A (en) The complex scene adaptive vehicle collision warning device and method for early warning of multi-cam
CN111989915B (en) Methods, media, and systems for automatic visual inference of environment in an image
El Maadi et al. Outdoor infrared video surveillance: A novel dynamic technique for the subtraction of a changing background of IR images
CN110852177A (en) Obstacle detection method and system based on monocular camera
CN111881832A (en) Lane target detection method, device, equipment and computer readable storage medium
EP4145398A1 (en) Systems and methods for vehicle camera obstruction detection
Cualain et al. Multiple-camera lane departure warning system for the automotive environment
Gupta et al. Real-time lane detection using spatio-temporal incremental clustering
Hautiere et al. Meteorological conditions processing for vision-based traffic monitoring
CN116228756B (en) Method and system for detecting bad points of camera in automatic driving
JP3294468B2 (en) Object detection method in video monitoring device
CN111951301A (en) Method for reducing interference degree of vehicle vision system
CN114677658A (en) Billion-pixel dynamic large-scene image acquisition and multi-target detection method and device
Vijay et al. Design and integration of lane departure warning, adaptive headlight and wiper system for automobile safety
CN114581863A (en) Vehicle dangerous state identification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination