CN112906504B - Night vehicle high beam opening state discrimination method based on double cameras - Google Patents


Info

Publication number
CN112906504B
CN112906504B (application CN202110130214.0A)
Authority
CN
China
Prior art keywords
cameras
vehicle
connected domain
image
picture
Prior art date
Legal status
Active
Application number
CN202110130214.0A
Other languages
Chinese (zh)
Other versions
CN112906504A (en)
Inventor
潘行杰
王栋
潘顺真
Current Assignee
Zhejiang Anxie Intelligent Scie Tech Co ltd
Original Assignee
Zhejiang Anxie Intelligent Scie Tech Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Anxie Intelligent Scie Tech Co ltd filed Critical Zhejiang Anxie Intelligent Scie Tech Co ltd
Priority to CN202110130214.0A priority Critical patent/CN112906504B/en
Publication of CN112906504A publication Critical patent/CN112906504A/en
Application granted granted Critical
Publication of CN112906504B publication Critical patent/CN112906504B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • G06T5/30Erosion or dilatation, e.g. thinning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/017Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources


Abstract

The invention discloses a method, based on two cameras, for determining whether a vehicle driving at night has its high beam turned on, comprising the following steps. Step 1: mount two cameras in the same device so that they sit at the same position, and merge the frames they capture into one picture, one half being a low-exposure image and the other a high-exposure image. Step 2: use the low-exposure image to judge whether the lamps are turned on, while also tracking the vehicle's driving trajectory; specifically, 2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain. By implementing the method, both the high-beam on-state of a vehicle driving at night and its license plate number can be correctly identified, substantially improving the accuracy of high-beam recognition.

Description

Night vehicle high beam opening state discrimination method based on double cameras
Technical Field
The invention relates to the technical field of high-beam detection, and in particular to a method for determining the high-beam on-state of a vehicle at night based on two cameras.
Background
To identify the high-beam on-state of vehicles driving at night from surveillance video, the most effective arrangement is a camera view that captures the vehicle head-on as it approaches from far to near. This creates a difficulty: the headlamps, as light emitters, form a wide dynamic range against the night environment, so a single camera cannot clearly capture both the high-beam state and the identity information (license plate) of the offending vehicle at the same time; moreover, with fixed exposure parameters, a single camera cannot obtain sufficient high-beam feature information.
Because the wide dynamic range created by lit headlamps exceeds the physical capability of current cameras, if the exposure is adjusted to reveal the emission states of the high and low beams, no clear license plate information can be obtained at all; even when illegal high-beam use is detected, the vehicle's identity (license plate number) cannot be determined. Conversely, if the exposure is adjusted so that the license plate is clearly visible, the difference between high-beam and low-beam emission can no longer be distinguished.
Since illegal high-beam use must be judged as continuous over a period of time, multi-level exposure switching on a single camera breaks the continuity of the information, leaving the law-enforcement evidence insufficient.
In summary, a method for determining the high-beam on-state of vehicles at night based on two cameras is needed to overcome these defects of the prior art.
Disclosure of Invention
To address the defects of the prior art, the invention provides a method for determining the high-beam on-state of a vehicle at night based on two cameras.
To achieve this object, the invention provides the following technical scheme: a method for determining the high-beam on-state of a vehicle at night based on two cameras, comprising the following steps:
Step 1: mount two cameras in the same device so that they sit at the same position, and merge the frames they capture into one picture, one half being a low-exposure image and the other a high-exposure image;
Step 2: use the low-exposure image to judge whether the lamps are turned on, while also tracking the vehicle's driving trajectory; the specific processing algorithm is:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain have pixel count Ni and image position (xi, yi); select connected domains satisfying Ni > k * yi as detected lamp regions. The area threshold is proportional to the row coordinate yi, so that it adapts to the size change of the same target as it moves from far to near in the picture;
Step 3: extract the features of each lamp connected domain: the area expressed as the pixel count LNi, the center-point coordinates (Lxi, Lyi), the connected-domain perimeter LPi, and the perimeter LCi of the circumscribed convex hull;
Step 4: extract the lamp-light regions from the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: the area expressed as the pixel count HNi, the center-point coordinates (Hxi, Hyi), the connected-domain perimeter HPi, and the perimeter HCi of the circumscribed convex hull;
Step 5: pair the results obtained from the high and low exposures: because there is some positional deviation when the cameras are mounted, the same lamp will not lie at the same coordinates in the two images;
Step 6: comprehensive analysis after pairing: using the coordinate deviations dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation to obtain the connected-domain parameters of the same lamp in both exposures. The system thus obtains the feature parameters of each lamp as [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional parameter vector is fed into a BP neural network for training, which classifies the lamp as low beam or high beam, completing the identification of the high-beam on-state of the vehicle at night.
Further, the two cameras in step 1 may be placed adjacent to each other either left-right or top-bottom, and the two pictures are correspondingly combined left-right or top-bottom.
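As a minimal sketch of the frame merging in step 1 (the patent does not specify an implementation; the function name and the numpy-based tiling are assumptions for illustration), two synchronized frames of equal size can be combined into one picture:

```python
import numpy as np

def merge_frames(low_exp: np.ndarray, high_exp: np.ndarray, layout: str = "lr") -> np.ndarray:
    """Combine the two synchronized camera frames into one picture:
    'lr' places them side by side, 'tb' stacks them top-bottom."""
    if low_exp.shape != high_exp.shape:
        raise ValueError("the two camera frames must have the same size")
    if layout == "lr":
        return np.hstack([low_exp, high_exp])   # left-right combination
    return np.vstack([low_exp, high_exp])       # top-bottom combination

# Example: two 480x640 grayscale frames merged left-right -> 480x1280
low = np.zeros((480, 640), dtype=np.uint8)
high = np.full((480, 640), 255, dtype=np.uint8)
merged = merge_frames(low, high, "lr")
```

The downstream steps then process the low-exposure half for lamp detection and the high-exposure half for license-plate capture.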
Further, in step 5, consider a lamp i at infinity whose center-point coordinates are (Hxi, Hyi) in the high-exposure image and (Lxi, Lyi) in the low-exposure image. Once the device is installed and fixed, dx = Hxi - Lxi and dy = Hyi - Lyi are constants, and the system allows both parameters to be set manually through a software interface. Infinity is taken to be Hyi = 0, i.e., the lamp is at the very top of the image, so its distance from the camera can be treated as approximately infinite. As the lamp approaches the camera, parallax inevitably appears because the two cameras do not coincide at a single point. Taking left-right camera placement as an example, dx = Hxi - Lxi grows as the lamp approaches; to correct the parallax, the system introduces an adjustment coefficient dk, giving the positional deviation of the same lamp in the two pictures as dx = Hxi - Lxi + dk * Hyi. That is, a larger Hyi means the lamp is closer to the camera and the parallax effect is stronger. After the device is installed and fixed, dk is a constant and can be set through the software interface. Left-right cameras have no vertical parallax, so dy needs no correction; similarly, when the cameras are placed top-bottom, dy must be corrected and dx need not be.
The invention has the beneficial effects that:
1. The invention correctly identifies the high-beam on-state of a vehicle driving at night while also correctly identifying its license plate number, forming relatively complete law-enforcement evidence and providing a sufficient basis for subsequent enforcement.
2. Through the dual-camera, multi-level-exposure design, more feature parameters of the same lamp are obtained, substantially improving the accuracy of high-beam recognition and providing a technical guarantee for practical application.
3. The effective scheme for pairing targets between the two cameras lays the foundation for multi-feature extraction; the scheme can be extended to more cameras and has strong applicability, representing a marked advance.
Detailed Description
A method for determining the high-beam on-state of a vehicle at night based on two cameras comprises the following steps:
Step 1: mount two cameras in the same device so that they sit at the same position, and merge the frames they capture into one picture, one half being a low-exposure image and the other a high-exposure image;
Step 2: use the low-exposure image to judge whether the lamps are turned on, while also tracking the vehicle's driving trajectory; the specific processing algorithm is:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain have pixel count Ni and image position (xi, yi); select connected domains satisfying Ni > k * yi as detected lamp regions. The area threshold is proportional to the row coordinate yi, so that it adapts to the size change of the same target as it moves from far to near in the picture;
Step 3: extract the features of each lamp connected domain: the area expressed as the pixel count LNi, the center-point coordinates (Lxi, Lyi), the connected-domain perimeter LPi, and the perimeter LCi of the circumscribed convex hull;
Step 4: extract the lamp-light regions from the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: the area expressed as the pixel count HNi, the center-point coordinates (Hxi, Hyi), the connected-domain perimeter HPi, and the perimeter HCi of the circumscribed convex hull;
Step 5: pair the results obtained from the high and low exposures: because there is some positional deviation when the cameras are mounted, the same lamp will not lie at the same coordinates in the two images;
Step 6: comprehensive analysis after pairing: using the coordinate deviations dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation to obtain the connected-domain parameters of the same lamp in both exposures. The system thus obtains the feature parameters of each lamp as [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional parameter vector is fed into a BP neural network for training, which classifies the lamp as low beam or high beam, completing the identification of the high-beam on-state of the vehicle at night.
Example:
step 1: two cameras are arranged in the equipment at the same time, the two cameras are placed at the same position left and right, and pictures shot by the two cameras are combined into one picture left and right, wherein one picture is a low exposure picture, and the other picture is a high exposure picture;
step 2: in the low exposure image, because the target is less and only has a car light area, the low exposure image is used for tracking the driving track of the vehicle while judging the turning-on state of the car light; the specific processing algorithm is as follows:
2.1: carrying out binarization processing on the image, then carrying out expansion processing on the binary image, and carrying out labeling algorithm on the expanded binary image to find each connected domain;
2.2: selecting a car light region, wherein the number of pixels of each connected domain is Ni, the position in the image is (xi, yi), selecting the connected domain with Ni > k x yi as a detected car light region, and the area threshold of the connected domain is a value in direct proportion to a coordinate yi so as to adapt to the size change of the same target object from far to near in the picture;
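Steps 2.1 and 2.2 can be sketched as follows. This is an illustrative numpy-only implementation, not the patent's own code: the binarization threshold `thresh`, the coefficient `k`, and the choice of 4-connectivity and 3x3 dilation kernel are assumptions, since the patent does not fix them.

```python
import numpy as np
from collections import deque

def detect_lamp_regions(gray, thresh=200, k=0.05):
    """Binarize the low-exposure image, dilate the binary image with a 3x3
    kernel, label the connected domains by flood fill, and keep domains
    whose pixel count Ni exceeds k * yi, a threshold proportional to the
    centroid row yi (the same lamp grows as it nears the bottom of the frame)."""
    binary = (np.asarray(gray) >= thresh).astype(np.uint8)  # 2.1 binarization
    # 2.1 dilation: OR the image with its eight shifted copies
    h, w = binary.shape
    p = np.pad(binary, 1)
    dilated = np.zeros_like(binary)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dilated |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    # 2.1 labeling: 4-connected breadth-first flood fill
    labels = np.zeros((h, w), dtype=np.int32)
    regions, cur = [], 0
    for sy in range(h):
        for sx in range(w):
            if dilated[sy, sx] and labels[sy, sx] == 0:
                cur += 1
                labels[sy, sx] = cur
                queue, pts = deque([(sy, sx)]), []
                while queue:
                    y, x = queue.popleft()
                    pts.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and dilated[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            queue.append((ny, nx))
                ni = len(pts)
                yi = sum(pt[0] for pt in pts) / ni
                xi = sum(pt[1] for pt in pts) / ni
                if ni > k * yi:                 # 2.2 lamp-region selection
                    regions.append({"Ni": ni, "xi": xi, "yi": yi})
    return regions

# Example: one bright 3x3 blob; dilation grows it to 5x5 (Ni = 25)
frame = np.zeros((20, 20), dtype=np.uint8)
frame[10:13, 5:8] = 255
lamps = detect_lamp_regions(frame)
```

In production one would typically use an optimized labeling routine, but the logic above matches the steps as described.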
Step 3: extract the features of each lamp connected domain: the area expressed as the pixel count LNi, the center-point coordinates (Lxi, Lyi), the connected-domain perimeter LPi, and the perimeter LCi of the circumscribed convex hull;
Step 4: extract the lamp-light regions from the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: the area expressed as the pixel count HNi, the center-point coordinates (Hxi, Hyi), the connected-domain perimeter HPi, and the perimeter HCi of the circumscribed convex hull;
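The four per-domain features named in steps 3 and 4.2 can be computed as below. The patent only names the features; the particular perimeter definition (count of 4-neighbour edges bordering background) and the monotone-chain hull algorithm are my own illustrative choices.

```python
import numpy as np

def region_features(mask):
    """For one lamp connected domain given as a boolean mask, compute the
    area Ni (pixel count), centroid (xi, yi), connected-domain perimeter Pi,
    and the perimeter Ci of the convex hull of the pixel centres."""
    m = np.asarray(mask, dtype=bool)
    ys, xs = np.nonzero(m)
    ni = len(ys)
    xi, yi = float(xs.mean()), float(ys.mean())
    # Perimeter: count foreground pixels whose 4-neighbour is background
    h, w = m.shape
    p = np.pad(m, 1)
    pi = 0
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        pi += int(np.count_nonzero(m & ~nb))
    # Convex hull (Andrew's monotone chain) over pixel-centre coordinates
    pts = sorted(set(zip(xs.tolist(), ys.tolist())))
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, pts[::-1]):            # lower hull, then upper hull
        start = len(hull)
        for pt in seq:
            while len(hull) - start >= 2 and cross(hull[-2], hull[-1], pt) <= 0:
                hull.pop()
            hull.append(pt)
        hull.pop()                          # drop the duplicated endpoint
    ci = sum(np.hypot(hull[i][0] - hull[i - 1][0], hull[i][1] - hull[i - 1][1])
             for i in range(len(hull)))
    return {"Ni": ni, "xi": xi, "yi": yi, "Pi": pi, "Ci": float(ci)}

# Example: a solid 4x4 square (Ni=16, Pi=16 edge pixels, hull perimeter 12)
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 3:7] = True
feat = region_features(mask)
```

For a round lamp halo, Pi and Ci are close; for an irregular glare pattern, Pi grows faster than Ci, which is presumably why both are kept as separate features.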
Step 5: pair the results obtained from the high and low exposures. Because there is some positional deviation when the cameras are mounted, the same lamp will not lie at the same coordinates in the two images. Consider a lamp i more than 80 meters away, whose center-point coordinates are (Hxi, Hyi) in the high-exposure image and (Lxi, Lyi) in the low-exposure image. Once the device is fixed, dx = Hxi - Lxi and dy = Hyi - Lyi are constants, and the system allows both parameters to be set manually through a software interface. Infinity is taken to be Hyi = 0, i.e., the lamp is at the very top of the image, so its distance from the camera can be treated as approximately infinite. As the lamp approaches the camera, parallax inevitably appears because the two cameras do not coincide at a single point. Taking left-right camera placement as an example, dx = Hxi - Lxi grows as the lamp approaches; to correct the parallax, the system introduces an adjustment coefficient dk, giving the positional deviation of the same lamp in the two pictures as dx = Hxi - Lxi + dk * Hyi. That is, a larger Hyi means the lamp is closer to the camera and the parallax effect is stronger. After the device is fixed, dk is a constant and can be set through the software interface. Left-right cameras have no vertical parallax, so dy needs no correction; similarly, if the cameras are placed top-bottom, dy must be corrected and dx need not be;
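A sketch of the step-5 pairing for side-by-side cameras follows. The calibration values `dx0` (the fixed deviation at infinity), `dy0`, and the parallax coefficient `dk` stand in for the manually set software-interface parameters; the nearest-centroid matching rule is my own simple reading of how the corrected deviation would be used, since the patent describes only the deviation model.

```python
def pair_lamps(low_regions, high_regions, dx0, dy0, dk):
    """Pair each lamp detected in the high-exposure image with its nearest
    counterpart in the low-exposure image.  For side-by-side cameras the
    horizontal deviation of a lamp at row Hyi is modelled as
    dx = dx0 + dk * Hyi (the parallax term grows as the lamp nears)."""
    pairs = []
    for hr in high_regions:
        dx = dx0 + dk * hr["yi"]                 # row-dependent deviation
        px, py = hr["xi"] - dx, hr["yi"] - dy0   # predicted low-exposure position
        best = min(low_regions,
                   key=lambda lr: (lr["xi"] - px) ** 2 + (lr["yi"] - py) ** 2,
                   default=None)
        if best is not None:
            pairs.append((best, hr))
    return pairs

# Example with hypothetical calibration dx0=12, dy0=0, dk=0.1:
low = [{"xi": 50.0, "yi": 40.0}, {"xi": 200.0, "yi": 40.0}]
high = [{"xi": 66.0, "yi": 40.0}]   # 50 + 12 + 0.1 * 40 = 66
matched = pair_lamps(low, high, dx0=12.0, dy0=0.0, dk=0.1)
```

For top-bottom camera placement, the same correction would be applied to dy instead of dx, as the description notes.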
Step 6: comprehensive analysis after pairing: using the coordinate deviations dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation to obtain the connected-domain parameters of the same lamp in both exposures. The system thus obtains the feature parameters of each lamp as [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional parameter vector is fed into a BP neural network for training, which classifies the lamp as low beam or high beam, completing the identification of the high-beam on-state of the vehicle at night.
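The step-6 classification can be sketched with a small BP (backpropagation) network trained on the 6-dimensional vectors. Everything below is illustrative: the training data are synthetic stand-ins (assuming high beams yield larger areas and perimeters in both exposures), and the hidden-layer width, learning rate, and iteration count are my own choices, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors [LNi, LPi, LCi, HNi, HPi, HCi] per lamp;
# high beams are assumed to produce larger, brighter halos.
low_beam  = rng.normal([40.0, 24.0, 20.0, 120.0, 44.0, 38.0], 3.0, (60, 6))
high_beam = rng.normal([90.0, 38.0, 32.0, 300.0, 70.0, 60.0], 3.0, (60, 6))
X = np.vstack([low_beam, high_beam])
y = np.array([0.0] * 60 + [1.0] * 60).reshape(-1, 1)
X = (X - X.mean(0)) / X.std(0)          # normalise each feature

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A 6-8-1 network trained by plain gradient-descent backpropagation
W1 = rng.normal(0.0, 0.5, (6, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, n = 0.5, len(X)
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)              # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = out - y                       # logistic-loss output delta
    d_h = (d_out @ W2.T) * h * (1.0 - h)  # backpropagate to hidden layer
    W2 -= lr * h.T @ d_out / n; b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / n;   b1 -= lr * d_h.mean(0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
accuracy = float((pred == y).mean())      # low beam = 0, high beam = 1
```

With well-separated synthetic classes the network converges to near-perfect training accuracy; on real lamp data one would of course hold out a test set.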
The above description covers only the preferred embodiment of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (3)

1. A method for determining the high-beam on-state of a vehicle at night based on two cameras, characterized by comprising the following steps:
Step 1: mount two cameras in the same device so that they sit at the same position, and merge the frames they capture into one picture, one half being a low-exposure image and the other a high-exposure image;
Step 2: use the low-exposure image to judge whether the lamps are turned on, while also tracking the vehicle's driving trajectory; the specific processing algorithm is:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain have pixel count Ni and image position (xi, yi); select connected domains satisfying Ni > k * yi as detected lamp regions, the area threshold being proportional to the row coordinate yi so as to adapt to the size change of the same target as it moves from far to near in the picture;
Step 3: extract the features of each lamp connected domain: the area expressed as the pixel count LNi, the center-point coordinates (Lxi, Lyi), the connected-domain perimeter LPi, and the perimeter LCi of the circumscribed convex hull;
Step 4: extract the lamp-light regions from the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: the area expressed as the pixel count HNi, the center-point coordinates (Hxi, Hyi), the connected-domain perimeter HPi, and the perimeter HCi of the circumscribed convex hull;
Step 5: pair the results obtained from the high and low exposures: because there is some positional deviation when the cameras are mounted, the same lamp will not lie at the same coordinates in the two images;
Step 6: comprehensive analysis after pairing: using the coordinate deviations dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation to obtain the connected-domain parameters of the same lamp in both exposures; the system thus obtains the feature parameters of each lamp as [LNi, LPi, LCi, HNi, HPi, HCi], and this 6-dimensional parameter vector is fed into a BP neural network for training, which classifies the lamp as low beam or high beam, completing the identification of the high-beam on-state of the vehicle at night.
2. The method for determining the high-beam on-state of a vehicle at night based on two cameras according to claim 1, wherein the two cameras in step 1 are placed adjacent to each other either left-right or top-bottom, and the two pictures are correspondingly combined left-right or top-bottom.
3. The method for determining the high-beam on-state of a vehicle at night based on two cameras according to claim 1, wherein in step 5 a lamp i at infinity has center-point coordinates (Hxi, Hyi) in the high-exposure image and (Lxi, Lyi) in the low-exposure image; once the device is installed and fixed, dx = Hxi - Lxi and dy = Hyi - Lyi are constants, and the system allows both parameters to be set manually through a software interface; infinity is taken to be Hyi = 0, i.e., the lamp is at the very top of the image and its distance from the camera is treated as approximately infinite; as the lamp approaches the camera, parallax inevitably appears because the two cameras do not coincide at a single point; taking left-right camera placement as an example, dx = Hxi - Lxi grows as the lamp approaches, so to correct the parallax the system introduces an adjustment coefficient dk, giving the positional deviation of the same lamp in the two pictures as dx = Hxi - Lxi + dk * Hyi; that is, a larger Hyi means the lamp is closer to the camera and the parallax effect is stronger; after the device is installed and fixed, dk is a constant and can be set through the software interface; left-right cameras have no vertical parallax, so dy needs no correction; similarly, when the cameras are placed top-bottom, dy must be corrected and dx need not be.
CN202110130214.0A 2021-01-29 2021-01-29 Night vehicle high beam opening state discrimination method based on double cameras Active CN112906504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110130214.0A CN112906504B (en) 2021-01-29 2021-01-29 Night vehicle high beam opening state discrimination method based on double cameras


Publications (2)

Publication Number | Publication Date
CN112906504A (en) | 2021-06-04
CN112906504B (en) | 2022-07-12

Family

ID=76121664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110130214.0A Active CN112906504B (en) 2021-01-29 2021-01-29 Night vehicle high beam opening state discrimination method based on double cameras

Country Status (1)

Country Link
CN (1) CN112906504B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114360257B (en) * 2022-01-07 2023-02-28 重庆紫光华山智安科技有限公司 Vehicle monitoring method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103164685A (en) * 2011-12-09 2013-06-19 株式会社理光 Car light detection method and car light detection device
CN108230690A (en) * 2018-02-09 2018-06-29 浙江安谐智能科技有限公司 A kind of high beam based on convolutional neural networks continues the method for discrimination of opening
CN111882519A (en) * 2020-06-15 2020-11-03 上海眼控科技股份有限公司 Method and device for identifying car lamp

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4253271B2 (en) * 2003-08-11 2009-04-08 株式会社日立製作所 Image processing system and vehicle control system
JP6310899B2 (en) * 2015-11-25 2018-04-11 株式会社Subaru Outside environment recognition device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"A recognition method for vehicle-lamp illumination regions at night"; Yan Fei et al.; Proceedings of the 14th National Conference on Image and Graphics (《第十四届全国图象图形学学术会议论文集》); 2008-05-31; pp. 439-442 *

Also Published As

Publication number Publication date
CN112906504A (en) 2021-06-04


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant