CN112906504A - Night vehicle high beam opening state discrimination method based on double cameras - Google Patents
- Publication number
- CN112906504A (application CN202110130214.0A)
- Authority
- CN
- China
- Prior art keywords
- cameras
- vehicle
- connected domain
- image
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/30—Erosion or dilatation, e.g. thinning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/54—Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/017—Detecting movement of traffic to be counted or controlled identifying vehicles
- G08G1/0175—Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
Abstract
The invention discloses a dual-camera method for judging the high-beam on-state of vehicles at night, comprising the following steps. Step 1: two cameras are mounted in one device at the same position, and the pictures they capture are combined into a single picture: one a low-exposure picture, the other a high-exposure picture. Step 2: the low-exposure image is used to judge the on-state of the vehicle lamps while simultaneously tracking the vehicle's driving trajectory, with the following processing algorithm. 2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain. By implementing the invention, both the high-beam on-state of a vehicle driving at night and its license-plate number can be identified correctly, which considerably improves the accuracy of high-beam recognition.
Description
Technical Field
The invention relates to the technical field of high-beam discrimination, and in particular to a dual-camera method for judging the high-beam on-state of vehicles at night.
Background
High-beam on-state recognition for vehicles driving at night is usually performed on surveillance video, and the most effective setup is a camera view that captures the vehicle head-on as it approaches from far to near. This creates a difficulty: the headlights, as light sources, together with the dark night environment form a wide dynamic range, so a single camera cannot simultaneously capture both the high-beam state and the identity information (license plate) of the offending vehicle. Moreover, with fixed exposure parameters a single camera cannot obtain sufficient high-beam feature information.
Because the wide dynamic range created by lit headlights exceeds the physical capabilities of current cameras, exposing the picture for the luminous state of the high and low beams makes legible license-plate information impossible to obtain, so even when illegal high-beam use is detected, the vehicle's identity (license-plate number) cannot be determined. Conversely, if the exposure is adjusted so that the plate is legible, the difference between the high-beam and low-beam luminous states cannot be distinguished.
Since illegal high-beam use must be judged from whether the high beam stays lit over a period of time, multi-level exposure switching on a single camera breaks the continuity of the information, leaving the law-enforcement evidence insufficient.
In summary, a dual-camera method for judging the high-beam on-state of vehicles at night is needed to remedy these defects of the prior art.
Disclosure of Invention
To address the defects of the prior art, the invention provides a dual-camera method for judging the high-beam on-state of vehicles at night.
To this end, the invention provides the following technical scheme: a dual-camera method for judging the high-beam on-state of vehicles at night, comprising the following steps:
Step 1: two cameras are mounted in one device at the same position, and the pictures they capture are combined into a single picture: one a low-exposure picture, the other a high-exposure picture;
Step 2: the low-exposure image is used to judge the on-state of the vehicle lamps while simultaneously tracking the vehicle's driving trajectory. The processing algorithm is as follows:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain contain Ni pixels at image position (xi, yi), and keep the connected domains with Ni > k * yi as detected lamp regions; the area threshold is proportional to the row coordinate yi so as to track the apparent size change of the same target as it moves from far to near in the picture;
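Steps 2.1 and 2.2 can be sketched in Python as follows; this is a minimal pure-NumPy version with an illustrative binarization threshold and a 3x3 dilation kernel (the patent specifies neither), and the function name and defaults are assumptions:

```python
import numpy as np
from collections import deque

def detect_lamp_regions(gray, thresh=200, k=0.5):
    """Binarize, dilate (3x3), label connected components, and keep the
    components whose pixel count Ni exceeds k * yi (yi = centroid row),
    mirroring the row-proportional area threshold of step 2.2."""
    binary = (gray >= thresh).astype(np.uint8)

    # 3x3 dilation via shifted copies (wrap-around at the image border
    # from np.roll is ignored for brevity)
    dil = binary.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            dil |= np.roll(np.roll(binary, dy, axis=0), dx, axis=1)

    # 4-connected component labeling by breadth-first search
    labels = np.zeros_like(dil, dtype=np.int32)
    h, w = dil.shape
    regions, next_label = [], 1
    for r in range(h):
        for c in range(w):
            if dil[r, c] and labels[r, c] == 0:
                q = deque([(r, c)])
                labels[r, c] = next_label
                pixels = []
                while q:
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and dil[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                ys, xs = zip(*pixels)
                ni = len(pixels)
                yi, xi = sum(ys) / ni, sum(xs) / ni
                if ni > k * yi:  # area threshold grows with the image row
                    regions.append({"Ni": ni, "xi": xi, "yi": yi})
                next_label += 1
    return regions
```

A production system would use a library labeling routine instead of the BFS, but the selection rule Ni > k * yi is applied exactly as in the text.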
Step 3: extract the features of each lamp connected domain: its area (pixel count LNi), center-point coordinates (Lxi, Lyi), perimeter LPi, and the perimeter LCi of its circumscribed convex hull;
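The per-region features of step 3 could be computed as in the sketch below. The exact definitions of "perimeter" and "circumscribed convex hull perimeter" are assumptions here (boundary pixel edges, and a monotone-chain hull), since the patent names the quantities without defining them:

```python
import numpy as np
from math import hypot

def region_features(mask, prefix="L"):
    """Compute, for one lamp connected domain given as a boolean mask:
    area (pixel count), centroid, boundary perimeter, and convex hull
    perimeter. The prefix selects the L* (low) or H* (high) names."""
    ys, xs = np.nonzero(mask)
    n = len(ys)
    cy, cx = ys.mean(), xs.mean()

    # perimeter: count component-pixel edges that face background
    padded = np.pad(mask, 1)
    per = 0
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(np.roll(padded, dy, 0), dx, 1)
        per += int((padded[1:-1, 1:-1] & ~shifted[1:-1, 1:-1]).sum())

    # convex hull by Andrew's monotone chain, then its perimeter
    pts = sorted(set(zip(xs.tolist(), ys.tolist())))
    def half(points):
        out = []
        for p in points:
            while len(out) >= 2 and \
                  (out[-1][0]-out[-2][0])*(p[1]-out[-2][1]) - \
                  (out[-1][1]-out[-2][1])*(p[0]-out[-2][0]) <= 0:
                out.pop()
            out.append(p)
        return out
    hull = half(pts)[:-1] + half(pts[::-1])[:-1] if len(pts) > 2 else pts
    hull_per = sum(hypot(hull[i][0]-hull[i-1][0], hull[i][1]-hull[i-1][1])
                   for i in range(len(hull))) if len(hull) > 1 else 0.0

    return {prefix+"Ni": n, prefix+"xi": cx, prefix+"yi": cy,
            prefix+"Pi": per, prefix+"Ci": hull_per}
```

The same routine serves step 4.2 by calling it with prefix="H" on regions from the high-exposure image.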
Step 4: extract the lamp-light regions in the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: its area (pixel count HNi), center-point coordinates (Hxi, Hyi), perimeter HPi, and the perimeter HCi of its circumscribed convex hull;
Step 5: pair the results from the high and low exposures: because there is some positional offset between the mounted cameras, the same lamp will not lie at the same coordinates in the two images;
Step 6: comprehensive analysis after pairing: using the coordinate offsets dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation so that the connected-domain parameters of the same lamp are available for both exposures. The system thus obtains, for each lamp, the feature parameters [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional vector is fed into a BP neural network for training, classifying the lamp as low beam or high beam and thereby completing the recognition of the vehicle's high-beam on-state at night.
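The step-6 classifier can be illustrated with a minimal backpropagation (BP) network over the 6-dimensional feature vector. The hidden-layer size, learning rate, sigmoid activations and squared-error loss are illustrative choices; the patent only states that a BP neural network is trained:

```python
import numpy as np

class TinyBP:
    """One-hidden-layer BP network for the 6-D lamp feature vector
    [LNi, LPi, LCi, HNi, HPi, HCi]; output near 1 = high beam."""
    def __init__(self, n_in=6, n_hidden=8, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    @staticmethod
    def _sig(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x):
        self.h = self._sig(x @ self.w1 + self.b1)   # hidden activations
        return self._sig(self.h @ self.w2 + self.b2)

    def train_step(self, x, y):
        """One backpropagation step on a batch; returns the batch loss."""
        out = self.forward(x)
        d_out = (out - y) * out * (1 - out)             # output-layer delta
        d_h = (d_out @ self.w2.T) * self.h * (1 - self.h)  # hidden deltas
        self.w2 -= self.lr * self.h.T @ d_out / len(x)
        self.b2 -= self.lr * d_out.mean(0)
        self.w1 -= self.lr * x.T @ d_h / len(x)
        self.b1 -= self.lr * d_h.mean(0)
        return float(((out - y) ** 2).mean())
```

Feature vectors should be scaled to a comparable range (e.g. divided by image dimensions) before training, as is usual for such networks.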
Further, in step 1 the two cameras are placed either side-by-side (left-right) or one above the other (top-bottom), and the two pictures are combined left-right or top-bottom accordingly.
Further, in step 5, assume a lamp i at infinity has center-point coordinates (Hxi, Hyi) in the high-exposure image and (Lxi, Lyi) in the low-exposure image; once the device is fixed, dx = Hxi - Lxi and dy = Hyi - Lyi are constants, and both parameters can be set manually through a software interface. The infinity condition is taken to be Hyi = 0, i.e. the lamp lies at the very top of the image and is treated as approximately infinitely far from the cameras. As the lamp approaches, parallax inevitably arises because the two cameras cannot coincide at a single point. Taking side-by-side cameras as an example, the offset Hxi - Lxi grows as the lamp approaches; to correct this parallax the system introduces an adjustment coefficient dk, so the positional offset of the same lamp between the two pictures satisfies Hxi - Lxi = dx + dk * Hyi; a larger Hyi means the lamp is closer to the cameras and the parallax effect is stronger. Once the device is fixed, dk is a constant and can likewise be set through the software interface. Side-by-side cameras have no parallax in the vertical direction, so dy needs no correction; symmetrically, for cameras placed one above the other, dy must be corrected and dx need not be.
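The pairing rule of step 5 can be sketched as a nearest-neighbour match after applying the row-dependent parallax correction, reading the relation for side-by-side cameras as: expected horizontal offset = (offset at infinity) + dk * Hyi. The defaults and the max_dist gate are hypothetical, installation-specific values:

```python
from math import hypot

def pair_lamps(high, low, dx0=0.0, dy0=0.0, dk=0.02, max_dist=10.0):
    """Pair each lamp center (Hxi, Hyi) from the high-exposure frame
    with its nearest lamp (Lxi, Lyi) in the low-exposure frame, after
    predicting where the lamp should appear given the fixed offsets
    dx0, dy0 and the parallax coefficient dk. Returns (hi, li) index
    pairs; each low-exposure lamp is used at most once."""
    pairs, used = [], set()
    for hi, (hx, hy) in enumerate(high):
        # predicted position of this lamp in the low-exposure frame
        px, py = hx - (dx0 + dk * hy), hy - dy0
        best, best_d = None, max_dist
        for li, (lx, ly) in enumerate(low):
            if li in used:
                continue
            d = hypot(lx - px, ly - py)
            if d < best_d:
                best, best_d = li, d
        if best is not None:
            used.add(best)
            pairs.append((hi, best))
    return pairs
```

For top-bottom cameras the correction would move to the dy term instead, as the text notes.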
The beneficial effects of the invention are:
1. The invention correctly identifies the high-beam on-state of a vehicle driving at night while also correctly identifying its license-plate number, forming relatively complete law-enforcement evidence and providing sufficient proof for subsequent enforcement.
2. The dual-camera, multi-level-exposure design yields more feature parameters for the same lamp, considerably improving the accuracy of high-beam recognition and providing a technical guarantee for practical application.
3. The invention provides an effective scheme for pairing targets between the two cameras, laying the foundation for multi-feature extraction; the scheme also generalizes to more cameras, giving it broad applicability.
Detailed Description
A dual-camera method for judging the high-beam on-state of vehicles at night comprises the following steps:
Step 1: two cameras are mounted in one device at the same position, and the pictures they capture are combined into a single picture: one a low-exposure picture, the other a high-exposure picture;
Step 2: the low-exposure image is used to judge the on-state of the vehicle lamps while simultaneously tracking the vehicle's driving trajectory. The processing algorithm is as follows:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain contain Ni pixels at image position (xi, yi), and keep the connected domains with Ni > k * yi as detected lamp regions; the area threshold is proportional to the row coordinate yi so as to track the apparent size change of the same target as it moves from far to near in the picture;
Step 3: extract the features of each lamp connected domain: its area (pixel count LNi), center-point coordinates (Lxi, Lyi), perimeter LPi, and the perimeter LCi of its circumscribed convex hull;
Step 4: extract the lamp-light regions in the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: its area (pixel count HNi), center-point coordinates (Hxi, Hyi), perimeter HPi, and the perimeter HCi of its circumscribed convex hull;
Step 5: pair the results from the high and low exposures: because there is some positional offset between the mounted cameras, the same lamp will not lie at the same coordinates in the two images;
Step 6: comprehensive analysis after pairing: using the coordinate offsets dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation so that the connected-domain parameters of the same lamp are available for both exposures. The system thus obtains, for each lamp, the feature parameters [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional vector is fed into a BP neural network for training, classifying the lamp as low beam or high beam and thereby completing the recognition of the vehicle's high-beam on-state at night.
Example:
Step 1: two cameras are mounted in one device side-by-side at the same position, and the pictures they capture are combined left-right into a single picture: one a low-exposure picture, the other a high-exposure picture;
Step 2: the low-exposure image contains few targets (essentially only the lamp regions), so it is used both to judge the on-state of the lamps and to track the vehicle's driving trajectory. The processing algorithm is as follows:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain contain Ni pixels at image position (xi, yi), and keep the connected domains with Ni > k * yi as detected lamp regions; the area threshold is proportional to the row coordinate yi so as to track the apparent size change of the same target as it moves from far to near in the picture;
Step 3: extract the features of each lamp connected domain: its area (pixel count LNi), center-point coordinates (Lxi, Lyi), perimeter LPi, and the perimeter LCi of its circumscribed convex hull;
Step 4: extract the lamp-light regions in the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: its area (pixel count HNi), center-point coordinates (Hxi, Hyi), perimeter HPi, and the perimeter HCi of its circumscribed convex hull;
Step 5: pair the results from the high and low exposures. Given the positional offset between the mounted cameras, the same lamp will not lie at the same coordinates in the two images. Consider a lamp i more than 80 meters away, with center-point coordinates (Hxi, Hyi) in the high-exposure image and (Lxi, Lyi) in the low-exposure image; once the device is fixed, dx = Hxi - Lxi and dy = Hyi - Lyi are constants, and both parameters can be set manually through a software interface. The infinity condition is taken to be Hyi = 0, i.e. the lamp lies at the very top of the image and is treated as approximately infinitely far from the cameras. As the lamp approaches, parallax inevitably arises because the two cameras cannot coincide at a single point. With side-by-side cameras, the offset Hxi - Lxi grows as the lamp approaches; to correct this parallax the system introduces an adjustment coefficient dk, so the positional offset of the same lamp between the two pictures satisfies Hxi - Lxi = dx + dk * Hyi; a larger Hyi means the lamp is closer to the cameras and the parallax effect is stronger. Once the device is fixed, dk is a constant and can likewise be set through the software interface. Side-by-side cameras have no parallax in the vertical direction, so dy needs no correction; symmetrically, for cameras placed one above the other, dy must be corrected and dx need not be;
Step 6: comprehensive analysis after pairing: using the coordinate offsets dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation so that the connected-domain parameters of the same lamp are available for both exposures. The system thus obtains, for each lamp, the feature parameters [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional vector is fed into a BP neural network for training, classifying the lamp as low beam or high beam and thereby completing the recognition of the vehicle's high-beam on-state at night.
The above description covers only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention falls within its scope of protection.
Claims (3)
1. A dual-camera method for judging the high-beam on-state of vehicles at night, characterized by comprising the following steps:
Step 1: two cameras are mounted in one device at the same position, and the pictures they capture are combined into a single picture: one a low-exposure picture, the other a high-exposure picture;
Step 2: the low-exposure image is used to judge the on-state of the vehicle lamps while simultaneously tracking the vehicle's driving trajectory. The processing algorithm is as follows:
2.1: binarize the image, dilate the binary image, and run a labeling algorithm on the dilated binary image to find each connected domain;
2.2: select the lamp regions: let each connected domain contain Ni pixels at image position (xi, yi), and keep the connected domains with Ni > k * yi as detected lamp regions; the area threshold is proportional to the row coordinate yi so as to track the apparent size change of the same target as it moves from far to near in the picture;
Step 3: extract the features of each lamp connected domain: its area (pixel count LNi), center-point coordinates (Lxi, Lyi), perimeter LPi, and the perimeter LCi of its circumscribed convex hull;
Step 4: extract the lamp-light regions in the high-exposure image:
4.1: binarize the image;
4.2: extract the features of each connected domain: its area (pixel count HNi), center-point coordinates (Hxi, Hyi), perimeter HPi, and the perimeter HCi of its circumscribed convex hull;
Step 5: pair the results from the high and low exposures: because there is some positional offset between the mounted cameras, the same lamp will not lie at the same coordinates in the two images;
Step 6: comprehensive analysis after pairing: using the coordinate offsets dx and dy of the same lamp between the high- and low-exposure images obtained in step 5, apply the corresponding coordinate translation so that the connected-domain parameters of the same lamp are available for both exposures. The system thus obtains, for each lamp, the feature parameters [LNi, LPi, LCi, HNi, HPi, HCi]; this 6-dimensional vector is fed into a BP neural network for training, classifying the lamp as low beam or high beam and thereby completing the recognition of the vehicle's high-beam on-state at night.
2. The dual-camera method for judging the high-beam on-state of vehicles at night according to claim 1, characterized in that the two cameras in step 1 are placed either side-by-side or one above the other, and the two pictures are combined left-right or top-bottom accordingly.
3. The dual-camera method for judging the high-beam on-state of vehicles at night according to claim 1, characterized in that in step 5 a lamp i at infinity is assumed to have center-point coordinates (Hxi, Hyi) in the high-exposure image and (Lxi, Lyi) in the low-exposure image; once the device is fixed, dx = Hxi - Lxi and dy = Hyi - Lyi are constants, and both parameters can be set manually through a software interface; the infinity condition is taken to be Hyi = 0, i.e. the lamp lies at the very top of the image and is treated as approximately infinitely far from the cameras; as the lamp approaches, parallax inevitably arises because the two cameras cannot coincide at a single point; taking side-by-side cameras as an example, the offset Hxi - Lxi grows as the lamp approaches; to correct this parallax the system introduces an adjustment coefficient dk, so the positional offset of the same lamp between the two pictures satisfies Hxi - Lxi = dx + dk * Hyi, a larger Hyi meaning the lamp is closer to the cameras and the parallax effect stronger; once the device is fixed, dk is a constant and can likewise be set through the software interface; side-by-side cameras have no parallax in the vertical direction, so dy needs no correction; symmetrically, for cameras placed one above the other, dy must be corrected and dx need not be.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110130214.0A CN112906504B (en) | 2021-01-29 | 2021-01-29 | Night vehicle high beam opening state discrimination method based on double cameras |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110130214.0A CN112906504B (en) | 2021-01-29 | 2021-01-29 | Night vehicle high beam opening state discrimination method based on double cameras |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112906504A true CN112906504A (en) | 2021-06-04 |
CN112906504B CN112906504B (en) | 2022-07-12 |
Family
ID=76121664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110130214.0A Active CN112906504B (en) | 2021-01-29 | 2021-01-29 | Night vehicle high beam opening state discrimination method based on double cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112906504B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114360257A (en) * | 2022-01-07 | 2022-04-15 | 重庆紫光华山智安科技有限公司 | Vehicle monitoring method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050036660A1 (en) * | 2003-08-11 | 2005-02-17 | Yuji Otsuka | Image processing system and vehicle control system |
CN103164685A (en) * | 2011-12-09 | 2013-06-19 | 株式会社理光 | Car light detection method and car light detection device |
US20170144585A1 (en) * | 2015-11-25 | 2017-05-25 | Fuji Jukogyo Kabushiki Kaisha | Vehicle exterior environment recognition apparatus |
CN108230690A (en) * | 2018-02-09 | 2018-06-29 | 浙江安谐智能科技有限公司 | A kind of high beam based on convolutional neural networks continues the method for discrimination of opening |
CN111882519A (en) * | 2020-06-15 | 2020-11-03 | 上海眼控科技股份有限公司 | Method and device for identifying car lamp |
-
2021
- 2021-01-29 CN CN202110130214.0A patent/CN112906504B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050036660A1 (en) * | 2003-08-11 | 2005-02-17 | Yuji Otsuka | Image processing system and vehicle control system |
CN103164685A (en) * | 2011-12-09 | 2013-06-19 | 株式会社理光 | Car light detection method and car light detection device |
US20170144585A1 (en) * | 2015-11-25 | 2017-05-25 | Fuji Jukogyo Kabushiki Kaisha | Vehicle exterior environment recognition apparatus |
CN108230690A (en) * | 2018-02-09 | 2018-06-29 | 浙江安谐智能科技有限公司 | A kind of high beam based on convolutional neural networks continues the method for discrimination of opening |
CN111882519A (en) * | 2020-06-15 | 2020-11-03 | 上海眼控科技股份有限公司 | Method and device for identifying car lamp |
Non-Patent Citations (1)
Title |
---|
YAN Fei et al., "A Method for Recognizing the Illumination Regions of Vehicle Lamps at Night", Proceedings of the 14th National Conference on Image and Graphics *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114360257A (en) * | 2022-01-07 | 2022-04-15 | 重庆紫光华山智安科技有限公司 | Vehicle monitoring method and system |
CN114360257B (en) * | 2022-01-07 | 2023-02-28 | 重庆紫光华山智安科技有限公司 | Vehicle monitoring method and system |
Also Published As
Publication number | Publication date |
---|---|
CN112906504B (en) | 2022-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101872546B (en) | Video-based method for rapidly detecting transit vehicles | |
Yoneyama et al. | Robust vehicle and traffic information extraction for highway surveillance | |
WO2017014023A1 (en) | Onboard environment recognition device | |
EP1962226B1 (en) | Image recognition device for vehicle and vehicle head lamp controller and method of controlling head lamps | |
CN105812674A (en) | Signal lamp color correction method, monitoring method, and device thereof | |
CN107147841B (en) | Binocular camera adjusting method, device and system | |
US11282235B2 (en) | Vehicle surroundings recognition apparatus | |
KR20210006276A (en) | Image processing method for flicker mitigation | |
CN106991418B (en) | Winged insect detection method and device and terminal | |
CN106778534B (en) | Method for identifying ambient light during vehicle running | |
JP2018005682A (en) | Image processor | |
CN111046741A (en) | Method and device for identifying lane line | |
JP2013073305A (en) | Image processing device | |
CN110520898B (en) | Image processing method for eliminating bright area | |
CN112906504B (en) | Night vehicle high beam opening state discrimination method based on double cameras | |
CN108124122A (en) | Image treatment method, device and vehicle | |
CN109143001A (en) | pantograph detection system | |
CN107122732B (en) | High-robustness rapid license plate positioning method in monitoring scene | |
JP3550874B2 (en) | Monitoring device | |
JP4084578B2 (en) | Character recognition method and apparatus | |
CN201247528Y (en) | Apparatus for obtaining and processing image | |
JP2012088785A (en) | Object identification device and program | |
JP2004086417A (en) | Method and device for detecting pedestrian on zebra crossing | |
KR101402089B1 (en) | Apparatus and Method for Obstacle Detection | |
JP7201706B2 (en) | Image processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||