CN106407951B - A night-time forward vehicle detection method based on monocular vision - Google Patents

A night-time forward vehicle detection method based on monocular vision

Info

Publication number
CN106407951B
CN106407951B CN201610873523.6A
Authority
CN
China
Prior art keywords
formula
value
image
vehicle lamp
CenSurE
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610873523.6A
Other languages
Chinese (zh)
Other versions
CN106407951A (en)
Inventor
蒋卓韵
戴芳
赵凤群
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN201610873523.6A priority Critical patent/CN106407951B/en
Publication of CN106407951A publication Critical patent/CN106407951A/en
Application granted granted Critical
Publication of CN106407951B publication Critical patent/CN106407951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a night-time forward vehicle detection method based on monocular vision, comprising the following steps: step 1, acquire an image and detect the lamps of vehicles ahead at night with the CenSurE operator, obtaining strong corner points; step 2, segment the night-time forward vehicles based on vehicle-lamp colour information, obtaining segmented regions; step 3, among the segmented regions of step 2, select the region containing the most strong corner points of step 1, obtaining the detection region; step 4, pair the vehicle lamps within the detection region of step 3 and determine the position of the target vehicle. By computing the CenSurE operator at multiple scales and then performing lamp pairing on the detection result, the invention solves the large pairing error and inaccurate target detection of existing night-time vehicle-lamp detection methods, and has good application value.

Description

A night-time forward vehicle detection method based on monocular vision
Technical field
The invention belongs to the technical field of night-time object detection, and in particular relates to a night-time forward vehicle detection method based on monocular vision.
Background technique
The rapid growth in car ownership has been accompanied by frequent road traffic accidents, especially at night, bringing huge losses of life and property. Vision-based object detection makes it possible to detect targets in night-time traffic scenes. Because feature-point matching between adjacent frames in multi-camera (including binocular) vision systems is computationally expensive, real-time performance is poor when many vehicles are present, and vehicle detection techniques based on monocular vision have therefore emerged. However, most existing monocular vehicle detection techniques are adapted to daytime traffic; since the changeable night driving environment makes traffic accidents more frequent at night than in the daytime, studying monocular night-time forward vehicle detection is of great significance for improving the driving environment and reducing traffic accidents.
Illumination at night is poor and the shape features of vehicles are difficult to detect; the most salient feature of a vehicle at night is its high-brightness lamps, so most night-time vehicle detection methods detect vehicles by detecting their lamps. Junbin Guo et al. collected the brightness distribution of tail lights in 300 images under varying environments, determined the optimal segmentation threshold with the between-class variance method (Otsu), rejected non-tail-light targets with a red threshold in HSV colour space, and finally paired tail lights using prior knowledge such as position and area.
Wei Zhang et al. obtained the reflection intensity map and reflection suppression map of headlights from a light scattering and attenuation model and used them as the input of a Markov random field; however, when a vehicle is close to the camera the reflection coefficient is difficult to compute.
Jiann-Der Lee et al. obtained vehicle-lamp regions with the LoG operator and a light diffusion model and tracked vehicles with optical flow; the use of the LoG operator solves the difficulty of detecting vehicles at close range.
O'Malley et al. proposed a red-threshold segmentation method in HSV colour space, verified the symmetry of lamp pairs with the cross-correlation coefficient, and tracked them with Kalman filtering; however, pairing lamps by the cross-correlation coefficient alone yields large errors.
Naoya Kosaka et al. approximated the LoG operator with a bilevel center-surround filter, kept feature points with high filter responses, classified the feature points with a support vector machine (SVM), and removed noise points through lane detection and motion trajectories; lamp detection accuracy is high, but pairing lamps only by the principle of consistent filter responses causes large pairing errors.
Hulin Kuang et al. enhanced the image with multi-scale Retinex, used EdgeBoxes to find high-scoring regions of interest (ROIs), extracted features from each ROI, trained the weight of each feature with an SVM, and corrected the final score; high-scoring ROIs are taken as vehicles. This method needs no lamp pairing, which reduces some errors, but for dim traffic scenes the enhancement is not very effective and the accuracy of the obtained ROIs declines accordingly.
Summary of the invention
The object of the present invention is to provide a night-time forward vehicle detection method based on monocular vision that solves the large pairing error and inaccurate target detection of existing night-time vehicle detection methods.
The technical scheme adopted by the night-time forward vehicle detection method based on monocular vision of the present invention comprises the following steps:
Step 1, acquire an image and detect the lamps of vehicles ahead at night with the CenSurE operator, obtaining strong corner points;
Step 2, segment the night-time forward vehicles based on vehicle-lamp colour information, obtaining segmented regions;
Step 3, among the segmented regions of step 2, select the region containing the most strong corner points of step 1, obtaining the detection region;
Step 4, pair the vehicle lamps within the detection region of step 3 and determine the position of the target vehicle.
The present invention is further characterized in that:
The operating procedure of step 1 is specifically:
Step 1.1, take a photograph to acquire an image and compute the corresponding integral image. The value of any point I(x, y) in the integral image is the sum of all pixel values in the upper-left region of the corresponding position in the original image, as in formula (1),
I(x, y) = Σ_{x′≤x, y′≤y} i(x′, y′), (1)
where i(x′, y′) denotes a pixel value of the original image.
Step 1.2, construct the CenSurE filter and sample the integral image on a logarithmic scale.
The scale space represented by the values I(x, y) is divided into three groups: the inner-core size of the CenSurE filter increases by 2 from layer to layer in the first group, by 4 in the second group and by 8 in the third group; five scale layers are selected in each group, and the outer-core size of the CenSurE filter is computed in the same manner.
That is, the inner-core size of the CenSurE filter satisfies (2n+1) × (2n+1) and the outer-core size satisfies (4n+1) × (4n+1). To make the DC response of the filter zero, the scale space is normalized: the weight coefficient I_n of the inner core should satisfy formula (2),
and the weight coefficient O_n of the outer core should satisfy formula (3).
Let out_value be the sum of the pixel values covered by the outer core and in_value the sum of the pixel values covered by the inner core; the pixel filter response L then satisfies formula (4),
L = O_n · out_value − I_n · in_value. (4)
Step 1.3, perform extremum detection on the scale space of step 1.2.
For the image processed by step 1.2, compute the pixel filter response of each scale layer according to formula (4), then perform non-maximum suppression over the scale space and record the extreme points;
Step 1.4, filter out unstable feature points from the extreme points of step 1.3.
Let L_x and L_y be the partial derivatives of the pixel filter response L in the x and y directions; apply Gaussian filtering to L_x^2, L_y^2 and L_xL_y to obtain the eigenvalues of the Harris matrix. If the smaller eigenvalue is greater than the adaptive threshold t, a strong corner point is obtained.
The operating procedure of step 2 is specifically: transform the image acquired in step 1 from RGB space to HSV colour space; take all regions segmented by the red threshold, together with those regions segmented by the white threshold that lie in the left third of the image, jointly as the result of the HSV colour-space segmentation.
H denotes hue: the red threshold is H ≥ 340° or H ≤ 30°, and the white threshold is 0° ≤ H ≤ 360°. S denotes saturation: S ≤ 30 for the red threshold and S ≤ 20 for the white threshold. V denotes the lightness of the colour: both the red and the white thresholds take 80 ≤ V ≤ 100.
The operating procedure of the lamp pairing of step 4 is specifically: suppose L_i and L_j are two candidate lamps with areas A_i and A_j respectively and lamp-centre image coordinates (x_i, y_i) and (x_j, y_j). The pairing constraints are as follows:
a. the heights of the two candidate lamps must be consistent, i.e. their ordinates satisfy formula (5),
|y_i − y_j| < Δh, (5)
b. the horizontal distance between the two candidate lamps must lie within a certain range, satisfying formula (6),
Δw_1 < |x_i − x_j| < Δw_2, (6)
c. the areas of the two candidate lamps must be consistent, satisfying formula (7),
|A_i − A_j| < ΔA, (7)
where Δh in formula (5) is the height-difference threshold, Δw_1 and Δw_2 in formula (6) are the horizontal-distance thresholds, and ΔA in formula (7) is the area-difference threshold. After a pair satisfying these constraints is obtained, the aspect ratio of the bounding rectangle of the paired lamps should lie within a certain range, satisfying formula (8),
where x_{i,left} and x_{j,right} are respectively the leftmost and rightmost coordinates of the region, y_{i,top} and y_{j,bottom} are the topmost and bottommost coordinates of the region, and Δration is the aspect-ratio threshold of the rectangle.
The specific steps of the non-maximum suppression in step 1.3 are: compare each point in the scale space with its 26 neighbours, namely the 8 neighbours of the examined point at its own scale and the corresponding 9 points at each of the two adjacent scales (9 × 2 points), and then record the extreme points.
The adaptive threshold in step 1.4 is obtained with the multilevel Otsu method, the specific steps being:
In the grey-level histogram, let f_i be the number of pixels with grey level i and N the total number of pixels; then N satisfies formula (9),
N = f_0 + f_1 + … + f_{l−1}, (9)
where l is the number of histogram bins, l = 1, 2, 3, 4, ….
The distribution probability P_i of the pixel count f_i at grey level i is then given by formula (10),
P_i = f_i / N. (10)
Using k thresholds T = {t_1, …, t_n, …, t_k}, the image is divided into k + 1 classes; the between-class variance V_BC(T) is formula (11),
where μ_n in formula (11) is the grey mean of class n, μ_T is the overall grey mean, and the values of w_n and μ_n are as in formula (12).
The within-class variance v_WC(T) is formula (13),
where σ_n in formula (13) is the grey variance of class n, and the values of w_n and σ_n are as in formula (14).
Combining formulas (9)-(14) gives the total variance v_T of the image and the overall mean μ_T of the image, as in formula (15).
The splitting factor SF of the image is defined by formula (16).
When SF > 0.9, classification stops and the current t_k is taken as the adaptive threshold.
The beneficial effects of the present invention are: by computing the CenSurE operator at multiple scales and then performing vehicle-lamp pairing on the detection result, the night-time forward vehicle detection method based on monocular vision of the present invention not only solves the large pairing error and inaccurate target detection of existing night-time vehicle-lamp detection methods, but also has good application value.
Detailed description of the invention
Fig. 1 is a structural diagram of the inner and outer cores of the CenSurE filter of the present invention;
Fig. 2 is a schematic diagram of computing a region pixel sum.
Specific embodiment
The present invention is described in detail below with reference to the accompanying drawings and specific embodiments.
The night-time forward vehicle detection method based on monocular vision of the present invention comprises the following steps:
Step 1, acquire an image and detect the lamps of vehicles ahead at night with the CenSurE operator, obtaining strong corner points;
Step 2, segment the night-time forward vehicles based on vehicle-lamp colour information, obtaining segmented regions;
Step 3, among the segmented regions of step 2, select the region containing the most strong corner points of step 1, obtaining the detection region;
Step 4, pair the vehicle lamps within the detection region of step 3 and determine the position of the target vehicle.
The operating procedure of step 1 is specifically:
Step 1.1, take a photograph to acquire an image and compute the corresponding integral image. The value of any point I(x, y) in the integral image is the sum of all pixel values in the upper-left region of the corresponding position in the original image, as in formula (1),
I(x, y) = Σ_{x′≤x, y′≤y} i(x′, y′), (1)
where i(x′, y′) denotes a pixel value of the original image.
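As a concrete illustration of step 1.1, the integral image and the four-lookup box sum it enables can be sketched in a few lines of NumPy (the function names here are illustrative, not part of the patent):

```python
import numpy as np

def integral_image(img):
    """Integral image: I[y, x] = sum of img over the upper-left region
    (all pixels with row <= y and col <= x), as in formula (1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(I, top, left, bottom, right):
    """Sum of the original image over rows top..bottom and cols
    left..right (inclusive), from at most four integral-image lookups."""
    total = I[bottom, right]
    if top > 0:
        total -= I[top - 1, right]
    if left > 0:
        total -= I[bottom, left - 1]
    if top > 0 and left > 0:
        total += I[top - 1, left - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
I = integral_image(img)
print(int(I[3, 3]))                # sum of the whole image: 120
print(int(box_sum(I, 1, 1, 2, 2)))  # 5 + 6 + 9 + 10 = 30
```

This is what makes the filter responses of step 1.2 cheap: any rectangular pixel sum costs a constant number of lookups regardless of box size.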
Step 1.2, construct the CenSurE filter and sample the integral image on a logarithmic scale. To improve the stability of local extrema, the CenSurE filter uses a square core, as shown in Fig. 1; the square core has the highest computational efficiency and meets the real-time requirement. Once the integral image has been constructed, the pixel sum of any rectangular region of the image can be obtained by simple arithmetic, as shown in Fig. 2.
The scale space represented by the values I(x, y) is divided into three groups, and five scale layers are selected in each group. In the first group the inner-core size of the CenSurE filter increases by 2 from layer to layer, the selected inner-core sizes being 3 × 3, 5 × 5, 7 × 7, 9 × 9 and 11 × 11; in the second group it increases by 4, the sizes being 7 × 7, 11 × 11, 15 × 15, 19 × 19 and 23 × 23; in the third group it increases by 8, the sizes being 15 × 15, 23 × 23, 31 × 31, 39 × 39 and 47 × 47. The outer-core size of the CenSurE filter is computed in the same manner;
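The three-group, five-layer size schedule described above can be generated mechanically; a small sketch (group strides and starting sizes taken from the text, with the outer size following from inner = 2n+1, outer = 4n+1):

```python
def censure_scale_schedule():
    """Inner-core sizes (2n+1) and outer-core sizes (4n+1) for the three
    groups: strides 2, 4, 8 in inner size, five layers per group."""
    schedule = []
    for stride, start in ((2, 3), (4, 7), (8, 15)):
        inner = [start + stride * k for k in range(5)]
        # inner = 2n+1  =>  outer = 4n+1 = 2*inner - 1
        outer = [2 * s - 1 for s in inner]
        schedule.append((inner, outer))
    return schedule

for inner, outer in censure_scale_schedule():
    print(inner, outer)
```

The first group prints inner sizes [3, 5, 7, 9, 11], matching the text, with outer sizes [5, 9, 13, 17, 21].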
That is, the inner-core size of the CenSurE filter satisfies (2n+1) × (2n+1) and the outer-core size satisfies (4n+1) × (4n+1). To make the DC response of the filter zero, the scale space is normalized: the weight coefficient I_n of the inner core should satisfy formula (2),
and the weight coefficient O_n of the outer core should satisfy formula (3).
Let out_value be the sum of the pixel values covered by the outer core and in_value the sum of the pixel values covered by the inner core; the pixel filter response L then satisfies formula (4),
L = O_n · out_value − I_n · in_value. (4)
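Formulas (2) and (3) are rendered as images in this text, so the exact weight normalisation is not reproduced here. One choice consistent with the zero-DC requirement is I_n = 1/(2n+1)^2 and O_n = 1/(4n+1)^2, which makes L the difference between the outer-box mean and the inner-box mean; the sketch below assumes that choice:

```python
import numpy as np

def box_mean(img, size):
    """Mean of img over a size x size box centred at each pixel (size
    odd), computed with an edge-padded integral image."""
    r = size // 2
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    I = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    s = (I[size:size + h, size:size + w]
         - I[0:h, size:size + w]
         - I[size:size + h, 0:w]
         + I[0:h, 0:w])
    return s / size**2

def censure_response(img, n):
    """Formula (4): L = On*out_value - In*in_value at every pixel, with
    the ASSUMED weights In = 1/(2n+1)^2, On = 1/(4n+1)^2 (each box
    normalised by its own area), which gives a zero DC response."""
    return box_mean(img, 4 * n + 1) - box_mean(img, 2 * n + 1)
```

On a constant image the response is identically zero, as the DC constraint requires; a bright blob such as a lamp yields a strong (negative) extremum at its centre.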
Step 1.3, perform extremum detection on the scale space of step 1.2.
For the image processed by step 1.2, compute the pixel filter response of each scale layer according to formula (4), then perform non-maximum suppression over the scale space and record the extreme points;
Step 1.4, filter out unstable feature points from the extreme points of step 1.3.
To obtain stable feature points it is not enough to remove weak responses by a threshold, because the filter responds strongly along image edges; once a feature point falls on an edge it is very unstable. Since a feature point on an edge or line has a large principal curvature along the edge and a small principal curvature perpendicular to it, the scale-adaptive Harris method is used to compute the principal-curvature ratio H and remove unstable responses, as shown in the following formula.
L_x and L_y are the partial derivatives of the pixel filter response L in the x and y directions; Gaussian filtering yields the eigenvalues of the Harris matrix, and if the smaller eigenvalue is greater than the adaptive threshold t, a strong corner point is obtained.
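A sketch of the Harris-based pruning of step 1.4, assuming simple box smoothing in place of the Gaussian filtering and treating the window size as a free parameter:

```python
import numpy as np

def strong_corners(L, t, win=5):
    """Keep points whose smaller Harris eigenvalue exceeds threshold t.
    The Harris matrix is built from Lx^2, Ly^2 and Lx*Ly smoothed over a
    small window (box smoothing here stands in for Gaussian filtering)."""
    Ly, Lx = np.gradient(L.astype(np.float64))  # axis 0 = y, axis 1 = x

    def smooth(a):
        pad = np.pad(a, win // 2, mode="edge")
        I = np.pad(pad.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
        h, w = a.shape
        return (I[win:win + h, win:win + w] - I[0:h, win:win + w]
                - I[win:win + h, 0:w] + I[0:h, 0:w]) / win**2

    A, B, C = smooth(Lx * Lx), smooth(Ly * Ly), smooth(Lx * Ly)
    # smaller eigenvalue of the 2x2 matrix [[A, C], [C, B]]
    tr, det = A + B, A * B - C * C
    lam_min = tr / 2 - np.sqrt(np.maximum(tr**2 / 4 - det, 0.0))
    return lam_min > t
```

Edge points have one large and one near-zero eigenvalue, so thresholding the smaller eigenvalue discards them while true corners survive.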
Since headlights and tail lights appear as high-brightness white and red respectively, the red and white regions of the image must be segmented. The operating procedure of step 2 is specifically: transform the image acquired in step 1 from RGB space to HSV colour space and segment the red and white regions by thresholding. By experience, oncoming vehicles generally appear on the left side of the image, where headlights are detected, while tail lights are detected for vehicles travelling in the same direction. All regions segmented by the red threshold, together with those regions segmented by the white threshold that lie in the left third of the image, are taken jointly as the result of the HSV colour-space segmentation.
H denotes hue: the red threshold is H ≥ 340° or H ≤ 30°, and the white threshold is 0° ≤ H ≤ 360°. S denotes saturation: S ≤ 30 for the red threshold and S ≤ 20 for the white threshold. V denotes the lightness of the colour: both the red and the white thresholds take 80 ≤ V ≤ 100.
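The HSV thresholds above can be applied directly, assuming H is expressed in degrees (0-360) and S and V are scaled to 0-100 as in the text (note that libraries such as OpenCV use different ranges, so rescaling would be needed there):

```python
import numpy as np

def lamp_colour_mask(hsv):
    """Step 2 segmentation: hsv is an (rows, cols, 3) array of (H, S, V)
    with H in degrees and S, V in 0-100, matching the text's thresholds.
    Red regions are kept everywhere; white regions only in the left third."""
    H, S, V = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    bright = (V >= 80) & (V <= 100)
    red = ((H >= 340) | (H <= 30)) & (S <= 30) & bright    # tail lights
    white = (S <= 20) & bright                             # headlights, any H
    left_third = np.zeros(H.shape[:2], dtype=bool)
    left_third[:, : H.shape[1] // 3] = True
    return red | (white & left_third)
```

The saturation bounds are taken verbatim from the text; only high-lightness pixels survive either threshold, which matches the expectation that lamps are the brightest objects in a night scene.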
The operating procedure of the lamp pairing of step 4 is specifically: suppose L_i and L_j are two candidate lamps with areas A_i and A_j respectively and lamp-centre image coordinates (x_i, y_i) and (x_j, y_j). The pairing constraints are as follows:
a. the heights of the two candidate lamps must be consistent, i.e. their ordinates satisfy formula (5),
|y_i − y_j| < Δh, (5)
b. the horizontal distance between the two candidate lamps must lie within a certain range, satisfying formula (6),
Δw_1 < |x_i − x_j| < Δw_2, (6)
c. the areas of the two candidate lamps must be consistent, satisfying formula (7),
|A_i − A_j| < ΔA, (7)
where Δh in formula (5) is the height-difference threshold, Δw_1 and Δw_2 in formula (6) are the horizontal-distance thresholds, and ΔA in formula (7) is the area-difference threshold. After a pair satisfying these constraints is obtained, the aspect ratio of the bounding rectangle of the paired lamps should lie within a certain range, satisfying formula (8),
where x_{i,left} and x_{j,right} are respectively the leftmost and rightmost coordinates of the region, y_{i,top} and y_{j,bottom} are the topmost and bottommost coordinates of the region, and Δration is the aspect-ratio threshold of the rectangle.
In formulas (5)-(8), from everyday driving experience, Δh is taken as 10 pixels, Δw_1 as 20 pixels, Δw_2 as 50 pixels, ΔA as 30 pixels, and Δration as 10 for the pairing operation.
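Constraints (5)-(8) with the quoted thresholds can be sketched as a single predicate; the lamp representation (a dict of centre, area and bounding coordinates) and the exact form of formula (8), whose body is an image in this text, are assumptions inferred from the variable names:

```python
def lamps_pair(lamp_i, lamp_j, dh=10, dw1=20, dw2=50, dA=30, dration=10):
    """Pairing constraints (5)-(8) with the thresholds quoted in the
    text. Each lamp is a dict with centre x, y, area A, and bounding
    coordinates left/right/top/bottom (an illustrative representation).
    The aspect-ratio test below is an ASSUMED reading of formula (8)."""
    if abs(lamp_i["y"] - lamp_j["y"]) >= dh:              # formula (5)
        return False
    if not (dw1 < abs(lamp_i["x"] - lamp_j["x"]) < dw2):  # formula (6)
        return False
    if abs(lamp_i["A"] - lamp_j["A"]) >= dA:              # formula (7)
        return False
    width = lamp_j["right"] - lamp_i["left"]              # formula (8)
    height = lamp_j["bottom"] - lamp_i["top"]
    return height > 0 and width / height < dration
```

Two lamps at similar height and area, roughly 20-50 pixels apart horizontally, pass; anything else is rejected before the bounding-rectangle check is even reached.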
The specific steps of the non-maximum suppression in step 1.3 are: compare each point in the scale space with its 26 neighbours, namely the 8 neighbours of the examined point at its own scale and the corresponding 9 points at each of the two adjacent scales (9 × 2 points), and then record the extreme points.
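A direct, unoptimised sketch of the 26-neighbour comparison (border points are skipped for simplicity):

```python
import numpy as np

def scale_space_maxima(stack):
    """Non-maximum suppression of step 1.3: a point in the scale stack
    (scales x rows x cols) is kept when it is strictly greater than all
    26 neighbours - 8 at its own scale and 9 at each adjacent scale."""
    s, h, w = stack.shape
    keep = np.zeros_like(stack, dtype=bool)
    for k in range(1, s - 1):
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                cube = stack[k - 1:k + 2, y - 1:y + 2, x - 1:x + 2]
                v = stack[k, y, x]
                # strict maximum: v equals the cube max and occurs once
                if v == cube.max() and (cube == v).sum() == 1:
                    keep[k, y, x] = True
    return keep
```

In practice the same test is applied to minima as well (lamp centres give strong negative responses with the mean-difference filter); the sketch shows the maxima case only.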
The adaptive threshold in step 1.4 is obtained with the multilevel Otsu method, the specific steps being:
In the grey-level histogram, let f_i be the number of pixels with grey level i and N the total number of pixels; then N satisfies formula (9),
N = f_0 + f_1 + … + f_{l−1}, (9)
where l is the number of histogram bins, l = 1, 2, 3, 4, ….
The distribution probability P_i of the pixel count f_i at grey level i is then given by formula (10),
P_i = f_i / N. (10)
Using k thresholds T = {t_1, …, t_n, …, t_k}, the image is divided into k + 1 classes; the between-class variance V_BC(T) is formula (11),
where μ_n in formula (11) is the grey mean of class n, μ_T is the overall grey mean, and the values of w_n and μ_n are as in formula (12).
The within-class variance v_WC(T) is formula (13),
where σ_n in formula (13) is the grey variance of class n, and the values of w_n and σ_n are as in formula (14).
Combining formulas (9)-(14) gives the total variance v_T of the image and the overall mean μ_T of the image, as in formula (15).
The splitting factor SF of the image is defined by formula (16).
When SF > 0.9, classification stops and the current t_k is taken as the adaptive threshold.
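A sketch of the multilevel Otsu thresholding with the SF stopping rule. Formulas (11)-(16) are images in this text, so the splitting factor is assumed here to be the ratio of between-class variance to total variance, and thresholds are added one at a time by greedy search:

```python
import numpy as np

def multilevel_otsu(gray, sf_stop=0.9, max_k=8):
    """Multilevel Otsu sketch for step 1.4's adaptive threshold.
    SF = between-class variance / total variance (an ASSUMED reading of
    formula (16)); thresholds are added greedily until SF > sf_stop,
    and the last threshold t_k is returned."""
    vals = np.asarray(gray, dtype=np.float64).ravel()
    total_var = vals.var()
    if total_var == 0:
        return None          # flat image: no meaningful threshold
    thresholds = []

    def sf(ts):
        edges = [-np.inf] + sorted(ts) + [np.inf]
        mu_T, between = vals.mean(), 0.0
        for a, b in zip(edges, edges[1:]):
            cls = vals[(vals > a) & (vals <= b)]
            if cls.size:
                w = cls.size / vals.size          # class weight w_n
                between += w * (cls.mean() - mu_T) ** 2
        return between / total_var

    candidates = np.unique(vals)[:-1]
    while len(thresholds) < max_k:
        best = max(candidates, key=lambda t: sf(thresholds + [t]))
        thresholds.append(best)
        if sf(thresholds) > sf_stop:
            break
    return sorted(thresholds)[-1]
```

For a cleanly bimodal input a single threshold already drives SF past 0.9, so the procedure stops at k = 1, exactly the behaviour the stopping rule is designed for.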
The beneficial effects of the present invention are: by computing the CenSurE operator at multiple scales and then performing vehicle-lamp pairing on the detection result, the present invention not only solves the large pairing error and inaccurate target detection of existing night-time vehicle-lamp detection methods, but also has good application value.

Claims (4)

1. A night-time forward vehicle detection method based on monocular vision, characterized by comprising the following steps:
Step 1, acquiring an image and detecting the lamps of vehicles ahead at night with the CenSurE operator to obtain strong corner points;
Step 2, segmenting the night-time forward vehicles based on vehicle-lamp colour information to obtain segmented regions;
Step 3, selecting, among the segmented regions of step 2, the region containing the most strong corner points of step 1 to obtain the detection region;
Step 4, pairing the vehicle lamps within the detection region of step 3 and determining the position of the target vehicle;
The operating procedure of said step 1 being specifically:
Step 1.1, taking a photograph to acquire an image and computing the corresponding integral image from the acquired image, the value of any point I(x, y) in the integral image being the sum of all values in the upper-left region of the corresponding position in the original image, as in formula (1);
Step 1.2, constructing the CenSurE filter and sampling the integral image on a logarithmic scale,
the scale space represented by the values I(x, y) being divided into three groups, wherein the inner-core size of the CenSurE filter increases by 2 from layer to layer in the first group, by 4 in the second group and by 8 in the third group, five scale layers being selected in each group, and the outer-core size of the CenSurE filter being computed in the same manner;
that is, the inner-core size of the CenSurE filter satisfying (2n+1) × (2n+1) and the outer-core size satisfying (4n+1) × (4n+1); to make the DC response of the filter zero, the scale space is normalized, the weight coefficient I_n of the inner core satisfying formula (2)
and the weight coefficient O_n of the outer core satisfying formula (3);
letting out_value be the sum of the pixel values covered by the outer core and in_value the sum of the pixel values covered by the inner core, the pixel filter response L satisfies formula (4),
L = O_n · out_value − I_n · in_value; (4)
Step 1.3, performing extremum detection on the scale space of step 1.2,
computing, for the image processed by step 1.2, the pixel filter response of each scale layer according to formula (4), then performing non-maximum suppression over the scale space and recording the extreme points;
Step 1.4, filtering out unstable feature points from the extreme points of step 1.3,
L_x and L_y being the partial derivatives of the pixel filter response L in the x and y directions, applying Gaussian filtering to L_x^2, L_y^2 and L_xL_y to obtain the eigenvalues of the Harris matrix, a strong corner point being obtained if the smaller eigenvalue is greater than the adaptive threshold t;
the adaptive threshold in said step 1.4 being obtained with the multilevel Otsu method, the specific steps being:
in the grey-level histogram, letting f_i be the number of pixels with grey level i and N the total number of pixels, N satisfies formula (9),
N = f_0 + f_1 + … + f_{l−1}, (9)
where l is the number of histogram bins, l = 1, 2, 3, 4, …;
the distribution probability P_i of the pixel count f_i at grey level i is then given by formula (10);
using k thresholds T = {t_1, …, t_n, …, t_k}, the image is divided into k + 1 classes, the between-class variance V_BC(T) being formula (11),
where μ_n in formula (11) is the grey mean of class n, μ_T is the overall grey mean, and the values of w_n and μ_n are as in formula (12);
the within-class variance v_WC(T) is formula (13),
where σ_n in formula (13) is the grey variance of class n, and the values of w_n and σ_n are as in formula (14);
combining formulas (9)-(14) gives the total variance v_T of the image and the overall mean μ_T of the image, as in formula (15);
the splitting factor SF of the image is defined by formula (16);
when SF > 0.9, classification stops and the current t_k is taken as the adaptive threshold.
2. The night-time forward vehicle detection method based on monocular vision according to claim 1, characterized in that the operating procedure of said step 2 is specifically: transforming the image acquired in step 1 from RGB space to HSV colour space, and taking all regions segmented by the red threshold, together with those regions segmented by the white threshold that lie in the left third of the image, jointly as the result of the HSV colour-space segmentation,
where H denotes hue, the red threshold being H ≥ 340° or H ≤ 30° and the white threshold 0° ≤ H ≤ 360°; S denotes saturation, with S ≤ 30 for the red threshold and S ≤ 20 for the white threshold; and V denotes the lightness of the colour, both the red and the white thresholds taking 80 ≤ V ≤ 100.
3. The night-time forward vehicle detection method based on monocular vision according to claim 1, characterized in that the operating procedure of the lamp pairing of said step 4 is specifically: supposing L_i and L_j are two candidate lamps with areas A_i and A_j respectively and lamp-centre image coordinates (x_i, y_i) and (x_j, y_j), the pairing constraints are as follows:
a. the heights of the two candidate lamps being consistent, i.e. their ordinates satisfying formula (5),
|y_i − y_j| < Δh, (5)
b. the horizontal distance between the two candidate lamps lying within a certain range, satisfying formula (6),
Δw_1 < |x_i − x_j| < Δw_2, (6)
c. the areas of the two candidate lamps being consistent, satisfying formula (7),
|A_i − A_j| < ΔA, (7)
where Δh in formula (5) is the height-difference threshold, Δw_1 and Δw_2 in formula (6) are the horizontal-distance thresholds, and ΔA in formula (7) is the area-difference threshold; after a pair satisfying these constraints is obtained, the aspect ratio of the bounding rectangle of the paired lamps lies within a certain range, satisfying formula (8),
where x_{i,left} and x_{j,right} are respectively the leftmost and rightmost coordinates of the region, y_{i,top} and y_{j,bottom} are the topmost and bottommost coordinates of the region, and Δration is the aspect-ratio threshold of the rectangle.
4. The night-time forward vehicle detection method based on monocular vision according to claim 1, characterized in that the specific steps of the non-maximum suppression in step 1.3 are: comparing each point in the scale space with its 26 neighbours, namely the 8 neighbours of the examined point at its own scale and the corresponding 9 points at each of the two adjacent scales (9 × 2 points), and then recording the extreme points.
CN201610873523.6A 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision Active CN106407951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610873523.6A CN106407951B (en) 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610873523.6A CN106407951B (en) 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision

Publications (2)

Publication Number Publication Date
CN106407951A CN106407951A (en) 2017-02-15
CN106407951B true CN106407951B (en) 2019-08-16

Family

ID=59228048

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610873523.6A Active CN106407951B (en) 2016-09-30 2016-09-30 A kind of night front vehicles detection method based on monocular vision

Country Status (1)

Country Link
CN (1) CN106407951B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3637368B1 (en) * 2017-06-05 2021-12-15 Hitachi Astemo, Ltd. Image processing device and light distribution control system
JP6764378B2 (en) * 2017-07-26 2020-09-30 株式会社Subaru External environment recognition device
CN109523555A (en) * 2017-09-18 2019-03-26 百度在线网络技术(北京)有限公司 Front truck brake behavioral value method and apparatus for automatic driving vehicle
CN110020575B (en) * 2018-01-10 2022-10-21 富士通株式会社 Vehicle detection device and method and electronic equipment
CN109800693B (en) * 2019-01-08 2021-05-28 西安交通大学 Night vehicle detection method based on color channel mixing characteristics
CN110132302A (en) * 2019-05-20 2019-08-16 中国科学院自动化研究所 Merge binocular vision speedometer localization method, the system of IMU information

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344988A (en) * 2008-06-16 2009-01-14 上海高德威智能交通***有限公司 Image acquisition and processing equipment and method, vehicle monitoring and recording system
CN101382997A (en) * 2008-06-13 2009-03-11 青岛海信电子产业控股股份有限公司 Vehicle detecting and tracking method and device at night
CN101556739A (en) * 2009-05-14 2009-10-14 浙江大学 Vehicle detecting algorithm based on intrinsic image decomposition
CN102044151A (en) * 2010-10-14 2011-05-04 吉林大学 Night vehicle video detection method based on illumination visibility identification
CN103020948A (en) * 2011-09-28 2013-04-03 中国航天科工集团第二研究院二○七所 Night image characteristic extraction method in intelligent vehicle-mounted anti-collision pre-warning system
CN103366571A (en) * 2013-07-03 2013-10-23 河南中原高速公路股份有限公司 Intelligent method for detecting traffic accident at night
CN104732235A (en) * 2015-03-19 2015-06-24 杭州电子科技大学 Vehicle detection method for eliminating night road reflective interference
CN105303160A (en) * 2015-09-21 2016-02-03 中电海康集团有限公司 Method for detecting and tracking vehicles at night
CN105718893A (en) * 2016-01-22 2016-06-29 江苏大学 Car tail light pair detecting method for night environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Nighttime vehicle detection based on taillight tracking; Qi Qiuhong et al.; Communications Technology; 31 Oct. 2012; Vol. 45, No. 10; pp. 58-60
Nighttime video vehicle detection in complex environments; Wu Haitao et al.; Application Research of Computers; 31 Dec. 2007; Vol. 24, No. 12; pp. 386-389

Also Published As

Publication number Publication date
CN106407951A (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN106407951B (en) A kind of night front vehicles detection method based on monocular vision
CN105608417B (en) Traffic lights detection method and device
Li et al. Nighttime lane markings recognition based on Canny detection and Hough transform
TWI401473B (en) Night time pedestrian detection system and method
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN110569782A (en) Target detection method based on deep learning
CN105022990A (en) Water surface target rapid-detection method based on unmanned vessel application
CN108830199A (en) Identify method, apparatus, readable medium and the electronic equipment of traffic light signals
CN108363957A (en) Road traffic sign detection based on cascade network and recognition methods
CN105740886B (en) A kind of automobile logo identification method based on machine learning
CN103218831A (en) Video moving target classification and identification method based on outline constraint
CN108764096B (en) Pedestrian re-identification system and method
CN112488046B (en) Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN109886086B (en) Pedestrian detection method based on HOG (histogram of oriented gradient) features and linear SVM (support vector machine) cascade classifier
CN110427979B (en) Road water pit identification method based on K-Means clustering algorithm
CN111160293A (en) Small target ship detection method and system based on characteristic pyramid network
CN105893970A (en) Nighttime road vehicle detection method based on luminance variance characteristics
US11657592B2 (en) Systems and methods for object recognition
KR20200039548A (en) Learning method and testing method for monitoring blind spot of vehicle, and learning device and testing device using the same
CN116279592A (en) Method for dividing travelable area of unmanned logistics vehicle
CN112613392A (en) Lane line detection method, device and system based on semantic segmentation and storage medium
CN106056115B (en) A kind of infrared small target detection method under non-homogeneous background
CN106529533A (en) Complex weather license plate positioning method based on multi-scale analysis and matched sequencing
JP6472504B1 (en) Information processing apparatus, information processing program, and information processing method
CN108268866B (en) Vehicle detection method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant