CN110188693B - Improved complex environment vehicle feature extraction and parking discrimination method - Google Patents

Improved complex environment vehicle feature extraction and parking discrimination method

Info

Publication number
CN110188693B
CN110188693B CN201910464427.XA
Authority
CN
China
Prior art keywords
feature
harr
improved
vehicle
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910464427.XA
Other languages
Chinese (zh)
Other versions
CN110188693A (en)
Inventor
赵敏
孙棣华
王齐天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN201910464427.XA priority Critical patent/CN110188693B/en
Publication of CN110188693A publication Critical patent/CN110188693A/en
Application granted granted Critical
Publication of CN110188693B publication Critical patent/CN110188693B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/54 Surveillance or monitoring of activities, e.g. for recognising suspicious objects of traffic, e.g. cars on the road, trains or boats
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/259 Fusion by voting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an improved complex environment vehicle feature extraction and parking discrimination method. Starting from the actual expressway environment and targeting that specific scene, and after analyzing the problems and difficulties of video-based abnormal-parking detection on expressways, the method focuses on Haar-like feature extraction of vehicles under complex expressway conditions and on the shortcomings of the Adaboost cascade-classifier parking discrimination model when applied to the expressway scene. The Haar-like + Adaboost expressway parking event detection algorithm is improved, raising its performance: intra-class differences of vehicle Haar-like features under illumination, viewing-angle, and scale changes are reduced, feature learning efficiency is improved, and both the detection accuracy for abnormal expressway parking events and the generalization capability of the detection system are increased.

Description

Improved complex environment vehicle feature extraction and parking discrimination method
Technical Field
The invention relates to the field of traffic detection technology and data processing, and in particular to an improved method for Haar-like feature extraction of vehicles in complex environments and parking discrimination under the Adaboost framework.
Background
Abnormal parking on expressways poses a serious safety hazard. Real-time discrimination of parking-event states can provide traffic information for expressway sections in real time and better deliver automated services to traffic managers. Studying Haar-like feature extraction for vehicles in complex open environments, and optimizing learning based on Adaboost features, can effectively avoid the high false-detection and missed-detection rates that existing traffic incident detection systems suffer when the complex environment changes, and helps improve the robustness of the whole traffic incident detection system.
In video-based expressway vehicle recognition, a review of existing patents and papers shows that recognition performance has improved markedly alongside the rapid development of digital image processing in recent years. The methods mainly fall into four categories:
The first category is motion-based vehicle detection, which exploits the fact that the object is moving and analyzes the changing parts between consecutive video frames to detect vehicles. There are currently three main motion-based approaches: optical flow, frame differencing, and background differencing. The second category is vehicle detection based on prior knowledge, which locates the vehicle in the image by applying prior knowledge of the target and the scene, mainly symmetry, color, shadow, geometric features, texture, and vehicle lights. The third category is template-based vehicle detection, which pre-builds a 2D or 3D vehicle model from prior knowledge, projects the model into image space under a suitable transformation, matches the model projection against image features, and takes the best match as the detection result. The fourth category is vehicle detection based on feature learning, which treats the problem as binary classification of vehicle versus non-vehicle; the recognition task then amounts to finding the most suitable boundary separating the two classes. At present, target detection based on Haar-like + Adaboost offers good classification capability, needs fewer image samples, and is not prone to overfitting during feature training, making it well suited to vehicle detection in complex scenes.
Disclosure of Invention
In view of the above, one objective of the present invention is to provide an improved complex environment vehicle feature extraction and parking determination method. The invention provides an improved method for extracting Haar-like features of vehicles in complex environments and discriminating parking events under the Adaboost framework. Starting from the actual expressway environment, it addresses the excessive intra-class differences of vehicle Haar-like features in complex environments, so that the algorithm's adaptability to viewing-angle, illumination, and scale changes on the expressway can be improved quickly and effectively, raising parking-event detection accuracy and the generalization capability of the detection system. Another objective of the present invention is to provide a system.
The purpose of the invention is realized by the following technical scheme:
the invention discloses an improved complex environment vehicle feature extraction and parking judgment method, which is characterized by comprising the following steps: the method comprises the following steps:
step S1: performing visual angle correction on the video image sequence;
s2, performing illumination correction;
and step S3: extracting a candidate frame of the potential target area;
and step S4: carrying out scale correction;
step S5: and (5) carrying out event judgment through a Harr-like statistical learning classifier.
In particular, step S1 comprises the following sub-steps:
Step S11: extract the region of interest;
Step S12: judge the region viewing angle;
Step S13: flip the image.
In particular, step S2 comprises the following sub-steps:
Step S21: model the single-Gaussian background;
Step S22: judge the ROI brightness;
Step S23: apply the Gamma transformation.
In particular, step S3 comprises the following sub-steps:
Step S31: model with a mixed-Gaussian background using a low background update rate;
Step S32: judge standing objects;
Step S33: dilate and erode the foreground binary image;
Step S34: eliminate local light spots based on Scanny extraction;
Step S35: output the potential target candidate regions.
In particular, step S4 comprises the following sub-steps:
Step S41: load the Haar-like feature weak classifiers;
Step S42: traverse the potential target area with a variable-scale window;
Step S43: remap the detection-window features.
In particular, step S5 comprises the following sub-steps:
Step S51: output the Haar-like feature value of each weak classifier;
Step S52: perform weighted voting with the weak classifiers;
Step S53: output the detection result.
In particular, the method is carried out together with an improved Adaboost feature learning system, comprising the following steps:
early-stage culling of atypical features: in the training of each weak classifier, before traversing and screening all Haar-like features, first pre-judge whether a feature is atypical and, if so, skip computing features of that type;
targeting the local point-block characteristics of vehicles in the expressway scene, variable-scale center contour features are designed to participate in the Adaboost feature learning process, and the item with the best classification capability among the center contour features is obtained by the algorithm, improving the classification effect.
The second objective of the invention is realized by the following technical scheme. The complex environment vehicle feature extraction and parking discrimination system comprises:
a viewing-angle correction module: unifies the viewing angles of images from different scenes, reducing intra-class differences of Haar-like features caused by differing viewing angles;
an illumination correction module: adjusts image brightness through real-time brightness detection, reducing intra-class differences of Haar-like features caused by illumination changes;
a potential target area candidate box extraction module: obtains foreground target candidate areas and target scale information through background differencing, cutting the unnecessary computation wasted while the sliding detection window traverses, improving real-time detection performance, and providing candidate-area information for the subsequent scale correction;
a scale correction module: traverses the candidate regions with a variable-scale detection window, loads the trained classifier file, remaps the Haar-like feature of each weak classifier within the detection window, and computes the corresponding Haar-like feature values;
a Haar-like feature statistical learning discrimination module: discriminates on the per-weak-classifier Haar-like feature values obtained in the previous step and obtains the final decision by weighted voting.
The beneficial effects of the invention are as follows: starting from the actual expressway environment and targeting that specific scene, and after analyzing the problems and difficulties of video-based abnormal-parking detection on expressways, the method focuses on Haar-like feature extraction of vehicles under complex expressway conditions and on the shortcomings of the Adaboost cascade-classifier parking discrimination model when applied to the expressway scene. The Haar-like + Adaboost expressway parking event detection algorithm is improved, raising its performance: intra-class differences of vehicle Haar-like features under illumination, viewing-angle, and scale changes are reduced, feature learning efficiency is improved, and both the detection accuracy for abnormal expressway parking events and the generalization capability of the detection system are increased.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof.
Drawings
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail with reference to the accompanying drawings, in which:
FIG. 1 is the overall flow chart of the Haar-like + Adaboost discrimination method;
FIG. 2 is the flow chart of the improved Adaboost feature learning algorithm;
FIG. 3 shows the vehicle variable-scale center feature template added in this design;
FIG. 4 is a schematic diagram of feature remapping for a variable-scale detection window;
FIG. 5 is an example illustration of atypical Haar-like features;
FIG. 6 is a schematic diagram of the five Haar-like feature templates;
FIG. 7 is a schematic diagram of the center feature type;
FIG. 8 is a schematic diagram of the nine Haar-like center contour feature templates and the local vehicle information they characterize;
FIG. 9 shows a sorted list of positive and negative sample values for a particular Haar-like feature.
Detailed Description
Hereinafter, preferred embodiments of the present invention will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and fig. 2, the improved complex environment vehicle feature extraction and parking discrimination method of the present invention includes the following steps:
Step S1: perform viewing-angle correction on the video image sequence;
Step S2: perform illumination correction;
Step S3: extract candidate boxes for potential target areas;
Step S4: perform scale correction;
Step S5: perform event discrimination with the Haar-like statistical learning classifier.
First, for viewing-angle changes, the potential target area of the image is flipped through automatic recognition of the region-of-interest viewing angle and a two-dimensional image mapping transformation, reducing the pronounced appearance changes a vehicle exhibits under different viewing angles. For illumination changes, real-time brightness detection is performed on the background picture obtained by single-Gaussian background modeling, after which a Gamma transformation keeps the vehicle brightness fluctuating within a small range. For scale changes, the process of mapping the original Haar-like feature prototypes into variable-scale windows is improved.
Second, to counter local light-spot interference in expressway scenes, such as tree shadows and water stains, a candidate-region extraction scheme based on a lag-updated background model under the Adaboost framework is studied, and candidate-region extraction accuracy is improved by comparing Scanny texture information between the current picture and the background picture. The Haar-like feature statistical learning algorithm is also improved: atypical Haar-like features are culled, raising training speed without sacrificing training effect. Finally, since the local features of vehicles in the expressway scene show a point-block distribution, variable-scale Haar-like center features are designed and added to the original algorithm, so that center features participate in the Adaboost-based feature screening process, improving the ability to separate vehicle targets from non-vehicle targets.
The improved method for Haar-like feature extraction of vehicles in complex environments and parking discrimination under the Adaboost framework disclosed by the invention mainly comprises the following modules:
(1) Viewing-angle correction module: used to unify the viewing angles of images from different scenes and reduce intra-class differences of Haar-like features caused by differing viewing angles. Specifically:
First, extract the region of interest. Region-of-interest extraction is the preprocessing step of viewing-angle correction; its main purpose is to separate the lane region from the image, remove the influence of non-lane regions on vehicle lamp extraction, and obtain the original coordinates of the viewing-angle correction region. Based on analysis of video images in the expressway scene, this embodiment extracts the region of interest according to the following principles:
(1) Exclude non-road areas. Since vehicles can only travel in the road area, objects such as trees in non-road areas lie outside the vehicle feature extraction range.
(2) Exclude distant road areas. In the far part of the road, vehicles occlude one another heavily and target textures are too blurred, which strongly interferes with vehicle feature extraction; to guarantee extraction accuracy, distant road regions with heavy interference are kept out of the region of interest.
(3) Remove textual interference present in the image. Video images collected by some cameras contain text such as road information: road-name and time captions appear in the upper-left and lower-right corners, and white camera-information text also overlies the road on the left, all of which affect vehicle feature extraction and learning to some extent, so such text must be excluded when selecting the region of interest. In summary, a well-chosen image region of interest not only reduces interference with vehicle feature extraction and provides reference information for image viewing-angle correction, but also speeds up the algorithm.
Finally, judge the viewing angle from the region-of-interest information. From the four corner points of the quadrilateral region of interest, it can be determined whether the scene camera views the vehicles from the left or the right; through horizontal image flipping, the vehicle images captured by all scene cameras can be brought to the same-side viewing angle (all left-side or all right-side), as sketched below.
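A minimal sketch in Python with OpenCV follows; the corner ordering of the quadrilateral, the left/right heuristic, and the function name are illustrative assumptions, not the patent's reference implementation.

```python
import cv2

def correct_view_angle(frame, roi_corners, canonical_side="left"):
    """Flip the frame horizontally when the camera views the road from the
    non-canonical side, so all scenes share one viewing angle.

    roi_corners: four (x, y) points of the quadrilateral region of interest,
    ordered [top-left, top-right, bottom-right, bottom-left] (assumed order).
    """
    (tl, tr, br, bl) = roi_corners
    # Heuristic side test (an assumption): if the top edge of the lane
    # quadrilateral sits left of the bottom edge, the road recedes to the
    # left, i.e. the camera sees the vehicles' right side.
    top_mid = (tl[0] + tr[0]) / 2.0
    bottom_mid = (bl[0] + br[0]) / 2.0
    side = "left" if top_mid < bottom_mid else "right"
    if side != canonical_side:
        frame = cv2.flip(frame, 1)  # flipCode 1 = horizontal flip
    return frame
```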
(2) Illumination correction module: adjusts image brightness through real-time brightness detection, reducing intra-class differences of Haar-like features caused by illumination changes.
When the region of interest (ROI) is well lit, the gray distribution of the image concentrates in the highlights, the whole image washes out, and vehicle features become indistinct; when ROI illumination is insufficient, the gray distribution concentrates in the dark range and vehicle features blur. An automatic index for ROI illumination intensity can therefore be provided, and the image gray distribution adaptively adjusted with it, so that vehicle image features retain significant detail whether the scene is bright or dark.
The invention uses a brightness discrimination scheme based on a scene model. First, a continuous video image sequence over a period of time is acquired as the data source for building the scene model for that period. A background picture for the current period is then obtained by Gaussian background modeling with a fast learning rate. Gray statistics are computed on this picture, and the mean of the pixel brightness distribution within the region of interest gives the ROI brightness discrimination index.
After the background picture of the current scene is obtained, the pixel brightness α of the current scene can be computed. The image brightness index adopted here is the ratio of the average gray value of the region of interest in the current background picture to that of the picture under normal illumination: α > 1 means the current scene is brighter, α < 1 darker. K denotes the average gray value of the region of interest under normal illumination; it is an adjustable threshold, tuned appropriately according to actual detection. Specifically:
α = ( (1/n) · Σ_{i=1}^{n} x_i ) / K
where n is the number of pixel points in the region of interest and x_i is the gray value of a pixel point within it.
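A minimal sketch of this index, assuming a grayscale background image and a binary ROI mask as inputs; the default value of K is purely illustrative.

```python
import numpy as np

def brightness_index(background_gray, roi_mask, K=110.0):
    """alpha = (mean gray of ROI pixels in the background picture) / K.
    alpha > 1 means the scene is brighter than normal, alpha < 1 darker.
    K, the ROI mean gray under normal illumination, is an adjustable threshold."""
    roi_pixels = background_gray[roi_mask > 0]
    return float(roi_pixels.mean()) / K
```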
Correcting picture brightness in time from the brightness information of the current scene prevents local vehicle image features from becoming hard to extract under overly strong or weak illumination. The most common illumination correction method is Gamma correction: editing the image's gamma curve performs nonlinear tone editing, detecting dark and light portions of the image signal and increasing their ratio, thereby improving image contrast. Normally Gamma correction uses a fixed correction value, but in the expressway scene illumination can be dark, and if the correction cannot adapt to dynamic illumination changes, adverse effects may appear. The invention therefore proposes an adaptive illumination correction scheme based on region-of-interest (ROI) brightness discrimination. The specific correction is:
I'(x, y) = 255 · ( I(x, y) / 255 )^γ
where I(x, y) is the pixel value of the current picture at coordinate (x, y) and the Gamma transformation exponent is γ = k × α. That is, when the image is bright the Gamma exponent exceeds 1 and the transformation improves contrast in high-brightness regions; when the image is dark the Gamma exponent is below 1 and the transformation improves contrast in low-brightness regions. Concretely, suppose the image contains a pixel A with value 200; the correction of this pixel proceeds as follows:
1. Normalization: convert the pixel value to a real number between 0 and 1, computed as (i + 0.5)/256, which involves one division and one addition. For pixel A the normalized value is (200 + 0.5)/256 = 0.783203.
2. Pre-compensation: raise the normalized value to the power 1/γ, which involves one exponentiation. With γ = 2.2, 1/γ = 0.454545, and pre-compensating the normalized value of A gives 0.783203^0.454545 ≈ 0.894872.
3. Denormalization: transform the pre-compensated real value back into an integer between 0 and 255, computed as f = x · 256 - 0.5, which involves one multiplication and one subtraction. Continuing the example, substituting the pre-compensation result of A, 0.894872, gives a corrected pixel value of 228, and this 228 is the value finally sent to the display.
For an image with a resolution of 800 × 600, Gamma correction programmed directly as above would require 480,000 floating-point multiplication, division, and exponentiation operations; the efficiency is far too low for real-time operation.
A fast algorithm addresses this. If the pixel value range of the image is known, e.g. integers from 0 to 255, then any pixel value in the image can only be one of those 256 integers; and for a known gamma value, each integer between 0 and 255 yields a unique result of the "normalize, pre-compensate, denormalize" pipeline that also falls within the range 0 to 255.
As before, with γ = 2.2 and an original pixel value of 200 for A, the gamma-corrected pre-compensated value is 228. On this principle, the pre-compensation only needs to be executed once for each integer from 0 to 255, storing the corresponding values in a pre-built Gamma correction lookup table (LUT: Look-Up Table); this table can then Gamma-correct any image whose pixel values lie between 0 and 255.
In summary, computing the Gamma correction lookup table (LUT) offline enables fast mapping transformation and preserves the real-time performance of the algorithm. Overall, vehicle features are made robust to illumination changes and adaptive correction of image illumination is achieved. A sketch follows.
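The sketch below follows the three worked steps above (normalize, pre-compensate with exponent 1/γ, denormalize) to build the 256-entry LUT offline and apply it per frame; the gain k linking α to the Gamma exponent, and the exponent convention itself, are assumptions to be fixed by the deployment.

```python
import numpy as np

def build_gamma_lut(gamma):
    """Run 'normalize -> pre-compensate -> denormalize' once for each of the
    256 possible pixel values and store the results (the LUT in the text)."""
    i = np.arange(256, dtype=np.float64)
    normalized = (i + 0.5) / 256.0                 # step 1: normalize
    compensated = normalized ** (1.0 / gamma)      # step 2: pre-compensate
    denorm = compensated * 256.0 - 0.5             # step 3: denormalize
    return np.clip(np.floor(denorm), 0, 255).astype(np.uint8)

def gamma_correct(image, alpha, k=1.0):
    """Adaptive correction with gamma = k * alpha (k is an assumed gain):
    per-pixel work reduces to one vectorized table lookup."""
    lut = build_gamma_lut(k * alpha)
    return lut[image]   # image must be a uint8 array
```

With γ = 2.2, `build_gamma_lut(2.2)[200]` reproduces the worked value 228.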
(3) Potential target area candidate box extraction module: obtains foreground target candidate areas and target scale information through background differencing, cutting the unnecessary computation wasted while the sliding detection window traverses, improving real-time detection performance, and providing candidate-area information for the subsequent scale correction.
First the foreground target area is obtained by background differencing, then Scanny texture extraction is performed on the corresponding areas of the background picture and the current picture. In an expressway scene, road-surface texture does not change significantly with illumination, but once a vehicle stops in a road area, the texture of that area changes markedly. By this principle, if the texture features of the background picture and the real-time picture are similar within the target area, the Gaussian-extracted foreground target is noise caused by a change in light brightness, and the candidate area is judged a non-vehicle target. This markedly improves the extraction accuracy of vehicle candidate regions.
As shown in fig. 3, within a vehicle target area the Scanny texture differs significantly from that of the same area of the background, whereas for an area containing only a local light spot, the background picture and the picture under test show no obvious difference in the Scanny texture of the corresponding area. On this basis, local light-spot interference can be eliminated and candidate-region extraction accuracy improved; a sketch follows.
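A sketch of the texture check, with Canny edge extraction standing in for the Scanny extraction named in the text; the Canny thresholds and the similarity threshold are assumed values.

```python
import cv2
import numpy as np

def is_vehicle_candidate(current_gray, background_gray, box, edge_diff_thresh=0.05):
    """Reject candidate boxes whose texture matches the background: a stopped
    vehicle changes the road-area edge texture markedly, while a light spot
    or shadow leaves it nearly unchanged."""
    x, y, w, h = box
    cur = cv2.Canny(current_gray[y:y + h, x:x + w], 50, 150)
    bak = cv2.Canny(background_gray[y:y + h, x:x + w], 50, 150)
    # Fraction of pixels whose edge response differs between the two patches.
    diff_ratio = np.count_nonzero(cur != bak) / float(w * h)
    return diff_ratio > edge_diff_thresh
```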
(4) Scale correction module: traverses the candidate regions with a variable-scale detection window, loads the trained classifier file, remaps the Haar-like feature of each weak classifier within the detection window, and computes the corresponding Haar-like feature values.
In the existing vehicle Haar-like feature extraction process, the mapping from feature prototype to detection window uses a fixed-ratio size and a fixed aspect ratio, which cannot accommodate the wide variation of vehicle target scales actually found in expressway scenes.
After adaptive filtering of potential target areas of different scales, a detection-window feature extraction method is proposed to replace the original fixed scale and fixed aspect ratio: potential target areas of non-fixed scale and non-fixed aspect ratio serve as detection windows, and features are extracted after remapping. This reduces the difference in feature-value distribution between training samples and actual detection targets and effectively improves detection accuracy. The specific variable-scale feature mapping method is shown in fig. 4:
Let the training sample pictures be 24 × 24 pixels; let the potential target area detection window at a given traversal position be W × H; and let a Haar-like edge feature prototype be Feature(x1, y1, x2, y2, s=1, t=2), i.e., in the 24 × 24 sample picture the feature rectangle has upper-left corner (x1, y1) and lower-right corner (x2, y2) and consists of two congruent rectangles, one black and one white.
The algorithm mapping Feature(x1, y1, x2, y2, s=1, t=2) from the 24 × 24 picture to the W × H detection window, yielding Feature(x'1, y'1, x'2, y'2, s=1, t=2), is as follows:
(1) Compute the scale factors:
the X-direction scale factor α = W/24; the Y-direction scale factor β = H/24.
(2) Compute the upper-left mapped coordinate (x'1, y'1):
x'1 = [α · x1], y'1 = [β · y1],
where [ ] denotes rounding the enclosed element down.
(3) Compute the lower-right mapped coordinate (x'2, y'2): to ensure that the remapped rectangle still satisfies the (s, t) condition of the conditional rectangle, the coordinates are adjusted after scaling by the scale factors:
x'2 = x'1 + s · [α · (x2 - x1 + 1) / s] - 1,
y'2 = y'1 + t · [β · (y2 - y1 + 1) / t] - 1.
(4) Obtain the remapped feature of the detection window (the feature rectangle's upper-left and lower-right positions and the feature template type), Feature(x'1, y'1, x'2, y'2, s=1, t=2), and then compute the Haar-like feature value from the integral picture computed in advance for the potential target candidate area. A sketch of the remapping follows.
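A sketch of the remapping under the formulas reconstructed above; the snap-to-multiple rounding is an assumption consistent with the stated (s, t) condition.

```python
def remap_feature(x1, y1, x2, y2, s, t, W, H, scale=24):
    """Map a Haar-like feature rectangle from the scale x scale training sample
    to a W x H detection window, keeping the width a multiple of s and the
    height a multiple of t (the (s, t) condition)."""
    ax, ay = W / float(scale), H / float(scale)   # scale factors alpha, beta
    nx1, ny1 = int(ax * x1), int(ay * y1)         # upper-left corner, rounded down
    # Scale the width/height, then snap down to the nearest multiple of s / t
    # (the adjustment assumed in step (3) above).
    w = max(s, int(ax * (x2 - x1 + 1)) // s * s)
    h = max(t, int(ay * (y2 - y1 + 1)) // t * t)
    return nx1, ny1, nx1 + w - 1, ny1 + h - 1
```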
(5) Haar-like feature statistical learning discrimination module: discriminates on the per-weak-classifier Haar-like feature values obtained in the previous step and obtains the final decision by weighted voting.
Training yields the T best weak classifiers h1(x), ..., hT(x), which can be combined into a strong classifier as follows:
C(x) = 1 if Σ_{t=1..T} α_t · h_t(x) ≥ λ · Σ_{t=1..T} α_t, and C(x) = 0 otherwise,
where
α_t = log(1/β_t).
When the strong classifier processes an image under test, it in effect lets all the weak classifiers vote, weights and sums the votes according to each weak classifier's error rate, and compares the weighted vote sum with the average vote to obtain the final result. In practical applications, the weighted-voting threshold can be adjusted appropriately to achieve the best classification effect.
The specific flow for statistically learning the vehicle Haar-like features with the Adaboost algorithm is introduced below; the main flow of the basic algorithm is as follows.
First, the algorithm for training the strong classifier with Adaboost is described:
1) A series of training samples (x1, y1), ..., (xn, yn) collected from expressway-scene historical videos is given, where yi = 0 marks a negative sample (a non-vehicle sample picture) and yi = 1 a positive sample (a vehicle sample picture); n is the total number of training samples;
2) Initialize the weights w_{1,i} = D(i):
w_{1,i} = 1/(2m) for negative samples, and w_{1,i} = 1/(2l) for positive samples,
where m and l are the numbers of negative samples (non-vehicle targets) and positive samples (vehicle targets), respectively.
3) For t = 1, ..., T:
a. Normalize the weights so that the weights of all positive and negative samples sum to 1:
q_{t,i} = w_{t,i} / Σ_{j=1..n} w_{t,j}.
b. For each feature f, train a weak classifier h(x, f, p, θ); compute the error rate ε_f of the weak classifier for every feature, weighted by q_t:
ε_f = Σ_i q_i |h(x_i, f, p, θ) - y_i|.
c. Select the best weak classifier h_t(x), i.e. the one with the minimum weighted error rate ε_t:
ε_t = min_{f,p,θ} Σ_i q_i |h(x_i, f, p, θ) - y_i| = Σ_i q_i |h(x_i, f_t, p_t, θ_t) - y_i|,
h_t(x) = h(x, f_t, p_t, θ_t).
d. Update the weights according to the optimal weak classifier:
w_{t+1,i} = w_{t,i} · β_t^(1 - e_i),
where e_i = 0 if the i-th sample is classified correctly and e_i = 1 if it is misclassified, and
β_t = ε_t / (1 - ε_t).
4) The final strong classifier is:
C(x) = 1 if Σ_{t=1..T} α_t · h_t(x) ≥ λ · Σ_{t=1..T} α_t, and C(x) = 0 otherwise,
where:
α_t = log(1/β_t),
and λ lies between 0 and 1, generally taking the value 0.5; in practical application the parameter can be adjusted to meet actual requirements. Increasing it lowers the strong classifier's false-detection rate but increases missed detections; decreasing it raises the false-detection rate but reduces missed detections. A compact sketch of steps 1) to 4) follows.
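For concreteness, a compact sketch of steps 1) to 4); the precomputed feature-value matrix and the exhaustive threshold search are simplifying assumptions of this sketch, not the patent's optimized implementation.

```python
import numpy as np

def train_adaboost(feature_values, labels, T, lam=0.5):
    """feature_values: (n_features, n_samples) matrix of precomputed Haar-like
    feature values; labels: 0 = non-vehicle, 1 = vehicle.
    Returns the chosen weak classifiers and a strong-classifier closure."""
    m = int(np.sum(labels == 0))          # number of negative samples
    l = int(np.sum(labels == 1))          # number of positive samples
    w = np.where(labels == 0, 1.0 / (2 * m), 1.0 / (2 * l))  # step 2) init
    weak, alphas = [], []
    for _ in range(T):                    # step 3)
        q = w / w.sum()                   # a. normalize weights
        best = None
        for f, vals in enumerate(feature_values):   # b. weak clf per feature
            for p in (+1, -1):
                for theta in np.unique(vals):
                    pred = (p * vals > p * theta).astype(int)
                    err = float(np.sum(q * np.abs(pred - labels)))
                    if best is None or err < best[0]:
                        best = (err, f, p, theta, pred)
        eps, f, p, theta, pred = best     # c. best weak classifier h_t
        beta = max(eps, 1e-10) / (1.0 - eps)
        e = np.abs(pred - labels)         # e_i: 0 correct, 1 misclassified
        w = w * beta ** (1 - e)           # d. weight update
        weak.append((f, p, theta))
        alphas.append(np.log(1.0 / beta))
    def strong(x):                        # step 4): weighted vote vs. lam
        votes = sum(a * int(p * x[f] > p * th)
                    for a, (f, p, th) in zip(alphas, weak))
        return int(votes >= lam * sum(alphas))
    return weak, strong
```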
Next, the weak classifier model is introduced.
A complete vehicle-target weak classifier h(x, f, p, θ) is determined by the following three elements:
1) the feature f: under the current positive- and negative-sample weights, the feature with the minimum weighted classification error rate, obtained by traversing all sample pictures and all Haar-like feature types;
2) the threshold θ: the feature-value boundary achieving the best classification, obtained by traversing and sorting the values of feature f over all positive and negative sample pictures;
3) the inequality direction p: with p positive, a target whose feature f evaluates above the threshold θ is judged a positive sample; otherwise p takes the negative sign.
In summary, the weak classifier is constructed as:
h(x, f, p, θ) = 1 if p · f(x) > p · θ, and 0 otherwise.
The weak classifier is trained as follows:
training a weak classifier based on Feature(x1, y1, x2, y2, s, t) means determining, under the current weight distribution, the threshold θ for Feature(x1, y1, x2, y2, s, t) such that the resulting weak classifier has the lowest weighted classification error over all training samples. Selecting the optimal weak classifier means choosing, among all weak classifiers, the one with the lowest classification error over all positive and negative training samples, i.e., selecting its Feature(x1, y1, x2, y2, s, t) as one of the reference features for positive/negative sample detection.
For each feature Feature(xi, yi, xj, yj, s, t), compute the feature values of all positive and negative training samples and sort them. By scanning the sorted feature values, an optimal threshold can be determined for the feature, thereby training a weak classifier. Specifically, for each element in the sorted table, compute the following four values:
1) the sum T+ of the weights of all vehicle samples;
2) the sum T- of the weights of all non-vehicle samples;
3) the sum S+ of the weights of the vehicle samples preceding this element;
4) the sum S- of the weights of the non-vehicle samples preceding this element.
When any number lying between the feature value f(x_i) of the current element and the feature value f(x_{i-1}) immediately preceding it is used as the threshold, the resulting weak classifier separates the samples at the current element: the weak classifier corresponding to this threshold classifies all elements before the current element as vehicles (or non-vehicles) and all elements from the current element on as non-vehicles (or vehicles). The classification error obtained under this threshold is:
e = min( S+ + (T- - S-), S- + (T+ - S+) )
By scanning the sorted table from beginning to end, the threshold minimizing the classification error (the optimal threshold) can be selected for the weak classifier, i.e., an optimal weak classifier is selected. Because the optimal threshold can only lie at a junction of adjacent positive and negative sample elements in the sorted table, the algorithm implementation does not evaluate the classification error at every element, but only at element nodes where positive and negative samples are adjacent in the sorted list, and then screens among them, improving the efficiency of finding the weak classifier's optimal threshold θ. As shown in fig. 9:
In actual detection, the feature value of the extracted target region is computed. When e = S+ + (T- - S-), a feature value greater than the optimal threshold is judged a vehicle target, and a feature value less than it is judged a non-vehicle target. When e = S- + (T+ - S+), a feature value less than the optimal threshold is judged a vehicle target, and a feature value greater than it is judged a non-vehicle target. A sketch of this threshold search follows.
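A sketch of the sorted-scan threshold search under the error rule above, evaluating only junctions where the sample label changes, as the text describes; taking the midpoint of the two adjacent feature values as θ is an assumption.

```python
import numpy as np

def best_threshold(values, labels, weights):
    """One pass over the sorted feature values; returns (theta, polarity, error)
    minimizing e = min(S+ + (T- - S-), S- + (T+ - S+)).
    polarity +1 means 'feature value > theta => vehicle'; -1 the opposite."""
    order = np.argsort(values)
    v, y, w = values[order], labels[order], weights[order]
    T_pos = float(w[y == 1].sum())        # total weight of vehicle samples
    T_neg = float(w[y == 0].sum())        # total weight of non-vehicle samples
    S_pos = S_neg = 0.0                   # weights strictly before element i
    best = (None, 0, np.inf)
    for i in range(1, len(v)):
        if y[i - 1] == 1:
            S_pos += w[i - 1]
        else:
            S_neg += w[i - 1]
        if y[i] == y[i - 1]:              # only positive/negative junctions
            continue
        theta = (v[i - 1] + v[i]) / 2.0   # any number between the two works
        e1 = S_pos + (T_neg - S_neg)      # rule: 'value > theta => vehicle'
        e2 = S_neg + (T_pos - S_pos)      # rule: 'value < theta => vehicle'
        if e1 < best[2]:
            best = (theta, +1, e1)
        if e2 < best[2]:
            best = (theta, -1, e2)
    return best
```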
Third, the strong classifier is constructed as follows:
after T rounds of iteration, the T optimal weak classifiers h1(x), ..., hT(x) are obtained and can be combined into a strong classifier:
C(x) = 1 if Σ_{t=1..T} α_t · h_t(x) ≥ λ · Σ_{t=1..T} α_t, and C(x) = 0 otherwise,
where:
α_t = log(1/β_t).
When the strong classifier processes an image under test, it in effect lets all the weak classifiers vote, weights and sums the votes according to each weak classifier's error rate, and compares the weighted vote sum with the average vote to obtain the final result. In practical applications, the weighted-voting threshold can be adjusted appropriately to achieve the best classification effect.
The invention pairs an improved Adaboost feature learning system with the Haar-like features, specifically:
(1) Early-stage culling of some atypical Haar-like features
For rectangular features occupying only four pixels, the feature values computed from them are too random, and experimental verification finds no suitable threshold; this step alone removes 3,243 features. Moreover, because the learned vehicle features all concentrate in the middle of the window, edge rectangular features contribute little to improving the classification effect, so they can be appropriately pruned, removing a further 652 features. As shown in fig. 5.
Therefore, for Adaboost ensemble learning of Haar-like features in the expressway scene, the computation of tiny Haar-like features and edge Haar-like features can be eliminated, improving training speed without sacrificing training effect. In this embodiment of the invention, the tiny Haar-like feature is first defined as Small_Feature(x1, y1, x2, y2, s, t), whose five feature-type parameters must satisfy the following six conditions simultaneously:
1. (s, t) ∈ {(1,2), (2,1), (1,3), (3,1), (2,2)}
2. x1 ∈ {1, 2, ..., SCALE-s, SCALE-s+1}
3. y1 ∈ {1, 2, ..., SCALE-t, SCALE-t+1}
4. x2 ∈ X = {x1+s-1, x1+2s-1, ..., x1+(p-1)s-1, x1+ps-1}
5. y2 ∈ Y = {y1+t-1, y1+2t-1, ..., y1+(q-1)t-1, y1+qt-1}
6. (x2 - x1 + 1) · (y2 - y1 + 1) ≤ 4
where:
p = [(SCALE - x1 + 1) / s],
q = [(SCALE - y1 + 1) / t],
with [ ] denoting rounding down.
Second, the edge Haar-like feature is defined as Margin_Feature(x1, y1, x2, y2, s, t), whose five feature-type parameters must satisfy the following six conditions simultaneously:
1. (s, t) ∈ {(1,2), (2,1), (1,3), (3,1), (2,2)}
2. x1 ∈ {1, 2, ..., SCALE-s, SCALE-s+1}
3. y1 ∈ {1, 2, ..., SCALE-t, SCALE-t+1}
4. x2 ∈ X = {x1+s-1, x1+2s-1, ..., x1+(p-1)s-1, x1+ps-1}
5. y2 ∈ Y = {y1+t-1, y1+2t-1, ..., y1+(q-1)t-1, y1+qt-1}
6. (x1 = 1 ∩ x2 = 1) ∪ (y1 = 1 ∩ y2 = 1) ∪ (x1 = SCALE ∩ x2 = SCALE) ∪ (y1 = SCALE ∩ y2 = SCALE)
where p and q are defined as above. The two culling tests are sketched below.
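The two culling predicates can be sketched as follows; SCALE is the training-window side (24 in this document), and conditions 1 to 5 are assumed to be guaranteed by the feature generator.

```python
def is_tiny_feature(x1, y1, x2, y2):
    """Condition 6 of Small_Feature: the rectangle covers at most 4 pixels."""
    return (x2 - x1 + 1) * (y2 - y1 + 1) <= 4

def is_margin_feature(x1, y1, x2, y2, SCALE=24):
    """Condition 6 of Margin_Feature: the rectangle degenerates onto one
    border of the SCALE x SCALE training window."""
    return ((x1 == 1 and x2 == 1) or (y1 == 1 and y2 == 1)
            or (x1 == SCALE and x2 == SCALE) or (y1 == SCALE and y2 == SCALE))

def keep_feature(x1, y1, x2, y2, SCALE=24):
    """Pre-judgement applied before traversing features in weak-classifier
    training: atypical (tiny or margin) features are skipped entirely."""
    return not (is_tiny_feature(x1, y1, x2, y2)
                or is_margin_feature(x1, y1, x2, y2, SCALE))
```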
(2) Haar-like feature templates suited to describing vehicle characteristics in the expressway scene
First, the magnitude of a Haar-like feature value reflects the gray-level variation of the image. The five Haar-like feature templates in common use, with their corresponding type descriptions, are shown in fig. 6; these features characterize the edge texture of a vehicle well in both the vertical and horizontal directions.
In the expressway scene, however, vehicle targets also show distinct central features (such as the windows on the left and right sides of the vehicle) that separate vehicle from non-vehicle targets well. The design therefore adds a center feature template to the original Haar-like template library. The center template is defined by taking, within rectangle D, the sum of the pixel values of the white area minus the sum of the pixel values of the black area as the value of the center feature, as shown in fig. 7:
Feature(x1, y1, x2, y2, s, t) = intgePicture(1) + intgePicture(4) - intgePicture(2) - intgePicture(3) + ( intgePicture(5) + intgePicture(8) - intgePicture(6) - intgePicture(7) ) · 2
where intgePicture(x) denotes the value, at point x, of the integral image computed from the original potential-target-area picture. A sketch of this computation follows.
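A sketch of the center-feature computation with an integral image. The corner numbering of the printed formula is not fully recoverable, so the code implements the stated definition directly (white-area sum minus black-area sum); the outer/inner rectangle parametrization is an assumption.

```python
import numpy as np

def integral_image(gray):
    """ii[y, x] = sum of gray[:y, :x], zero-padded so border lookups work."""
    ii = np.zeros((gray.shape[0] + 1, gray.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(gray.astype(np.int64), axis=0), axis=1)
    return ii

def rect_sum(ii, x1, y1, x2, y2):
    """Pixel sum over the inclusive rectangle, four integral-image lookups."""
    return ii[y2 + 1, x2 + 1] - ii[y1, x2 + 1] - ii[y2 + 1, x1] + ii[y1, x1]

def center_feature(ii, outer, inner):
    """White minus black for the center template: white ring = outer rectangle
    minus inner block, black = inner block, so value = sum(outer) - 2*sum(inner).
    outer/inner are (x1, y1, x2, y2) tuples; the sign of the doubled inner term
    in the printed formula depends on the corner-point convention."""
    return rect_sum(ii, *outer) - 2 * rect_sum(ii, *inner)
```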
As shown in fig. 8, the center features of this design are subdivided into the several types shown in the figure.
This design scheme fully considers the viewing-angle, illumination, and scale changes present in the changeable expressway scene, and improves the corresponding Haar-like feature extraction process for each possible influencing factor, improving the robustness and accuracy of the detection system as a whole.
Starting from the actual expressway environment and targeting that specific scene, and after analyzing the problems and difficulties of video-based abnormal-parking detection on expressways, the method focuses on Haar-like feature extraction of vehicles under complex expressway conditions and on the shortcomings of the Adaboost cascade-classifier parking discrimination model when applied to the expressway scene, improving the Haar-like + Adaboost expressway parking event detection algorithm and ultimately raising its performance. The work mainly includes the following:
(1) In vehicle Haar-like feature extraction under complex conditions: for viewing-angle changes, the potential target area of the image is flipped through automatic recognition of the region-of-interest viewing angle and a two-dimensional image mapping transformation, reducing the pronounced appearance changes a vehicle exhibits under different viewing angles; for illumination changes, real-time brightness detection is performed on the background picture obtained by single-Gaussian background modeling, after which a Gamma transformation keeps the vehicle brightness fluctuating within a small range; for scale changes, the process of mapping the original Haar-like feature prototypes into variable-scale windows is improved. Experimental results show that this series of improvements effectively reduces intra-class differences of vehicle Haar-like features in changeable scenes and improves the generalization capability and recognition accuracy of the detection algorithm.
(2) In improving the cascade-classifier discrimination model under the Adaboost framework: first, to counter local light-spot interference in expressway scenes such as tree shadows and water stains, a candidate-region extraction scheme based on a lag-updated background model under the Adaboost framework is studied, and candidate-region extraction accuracy is improved by comparing Scanny texture information between the current picture and the background picture. The Haar-like feature statistical learning algorithm is also improved: atypical Haar-like features are culled, raising training speed without sacrificing training effect. Finally, since the local features of vehicles in the expressway scene show a point-block distribution, variable-scale Haar-like center features are designed and added to the original algorithm, so that center features participate in the Adaboost-based feature screening process, improving the ability to separate vehicle targets from non-vehicle targets.
Verification shows that the system provided by the invention improves detection accuracy while ensuring real-time event detection.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, minicomputer, mainframe, workstation, networked or distributed computing environment, a separate or integrated computer platform, or a platform in communication with a charged-particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, an optically readable and/or writable storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer; when the storage medium or device is read by the computer, it configures and operates the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other various types of non-transitory computer-readable storage media when such media contain instructions or programs that implement, in conjunction with a microprocessor or other data processor, the steps described above. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
Finally, although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. An improved complex environment vehicle feature extraction and parking discrimination method, characterized by comprising the following steps:
Step S1: perform viewing-angle correction on the video image sequence;
Step S11: extract the region of interest: separate the lane area from the image, remove the influence of non-lane areas on vehicle lamp extraction, and obtain the original coordinates of the viewing-angle correction area;
Step S12: judge the area viewing angle: from the four corner points of the quadrilateral region of interest, judge whether the camera's shooting angle toward the vehicles lies on the left or the right;
Step S13: flip the image: flip the images horizontally so that the vehicle images captured by the cameras all share the same-side viewing angle;
Step S2: perform illumination correction;
Step S21: model the single-Gaussian background;
Step S22: judge the ROI brightness: the image brightness index is the ratio of the average gray value of the region of interest in the current background picture to that of the picture under normal illumination, computed as:
α = ( (1/n) · Σ_{i=1}^{n} x_i ) / K
where α denotes the pixel brightness of the current scene: α > 1 means the current scene is brighter, α < 1 darker; K denotes the average gray value of the region of interest under normal illumination, an adjustable threshold that can be tuned appropriately according to actual detection; n is the number of pixel points in the region of interest, and x_i is the gray value of a pixel point within it;
Step S23: perform the Gamma transformation: based on Gamma correction, an adaptive illumination correction scheme using region-of-interest (ROI) brightness discrimination is applied:
I'(x, y) = 255 · ( I(x, y) / 255 )^γ
where I(x, y) is the pixel value of the current picture at coordinate (x, y) and the Gamma transformation exponent is γ = k × α; that is, when the image is bright the Gamma exponent exceeds 1 and the transformation improves contrast in high-brightness regions, and when the image is dark the Gamma exponent is below 1 and the transformation improves contrast in low-brightness regions;
Step S3: extract candidate boxes for potential target areas;
Step S4: perform scale correction;
Step S5: perform event discrimination with the Haar-like statistical learning classifier.
2. The improved complex environment vehicle feature extraction and parking discrimination method of claim 1, wherein: the step S3 includes the following substeps:
step S31: modeling by adopting a mixed Gaussian background with low background update rate;
step S32: judging a standing object;
step S33: dilating and eroding the foreground binary image;
step S34: local light spot elimination based on Scanny extraction;
step S35: and outputting the potential target candidate region.
3. The improved complex environment vehicle feature extraction and parking discrimination method of claim 1, wherein: the step S4 includes the following substeps:
Step S41: load the Haar-like feature weak classifiers;
Step S42: traverse the potential target area with a variable-scale window;
Step S43: remap the detection-window features;
feature (x) 1 ,y 1 ,x 2 ,y 2 S =1,t = 2) is mapped from the picture of the pixel 24 × 24 to the detection window of W × H size, and becomes Feature (x' 1 ,y’ 1 ,x’ 2 ,y’ 2 S =1, t = 2), the algorithm is as follows:
(1) calculating a scale factor:
the X-direction scale factor α = W/24; y-direction scale factor β = H/24; taking the size of a training sample picture as 24 × 24 pixels, the size of a potential target area detection window traversing to a certain position as W × H, and a certain Harr-like edge Feature prototype as Feature (x) 1 ,y 1 ,x 2 ,y 2 S =1, t = 2), i.e. the position of the feature rectangle in the sample picture of 24 × 24 size is: the coordinate of the upper left corner is (x) 1 ,y 1 ) The coordinate of the lower right corner is (x) 2 ,y 2 ) And is composed of two congruent rectangles of black and white;
(2) calculate the upper-left mapping coordinate (x'1, y'1):

$$x'_1 = \lfloor \alpha \cdot x_1 \rfloor, \qquad y'_1 = \lfloor \beta \cdot y_1 \rfloor,$$

wherein ⌊·⌋ denotes rounding down;
(3) calculate the lower-right mapping coordinate (x'2, y'2):

$$x'_2 = x'_1 + s \cdot \left\lfloor \frac{\alpha\,(x_2 - x_1)}{s} \right\rfloor, \qquad y'_2 = y'_1 + t \cdot \left\lfloor \frac{\beta\,(y_2 - y_1)}{t} \right\rfloor,$$

so that the mapped width remains a multiple of s and the mapped height remains a multiple of t, preserving the congruence of the black and white sub-rectangles;
(4) obtain the mapped feature information Feature(x'1, y'1, x'2, y'2, s=1, t=2), and then calculate the Harr-like feature value from the integral image of the potential target candidate region computed in advance.
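A minimal sketch of the step S43 remapping, in Python; the lower-right rounding that keeps widths and heights divisible by s and t follows the reconstruction above and should be treated as an assumption:

```python
import math

def remap_feature(x1, y1, x2, y2, s, t, W, H, train_size=24):
    """Map a Harr-like feature rectangle from the train_size x train_size
    sample picture onto a W x H detection window."""
    a = W / train_size  # X-direction scale factor alpha
    b = H / train_size  # Y-direction scale factor beta
    nx1 = math.floor(a * x1)  # upper-left corner: plain floor
    ny1 = math.floor(b * y1)
    # Lower-right corner: floor the scaled width/height to multiples of
    # s and t so the black/white sub-rectangles stay congruent.
    nx2 = nx1 + s * math.floor(a * (x2 - x1) / s)
    ny2 = ny1 + t * math.floor(b * (y2 - y1) / t)
    return nx1, ny1, nx2, ny2

# Example: an edge feature (s=1, t=2) at (4, 4)-(12, 12) in the 24x24
# sample, remapped onto a 48x36 detection window.
print(remap_feature(4, 4, 12, 12, s=1, t=2, W=48, H=36))  # (8, 6, 24, 18)
```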
4. The improved complex environment vehicle feature extraction and parking discrimination method of claim 1, wherein: the step S5 includes the following substeps:
step S51: outputting the Harr-like feature value of each weak classifier;
step S52: weighted voting by the weak classifiers;
step S53: and outputting a detection result.
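Steps S51–S53 follow an AdaBoost-style weighted-vote decision; below is a minimal sketch under that assumption, with all names (thresholds, polarities, alphas) hypothetical:

```python
def strong_classify(feature_values, thresholds, polarities, alphas):
    """Combine weak-classifier votes: each weak classifier fires (votes 1)
    when its Harr-like feature value falls on its positive side of the
    threshold, and votes are weighted by the alphas learned in training."""
    vote = sum(a * (1 if p * f < p * th else 0)
               for f, th, p, a in zip(feature_values, thresholds, polarities, alphas))
    # Standard AdaBoost decision rule: accept when the weighted vote
    # reaches half of the total weight.
    return vote >= 0.5 * sum(alphas)
```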
5. The improved complex environment vehicle feature extraction and parking discrimination method of claim 1, wherein: the method is used in conjunction with an improved Adaboost feature learning system, comprising the following steps:
removing atypical features at an early stage: in the training process of each weak classifier, before traversing and screening all Harr-like features, first pre-judge whether a feature is atypical, and if so, perform no further computation on features of that type;
for the local point-block characteristics of vehicles in an expressway scene, variable-scale central contour features are designed to participate in the Adaboost feature learning process; the item with the optimal classification capability among the central contour features is obtained by the algorithm, improving the classification effect.
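A minimal sketch of the modified training round described in claim 5, in Python; is_atypical and evaluate are hypothetical placeholders, since the atypicality test and the weighted threshold search are not spelled out here:

```python
def train_strong_classifier(features, is_atypical, evaluate, rounds):
    """AdaBoost round with early rejection: features pre-judged as
    atypical are skipped before any threshold search is performed."""
    chosen = []
    for _ in range(rounds):
        best_err, best_clf = float("inf"), None
        for f in features:
            if is_atypical(f):      # early rejection: no evaluation cost spent
                continue
            err, clf = evaluate(f)  # weighted-error threshold search
            if err < best_err:
                best_err, best_clf = err, clf
        chosen.append(best_clf)
        # (sample re-weighting after each round omitted for brevity)
    return chosen
```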
6. An improved complex-environment vehicle Harr-like feature extraction and Adaboost-framework parking discrimination system, characterized in that the system comprises:
a viewing-angle correction module: unifies the viewing angles of images from different scenes, reducing the intra-class differences of Harr-like features caused by differing viewing angles;
an illumination correction module: adjusts image brightness according to the detected real-time brightness, reducing the intra-class differences of Harr-like features caused by illumination changes;
a potential-target-area candidate-box extraction module: obtains foreground target candidate areas and target scale information through background difference, reducing the computation wasted as the sliding detection window traverses the image, improving real-time detection performance, and providing candidate-area information for the subsequent scale correction;
a scale correction module: traverses the candidate region with a variable-scale detection window, loads the trained classifier file, remaps the Harr-like features of each weak classifier within the detection window, and calculates the corresponding Harr-like feature values;
feature remapping: the Feature(x1, y1, x2, y2, s=1, t=2) prototype is mapped from the 24×24-pixel training picture onto a detection window of size W×H, becoming Feature(x'1, y'1, x'2, y'2, s=1, t=2); the algorithm is as follows:
(1) calculate the scale factors:
the X-direction scale factor is α = W/24 and the Y-direction scale factor is β = H/24, taking the training sample picture as 24×24 pixels, the detection window traversing a certain position of the potential target area as W×H, and a certain Harr-like edge feature prototype as Feature(x1, y1, x2, y2, s=1, t=2); that is, the position of the feature rectangle in the 24×24 sample picture is: upper-left corner (x1, y1), lower-right corner (x2, y2), the feature being composed of two congruent rectangles, one black and one white;
(2) calculate the upper-left mapping coordinate (x'1, y'1):

$$x'_1 = \lfloor \alpha \cdot x_1 \rfloor, \qquad y'_1 = \lfloor \beta \cdot y_1 \rfloor,$$

wherein ⌊·⌋ denotes rounding down;
(3) calculate the lower-right mapping coordinate (x'2, y'2):

$$x'_2 = x'_1 + s \cdot \left\lfloor \frac{\alpha\,(x_2 - x_1)}{s} \right\rfloor, \qquad y'_2 = y'_1 + t \cdot \left\lfloor \frac{\beta\,(y_2 - y_1)}{t} \right\rfloor,$$

so that the mapped width remains a multiple of s and the mapped height remains a multiple of t, preserving the congruence of the black and white sub-rectangles;
(4) obtain the mapped feature information Feature(x'1, y'1, x'2, y'2, s=1, t=2), and then calculate the Harr-like feature values from the integral images of the potential target candidate areas computed in advance;
a Harr-like feature statistical-learning discrimination module: judges the Harr-like feature value obtained in the previous step for each weak classifier and obtains the final judgment result through weighted voting.
CN201910464427.XA 2019-05-30 2019-05-30 Improved complex environment vehicle feature extraction and parking discrimination method Active CN110188693B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910464427.XA CN110188693B (en) 2019-05-30 2019-05-30 Improved complex environment vehicle feature extraction and parking discrimination method

Publications (2)

Publication Number Publication Date
CN110188693A CN110188693A (en) 2019-08-30
CN110188693B (en) 2023-04-07

Family

ID=67719068

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910464427.XA Active CN110188693B (en) 2019-05-30 2019-05-30 Improved complex environment vehicle feature extraction and parking discrimination method

Country Status (1)

Country Link
CN (1) CN110188693B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112070172B (en) * 2020-09-11 2023-12-22 湖南人文科技学院 Abnormal target detection method and device based on targeting analysis and computer equipment
CN113415232B (en) * 2021-07-07 2022-02-01 深圳前海汉视科技有限公司 Vehicle lamp system for obstacle identification

Citations (7)

Publication number Priority date Publication date Assignee Title
CN102637257A (en) * 2012-03-22 2012-08-15 北京尚易德科技有限公司 Video-based detection and recognition system and method of vehicles
WO2015147764A1 (en) * 2014-03-28 2015-10-01 Kisa Mustafa A method for vehicle recognition, measurement of relative speed and distance with a single camera
CN104992160A (en) * 2015-07-16 2015-10-21 山东大学 Night preceding vehicle detection method for heavy-duty truck
CN105373794A (en) * 2015-12-14 2016-03-02 河北工业大学 Vehicle license plate recognition method
CN105608431A (en) * 2015-12-22 2016-05-25 杭州中威电子股份有限公司 Vehicle number and traffic flow speed based highway congestion detection method
CN107316036A (en) * 2017-06-09 2017-11-03 广州大学 A kind of insect recognition methods based on cascade classifier
CN109299672A (en) * 2018-09-05 2019-02-01 重庆大学 The Parking detection system and method for automatic adjusument threshold value and algorithm structure

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9269012B2 (en) * 2013-08-22 2016-02-23 Amazon Technologies, Inc. Multi-tracker object tracking

Non-Patent Citations (3)

Title
Indoor people counting method based on cascaded Adaboost and background difference; Ye Feng et al.; Journal of Fujian Normal University (Natural Science Edition) (No. 01); 12-18 *
Multi-feature multi-threshold cascaded AdaBoost pedestrian detector; Cui Hua et al.; Journal of Traffic and Transportation Engineering; 2015-04-15 (No. 02); 113-121 *
Application of computer vision in outdoor mobile robots; Hu Bin et al.; Acta Automatica Sinica; 2006-09-22 (No. 05); 138-148 *

Also Published As

Publication number Publication date
CN110188693A (en) 2019-08-30

Similar Documents

Publication Publication Date Title
US10088600B2 (en) Weather recognition method and device based on image information detection
US10565479B1 (en) Identifying and excluding blurred areas of images of stained tissue to improve cancer scoring
CN107545239B (en) Fake plate detection method based on license plate recognition and vehicle characteristic matching
CN108615226B (en) Image defogging method based on generation type countermeasure network
Parker et al. An approach to license plate recognition
US11775875B2 (en) Method for recognizing fog concentration of hazy image
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
US7747079B2 (en) Method and system for learning spatio-spectral features in an image
CN109255326B (en) Traffic scene smoke intelligent detection method based on multi-dimensional information feature fusion
CN110717896A (en) Plate strip steel surface defect detection method based on saliency label information propagation model
CN113052170B (en) Small target license plate recognition method under unconstrained scene
CN110689003A (en) Low-illumination imaging license plate recognition method and system, computer equipment and storage medium
CN110188693B (en) Improved complex environment vehicle feature extraction and parking discrimination method
CN111489330A (en) Weak and small target detection method based on multi-source information fusion
CN116664565A (en) Hidden crack detection method and system for photovoltaic solar cell
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
Tangsakul et al. Single image haze removal using deep cellular automata learning
CN110060221B (en) Bridge vehicle detection method based on unmanned aerial vehicle aerial image
CN107038690A (en) A kind of motion shadow removal method based on multi-feature fusion
CN113743421A (en) Method for segmenting and quantitatively analyzing anthocyanin developing area of rice leaf
TWI498830B (en) A method and system for license plate recognition under non-uniform illumination
Bala et al. Image simulation for automatic license plate recognition
CN110633705A (en) Low-illumination imaging license plate recognition method and device
CN111402185A (en) Image detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant