CN111275698A - Visibility detection method for fog road based on unimodal deviation maximum entropy threshold segmentation

Info

Publication number: CN111275698A (application number CN202010087143.6A); granted as CN111275698B
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, road, gray value, segmentation
Legal status: Active (granted)
Inventors: 黄鹤, 茹锋, 王会峰, 程慈航, 胡凯益, 陈永安, 郭璐, 许哲, 黄莺, 惠晓滨
Original assignee: Changan University; current assignee: Xi'an Huizhi Information Technology Co., Ltd.
Application filed by Changan University; priority to CN202010087143.6A


Classifications

    • G06T7/0002 — Image analysis: inspection of images, e.g. flaw detection
    • G06T7/11 — Segmentation; edge detection: region-based segmentation
    • G06T7/136 — Segmentation; edge detection involving thresholding
    • G06T2207/10004 — Image acquisition modality: still image; photographic image
    • G06T2207/30192 — Subject of image: weather; meteorology
    • G06T2207/30256 — Subject of image: lane; road marking
    • Y02A90/10 — Information and communication technologies supporting adaptation to climate change, e.g. weather forecasting


Abstract

The invention discloses a method for detecting the visibility of a road in foggy weather based on unimodal-shift maximum entropy threshold segmentation. The method comprises: acquiring a road traffic image in foggy weather; performing gray-value processing on the acquired traffic image to obtain a gray-value image; computing the gray value at which the gray-value distribution is densest; performing a leftward shift operation from that value while calculating the entropy corresponding to each gray value, so as to obtain a threshold meeting the road segmentation requirement; performing maximum entropy road segmentation with the obtained threshold; extracting a connected band from the segmented image; finding the gray-value mutation points within the connected band and taking their median to obtain the sky-ground line; and deriving the v_i value from the sky-ground line, from which the visibility of the road in fog is solved. The method segments the gray-scale image of the original image well, and at the same time overcomes the slow computation and low precision of methods based on region growing and inflection-point detection.

Description

Visibility detection method for fog road based on unimodal deviation maximum entropy threshold segmentation
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a visibility detection method for a fog road based on unimodal deviation maximum entropy threshold segmentation.
Background
Visibility drops markedly in foggy conditions, so the human eye easily overestimates scene distance, and excessive driving speed then causes traffic accidents. Real-time measurement of visibility information allows a driver to keep the vehicle speed within an appropriate range, and therefore plays an important role in air and ground traffic safety under adverse weather conditions. At present, detection methods based on multiple images of the same scene and detection methods based on a single image have attracted wide attention from experts at home and abroad. Because multi-image detection methods are not real-time, fog detection based on a single image has become a research hotspot. One approach defines a visible-pixel model based on the contrast of four-neighborhoods in the video image and the brightness characteristics of road-surface pixels, and uses camera calibration to calculate the farthest distance from the visible pixels to the camera, thereby measuring visibility without manual marking; its drawback is that features of road-surface marks (such as lane lines and road signs) must be extracted. These features cannot be extracted effectively in complex scenes or under occlusion, which degrades the efficiency and accuracy of the visibility measurement. To solve these problems, a novel visibility detection method combining a foggy-day imaging model with an atmospheric contrast attenuation model has been proposed.
Firstly, the second derivative of the foggy-day imaging model is solved to obtain the relation between the position of the inflection point of the image gray value and the atmospheric extinction coefficient; then, a region-growing algorithm is adopted to locate the inflection point of the gray value in the video image and obtain the value of the extinction coefficient; finally, the extinction coefficient is substituted into the atmospheric contrast attenuation model to measure visibility. This method avoids dependence on road marks and enhances robustness; however, its region-growing algorithm is sensitive to noise and has low calculation accuracy, and continuous iteration is needed to detect the inflection point, so the time complexity is high and the real-time requirement cannot be met.
Disclosure of Invention
The invention provides a method for detecting visibility of a foggy day road based on unimodal deviation maximum entropy threshold segmentation, which overcomes the defects in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
the method for detecting the visibility of the fog road based on the unimodal deviation maximum entropy threshold segmentation comprises the following steps:
Step 1: acquiring a road traffic image in haze weather;
Step 2: performing gray-value processing on the traffic image obtained in step 1 to obtain a gray-value image;
Step 3: calculating the gray-value image of step 2 to obtain the gray value of the region where the gray-value distribution is densest;
Step 4: performing a shift operation from the gray value calculated in step 3, namely taking that gray value as the origin, shifting left by one unit gray value at a time, calculating the entropy corresponding to each gray value, and obtaining a threshold meeting the road segmentation requirement;
Step 5: performing maximum entropy road segmentation according to the threshold obtained in step 4;
Step 6: acquiring the connected band of the image obtained after the road segmentation of step 5;
Step 7: obtaining the gray-value mutation points in the connected band region obtained in step 6, and taking their median to obtain the sky-ground line;
Step 8: deriving the v_i value, namely the vertical coordinate of the sky-ground line in the image, from the sky-ground line obtained in step 7, and then solving the visibility in fog.
Further, in step 3, the gray-value distribution of the image is obtained by calculating the gray-value image, so as to obtain the gray value corresponding to the region where the distribution is densest, i.e. the highest peak of the gray-value distribution, thereby finding that region effectively and quickly.
Further, in step 4, the entropy of the image is calculated taking the highest peak of the gray-value distribution as the starting point; the candidate threshold is shifted left by one gray value at a time, the entropy corresponding to that gray value is calculated and compared with the entropy at the highest peak, and these steps are repeated until the entropy corresponding to the gray value is 1.05-1.25 times the entropy at the highest peak, giving the entropy, and hence the threshold, meeting the road segmentation requirement.
Further, in step 5, road segmentation is performed on the foggy-day road traffic image obtained in step 1 using the threshold meeting the road segmentation requirement, so as to obtain the road area.
Further, in step 6, the connected band is the area where the sky and the road are joined, obtained after the road area is segmented. The method for solving the connected band is as follows: firstly, sum each column of pixel values of the image matrix processed by the unimodal-shift maximum entropy threshold segmentation algorithm, and store the sum of each column into a new array; secondly, compare all elements of the new array and find the largest, whose position is the column with the largest pixel sum in the image, i.e. the column closest to the center of the road; then, set a range for the column pixel sums to obtain all columns meeting the condition, draw these columns in the original image, and keep only the leftmost and rightmost vertical lines: the area sandwiched between these two vertical lines is the connected band.
Further, in step 7, difference processing is performed within the connected band to obtain the points where the pixel value changes sharply, i.e. the gray-value mutation points. Specifically: firstly, store the pixel value of each point of the gray-scale image obtained in step 2 into a matrix; secondly, find the position coordinates of the two vertical lines of the connected band in this matrix; finally, taking these two coordinates as boundaries, regard all elements between them as a new matrix called the connected-band matrix, perform difference processing on adjacent elements in each column of this matrix, and set a threshold: when the difference of pixel values is greater than the threshold, mark the coordinate of the upper element of the adjacent pair on the gray-scale image. The marked coordinate points are the gray-value mutation points; when these points gather or connect into a line, the position of the sky-ground line can be deduced from them, and if this condition cannot be met, the method prompts that the image does not meet the requirements.
Further, the visibility V in fog in step 8 is calculated by the following formula:
V = 3Hf / [2(v_i - v_h)cos θ]
wherein H represents the height of the camera above the ground plane, θ represents the angle between the camera optical axis and the ground plane, f represents the effective focal length of the camera, and v_h represents the vertical coordinate of the horizon in the image.
Compared with the prior art, the invention has the following beneficial technical effects:
the method combines a visibility calculation model, an image segmentation algorithm and a gray value catastrophe point detection result to calculate the visibility value in the foggy weather, and the running time and the detection error are obviously optimized. When the image is segmented, the traditional image segmentation method generally needs to process the whole image or 256-point gray value, and on the basis of the traditional maximum entropy threshold segmentation algorithm, the invention can only process partial images and images corresponding to partial gray value on the premise of obtaining better image processing effect, thereby greatly reducing the calculated amount and reducing the running time. Meanwhile, the images segmented according to the method of the invention are used for extracting the target object, so that the running time can be reduced again while part of noise is eliminated. And finally, on the basis of extracting the target object, a more accurate detection result can be obtained according to the design of the catastrophe point concept, and the result is superior to that of the traditional visibility detection algorithm in foggy days.
Drawings
FIG. 1 is a camera model in a traffic scene;
FIG. 2 is a schematic flow chart of the present invention;
FIG. 3 is a schematic flow chart of a unimodal biased maximum entropy threshold segmentation algorithm;
Fig. 4 is a comparison of fog visibility detection results, wherein (a), (c), (e), (g) and (i) are images processed by the mutation-point foggy-road visibility detection algorithm based on unimodal-shift maximum entropy threshold segmentation, and (b), (d), (f), (h) and (j) are images processed by the inflection-line-based fog visibility detection algorithm.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
referring to fig. 2 and fig. 3, the invention provides a visibility detection method for a fog road based on unimodal deviation maximum entropy threshold segmentation, which uses a unimodal deviation maximum entropy threshold segmentation algorithm to process an image and realizes accurate and rapid road segmentation. The specific idea of solving the antenna line is as follows: the method is characterized in that a road, the sky and other elements in the image are separated by utilizing the principle of image segmentation and combining road characteristics, and a white area which is communicated with the sky and the road is obtained in the image which is processed by an idea. And the solution of the mutation point is carried out in the connected band, so that the data volume of algorithm operation can be greatly reduced, the running time is shortened, and the algorithm efficiency is improved. Elements such as roads, scenes and the like in the image have continuity, namely, the gray value generally does not have mutation phenomenon, and the gray value near the sky and ground lines has obvious mutation according to the characteristics of the sky and ground lines, namely, a large number of mutation points exist. Therefore, according to the property, a set deviation threshold value is proposed to calculate the catastrophe point, and finally, a sky line is drawn.
The method comprises the following specific steps:
Step 1, obtaining a road image in haze weather: the road image to be processed in haze weather is obtained with image acquisition equipment.
Step 2, converting the image obtained in step 1 into a gray-scale image for the next processing step.
The solving process of the maximum entropy threshold segmentation algorithm is to calculate the total entropy of the image under every candidate segmentation threshold so as to find the maximum entropy; the segmentation threshold corresponding to the maximum entropy is taken as the final threshold, pixels whose gray value is greater than the threshold are taken as foreground, and the rest as background. The unimodal shift algorithm can quickly find a good solution k and the corresponding entropy according to the gray-value distribution characteristics of the image. To use the unimodal-shift maximum entropy threshold segmentation algorithm, the image must first be converted to gray values so that the gray-value distribution can be obtained.
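The exhaustive maximum entropy search described above can be sketched as follows. This is an illustrative NumPy sketch, not the patent's code; the function name and use of NumPy are the editor's assumptions.

```python
import numpy as np

def max_entropy_threshold(gray):
    """Exhaustive maximum-entropy threshold: evaluate every candidate
    threshold t and keep the one maximizing the summed entropies of the
    background (gray < t) and foreground (gray >= t) distributions."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                  # gray-level probabilities
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        pb, pf = p[:t].sum(), p[t:].sum()  # background / foreground mass
        if pb <= 0 or pf <= 0:
            continue                       # one class empty: skip
        b, f = p[:t] / pb, p[t:] / pf      # normalized class distributions
        hb = -np.sum(b[b > 0] * np.log(b[b > 0]))
        hf = -np.sum(f[f > 0] * np.log(f[f > 0]))
        if hb + hf > best_h:
            best_t, best_h = t, hb + hf
    return best_t, best_h
```

Pixels with gray value at or above the returned threshold are taken as foreground, matching the rule stated above; the unimodal shift algorithm described next avoids evaluating all 256 levels.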
Step 3: the gray-value image of step 2 is calculated to obtain the gray value of the region where the gray-value distribution is densest.
In step 3, the gray-value distribution of the image is obtained by calculating the gray-value image, giving the value where the distribution is densest, namely the highest peak of the gray-value distribution, so that this region is found effectively and quickly.
Step 4: a leftward shift operation is performed from the value calculated in step 3, the entropy corresponding to each gray value is calculated, and the threshold meeting the road segmentation requirement is acquired.
In step 4, the image entropy is calculated taking the highest peak of the gray-value distribution as the starting point: the candidate threshold is shifted left by one gray value at a time, the corresponding entropy is calculated and compared with the entropy at the highest peak, and these steps are repeated until an entropy meeting the road segmentation requirement is obtained. The relation between the entropy at the highest peak and the suitable entropy differs between images; the entropy corresponding to the optimal gray value is 1.05-1.25 times that at the highest peak. The traditional maximum entropy threshold segmentation algorithm needs to evaluate all 256 gray levels and contains a large number of useless operations, whereas the unimodal shift algorithm only needs to calculate the entropy over the interval between the highest peak and the selected threshold, saving at least half of the image segmentation time.
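The unimodal leftward shift can be sketched as follows. This is an illustrative sketch under the stated 1.05-1.25x stopping rule; the `ratio` default and both function names are assumptions, not from the patent.

```python
import numpy as np

def entropy_at(p, t):
    """Summed background/foreground entropy for threshold t, given the
    gray-level probability vector p (length 256)."""
    pb, pf = p[:t].sum(), p[t:].sum()
    if pb <= 0 or pf <= 0:
        return 0.0
    b, f = p[:t] / pb, p[t:] / pf
    return float(-np.sum(b[b > 0] * np.log(b[b > 0]))
                 - np.sum(f[f > 0] * np.log(f[f > 0])))

def unimodal_shift_threshold(gray, ratio=1.15):
    """Unimodal leftward shift: start at the histogram peak (the densest
    gray value) and move the candidate threshold left one unit gray value
    at a time until its entropy reaches `ratio` times the entropy at the
    peak (the patent states a factor of 1.05-1.25)."""
    hist = np.bincount(np.asarray(gray).ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    t = peak = int(np.argmax(hist))        # highest peak of the distribution
    h_peak = entropy_at(p, peak)
    while t > 1:
        t -= 1                             # shift one unit gray value left
        if entropy_at(p, t) >= ratio * h_peak:
            break                          # road segmentation requirement met
    return t
```

Only the gray levels between the peak and the stopping point are visited, which is the source of the claimed saving over the exhaustive 256-level search.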
Step 5: maximum entropy road segmentation is performed with the threshold obtained in step 4.
In step 5, road segmentation is performed on the foggy-day road traffic image using the threshold obtained by the unimodal shift operation, yielding the road area.
Step 6: the connected band of the road-segmented image of step 5 is acquired.
The method for obtaining the connected band of the image is as follows. The concept of the connected band is proposed: road segmentation is performed on the foggy-day road traffic image with the unimodal-shift maximum entropy threshold segmentation algorithm, and after the road area is segmented a white region in which the sky is joined with the road is obtained (the segmented road area is displayed in white). The design idea is: firstly, a program sums each column of pixel values of the image matrix processed by the unimodal-shift maximum entropy threshold segmentation algorithm and stores the sum of each column into a new array; secondly, all elements of the new array are compared and the largest is found, whose position is the column with the largest pixel sum in the image, i.e. the column closest to the center of the road; then a range for the column sums is set, for example all columns whose pixel sum is no greater than the maximum and greater than 0.97 times the maximum, yielding several qualifying columns; after drawing these columns in the original image, only the leftmost and rightmost vertical lines are kept, and the area sandwiched between these two vertical lines is the required connected band. The width of the connected band can be changed conveniently by changing this parameter, trading off precision against running time. After the proper connected band is selected from the image obtained by the unimodal-shift maximum entropy threshold segmentation algorithm, the connected band of the gray-scale image is processed in the next step.
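The column-sum procedure above can be sketched as follows. This is an illustrative Python sketch; the 0.97 factor comes from the example in the description, and the function name is assumed.

```python
import numpy as np

def connected_band(binary, ratio=0.97):
    """Locate the connected band in a 0/1 segmentation mask: sum each
    column's pixels, find the maximum column sum, keep every column whose
    sum is at least ratio * maximum, and return the leftmost and rightmost
    such columns (the two vertical lines bounding the band)."""
    col_sum = np.asarray(binary).sum(axis=0)          # sum of each column
    keep = np.flatnonzero(col_sum >= ratio * col_sum.max())
    return int(keep.min()), int(keep.max())           # left / right bounds
```

Lowering `ratio` widens the band (more data, more precision, more running time), which is the trade-off described above.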
Step 7: the gray-value mutation points in the connected band region obtained in step 6 are found, and their median is taken to obtain the sky-ground line.
In step 7, difference processing is performed within the selected connected band to obtain the points where the pixel value changes sharply, namely the gray-value mutation points. Firstly, the pixel value of each point of the gray-scale image is stored into a matrix; secondly, the position coordinates of the two vertical lines of the connected band are found in this matrix; finally, taking these two coordinates as boundaries, all elements between them are regarded as a new matrix, difference processing is performed on adjacent elements in each column of this matrix, a threshold is set manually, and when the difference of pixel values is greater than the threshold the coordinate of the corresponding element is marked on the image. The marked coordinate points are the gray-value mutation points, i.e. the points closest to the sky-ground line; when these points gather or connect together, the approximate position of the sky-ground line can be deduced from them. In order to reduce the error of the extracted sky-ground line, the median of all gray-value mutation points within the connected band is taken. Some unprocessed interference still exists in the experimental pictures, for example white lane lines, white street lights and thick tree trunks. To eliminate such interference as far as possible, taking the median is the best choice: among the mutation points obtained above, some are always distributed around the interference, but these are only a small part, and most mutation points are still distributed around the sky-ground line. Unlike the mean, the median discards the small part of the error caused by interference, giving a more accurate sky-ground line.
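The difference-and-median step can be sketched as follows. This is an illustrative sketch; the `diff_thresh` value is the editor's assumption, since the patent only says the threshold is set manually.

```python
import numpy as np

def sky_ground_line(gray, left, right, diff_thresh=20):
    """Find the sky-ground line inside the connected band [left, right]:
    mark mutation points where the difference between vertically adjacent
    pixels exceeds diff_thresh, then take the median of their row
    coordinates.  diff_thresh is illustrative, not fixed by the patent."""
    band = np.asarray(gray)[:, left:right + 1].astype(int)  # connected-band matrix
    diff = np.abs(np.diff(band, axis=0))       # adjacent-row differences per column
    rows, _cols = np.nonzero(diff > diff_thresh)  # row index of the upper element
    if rows.size == 0:
        raise ValueError("image does not meet the requirements")
    return int(np.median(rows))                # v_i: ordinate of the sky-ground line
```

The median over all marked rows is what suppresses the isolated interference points (lane lines, street lights, trunks) mentioned above.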
Finally, the atmospheric visibility is calculated from the ordinate v_i of the sky-ground line.
Step 8: the v_i value is obtained from the sky-ground line of step 7, from which the atmospheric visibility is solved.
The principle of this process is as follows. Koschmieder pointed out that the following relation exists between the inherent light intensity of an object and the observed light intensity:
I(x) = t(x)·J(x) + (1 - t(x))·A (1)
where A represents the atmospheric light intensity and t(x) represents the scene transmittance:
t(x) = e^(-k·d(x)) (2)
where k represents the extinction coefficient. According to equation (2), because texture and illumination are uniform in the road and sky areas, the pixel intensity changes along a hyperbolic curve from the top to the bottom of the image.
Substituting formula (2) into formula (1):
I(x) = e^(-k·d(x))·J(x) + (1 - e^(-k·d(x)))·A (3)
Stewart et al. demonstrated that contrast decays with increasing distance according to the following rule:
C = C0·e^(-k·d) (4)
where C represents the contrast exhibited by an object at distance d and C0 represents the inherent contrast of the object relative to the background. This formula applies only to homogeneous fog. For an object to be barely visible, C must be greater than a minimum threshold ε. In practice, the contrast threshold ε = 0.05 is used internationally to define the meteorological visibility distance, i.e. the maximum distance at which an object of suitable size can be seen when its inherent contrast against the background is 1.
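The factor 3 in the visibility formula of step 8 follows directly from this threshold: setting the contrast ratio in formula (4) equal to ε at the visibility distance V gives

$$\frac{C}{C_0} = e^{-kV} = \varepsilon = 0.05 \quad\Rightarrow\quad V = -\frac{\ln 0.05}{k} = \frac{\ln 20}{k} \approx \frac{3}{k}.$$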
The camera used in the invention is mounted in the middle of the road; the camera model in a traffic scene is shown in fig. 1. Let (u, v) denote coordinates in the image pixel coordinate system, where u and v denote the row and column numbers of a pixel. Suppose the projection of the optical axis onto the image plane is (u0, v0), θ denotes the angle between the camera optical axis and the horizon, and v_h represents the vertical image position of the horizon. The camera intrinsic parameters comprise the focal length f and the horizontal size t_u and vertical size t_v of a unit pixel. Let a_u = f/t_u and a_v = f/t_v; in general a_u and a_v are equal, a_u = a_v = a.
By using a pinhole model, the image coordinates of a three-dimensional point in the scene can be found from its camera coordinates (X, Y, Z) according to the following formula:
u = u0 + a·X/Z,  v = v0 + a·Y/Z (5)
On the image plane, the horizon can be represented as:
v_h = v0 - a·tan θ (6)
Combining formulae (5) and (6) yields:
v - v_h = a·(Y/Z + tan θ) (7)
In the world coordinate system (S, Xw, Yw, Zw), with the camera mounted at height H above the road plane and its optical axis pitched down by θ, formula (7) becomes:
v - v_h = a·H / [cos θ·(S·cos θ + H·sin θ)] (8)
On the road surface, assuming a point M at a distance S from the origin, the coordinates of M satisfy:
(Xw, Yw, Zw) = (0, 0, S) (9)
Substituting into formula (8), and noting that the distance d from M to the camera satisfies d ≈ S·cos θ + H·sin θ, gives:
v - v_h = a·H / (d·cos θ) (10)
The distance information of a point in the road plane can be calculated from equation (10); the distance d from any pixel point (u, v) in the image to the camera can thus be defined by the following expression:
d = λ/(v - v_h) if v > v_h,  d = +∞ if v ≤ v_h,  where λ = a·H/cos θ ≈ H·f/cos θ (11)
wherein H is the height of the camera above the ground plane, θ represents the angle between the camera optical axis and the ground plane, f is the effective focal length of the camera, and v_h represents the vertical coordinate of the horizon (or vanishing point) in the image.
The optical model of an image in foggy weather is as follows:
I(x) = J(x)·exp(-β·d(x)) + A·(1 - exp(-β·d(x))) (12)
where I is the observed object brightness, J is the inherent brightness of the object itself, β is the scattering coefficient of the atmosphere, and A represents the atmospheric light intensity.
Substituting formula (11) into formula (12) and taking the second derivative with respect to v gives:
d²I/dv² = β·λ·(J - A)·e^(-β·λ/(v - v_h))·[β·λ/(v - v_h) - 2]/(v - v_h)³ (13)
Setting equation (13) equal to zero yields two solutions: one is the meaningless β = 0, and the other is β = 2·(v_i - v_h)/λ = 2/d(v_i), where v_i represents the vertical image coordinate of the inflection point of the gray-value change. In a foggy environment the atmospheric extinction coefficient and the atmospheric scattering coefficient are approximately equal (k ≈ β).
The visibility V in foggy weather can be calculated by the following equation:
V = 3·H·f / [2·(v_i - v_h)·cos θ] (14)
As can be seen from the calculation model of equation (14), the visibility in foggy weather can be calculated as long as the vertical position v_i of the inflection point of the image gray value is detected. H is the height of the camera above the ground plane, θ represents the angle between the camera optical axis and the ground plane, f is the effective focal length of the camera, and v_h represents the vertical coordinate of the horizon (or vanishing point) in the image, so v_h can be derived from real-world measurements.
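Equation (14) can be sketched as a short function. This is an illustrative sketch; the function name, the unit conventions noted in the docstring, and the guard for v_i ≤ v_h are the editor's assumptions.

```python
import math

def fog_visibility(v_i, v_h, H, f, theta):
    """Equation (14): V = 3Hf / [2 (v_i - v_h) cos(theta)].
    H: camera height above the road plane; f: effective focal length (in
    pixel units); theta: angle between optical axis and road plane (rad);
    v_i, v_h: image ordinates of the gray-value inflection point and of
    the horizon."""
    if v_i <= v_h:
        # the inflection point must lie below the horizon in the image
        raise ValueError("requires v_i > v_h")
    return 3.0 * H * f / (2.0 * (v_i - v_h) * math.cos(theta))
```

For example, with H = 1.5, f = 1000, theta = 0 and v_i - v_h = 100 pixels, the formula gives V = 22.5 (in the units of H·f per pixel).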
As can be seen from fig. 4, (a), (c), (e), (g) and (i) are images processed by the mutation-point foggy-road visibility detection algorithm based on unimodal-shift maximum entropy threshold segmentation: the processing effect is good and the sky-ground line is obtained accurately. (b), (d), (f), (h) and (j) are images processed by the inflection-line-based fog visibility detection algorithm: the running time is too long and the obtained sky-ground line deviates significantly.

Claims (7)

1. A method for detecting the visibility of a foggy-day road based on unimodal-shift maximum entropy threshold segmentation, characterized by comprising the following steps:
Step 1: acquiring a road traffic image in haze weather;
Step 2: performing gray-value processing on the traffic image obtained in step 1 to obtain a gray-value image;
Step 3: calculating the gray-value image of step 2 to obtain the gray value of the region where the gray-value distribution is densest;
Step 4: performing a shift operation from the gray value calculated in step 3, namely taking that gray value as the origin, shifting left by one unit gray value at a time, calculating the entropy corresponding to each gray value, and obtaining a threshold meeting the road segmentation requirement;
Step 5: performing maximum entropy road segmentation according to the threshold obtained in step 4;
Step 6: acquiring the connected band of the image obtained after the road segmentation of step 5;
Step 7: obtaining the gray-value mutation points in the connected band region obtained in step 6, and taking their median to obtain the sky-ground line;
Step 8: deriving the v_i value, namely the vertical coordinate of the sky-ground line in the image, from the sky-ground line obtained in step 7, and then solving the visibility in fog.
2. The method for detecting the visibility of a foggy road based on unimodal-shift maximum entropy threshold segmentation, characterized in that in step 3 the gray-value distribution of the image is computed from the gray-value image, so as to obtain the gray value corresponding to the densest region of the distribution, i.e., the highest peak of the gray-value distribution, which locates that region effectively and quickly.
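The densest gray value described in claim 2 is simply the mode of the image histogram. A minimal NumPy sketch (function and array names are illustrative, not from the patent):

```python
import numpy as np

def histogram_peak(gray_img):
    """Return the gray level at the highest peak of the histogram,
    i.e. the most densely populated gray value (claim 2)."""
    hist = np.bincount(gray_img.ravel(), minlength=256)
    return int(np.argmax(hist))

# Toy 8-bit image where gray level 200 dominates (a foggy-sky-like region)
# over a darker road-like row of gray level 60.
img = np.full((4, 4), 200, dtype=np.uint8)
img[3, :] = 60
print(histogram_peak(img))  # → 200
```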
3. The method for detecting the visibility of a foggy road based on unimodal-shift maximum entropy threshold segmentation, characterized in that in step 4 the image entropy is computed starting from the highest peak of the gray-value distribution: the gray value is shifted one level to the left, the corresponding entropy is computed and compared with the entropy at the highest peak, and this is repeated until the entropy corresponding to the current gray value is 1.05 to 1.25 times the entropy at the highest peak, which yields the threshold meeting the road segmentation requirement.
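One way to read the left-shifting procedure of claim 3 is as a search over a Kapur-style threshold entropy (the sum of the background and foreground Shannon entropies at each candidate threshold). The sketch below is an interpretation under that assumption; the patent does not name Kapur's criterion, and the stop ratio 1.15 is simply chosen from the middle of the claimed 1.05-1.25 range:

```python
import numpy as np

def kapur_entropy(hist, t):
    """Sum of background and foreground Shannon entropies at threshold t;
    hist is a normalized 256-bin histogram."""
    eps = 1e-12
    p0, p1 = hist[:t + 1], hist[t + 1:]
    w0, w1 = p0.sum() + eps, p1.sum() + eps
    h0 = -np.sum((p0 / w0) * np.log(p0 / w0 + eps))
    h1 = -np.sum((p1 / w1) * np.log(p1 / w1 + eps))
    return h0 + h1

def unimodal_shift_threshold(gray_img, ratio=1.15):
    """Start at the histogram peak and shift the candidate threshold one
    gray level to the left at a time, stopping when its entropy reaches
    `ratio` times the entropy at the peak (claim 3)."""
    hist = np.bincount(gray_img.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    peak = int(np.argmax(hist))
    target = ratio * kapur_entropy(hist, peak)
    t = peak
    while t > 0 and kapur_entropy(hist, t) < target:
        t -= 1
    return t
```

On a degenerate two-spike histogram the stop condition may never trigger and the loop falls through to 0; real foggy-road histograms are spread around the peak, which is what the procedure relies on.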
4. The method for detecting the visibility of a foggy road based on unimodal-shift maximum entropy threshold segmentation, characterized in that in step 5 the traffic image of the foggy road obtained in step 1 is segmented with the threshold meeting the road segmentation requirement, so as to obtain the road region.
5. The method for detecting visibility of a foggy road based on unimodal-shift maximum entropy threshold segmentation as claimed in claim 4, characterized in that the connected band in step 6 is the region in which the sky obtained after segmenting the road region communicates with the road; the connected band is obtained as follows: first, the pixel values of each column of the image matrix processed by the unimodal-shift maximum entropy threshold segmentation algorithm are summed, and the column sums are stored in a new array; second, the elements of the array are compared to find the largest one, whose position is the column with the largest pixel sum in the image, i.e., the column closest to the road centre; then a range around the maximum column sum is set to obtain all columns whose sums satisfy the condition, these columns are drawn in the original image, and only the leftmost and rightmost vertical lines are kept; the region between these two vertical lines is the connected band.
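The column-sum construction of the connected band in claim 5 can be sketched as follows. `keep_ratio`, which stands in for the "certain range" around the maximum column sum, is an assumed parameter not specified in the patent:

```python
import numpy as np

def connected_band(binary_mask, keep_ratio=0.95):
    """Locate the connected band of claim 5: the columns whose pixel sums
    come closest to the maximum column sum, i.e. the columns nearest the
    road centre. Returns the leftmost and rightmost such column indices."""
    col_sums = binary_mask.sum(axis=0)       # sum of each column
    threshold = keep_ratio * col_sums.max()  # admissible range around the max
    cols = np.flatnonzero(col_sums >= threshold)
    return int(cols[0]), int(cols[-1])       # the two bounding vertical lines

# Toy segmentation mask: the sky-road connected region spans columns 4-6.
mask = np.zeros((5, 10))
mask[:, 4:7] = 1
print(connected_band(mask))  # → (4, 6)
```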
6. The method for detecting the visibility of a foggy road based on unimodal-shift maximum entropy threshold segmentation, characterized in that in step 7 difference processing is performed inside the connected band to obtain the points where the pixel value changes markedly, i.e., the gray-value mutation points, specifically: first, the pixel values of the grayscale image obtained in step 2 are stored in a matrix; second, the position coordinates of the two vertical lines of the connected band are found in the matrix; finally, taking these two coordinates as boundaries, all elements between them are treated as a new matrix, called the connected band matrix; adjacent elements in each column of the connected band matrix are differenced, a threshold is set, and when the difference of pixel values exceeds the threshold, the coordinate of the upper of the two adjacent elements is marked on the grayscale image; the marked points are the gray-value mutation points; their coordinates are then examined, and when they cluster or join into a strip, the position of the sky-ground partition line is deduced from them; if this condition cannot be met, the image is reported as not meeting the requirements.
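The difference processing of claim 6, followed by the median step of claim 1 step 7, might be sketched as below; the `jump` threshold value is an assumption, as the patent does not give a number:

```python
import numpy as np

def sky_ground_line(gray_img, left, right, jump=30):
    """Within the connected band columns [left, right], mark the rows where
    the vertical difference of adjacent pixels exceeds `jump` (gray-value
    mutation points, claim 6), then take the median of their row
    coordinates as the sky-ground partition line (claim 1, step 7)."""
    band = gray_img[:, left:right + 1].astype(int)
    diffs = np.abs(np.diff(band, axis=0))  # differences of adjacent rows
    rows, _ = np.nonzero(diffs > jump)     # row index of the upper element
    if rows.size == 0:
        raise ValueError("image does not meet the requirements: no mutation points")
    return int(np.median(rows))

# Toy image: bright foggy sky (gray 200) above a darker road (gray 50),
# with the transition between rows 4 and 5.
img = np.full((10, 5), 200, dtype=np.uint8)
img[5:, :] = 50
print(sky_ground_line(img, 1, 3))  # → 4
```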
7. The method for detecting the visibility of a foggy road based on unimodal-shift maximum entropy threshold segmentation, characterized in that the visibility V in fog in step 8 is computed by the following formula:
V = 3Hf / [2(v_i − v_h)cos θ]
where H is the height of the camera above the ground plane, θ is the angle between the camera's optical axis and the ground plane, f is the effective focal length of the camera, v_i is the vertical coordinate of the sky-ground partition line in the image, and v_h is the vertical coordinate of the horizon in the image.
CN202010087143.6A 2020-02-11 2020-02-11 Method for detecting visibility of road in foggy weather based on unimodal offset maximum entropy threshold segmentation Active CN111275698B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010087143.6A CN111275698B (en) 2020-02-11 2020-02-11 Method for detecting visibility of road in foggy weather based on unimodal offset maximum entropy threshold segmentation


Publications (2)

Publication Number Publication Date
CN111275698A true CN111275698A (en) 2020-06-12
CN111275698B CN111275698B (en) 2023-05-09

Family

ID=71000586

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010087143.6A Active CN111275698B (en) 2020-02-11 2020-02-11 Method for detecting visibility of road in foggy weather based on unimodal offset maximum entropy threshold segmentation

Country Status (1)

Country Link
CN (1) CN111275698B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797848A (en) * 2023-01-05 2023-03-14 山东高速股份有限公司 Visibility detection early warning method based on video data in high-speed event prevention system
CN117094914A (en) * 2023-10-18 2023-11-21 广东申创光电科技有限公司 Smart city road monitoring system based on computer vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102175613A (en) * 2011-01-26 2011-09-07 南京大学 Image-brightness-characteristic-based pan/tilt/zoom (PTZ) video visibility detection method
WO2018058356A1 (en) * 2016-09-28 2018-04-05 驭势科技(北京)有限公司 Method and system for vehicle anti-collision pre-warning based on binocular stereo vision


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XU MIN et al.: "Visibility detection algorithm for foggy-day images based on scene depth", 《自动化仪表》 (Process Automation Instrumentation) *
SU WEI et al.: "Upscaling method for maize canopy LAI based on the maximum entropy model", 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115797848A (en) * 2023-01-05 2023-03-14 山东高速股份有限公司 Visibility detection early warning method based on video data in high-speed event prevention system
CN117094914A (en) * 2023-10-18 2023-11-21 广东申创光电科技有限公司 Smart city road monitoring system based on computer vision
CN117094914B (en) * 2023-10-18 2023-12-12 广东申创光电科技有限公司 Smart city road monitoring system based on computer vision

Also Published As

Publication number Publication date
CN111275698B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN113450307B (en) Product edge defect detection method
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN105260699B (en) A kind of processing method and processing device of lane line data
CN107516077B (en) Traffic sign information extraction method based on fusion of laser point cloud and image data
CN107301623B (en) Traffic image defogging method and system based on dark channel and image segmentation
CN109635737B (en) Auxiliary vehicle navigation positioning method based on road marking line visual identification
CN109636732A (en) A kind of empty restorative procedure and image processing apparatus of depth image
CN106204494B (en) A kind of image defogging method and system comprising large area sky areas
CN108596165A (en) Road traffic marking detection method based on unmanned plane low latitude Aerial Images and system
CN112287838B (en) Cloud and fog automatic identification method and system based on static meteorological satellite image sequence
CN111275698A (en) Visibility detection method for fog road based on unimodal deviation maximum entropy threshold segmentation
CN113935428A (en) Three-dimensional point cloud clustering identification method and system based on image identification
CN103578083A (en) Single image defogging method based on joint mean shift
CN111563852A (en) Dark channel prior defogging method based on low-complexity MF
CN112906616A (en) Lane line extraction and generation method
CN115330684A (en) Underwater structure apparent defect detection method based on binocular vision and line structured light
CN112419272B (en) Method and system for quickly estimating visibility of expressway in foggy weather
CN111380503B (en) Monocular camera ranging method adopting laser-assisted calibration
CN109800693B (en) Night vehicle detection method based on color channel mixing characteristics
CN104881652B (en) A kind of line number automatic testing method based on corncob male features
Meng et al. Highway visibility detection method based on surveillance video
CN107424205B (en) Joint inference method for carrying out three-dimensional facade layout estimation based on day and night image pair
CN112598777B (en) Haze fusion method based on dark channel prior
CN112396572B (en) Composite insulator double-light fusion method based on feature enhancement and Gaussian pyramid
CN112686105B (en) Fog concentration grade identification method based on video image multi-feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230424

Address after: Room 508, block a, Rongcheng cloud Valley, 57 Keji 3rd road, Zhangba Street office, high tech Zone, Xi'an City, Shaanxi Province, 710075

Applicant after: Xi'an Huizhi Information Technology Co.,Ltd.

Address before: Middle Section of South Second Ring Road, Xi'an, Shaanxi, 710064

Applicant before: CHANG'AN University

GR01 Patent grant
GR01 Patent grant