CN110532875B - Night mode lens attachment detection system, terminal and storage medium - Google Patents

Night mode lens attachment detection system, terminal and storage medium

Info

Publication number
CN110532875B
CN110532875B
Authority
CN
China
Prior art keywords
gray
area
image
gray level
evaluation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910679980.5A
Other languages
Chinese (zh)
Other versions
CN110532875A (en)
Inventor
罗亮
唐锐
张笑东
于璇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zongmu Technology Shanghai Co Ltd
Original Assignee
Zongmu Technology Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zongmu Technology Shanghai Co Ltd filed Critical Zongmu Technology Shanghai Co Ltd
Priority to CN201910679980.5A
Publication of CN110532875A
Application granted
Publication of CN110532875B
Active legal status
Anticipated expiration legal status

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/28 - Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/34 - Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10024 - Color image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20024 - Filtering details
    • G06T 2207/20032 - Median filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a night-mode lens attachment detection system, terminal and storage medium. The detection system comprises the following modules: an image segmentation module, which searches for one or more frames whose background contains a light source and exploits the characteristic that the light source makes attachments appear as bright spots to separate out suspected regions in the image; a feature extraction module, which extracts contour regions suspected of being blurred by attachments, statistically computes several sharpness evaluation indexes within each contour, and judges from the combined index results whether a region is an attachment; and an early-warning judgment module, which accumulates marks for the regions judged by the feature extraction module and triggers an alarm for any region whose accumulated count exceeds the alarm threshold. The invention can identify attachments such as condensed water droplets, blurry water stains and dark stains on the lens in night mode or in environments with extremely low light intensity.

Description

Night mode lens attachment detection system, terminal and storage medium
Technical Field
The present invention relates to the field of automotive electronics, and in particular to a night-mode lens attachment detection system, terminal and storage medium.
Background
In the prior art, automated valet parking (AVP) is one of the hot technologies in the autonomous-driving field and will be an important milestone on the road to mass-produced autonomous driving. As a fully automated, unmanned parking system, an AVP system drives the car at low speed, or parks it, within limited areas such as parking lots and their surrounding roads. As a functional extension of parking assistance, it is also one of the earliest fully automated driving functions to be commercialized.
While a vehicle is driving, incidental factors such as road conditions and weather frequently cause dirt or rainwater to block the lens, which has a destructive influence on the normal operation of the AVP system. Therefore, during driving it is necessary to inspect the camera picture and determine whether the imaging is reliable.
In night mode, lens attachments are hard to make out because the ambient light intensity is low. Yet attachments at night strongly interfere with visual perception of detected objects, so identifying attachments under extremely low nighttime light and raising an alarm has become an urgent problem.
Disclosure of Invention
In order to solve the above and other potential technical problems, the invention provides a night-mode lens attachment detection system, terminal and storage medium that can identify attachments such as condensed water droplets, blurry water stains and dark stains on the lens in night mode or in environments with extremely low light intensity, verify and screen a region after it is detected as an attachment region, and alarm the system promptly once the attachment is confirmed, thereby reducing the influence of lens attachments on computer vision detection.
A night-mode lens attachment detection method comprises the following steps:
S01, image segmentation: segmenting suspected regions out of the input image;
S02, feature extraction: extracting contour regions blurred by attachments, statistically computing one or more evaluation indexes within each contour, and judging from the combined index results whether a region is an attachment;
S03, early-warning judgment: accumulating marks for the judged regions and triggering an early warning once a region's accumulated count exceeds the alarm threshold.
Further, the image segmentation process of step S01 in night mode is as follows:
continuously acquire multiple captured frames and search for one or more frames whose background contains a light source; convert the image to grayscale and compute its gray-level histogram, obtaining a statistical array of pixel gray values, their distribution and their frequencies of occurrence; classify the pixels in the histogram, compute the gray-space distance from each gray level to every other gray level, and sum the products of those distance values and the probability coefficients of the corresponding gray levels; this yields a saliency map over gray values from which the suspected attachment region is separated.
Further, the imaging characteristic of a lens attachment region at night is as follows: when the background contains a light source, the attachment appears as a bright spot because it refracts the light source. The color image is converted to grayscale so that image brightness serves as the main discriminating basis, which reduces the amount of computation. The gray-level histogram of the grayscale image is then computed; it is a function of the gray-level distribution and counts how often each gray value occurs among all pixels of the digital image:
P(k) = n_k / N
where N is the total number of pixels and n_k is the number of pixels with gray level k;
on the basis of the gray-level histogram, the gray-space distance from each gray level to every other gray level is computed:
D(k, i) = |k - i|
where D(k, i) denotes the gray-space distance from gray level k to gray level i;
the saliency value of gray level k is computed as the sum, over the other gray levels, of the products of the gray-space distance values and the probability coefficients of those gray levels:
S(k) = Σ_i P(i) · D(k, i)
where S(k) denotes the pixel saliency value of gray level k, P(i) is the probability of gray level i, and D(k, i) is the gray-space distance from gray level k to gray level i.
A saliency map over the gray values is thereby obtained, from which the suspected attachment region is separated.
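For concreteness, the histogram-based saliency segmentation described above can be sketched in Python with OpenCV and NumPy. This is a minimal illustration, not the patented implementation; in particular, the 0.95 quantile cut-off used to separate the suspected region is an assumed tuning value.

```python
import cv2
import numpy as np

def saliency_map(bgr_image: np.ndarray) -> np.ndarray:
    """Compute a per-pixel saliency map S(k) from the gray-level histogram."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)

    # P(k) = n_k / N: probability of each of the 256 gray levels.
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    prob = hist / gray.size

    # D(k, i) = |k - i|: gray-space distance between every pair of levels.
    levels = np.arange(256, dtype=np.float64)
    dist = np.abs(levels[:, None] - levels[None, :])

    # S(k) = sum_i P(i) * D(k, i): saliency value per gray level.
    saliency_per_level = dist @ prob

    # Map each pixel's gray level to the saliency of that level.
    return saliency_per_level[gray]

def suspect_region_mask(bgr_image: np.ndarray, quantile: float = 0.95) -> np.ndarray:
    """Separate the suspected attachment region by thresholding the saliency map.

    The quantile cut-off is an assumed tuning value, not from the patent."""
    sal = saliency_map(bgr_image)
    thresh = np.quantile(sal, quantile)
    return (sal >= thresh).astype(np.uint8) * 255
```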
Further, in step S01 the image segmentation may be implemented either with a suspected-region segmentation algorithm or with a deep-learning method.
Further, in step S01, image segmentation with the suspected-region segmentation algorithm comprises the following steps:
S011: segment and downsample the image;
S012: extract a blur difference map;
S013: superimpose multiple maps;
S014: segment the suspected-region image through one or more of binarization, filtering, morphology and thresholding.
Further, in step S011 the segmentation and downsampling operate as follows: assume the original captured image has N × M pixels and the downsampling coefficient is k; downsampling takes one pixel every k pixels along each row and each column of the original image to form the downsampled image, which reduces the image-processing workload and keeps the processing real-time.
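As an illustration only, the stride-k downsampling just described is a one-liner in NumPy; k is an assumed tuning parameter:

```python
import numpy as np

def downsample(image: np.ndarray, k: int) -> np.ndarray:
    """Keep one pixel every k rows and k columns of an N x M image."""
    return image[::k, ::k]
```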
Further, in step S012 the blur difference map is extracted as follows: blur the captured image with a filtering operator, take the absolute difference between the original image and the blurred image, and obtain the blur difference map.
Let x_src be the original image and x_blur the image after blur smoothing; the current blur difference map is defined as Δx = |x_src - x_blur|. The rationale for extracting the blur difference map is that image regions already blurred by attached rainwater are insensitive to a smoothing filter: they change less under smoothing than attachment-free regions, which makes them distinguishable.
Here Δx is the blur difference map of the current frame, x_src is the current frame's original image, and x_blur is the current frame after filter smoothing.
The blurring can be smoothing by one or more of Gaussian filtering, median filtering and mean filtering.
The dimensional relationships among the gaussian filter, median filter, mean filter and filter kernel are shown in table 1 below:
TABLE 1
Preferably, as Table 1 shows, for the current image size the best result is obtained with mean filtering and a filter kernel size of 5.
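A minimal sketch of the blur-difference extraction in Python with OpenCV, using the 5×5 mean filter that Table 1 identifies as best for the current image size (for other resolutions the kernel size remains a tunable assumption):

```python
import cv2
import numpy as np

def blur_difference(gray: np.ndarray, kernel_size: int = 5) -> np.ndarray:
    """Compute the blur difference map Delta_x = |x_src - x_blur| for one frame."""
    blurred = cv2.blur(gray, (kernel_size, kernel_size))  # mean (box) filter
    return cv2.absdiff(gray, blurred)                     # |original - blurred|
```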
Further, the multi-map superimposition of step S013 operates as follows: the blur difference maps obtained in S012 are accumulated over n frames:
x_accum = Δx_k + Δx_(k+1) + ... + Δx_(k+n)
where x_accum is the fused feature map obtained by multi-frame accumulation, i.e. the accumulated result from frame k to frame k+n, Δx_k is the blur difference map at frame k, and Δx_(k+n) is the blur difference map at frame k+n.
The purpose of the accumulated fused feature map is that the shape and position of rainwater change little over a short time, so accumulating consecutive blur difference maps strengthens the contrast between the rain-blurred region and the background and highlights the rain-attached region in the image.
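A minimal sketch of the n-frame accumulation x_accum = Δx_k + ... + Δx_(k+n); accumulating in a float array avoids uint8 overflow, and the choice of n is an assumed tuning value:

```python
import numpy as np

def accumulate_diffs(diff_maps: list) -> np.ndarray:
    """Fuse consecutive blur difference maps (frames k .. k+n) into one feature map."""
    accum = np.zeros(diff_maps[0].shape, dtype=np.float64)
    for diff in diff_maps:
        accum += diff
    return accum
```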
Further, the binarization, neighborhood filtering and morphology of step S014 operate as follows:
binarizing the fused feature map obtained by multi-frame accumulation converts the gray map into a binary map using an automatic threshold-division algorithm, dividing the image into a region of interest of suspected attachments and an attachment-free region;
neighborhood filtering counts the pixel distribution within a neighborhood of the binary image and removes isolated noise points, reducing their influence on the attachment region of interest;
morphological filtering removes small noise areas with an erosion operation and fills holes in the suspected region with a dilation operation, repairing the suspected region.
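A minimal sketch of this post-processing chain in Python with OpenCV. Otsu's method is one possible choice for the automatic threshold-division algorithm named above, a median filter removes isolated noise, and erosion followed by dilation cleans and repairs the suspected regions; the kernel sizes are illustrative assumptions:

```python
import cv2
import numpy as np

def segment_suspect_regions(fused: np.ndarray) -> np.ndarray:
    """Binarize the fused feature map and clean it up morphologically."""
    fused8 = cv2.normalize(fused, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Automatic thresholding (Otsu): gray map -> binary map.
    _, binary = cv2.threshold(fused8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Neighborhood filtering: remove isolated noise pixels.
    binary = cv2.medianBlur(binary, 3)

    # Morphology: erosion removes small noise areas, dilation fills holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.erode(binary, kernel)
    binary = cv2.dilate(binary, kernel)
    return binary
```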
Further, in step S01, image segmentation with the deep-learning method comprises the following steps:
Preprocessing: downsample the image to M × N and convert the image data storage format to three-channel BGR.
Image segmentation: feed the input image data into a semantic-segmentation convolutional neural network, which outputs a class for every pixel through forward propagation, yielding the pixel set of the suspected attachment region.
Further, when the image segmentation result is obtained by deep learning, the network model used is a convolutional neural network for semantic segmentation; the feature-extraction backbone can be ResNet, SqueezeNet 1.1, MobileNet or a similar network, while the deconvolution part of the semantic segmentation adopts the PSPNet framework, fuses the backbone's last four feature maps of different scales, and finally outputs a segmentation result map with the same size as the original image.
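A minimal sketch of the deep-learning route in Python. The patent names ResNet/SqueezeNet/MobileNet backbones with a PSPNet decoder; since no reference implementation is given, this sketch substitutes an off-the-shelf torchvision segmentation model with a MobileNet backbone, so the exact architecture here is an assumption:

```python
import torch
from torchvision.models.segmentation import deeplabv3_mobilenet_v3_large

# Two classes assumed: suspected attachment vs. background.
model = deeplabv3_mobilenet_v3_large(weights=None, num_classes=2)
model.eval()

def segment(batch: torch.Tensor) -> torch.Tensor:
    """batch: float tensor of shape (1, 3, M, N), values in [0, 1]."""
    with torch.no_grad():
        logits = model(batch)["out"]   # (1, 2, M, N) per-pixel class scores
    return logits.argmax(dim=1)        # (1, M, N) per-pixel class labels
```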
Further, in the feature extraction of step S02, contour extraction of the regions proceeds as follows:
contours are extracted from the processed fused feature map, yielding the pixel sets of the different contours; feature extraction is then computed per contour region to evaluate each set's sharpness and the credibility that it is a rain-attached region.
Further, in the feature extraction of step S02, the statistical computation of the several sharpness evaluation indexes within a contour is as follows:
the divided contours are statistically evaluated with one or more of image statistical features, shape and texture features, and sharpness evaluation features to obtain the different kinds of evaluation index values:
Image statistical features: gray level (Gray), gradient (Grads), second-order gradient (Laplas), mathematical statistics (mean, variance, maximum, minimum);
Shape and texture features: circularity and area (Round/Area), wavelet transform operator (Wavelet_F);
Sharpness evaluation features: variance (Variance), point sharpness (EVA), histogram (Hist), second-order gradient (Laplas);
Value = F(area, vector).
Further, when the divided contours are statistically evaluated in step S02 to obtain the different kinds of evaluation index values, two evaluation modes are available: credibility-value accumulation, and classification of the contour region with a classifier.
Further, when a contour region is judged with the classifier, the N evaluation index values computed for the region are assembled into the region's feature vector; the feature vectors of rain regions gathered by statistics serve as training samples and are fed into the classifier for training. The classifier may be a decision tree, an SVM, a BP network or the like, and it classifies whether a segmented contour region is a rain region.
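A minimal sketch of the classifier route with scikit-learn, using an SVM (the patent equally allows decision trees or BP networks); the labeled training set of feature vectors is assumed to exist:

```python
import numpy as np
from sklearn.svm import SVC

def train_rain_classifier(features: np.ndarray, labels: np.ndarray) -> SVC:
    """features: (num_regions, N) index vectors; labels: 1 = rain region, 0 = not."""
    clf = SVC(kernel="rbf")
    clf.fit(features, labels)
    return clf

def is_rain_region(clf: SVC, feature_vector: np.ndarray) -> bool:
    """Classify one contour region's feature vector."""
    return bool(clf.predict(feature_vector.reshape(1, -1))[0] == 1)
```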
Further, when judging with the credibility-value accumulation mode, N evaluation indexes are set for a contour region, and each index is given a selection threshold expressing whether that index value allows the region to be considered a rain region;
each evaluation index is computed for the contour region, yielding N index values; each value is compared with its corresponding selection threshold, and if it exceeds the threshold a point is added to the region's credibility; if not, the region is removed or no credibility point is given;
finally, the contour regions whose evaluation indexes exceed their selection thresholds are counted in the image, and the positions and area information of those regions are marked.
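A minimal sketch of the credibility-accumulation mode: one point per evaluation index whose value passes its selection threshold, with the region accepted once a minimum score is reached. The thresholds and the acceptance score are assumed tuning values:

```python
import numpy as np

def credibility_score(index_values: np.ndarray, thresholds: np.ndarray) -> int:
    """One credibility point for every evaluation index exceeding its threshold."""
    return int(np.sum(index_values > thresholds))

def is_credible_region(index_values: np.ndarray, thresholds: np.ndarray,
                       min_score: int) -> bool:
    """Accept the contour region once its credibility reaches min_score."""
    return credibility_score(index_values, thresholds) >= min_score
```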
Further, the region accumulation marking of step S03 proceeds as follows: the detection picture is gridded into M × N cells; the output results accumulated over several frames are mapped to the corresponding cells, the number of attachments per cell is counted, and a quantitative occlusion assessment is given.
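A minimal sketch of this early-warning step: the frame is gridded, per-cell hit counts are accumulated over frames, and an alarm fires once a cell's count exceeds the alarm threshold. The grid size and threshold are assumed tuning values:

```python
import numpy as np

class OcclusionWarning:
    """Grid the frame, accumulate per-cell attachment hits, and raise alarms."""

    def __init__(self, frame_shape, grid=(8, 8), alarm_threshold=30):
        self.rows, self.cols = grid
        self.cell_h = frame_shape[0] // self.rows
        self.cell_w = frame_shape[1] // self.cols
        self.counts = np.zeros(grid, dtype=np.int64)
        self.alarm_threshold = alarm_threshold

    def update(self, mask: np.ndarray) -> list:
        """mask: binary attachment mask for one frame; returns alarmed cells."""
        for r in range(self.rows):
            for c in range(self.cols):
                cell = mask[r * self.cell_h:(r + 1) * self.cell_h,
                            c * self.cell_w:(c + 1) * self.cell_w]
                if cell.any():
                    self.counts[r, c] += 1
        return [(r, c) for r in range(self.rows) for c in range(self.cols)
                if self.counts[r, c] >= self.alarm_threshold]
```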
A night-mode lens attachment detection system comprises the following modules:
an image segmentation module, which searches for one or more frames whose background contains a light source and exploits the characteristic that the light source makes attachments appear as bright spots to separate out suspected regions in the image;
a feature extraction module, which extracts contour regions suspected of being blurred by attachments, statistically computes several sharpness evaluation indexes within each contour, and judges from the combined index results whether a region is an attachment;
and an early-warning judgment module, which accumulates marks for the regions judged by the feature extraction module and triggers an alarm for any region whose accumulated count exceeds the alarm threshold.
Further, the image segmentation module continuously acquires multiple captured frames, searches for one or more frames whose background contains a light source, converts the image to grayscale and computes its gray-level histogram, obtaining a statistical array of pixel gray values, their distribution and their frequencies of occurrence; it classifies the pixels in the histogram, computes the gray-space distances between the gray levels, and sums the products of the distance values and the probability coefficients of the corresponding gray levels, thereby obtaining a saliency map over gray values from which the suspected attachment region is separated.
A mobile terminal, which may be a vehicle-mounted terminal or a mobile-phone terminal:
the vehicle-mounted terminal can execute the above night-mode lens attachment detection method or carry the above night-mode lens attachment detection system;
the mobile-phone terminal can execute the above night-mode lens attachment detection method or carry the above night-mode lens attachment detection system.
A computer storage medium stores a computer program that implements the above night-mode lens attachment detection method.
As described above, the present invention has the following beneficial effects:
1) While driving in the rain, rainwater attaches to the camera surface in various forms, and stains may block the lens or defects may exist inside the lens. These conditions not only degrade the camera's imaging but also reduce the accuracy and effectiveness of the algorithms. To keep the AVP system operating normally and the user's vehicle safe, lens blocking and contamination must be detected and alarmed promptly.
2) In night mode or in environments with extremely low light intensity, attachments such as condensed water droplets, blurry water stains and dark stains on the lens can be identified; after an attachment region is detected, the region is verified and screened, and once the attachment is confirmed the system is alarmed promptly, reducing the influence of lens attachments on computer vision detection.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 shows images of several of the problem situations described in the background art.
Fig. 2 is a schematic flow chart of the image segmentation step according to the present invention.
Fig. 3 is a flowchart illustrating an image segmentation step according to another embodiment of the present invention.
Fig. 4 is a flowchart illustrating an image segmentation step according to another embodiment of the present invention.
Fig. 5 is a flowchart illustrating an image segmentation step according to another embodiment of the present invention.
Fig. 6 is a flowchart illustrating an image segmentation step according to another embodiment of the present invention.
Fig. 7 shows a flow chart of the present invention.
Fig. 8 shows a flow chart of the image segmentation step of the present invention.
Fig. 9 is a flowchart showing an image segmentation step in another embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below through specific examples, and other advantages and effects of the present invention will be readily apparent to those skilled in the art from the disclosure of this specification. The present invention may also be implemented or applied through other different specific embodiments, and the details of this specification may be modified or changed in various ways without departing from the spirit of the present invention. It should be noted that, in the absence of conflict, the following embodiments and the features in the embodiments may be combined with each other.
It should be understood that the structures, proportions and sizes shown in the drawings of this specification are used only to illustrate the disclosed content for the understanding of those skilled in the art and are not intended to limit the conditions under which the present invention can be implemented; any structural modification, change of proportional relationship or adjustment of size that does not affect the effects or objectives attainable by the present invention still falls within the scope of the disclosed technical content. Terms such as "upper", "lower", "left", "right", "middle" and "a" recited in this specification are for clarity of description only and do not limit the implementable scope; changes or adjustments of relative relationships without substantive alteration of the technical content are likewise regarded as within the implementable scope of the present invention.
With reference to figures 1 to 9 of the drawings,
A night-mode lens attachment detection method comprises the following steps:
S01, image segmentation: segmenting suspected regions out of the input image;
S02, feature extraction: extracting contour regions blurred by attachments, statistically computing one or more evaluation indexes within each contour, and judging from the combined index results whether a region is an attachment;
S03, early-warning judgment: accumulating marks for the judged regions and triggering an early warning once a region's accumulated count exceeds the alarm threshold.
As a preferred embodiment, the image segmentation process of step S01 in night mode is:
continuously acquire multiple captured frames and search for one or more frames whose background contains a light source; convert the image to grayscale and compute its gray-level histogram, obtaining a statistical array of pixel gray values, their distribution and their frequencies of occurrence; classify the pixels in the histogram, compute the gray-space distance from each gray level to every other gray level, and sum the products of those distance values and the probability coefficients of the corresponding gray levels; this yields a saliency map over gray values from which the suspected attachment region is separated.
As a preferred embodiment, the imaging characteristic of a lens attachment region at night is as follows: when the background contains a light source, the attachment appears as a bright spot because it refracts the light source. The color image is converted to grayscale so that image brightness serves as the main discriminating basis, which reduces the amount of computation. The gray-level histogram of the grayscale image is then computed; it is a function of the gray-level distribution and counts how often each gray value occurs among all pixels of the digital image:
P(k) = n_k / N
where N is the total number of pixels and n_k is the number of pixels with gray level k;
on the basis of the gray-level histogram, the gray-space distance from each gray level to every other gray level is computed:
D(k, i) = |k - i|
where D(k, i) denotes the gray-space distance from gray level k to gray level i;
the saliency value of gray level k is computed as the sum, over the other gray levels, of the products of the gray-space distance values and the probability coefficients of those gray levels:
S(k) = Σ_i P(i) · D(k, i)
where S(k) denotes the pixel saliency value of gray level k, P(i) is the probability of gray level i, and D(k, i) is the gray-space distance from gray level k to gray level i.
A saliency map over the gray values is thereby obtained, from which the suspected attachment region is separated.
In a preferred embodiment, in step S01 the image segmentation may be implemented either with a suspected-region segmentation algorithm or with a deep-learning method.
Further, in step S01, image segmentation with the suspected-region segmentation algorithm comprises the following steps:
S011: segment and downsample the image;
S012: extract a blur difference map;
S013: superimpose multiple maps;
S014: segment the suspected-region image through one or more of binarization, filtering, morphology and thresholding.
As a preferred embodiment, in step S011 the segmentation and downsampling operate as follows: assume the original captured image has N × M pixels and the downsampling coefficient is k; downsampling takes one pixel every k pixels along each row and each column of the original image to form the downsampled image, which reduces the image-processing workload and keeps the processing real-time.
As a preferred embodiment, in step S012 the blur difference map is extracted as follows: blur the captured image with a filtering operator, take the absolute difference between the original image and the blurred image, and obtain the blur difference map.
Let x_src be the original image and x_blur the image after blur smoothing; the current blur difference map is defined as Δx = |x_src - x_blur|. The rationale for extracting the blur difference map is that image regions already blurred by attached rainwater are insensitive to a smoothing filter: they change less under smoothing than attachment-free regions, which makes them distinguishable.
Here Δx is the blur difference map of the current frame, x_src is the current frame's original image, and x_blur is the current frame after filter smoothing.
The blurring can be smoothing by one or more of Gaussian filtering, median filtering and mean filtering.
The dimensional relationships among the gaussian filter, median filter, mean filter and filter kernel are shown in table 1 below:
TABLE 1
Preferably, as Table 1 shows, for the current image size the best result is obtained with mean filtering and a filter kernel size of 5.
As a preferred embodiment, the multi-map superimposition of step S013 operates as follows: the blur difference maps obtained in S012 are accumulated over n frames:
x_accum = Δx_k + Δx_(k+1) + ... + Δx_(k+n)
where x_accum is the fused feature map obtained by multi-frame accumulation, i.e. the accumulated result from frame k to frame k+n, Δx_k is the blur difference map at frame k, and Δx_(k+n) is the blur difference map at frame k+n.
The purpose of the accumulated fused feature map is that the shape and position of rainwater change little over a short time, so accumulating consecutive blur difference maps strengthens the contrast between the rain-blurred region and the background and highlights the rain-attached region in the image.
As a preferred embodiment, the binarization, neighborhood filtering and morphology of step S014 operate as follows:
binarizing the fused feature map obtained by multi-frame accumulation converts the gray map into a binary map using an automatic threshold-division algorithm, dividing the image into a region of interest of suspected attachments and an attachment-free region;
neighborhood filtering counts the pixel distribution within a neighborhood of the binary image and removes isolated noise points, reducing their influence on the attachment region of interest;
morphological filtering removes small noise areas with an erosion operation and fills holes in the suspected region with a dilation operation, repairing the suspected region.
As a preferred embodiment, in step S01, image segmentation with the deep-learning method comprises the following steps:
Preprocessing: downsample the image to M × N and convert the image data storage format to three-channel BGR.
Image segmentation: feed the input image data into a semantic-segmentation convolutional neural network, which outputs a class for every pixel through forward propagation, yielding the pixel set of the suspected attachment region.
As a preferred embodiment, when the image segmentation result is obtained by deep learning, the network model used is a convolutional neural network for semantic segmentation; the feature-extraction backbone can be ResNet, SqueezeNet 1.1, MobileNet or a similar network, while the deconvolution part of the semantic segmentation adopts the PSPNet framework, fuses the backbone's last four feature maps of different scales, and finally outputs a segmentation result map with the same size as the original image.
As a preferred embodiment, in the feature extraction of step S02, contour extraction of the regions proceeds as follows:
contours are extracted from the processed fused feature map, yielding the pixel sets of the different contours; feature extraction is then computed per contour region to evaluate each set's sharpness and the credibility that it is a rain-attached region.
As a preferred embodiment, in the feature extraction of step S02, the statistical computation of the several sharpness evaluation indexes within a contour is as follows:
the divided contours are statistically evaluated with one or more of image statistical features, shape and texture features, and sharpness evaluation features to obtain the different kinds of evaluation index values:
Image statistical features: gray level (Gray), gradient (Grads), second-order gradient (Laplas), mathematical statistics (mean, variance, maximum, minimum);
Shape and texture features: circularity and area (Round/Area), wavelet transform operator (Wavelet_F);
Sharpness evaluation features: variance (Variance), point sharpness (EVA), histogram (Hist), second-order gradient (Laplas);
Value = F(area, vector).
As a preferred embodiment, when the divided contours are statistically evaluated in step S02 to obtain the different kinds of evaluation index values, two evaluation modes are available: credibility-value accumulation, and classification of the contour region with a classifier.
As a preferred embodiment, when a contour region is judged with the classifier, the N evaluation index values computed for the region are assembled into the region's feature vector; the feature vectors of rain regions gathered by statistics serve as training samples and are fed into the classifier for training. The classifier may be a decision tree, an SVM, a BP network or the like, and it classifies whether a segmented contour region is a rain region.
As a preferred embodiment, when judging with the credibility-value accumulation mode, N evaluation indexes are set for a contour region, and each index is given a selection threshold expressing whether that index value allows the region to be considered a rain region;
each evaluation index is computed for the contour region, yielding N index values; each value is compared with its corresponding selection threshold, and if it exceeds the threshold a point is added to the region's credibility; if not, the region is removed or no credibility point is given;
finally, the contour regions whose evaluation indexes exceed their selection thresholds are counted in the image, and the positions and area information of those regions are marked.
As a preferred embodiment, the region accumulation marking of step S03 proceeds as follows: the detection picture is gridded into M × N cells; the output results accumulated over several frames are mapped to the corresponding cells, the number of attachments per cell is counted, and a quantitative occlusion assessment is given.
A night-mode lens attachment detection system comprises the following modules:
an image segmentation module, which searches for one or more frames whose background contains a light source and exploits the characteristic that the light source makes attachments appear as bright spots to separate out suspected regions in the image;
a feature extraction module, which extracts contour regions suspected of being blurred by attachments, statistically computes several sharpness evaluation indexes within each contour, and judges from the combined index results whether a region is an attachment;
and an early-warning judgment module, which accumulates marks for the regions judged by the feature extraction module and triggers an alarm for any region whose accumulated count exceeds the alarm threshold.
As a preferred embodiment, the image segmentation module continuously acquires multiple captured frames, searches for one or more frames whose background contains a light source, converts the image to grayscale and computes its gray-level histogram, obtaining a statistical array of pixel gray values, their distribution and their frequencies of occurrence; it classifies the pixels in the histogram, computes the gray-space distances between the gray levels, and sums the products of the distance values and the probability coefficients of the corresponding gray levels, thereby obtaining a saliency map over gray values from which the suspected attachment region is separated.
As a preferred embodiment, the technical parameters of the lens attachment detection system are shown in Table 2 below:
TABLE 2
As a preferred embodiment, the configuration requirements of the lens attachment detection system are:
the lens attachment detection system can be configured to run independently in the background or to be triggered together with other algorithms, using a multi-frame interval detection method. The detection inputs include the original images of the four camera channels and the vehicle-body CAN signals: vehicle speed and ambient brightness information. When the vehicle body moves, the lens attachment detection system is triggered to run detection on the four camera images.
As a preferred embodiment, the detection and alarm performance requirements of the lens attachment detection system are:
(1) Application range: in indoor and outdoor environments and under different road conditions, it can detect raindrops and stains of different forms, and has a certain capability of detecting water mist and severe lens defects;
(2) Stability: it is not affected by factors such as weather and environmental changes, and has good reliability;
(3) The algorithm's running time and resource usage meet the requirements.
As a preferred embodiment, the specific application scenarios and special scenarios of the lens attachment detection system are shown in Table 3 below:
TABLE 3
A mobile terminal, which may be a vehicle-mounted terminal or a mobile-phone terminal:
the vehicle-mounted terminal can execute the above night-mode lens attachment detection method or carry the above night-mode lens attachment detection system;
the mobile-phone terminal can execute the above night-mode lens attachment detection method or carry the above night-mode lens attachment detection system.
A computer storage medium stores a computer program that implements the above night-mode lens attachment detection method.
As a preferred embodiment, this embodiment further provides a terminal device capable of executing a program, such as a smartphone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server or rack-mounted server (including a standalone server or a server cluster composed of multiple servers). The terminal device of this embodiment at least includes, but is not limited to, a memory and a processor that can be communicatively coupled to each other via a system bus. It should be noted that not all of the illustrated components must be implemented; the night-mode lens attachment detection method may instead be implemented with more or fewer components.
As a preferred embodiment, the memory (i.e., readable storage medium) includes flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory may be an internal storage unit of a computer device, such as the hard disk or memory of the computer device. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, smart media card (SMC), secure digital (SD) card or flash card equipped on the computer device. Of course, the memory may also include both the internal storage unit of the computer device and its external storage device. In this embodiment, the memory is typically used to store the operating system and the various application software installed on the computer device, such as the program code of the night-mode lens attachment detection method of the embodiment. In addition, the memory can be used to temporarily store various types of data that have been output or are to be output.
This embodiment also provides a computer-readable storage medium, such as flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server or app application store, on which a computer program is stored that performs the corresponding function when executed by a processor. The computer-readable storage medium of this embodiment stores the night-mode lens attachment detection program, which, when executed by the processor, implements the night-mode lens attachment detection method.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Accordingly, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall still be covered by the claims of the present invention.

Claims (6)

1. A night-mode lens attachment detection system, characterized by comprising the following modules:
an image segmentation module, which searches for one or more frames whose background contains a light source and exploits the characteristic that the light source makes attachments appear as bright spots to process the image and separate out suspected regions; the image segmentation module is implemented with a suspected-region segmentation algorithm;
a feature extraction module, which extracts contour regions suspected of being blurred by attachments, statistically computes several sharpness evaluation indexes within each contour, and judges from the combined index results whether a region is an attachment;
an early-warning judgment module, which accumulates marks for the regions judged by the feature extraction module and triggers an alarm for any region whose accumulated count exceeds the alarm threshold;
the suspected-region segmentation algorithm comprises the following steps: continuously acquiring multiple captured frames, searching for one or more frames whose background contains a light source, converting the image to grayscale and computing its gray-level histogram to obtain a statistical array of pixel gray values, their distribution and their frequencies of occurrence; classifying the pixels in the histogram, computing the gray-space distance from each gray level to every other gray level, and summing the products of those distance values and the probability coefficients of the corresponding gray levels; thereby obtaining a saliency map over gray values and separating out the suspected attachment regions;
obtaining the statistical array of pixel gray values, their distribution and their frequencies of occurrence comprises counting, by gray value, the frequency of occurrence of all pixels in the digital image:
P(k) = n_k / N
where N is the total number of pixels and n_k is the number of pixels with gray level k;
on the basis of the gray-level histogram, computing the gray-space distance between each pair of gray levels:
D(k, i) = |k - i|
where D(k, i) denotes the gray-space distance from gray level k to gray level i;
the saliency value of gray level k is computed as the sum, over the other gray levels, of the products of the gray-space distance values and the probability coefficients of those gray levels:
S(k) = Σ_i P(i) · D(k, i)
where S(k) denotes the pixel saliency value of gray level k, P(i) is the probability of gray level i, and D(k, i) is the gray-space distance from gray level k to gray level i;
thereby obtaining a saliency map over gray values and separating out the suspected attachment region.
2. The night-mode lens attachment detection system according to claim 1, wherein the feature extraction module statistically computes several sharpness evaluation indexes within the contour, specifically comprising:
an index evaluation module that statistically evaluates the divided contours with one or more of image statistical features, shape and texture features, and sharpness evaluation features to obtain the different kinds of evaluation index values;
the statistical evaluation of the divided contours with one or more of image statistical features, shape and texture features, and sharpness evaluation features yields:
image statistical features: gray level, gradient, second-order gradient, mathematical statistics, mean, variance, maximum, minimum;
shape and texture features: circularity and area, wavelet transform operators;
sharpness evaluation features: mean, point sharpness, histogram, second-order gradient.
3. The night-mode lens attachment detection system according to claim 2, wherein the feature extraction module statistically evaluates the divided contours to obtain the different kinds of evaluation index values using either of two evaluation modes: credibility-value accumulation, or classification of the contour region with a classifier.
4. The night-mode lens attachment detection system according to claim 3, wherein when judging with the credibility-value accumulation mode, N evaluation indexes are set for a contour region, and each index is given a selection threshold expressing whether that index value allows the region to be considered a rain region;
each evaluation index is computed for the contour region, yielding N index values; each value is compared with its corresponding selection threshold, and if it exceeds the threshold a point is added to the region's credibility; if not, the region is removed or no credibility point is given;
finally, the contour regions whose evaluation indexes exceed their selection thresholds are counted in the image, and the positions and area information of those regions are marked.
5. The night-mode lens attachment detection system according to claim 3, wherein when a contour region is judged with the classifier, the N evaluation index values computed for the region are assembled into the region's feature vector; the feature vectors of rain regions gathered by statistics serve as training samples and are fed into the classifier for training; the classifier may be a decision tree, an SVM, a BP network or the like, and classifies whether a segmented contour region is a rain region.
6. A mobile terminal, characterized in that it may be a vehicle-mounted terminal or a mobile-phone terminal implementing the night-mode lens attachment detection system according to any one of claims 1-5.
CN201910679980.5A 2019-07-26 2019-07-26 Night mode lens attachment detection system, terminal and storage medium Active CN110532875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910679980.5A CN110532875B (en) 2019-07-26 2019-07-26 Night mode lens attachment detection system, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910679980.5A CN110532875B (en) 2019-07-26 2019-07-26 Night mode lens attachment detection system, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN110532875A CN110532875A (en) 2019-12-03
CN110532875B true CN110532875B (en) 2024-06-21

Family

ID=68661700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910679980.5A Active CN110532875B (en) 2019-07-26 2019-07-26 Night mode lens attachment detection system, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN110532875B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532876A (en) * 2019-07-26 2019-12-03 纵目科技(上海)股份有限公司 Night mode camera lens pays detection method, system, terminal and the storage medium of object
CN111815556B (en) * 2020-05-28 2024-01-16 北京易航远智科技有限公司 Vehicle-mounted fisheye camera self-diagnosis method based on texture extraction and wavelet transformation
CN112464947B (en) * 2020-10-30 2021-09-28 深圳市路远智能装备有限公司 Visual identification method of tripod lens
CN113409271B (en) * 2021-06-21 2022-02-11 广州文远知行科技有限公司 Method, device and equipment for detecting oil stain on lens
CN113362326B (en) * 2021-07-26 2023-10-03 广东奥普特科技股份有限公司 Method and device for detecting defects of welding spots of battery

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529639A (en) * 2012-07-03 2014-01-22 歌乐牌株式会社 Lens-attached matter detector, lens-attached matter detection method, and vehicle system
JP2018148345A (en) * 2017-03-03 2018-09-20 株式会社デンソーアイティーラボラトリ On-vehicle camera system, adhered matter detecting apparatus, adhered matter removing method, and adhered matter detecting program
CN109118498A (en) * 2018-08-22 2019-01-01 科大讯飞股份有限公司 A kind of camera head stain detection method, device, equipment and storage medium
CN109241818A (en) * 2017-07-11 2019-01-18 松下电器(美国)知识产权公司 Attachment detection method and device and system, attachment learning method and device, program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004304578A (en) * 2003-03-31 2004-10-28 Seiko Epson Corp Region division method, region division program, image extraction method, and image extraction program
CN102111532B (en) * 2010-05-27 2013-03-27 周渝斌 Camera lens occlusion detecting system and method
JP2012150730A (en) * 2011-01-20 2012-08-09 Panasonic Corp Feature extraction device, feature extraction method, feature extraction program and image processing device
JP6245875B2 (en) * 2013-07-26 2017-12-13 クラリオン株式会社 Lens dirt detection device and lens dirt detection method
JP6281291B2 (en) * 2014-01-16 2018-02-21 大日本印刷株式会社 Image feature point extraction method, defect inspection method, defect inspection apparatus
CN108629805B (en) * 2017-03-15 2021-12-14 纵目科技(上海)股份有限公司 Salient object detection method and system based on image layering technology

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529639A (en) * 2012-07-03 2014-01-22 歌乐牌株式会社 Lens-attached matter detector, lens-attached matter detection method, and vehicle system
JP2018148345A (en) * 2017-03-03 2018-09-20 株式会社デンソーアイティーラボラトリ On-vehicle camera system, adhered matter detecting apparatus, adhered matter removing method, and adhered matter detecting program
CN109241818A (en) * 2017-07-11 2019-01-18 松下电器(美国)知识产权公司 Attachment detection method and device and system, attachment learning method and device, program
CN109118498A (en) * 2018-08-22 2019-01-01 科大讯飞股份有限公司 A kind of camera head stain detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN110532875A (en) 2019-12-03

Similar Documents

Publication Publication Date Title
CN110544211B (en) Method, system, terminal and storage medium for detecting lens attached object
CN110532875B (en) Night mode lens attachment detection system, terminal and storage medium
CN110487562B (en) Driveway keeping capacity detection system and method for unmanned driving
CN106652468B (en) The detection and from vehicle violation early warning alarm set and method in violation of rules and regulations of road vehicle front truck
Wu et al. Lane-mark extraction for automobiles under complex conditions
US8797417B2 (en) Image restoration method in computer vision system, including method and apparatus for identifying raindrops on a windshield
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
TWI607901B (en) Image inpainting system area and method using the same
US7231288B2 (en) System to determine distance to a lead vehicle
CN108021856B (en) Vehicle tail lamp identification method and device and vehicle
CN111860120B (en) Automatic shielding detection method and device for vehicle-mounted camera
CN110532876A (en) Night mode camera lens pays detection method, system, terminal and the storage medium of object
WO2016014930A2 (en) A vision-based system for dynamic weather detection
JP2019029940A (en) Accretion detector and vehicle system comprising the same
CN108197523B (en) Night vehicle detection method and system based on image conversion and contour neighborhood difference
CN110659547B (en) Object recognition method, device, vehicle and computer-readable storage medium
CN109359593B (en) Rain and snow environment picture fuzzy monitoring and early warning method based on image local grid
CN111046741A (en) Method and device for identifying lane line
CN110276318A (en) Nighttime road rains recognition methods, device, computer equipment and storage medium
Ghahremannezhad et al. Automatic road detection in traffic videos
Fan et al. Robust lane detection and tracking based on machine vision
CN115761668A (en) Camera stain recognition method and device, vehicle and storage medium
CN111915634A (en) Target object edge detection method and system based on fusion strategy
CN111815556A (en) Vehicle-mounted fisheye camera self-diagnosis method based on texture extraction and wavelet transformation
Helala et al. Road boundary detection in challenging scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant