CN110544211B - Method, system, terminal and storage medium for detecting lens attached object - Google Patents

Method, system, terminal and storage medium for detecting lens attached object

Info

Publication number
CN110544211B
CN110544211B (application CN201910679991.3A)
Authority
CN
China
Prior art keywords
image
area
evaluation
contour
attachment
Prior art date
Legal status
Active
Application number
CN201910679991.3A
Other languages
Chinese (zh)
Other versions
CN110544211A (en)
Inventor
罗亮
唐锐
张笑东
于璇
Current Assignee
Zongmu Technology Shanghai Co Ltd
Original Assignee
Zongmu Technology Shanghai Co Ltd
Priority date
Filing date
Publication date
Application filed by Zongmu Technology Shanghai Co Ltd filed Critical Zongmu Technology Shanghai Co Ltd
Priority to CN201910679991.3A priority Critical patent/CN110544211B/en
Publication of CN110544211A publication Critical patent/CN110544211A/en
Application granted granted Critical
Publication of CN110544211B publication Critical patent/CN110544211B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/30 Erosion or dilatation, e.g. thinning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G06T7/41 Analysis of texture based on statistical description of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method, system, terminal and storage medium for detecting lens attachments, comprising the following steps. S01, image segmentation: segmenting suspected attachment regions from the input image. S02, feature extraction: extracting the blurred contour regions caused by attachments, statistically computing several sharpness evaluation indexes within each contour, and judging from the combined index results whether the region is an attachment. S03, early-warning judgment: accumulating marks for the judged regions, and triggering an early warning for results whose accumulated count exceeds the alarm threshold. The invention aims to keep the driver-assistance system operating normally and protect the safety of the user's vehicle by detecting lenses occluded by water drops of various forms, or occluded and contaminated by dark stains, light spots, refracted light and the like, and by raising an alarm in time.

Description

Method, system, terminal and storage medium for detecting lens attached object
Technical Field
The present invention relates to the field of automotive electronics, and in particular, to a method, a system, a terminal, and a storage medium for detecting a lens attachment.
Background
In the prior art, automated valet parking (Auto Valet Parking, AVP) is one of the hot technologies in the field of automated driving, and will be an important milestone on the road to mass-produced autonomous driving. As a complete unmanned parking system, an AVP system drives the car at low speed, or parks it, in limited areas such as parking lots and surrounding roads. As a functional extension of parking assistance, it is also one of the earliest fully automated driving functions to be commercialized.
While the vehicle is running, incidental factors such as road conditions and weather frequently cause dirt or rainwater to occlude the lens, which is destructive to the normal operation of the AVP system. It is therefore necessary, during driving, to monitor the camera image and determine whether the imaging is reliable.
Disclosure of Invention
In order to solve the above and other potential technical problems, the invention provides a method, system, terminal and storage medium for detecting lens attachments. They serve to keep the assistance system operating normally and protect the safety of the user's vehicle: lenses occluded by water drops of different forms, such as condensed droplets and blurry water stains, or occluded and contaminated by dark stains, light spots, refracted light and the like, are detected and an alarm is raised in time.
A method for detecting lens attachments comprises the following steps:
S01, image segmentation: segmenting suspected regions from the input image;
S02, feature extraction: extracting the blurred contour regions caused by attachments, statistically computing one or more evaluation indexes within each contour, and judging from the combined index results whether the region is an attachment;
S03, early-warning judgment: accumulating marks for the judged regions, and triggering an early warning for results whose accumulated count exceeds the alarm threshold.
Further, in step S01, the image segmentation may be implemented with a suspected-region segmentation algorithm or with a deep learning method.
Further, in step S01, image segmentation with the suspected-region segmentation algorithm comprises the following steps:
S011: downsampling the image;
S012: extracting a blur difference map;
S013: superimposing multiple maps;
S014: segmenting the suspected-region image with one or more of binarization, filtering, morphology and thresholding.
Further, in step S011, the downsampling operation is expressed as follows: assume the original captured image has N x M pixels and the downsampling coefficient is k; downsampling takes one point every k pixels along each row and each column of the original image to form the downsampled image, which reduces the computation of subsequent image processing and keeps the processing real-time.
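By way of illustration only, a minimal sketch of this downsampling in Python with NumPy (the function name and the example frame size are assumptions, not part of the patent):

```python
import numpy as np

def downsample(img: np.ndarray, k: int) -> np.ndarray:
    # Keep one pixel every k rows and every k columns of the original
    # N x M capture, reducing the computation of the later stages.
    return img[::k, ::k]

# e.g. a 720 x 1280 frame with k = 4 becomes 180 x 320
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
small = downsample(frame, 4)
```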
Further, in step S012, extraction of the blur difference map is expressed as follows: blur the captured image with a filtering operator, subtract the blurred image from the original image, and take the absolute value to obtain the blur difference map.
Let x_src be the original image and x_blur the image after blur smoothing; the current blur difference map is defined as Δx = |x_src − x_blur|. The rationale for extracting the blur difference map is that image regions already blurred by attached rainwater are insensitive to a further smoothing filter: they change less than attachment-free regions, which makes the rain-attached areas distinguishable.
Here Δx is the blur difference map of the current frame, x_src is the original current frame, and x_blur is the current frame after filtering and smoothing.
The blurring can use one or more of Gaussian, median and mean filtering for smoothing.
The relationship between the Gaussian, median and mean filters and the filter kernel size is shown in Table 1 below:
TABLE 1
Preferably, as Table 1 shows, at the current image size the mean filter performs best with a kernel size of 5.
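A minimal sketch of the blur difference extraction with OpenCV follows; the 5 x 5 mean (box) filter reflects the preferred setting above, Gaussian or median filtering could be substituted, and grayscale input is assumed:

```python
import cv2

def blur_difference(x_src, ksize=5):
    # Smooth the capture with a mean filter; per Table 1 a 5 x 5 kernel
    # performs best at the current image size.
    x_blur = cv2.blur(x_src, (ksize, ksize))
    # deltax = |x_src - x_blur|: regions already blurred by attachments are
    # insensitive to further smoothing, so they change less than the rest.
    return cv2.absdiff(x_src, x_blur)
```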
Further, the multi-map superposition in step S013 is expressed as follows: accumulate the blur difference maps obtained in S012 over n frames: x_accum = Δx_k + Δx_{k+1} + … + Δx_{k+n},
where x_accum is the fused feature map obtained by multi-frame accumulation, being the accumulation result from frame k to frame k+n; Δx_k is the blur difference map at frame k and Δx_{k+n} the blur difference map at frame k+n.
The purpose of the accumulated fused feature map is that the shape and position of rainwater change little over a short time, so accumulating consecutive blur difference maps enhances the contrast between the rain-blurred regions and the background and highlights the rain-attached regions in the image.
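The n-frame accumulation can be sketched as follows (a float accumulator is assumed, to avoid 8-bit overflow):

```python
import numpy as np

def accumulate_diffs(diff_maps):
    # x_accum = deltax_k + deltax_(k+1) + ... + deltax_(k+n): summing
    # consecutive blur difference maps raises the contrast between
    # attachment-blurred regions, which barely move between frames,
    # and the changing background.
    acc = np.zeros_like(diff_maps[0], dtype=np.float32)
    for d in diff_maps:
        acc += d
    return acc
```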
Further, the binarization, neighborhood filtering and morphological processing of step S014 are expressed as follows:
binarization of the multi-frame-accumulated fused feature map converts the gray-scale map into a binary map with an automatic thresholding algorithm, partitioning the image into regions of interest with suspected attachments and attachment-free regions;
neighborhood filtering counts the pixel distribution within a neighborhood of the binary map and removes isolated noise points, reducing their influence on the attachment region of interest;
morphological filtering removes small noise regions with an erosion operation on the binary map, fills holes in the suspected regions with a dilation operation, and repairs the suspected regions.
Further, in step S01, image segmentation with the deep learning method comprises the following steps:
Preprocessing: downsample the image to M x N and convert the image data storage format to three-channel BGR.
Image segmentation: feed the input image data into a semantic-segmentation convolutional neural network and output the class of every pixel through forward propagation, yielding the pixel set of the suspected attachment region.
Further, when the segmentation result is obtained with deep learning, the network model used is a convolutional neural network for semantic segmentation; the feature-extraction backbone can be a network such as ResNet18, SqueezeNet 1.1 or MobileNet. The semantic-segmentation deconvolution part adopts the PSPNet architecture, fusing the last four feature maps of different scales from the backbone and finally outputting a segmentation result map the same size as the original image.
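The deep-learning path can be sketched in PyTorch as below; the trained PSPNet-style model, its input convention and the attachment class index are assumptions:

```python
import torch

def segment_pixels(model, bgr):
    # bgr: preprocessed M x N three-channel image as a NumPy uint8 array.
    x = torch.from_numpy(bgr).permute(2, 0, 1).float().div(255).unsqueeze(0)
    with torch.no_grad():
        logits = model(x)                 # (1, num_classes, M, N)
    labels = logits.argmax(dim=1)[0]      # per-pixel class via forward pass
    # pixels labelled with the (assumed) attachment class 1 form the
    # suspected attachment pixel set
    return torch.nonzero(labels == 1)
```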
Further, in feature-extraction step S02, contour extraction of the regions is expressed as follows:
contour extraction extracts the contours of the processed fused feature map to obtain the pixel sets of the different contours; feature-extraction calculations are then performed on each contour region to evaluate the sharpness of each set and the confidence that it is a rain-attached region.
Further, in feature-extraction step S02, the statistical calculation of the sharpness evaluation indexes within a contour is expressed as follows:
statistically compute, over each segmented contour, one or more of the image statistical features, shape and texture features, and sharpness evaluation features, obtaining the different kinds of evaluation index values:
Image statistical features: gray level (Gray), gradient (Grads), second-order gradient (Laplas), and mathematical statistics: mean, variance, maximum, minimum (mean/variance/max/min);
Shape and texture features: circularity and area (Round/Area), wavelet transform operator (Wavelet_F);
Sharpness evaluation features: variance (Variance), point sharpness (EVA), histogram (Hist), second-order gradient (Laplas);
each index over a region is computed as Value = F(area, vector).
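To illustrate, a sketch computing several of the listed indexes for one contour with OpenCV; the exact index set, and the function F combining them, are implementation choices:

```python
import cv2
import numpy as np

def contour_indexes(gray, contour):
    mask = np.zeros(gray.shape, np.uint8)
    cv2.drawContours(mask, [contour], -1, 255, -1)        # filled contour mask
    px = gray[mask == 255]                                # pixels inside contour
    grad = cv2.Sobel(gray, cv2.CV_32F, 1, 0)[mask == 255]     # Grads
    lap = cv2.Laplacian(gray, cv2.CV_32F)[mask == 255]        # Laplas
    area = cv2.contourArea(contour)
    perim = cv2.arcLength(contour, True)
    roundness = 4 * np.pi * area / (perim ** 2 + 1e-6)        # Round
    # one evaluation value per index over the region: Value = F(area, vector)
    return np.array([px.mean(), px.var(), px.max(), px.min(),
                     np.abs(grad).mean(), np.abs(lap).mean(),
                     roundness, area], dtype=np.float32)
```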
Further, when the statistical calculation over the segmented contours in step S02 yields the different kinds of evaluation index values, two evaluation modes are included: accumulated confidence-value evaluation, and classification of the contour region with a classifier.
Further, when the contour region is classified with a classifier, N evaluation index values are computed for a given contour region and combined into the feature vector of that region; the feature vectors of statistically gathered rain regions are used as training samples and fed into a classifier for training. The classifier may be a decision tree, an SVM, a BP network or the like, classifying whether a segmented contour region is a rain region.
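A sketch of the classifier route with an SVM (scikit-learn assumed; a decision tree or BP network would slot in the same way, and the training data here are placeholders):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: one feature vector per contour region,
# label 1 for rain-attachment regions and 0 otherwise.
X_train = np.random.rand(200, 8).astype(np.float32)
y_train = np.random.randint(0, 2, 200)

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# Judge a new contour region from its combined feature vector.
feature_vector = np.random.rand(1, 8).astype(np.float32)
is_rain_region = bool(clf.predict(feature_vector)[0])
```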
Further, when judging with the accumulated confidence-value evaluation mode, N evaluation indexes are set for a given contour region, each with a selection threshold expressing whether that evaluation index value allows the contour region to be considered a rain region;
each evaluation index is computed for the contour region, giving N evaluation index values; each value is compared with the selection threshold corresponding to its index, and if it exceeds that threshold a point is added to the confidence of the contour region; if it does not, the contour region is removed or no confidence point is given;
finally, the contour regions whose evaluation indexes exceed their selection thresholds are counted in the image, and their positions and area information are marked.
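The confidence accumulation can be sketched as below; the thresholds, the one-point-per-index score and the minimum hit count are illustrative assumptions:

```python
def confidence(index_values, thresholds, min_hits=3):
    # One confidence point per evaluation index that exceeds its
    # selection threshold; regions with too few hits are rejected.
    score = sum(1 for v, t in zip(index_values, thresholds) if v > t)
    return score, score >= min_hits

score, is_rain = confidence([0.8, 14.2, 0.6, 22.0],
                            [0.5, 10.0, 0.7, 18.0])   # 3 hits -> kept
```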
Further, the accumulated marking of judged regions in step S03 is performed as follows: the detection frame is gridded into M x N cells; the output results accumulated over several frames are mapped to the corresponding grid cells, the number of attachments per cell is counted, and a quantitative occlusion status is given.
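A sketch of this grid accumulation; the grid size, the cell mapping and the alarm threshold are assumptions:

```python
import numpy as np

M, N = 8, 8                        # grid cells over the detection frame
ALARM_THRESHOLD = 30               # assumed accumulated-count threshold
grid = np.zeros((M, N), np.int32)

def mark(grid, centers, frame_h, frame_w):
    # Map each attachment contour center to its grid cell and count it;
    # counts accumulate over consecutive frames.
    for x, y in centers:
        i = min(int(y * M / frame_h), M - 1)
        j = min(int(x * N / frame_w), N - 1)
        grid[i, j] += 1
    return grid

grid = mark(grid, [(320, 180)], frame_h=720, frame_w=1280)
trigger_alarm = bool((grid > ALARM_THRESHOLD).any())   # quantitative occlusion
```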
A detection system for lens attachments comprises the following modules:
an image segmentation module, for downsampling, blurring and differencing the captured image so as to distinguish, within the image, contour regions suspected of being blurred by attachments from contour regions that are not;
a feature extraction module, for extracting the contour regions suspected of being blurred by attachments, statistically computing several sharpness evaluation indexes within each contour, and judging from the combined index results whether the region is an attachment;
an early-warning judgment module, for accumulating marks on the regions judged by the feature extraction module and triggering the alarm for judged regions whose accumulated count exceeds the alarm threshold.
Further, the image segmentation module comprises a downsampling module, a blurring module, a superposition module and a post-processing module;
the downsampling module reduces the image pixel count and keeps image processing real-time;
the blurring module blurs the captured image with a filtering operator, subtracts the blurred image from the original and takes the absolute value to obtain the blur difference map, which serves to distinguish contour regions suspected of being blurred by attachments from contour regions that are not;
the superposition module fuses the features of several consecutive blur difference maps into a fused feature map: since the shape and position of attachments change little over a short time, accumulating consecutive blur difference maps enhances the contrast between attachment-blurred regions and the background and highlights the attachment contours in the image;
the post-processing module comprises a binarization module, a neighborhood filtering module and a morphology module, and serves to remove isolated noise points, remove small noise regions, fill holes by dilation and repair the region areas.
A mobile terminal may be a vehicle-mounted terminal or a mobile-phone terminal;
the vehicle-mounted terminal can execute the above method for detecting lens attachments or carry the above detection system;
the mobile-phone terminal can execute the above method for detecting lens attachments or carry the above detection system.
A computer storage medium stores a computer program written in accordance with the above method for detecting lens attachments.
As described above, the present invention has the following advantageous effects:
in the rainy day driving process, rainwater is attached to the surface of the camera in different modes, and the situations that stains shield the lens, the defects inside the lens and the like exist. In this case, not only the imaging effect of the camera is affected, but also the accuracy and effectiveness of the algorithm are reduced. In order to ensure the normal operation of the AVP system and the safety of a user vehicle body, the problem that the lens is blocked and polluted needs to be detected and timely alarmed.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows images of several of the problem situations described in the background art.
Fig. 2 is a schematic flow chart of the image segmentation step of the image according to the present invention.
FIG. 3 is a schematic diagram of the early warning step of the present invention.
FIG. 4 is a schematic diagram of the neighborhood filtering of the present invention.
Fig. 5 shows an enlarged view of a partial region contour of Fig. 2, and a comparison after hole filling.
Fig. 6 is a schematic diagram of fig. 2 after feature extraction.
Fig. 7 is a schematic diagram of the lens attachment detection result of the method of the present invention in a defocused case.
Fig. 8 is a schematic diagram showing the detection result of the lens attachment in the case of water mist by the method of the present invention.
Fig. 9 shows a schematic diagram of a detection result of a lens attachment in the case of having a stain according to the present invention.
Fig. 10 is a schematic diagram of the lens attachment detection result in a focused case.
FIG. 11 is a schematic diagram showing the result of detecting attachments on a lens in a large-area scene.
Fig. 12 shows, for the same frame, the original image, the feature-fusion image, the feature-extraction image and a comparison display when deep learning is used as the image segmentation method in the method of the present invention.
Fig. 13 shows, for the same frame in another embodiment of the method of the present invention, the original image, the feature-fusion image, the feature-extraction image and a comparison of the feature-extraction image when deep learning is used as the image segmentation method.
Detailed Description
Other advantages and effects of the present invention will readily become apparent to those skilled in the art from the disclosure of this specification, which describes embodiments of the invention by way of specific examples. The invention may also be practiced or applied through other, different embodiments, and the details of this specification may be modified or changed in various ways without departing from the spirit of the invention. It should be noted that, where they do not conflict, the following embodiments and the features within them may be combined with one another.
It should be understood that the structures, proportions, sizes and the like shown in the drawings are for illustration only, to aid reading and understanding by those skilled in the art, and are not intended to limit the conditions under which the invention can be practiced; modifications of structure, changes of proportion or adjustments of size that do not affect the effects and aims the invention can achieve still fall within its scope. Likewise, terms such as "upper", "lower", "left", "right", "middle" and "a" recited in this specification are for descriptive clarity only and do not limit the practicable scope of the invention; changes or adjustments of their relative relationships, without material alteration of the technical content, are also regarded as within that scope.
With reference to figures 1 to 13 of the drawings,
a method for detecting lens attached objects comprises the following steps:
s01: image segmentation: performing suspected region segmentation on the input image;
s02: feature extraction: extracting a fuzzy contour area caused by attachments, carrying out statistical calculation on various definition evaluation indexes in the contour, and judging whether the area is the attachment according to various index results;
s03: early warning judgment: and (3) performing accumulated marking on the judging area, and performing early warning triggering judgment on the processing result of which the accumulated times exceed the alarm threshold.
In a preferred embodiment, in step S01, the image segmentation may be implemented with a suspected-region segmentation algorithm or with a deep learning method.
In a preferred embodiment, in step S01, image segmentation with the suspected-region segmentation algorithm comprises the following steps:
S011: downsampling the image;
S012: extracting a blur difference map;
S013: superimposing multiple maps;
S014: segmenting the suspected-region image with one or more of binarization, filtering, morphology and thresholding.
As a preferred embodiment, in step S011, the downsampling operation is expressed as follows: assume the original captured image has N x M pixels and the downsampling coefficient is k; downsampling takes one point every k pixels along each row and each column of the original image to form the downsampled image, which reduces the computation of subsequent image processing and keeps the processing real-time.
As a preferred embodiment, in step S012, extraction of the blur difference map is expressed as follows: blur the captured image with a filtering operator, subtract the blurred image from the original image, and take the absolute value to obtain the blur difference map.
Let x_src be the original image and x_blur the image after blur smoothing; the current blur difference map is defined as Δx = |x_src − x_blur|. The rationale for extracting the blur difference map is that image regions already blurred by attached rainwater are insensitive to a further smoothing filter: they change less than attachment-free regions, which makes the rain-attached areas distinguishable.
Here Δx is the blur difference map of the current frame, x_src is the original current frame, and x_blur is the current frame after filtering and smoothing.
The blurring can use one or more of Gaussian, median and mean filtering for smoothing.
The relationship between the Gaussian, median and mean filters and the filter kernel size is shown in Table 1 below:
TABLE 1
Preferably, as Table 1 shows, at the current image size the mean filter performs best with a kernel size of 5.
As a preferred embodiment, the multi-map superposition in step S013 is expressed as follows: accumulate the blur difference maps obtained in S012 over n frames: x_accum = Δx_k + Δx_{k+1} + … + Δx_{k+n},
where x_accum is the fused feature map obtained by multi-frame accumulation, being the accumulation result from frame k to frame k+n; Δx_k is the blur difference map at frame k and Δx_{k+n} the blur difference map at frame k+n.
The purpose of the accumulated fused feature map is that the shape and position of rainwater change little over a short time, so accumulating consecutive blur difference maps enhances the contrast between the rain-blurred regions and the background and highlights the rain-attached regions in the image.
As a preferred embodiment, the binarization, neighborhood filtering and morphological processing of step S014 are expressed as follows:
binarization of the multi-frame-accumulated fused feature map converts the gray-scale map into a binary map with an automatic thresholding algorithm, partitioning the image into the rain-attached region of interest and the rain-free region; neighborhood filtering counts the pixel distribution within a neighborhood of the binary map and removes isolated noise points; morphological filtering removes small noise regions with an erosion operation on the binary map, fills holes with a dilation operation, and repairs the region area.
As a preferred embodiment, in step S01, image segmentation with the deep learning method comprises the following steps:
Preprocessing: downsample the image to M x N and convert the image data storage format to 3-channel BGR.
Image segmentation: feed the input image data into a semantic-segmentation convolutional neural network and output the class of every pixel through forward propagation, yielding the pixel set of the suspected rain region.
As a preferred embodiment, when the segmentation result is obtained with deep learning, the network model used is a convolutional neural network for semantic segmentation; the feature-extraction backbone can be a network such as ResNet18, SqueezeNet 1.1 or MobileNet. The semantic-segmentation deconvolution part adopts the PSPNet architecture, fusing the last four feature maps of different scales from the backbone and finally outputting a segmentation result map the same size as the original image.
As a preferred embodiment, in feature-extraction step S02, contour extraction of the regions is expressed as follows:
contour extraction extracts the contours of the processed fused feature map to obtain the pixel sets of the different contours; feature-extraction calculations are then performed on each contour region to evaluate the sharpness of each set and the confidence that it is a rain-attached region.
As a preferred embodiment, in feature-extraction step S02, the statistical calculation of the various sharpness evaluation indexes within a contour is expressed as follows:
statistically compute, over each segmented contour, one or more of the image statistical features, shape and texture features, and sharpness evaluation features, obtaining the different kinds of evaluation index values:
Image statistical features: gray level (Gray), gradient (Grads), second-order gradient (Laplas), and mathematical statistics: mean, variance, maximum, minimum (mean/variance/max/min);
Shape and texture features: circularity and area (Round/Area), wavelet transform operator (Wavelet_F);
Sharpness evaluation features: variance (Variance), point sharpness (EVA), histogram (Hist), second-order gradient (Laplas);
each index over a region is computed as Value = F(area, vector).
As a preferred embodiment, when the statistical calculation over the segmented contours in step S02 yields the different kinds of evaluation index values, two evaluation modes are included: accumulated confidence-value evaluation, and classification of the contour region with a classifier.
As a preferred embodiment, when the contour region is classified with a classifier, N evaluation index values are computed for a given contour region and combined into the feature vector of that region; the feature vectors of statistically gathered rain regions are used as training samples and fed into a classifier for training. The classifier may be a decision tree, an SVM, a BP network or the like, classifying whether a segmented contour region is a rain region.
As a preferred embodiment, when judging with the accumulated confidence-value evaluation mode, N evaluation indexes are set for a given contour region, each with a selection threshold expressing whether that evaluation index value allows the contour region to be considered a rain region;
each evaluation index is computed for the contour region, giving N evaluation index values; each value is compared with the selection threshold corresponding to its index, and if it exceeds that threshold a point is added to the confidence of the contour region; if it does not, the contour region is removed or no confidence point is given;
finally, the contour regions whose evaluation indexes exceed their selection thresholds are counted in the image, and their positions and area information are marked.
As a preferred embodiment, the accumulated marking of judged regions in step S03 is performed as follows: the detection frame is gridded into M x N cells; the output results accumulated over several frames are mapped to the corresponding grid cells, the number of attachments per cell is counted, and a quantitative occlusion status is given.
A detection system for lens attachments comprises the following modules:
an image segmentation module, for downsampling, blurring and differencing the captured image so as to distinguish, within the image, contour regions suspected of being blurred by attachments from contour regions that are not;
a feature extraction module, for extracting the contour regions suspected of being blurred by attachments, statistically computing several sharpness evaluation indexes within each contour, and judging from the combined index results whether the region is an attachment;
an early-warning judgment module, for accumulating marks on the regions judged by the feature extraction module and triggering the alarm for judged regions whose accumulated count exceeds the alarm threshold.
As a preferred embodiment, the image segmentation module comprises a downsampling module, a blurring module, a superposition module and a post-processing module;
the downsampling module reduces the image pixel count and keeps image processing real-time;
the blurring module blurs the captured image with a filtering operator, subtracts the blurred image from the original and takes the absolute value to obtain the blur difference map, which serves to distinguish contour regions suspected of being blurred by attachments from contour regions that are not;
the superposition module fuses the features of several consecutive blur difference maps into a fused feature map: since the shape and position of attachments change little over a short time, accumulating consecutive blur difference maps enhances the contrast between attachment-blurred regions and the background and highlights the attachment contours in the image;
the post-processing module comprises a binarization module, a neighborhood filtering module and a morphology module, and serves to remove isolated noise points, remove small noise regions, fill holes by dilation and repair the region areas.
As a preferred embodiment, the technical parameters of the lens attachment detection system are shown in Table 2:
TABLE 2
As a preferred embodiment, the deployment of the lens attachment detection system requires:
the detection system can be configured to run independently in the background, or to be triggered in cooperation with other algorithms, using a multi-frame interval detection method. The detection inputs include the original images of the four camera channels and vehicle-body CAN signals: vehicle speed and ambient brightness (bright). When the vehicle body moves, the detection system is triggered to examine the four camera images.
As a preferred embodiment, the detection and alarm performance of the lens attachment detection system requires:
(1) Application range: in indoor and outdoor environments and under different road conditions, the system can detect raindrops and stains of different forms, and has a certain capability to detect water mist and severe lens defects;
(2) Stability: unaffected by weather, environmental changes and similar factors, with good reliability;
(3) The algorithm's running time and occupied resources meet the requirements.
As a preferred embodiment, specific application scenarios and special scenarios of the lens attachment detection system are shown in Table 3 below:
TABLE 3
A mobile terminal may be a vehicle-mounted terminal or a mobile-phone terminal;
the vehicle-mounted terminal can execute the above method for detecting lens attachments or carry the above detection system;
the mobile-phone terminal can execute the above method for detecting lens attachments or carry the above detection system.
A computer storage medium stores a computer program written in accordance with the above method for detecting lens attachments.
As a preferred embodiment, this embodiment further provides a terminal device capable of executing a program, such as a smartphone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server or rack-mount server (a standalone server, or a cluster of several servers). The terminal device of this embodiment at least includes, but is not limited to, a memory and a processor that can be communicatively coupled to each other via a system bus. It should be noted that not all of the illustrated components need be implemented; alternative implementations of the lens attachment detection method may include more or fewer components.
As a preferred embodiment, the memory (i.e., the readable storage medium) includes flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk and the like. In some embodiments, the memory may be an internal storage unit of a computer device, such as its hard disk or memory. In other embodiments, the memory may also be an external storage device of the computer device, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card or Flash Card provided on the computer device. Of course, the memory may also include both the internal storage unit of the computer device and its external storage device. In this embodiment, the memory is typically used to store the operating system and the various application software installed on the computer device, such as the program code of the lens attachment detection method of the embodiment; it can also temporarily store various types of data that have been output or are to be output.
This embodiment also provides a computer-readable storage medium, such as a flash memory, hard disk, multimedia card, card memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server or App application store, on which a computer program is stored that performs the corresponding function when executed by a processor. The computer-readable storage medium of this embodiment stores the program of the lens attachment detection method, which, when executed by the processor, implements the lens attachment detection method of the method embodiment.
The above embodiments merely illustrate the principles of the present invention and its effects; they are not intended to limit the invention. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the invention. Accordingly, all equivalent modifications and variations accomplished by persons of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the invention shall still be covered by the claims of the invention.

Claims (8)

1. A method for detecting lens attachments, characterized by comprising the following steps:
S01, image segmentation: segmenting suspected attachment regions from the input image;
S02, feature extraction: extracting contour regions within the suspected attachment regions, statistically computing one or more evaluation indexes of each contour region, and judging from the combined index results whether the contour region is an attachment;
S03, early-warning judgment: accumulating marks on the contour regions determined to be attachments, and triggering an early warning for results whose accumulated count exceeds the alarm threshold;
wherein segmenting suspected attachment regions from the input image comprises adopting a suspected-region segmentation algorithm, the suspected-region segmentation algorithm comprising:
S011: downsampling the image;
S012: extracting a blur difference map;
S013: superimposing multiple maps;
S014: segmenting the suspected-region image with one or more of binarization, filtering, morphology and thresholding;
wherein extracting the blur difference map comprises: blurring the captured image with a filtering operator, subtracting the blurred image from the original image, and taking the absolute value to obtain the blur difference map;
the current blur difference map is defined as Δx = |x_src − x_blur|, where Δx is the blur difference map of the current frame, x_src is the original current frame, and x_blur is the current frame after blurring;
wherein superimposing multiple maps comprises: accumulating the blur difference maps over n frames, x_accum = Δx_k + Δx_{k+1} + … + Δx_{k+n}, where x_accum is the fused feature map obtained by multi-frame accumulation, being the accumulation result from frame k to frame k+n, Δx_k is the blur difference map at frame k, and Δx_{k+n} is the blur difference map at frame k+n.
2. The method for detecting lens attachments according to claim 1, wherein the binarization, neighborhood filtering and morphological processing of step S014 are expressed as follows:
binarization of the multi-frame-accumulated fused feature map converts the gray-scale map into a binary map with an automatic thresholding algorithm, partitioning the image into the suspected attachment region and the attachment-free region;
neighborhood filtering counts the pixel distribution within a neighborhood of the binary map and removes isolated noise points, reducing their influence on the suspected attachment region;
morphological filtering removes small noise regions with an erosion operation on the binary map, fills holes present in the suspected attachment region with a dilation operation, and repairs the area of the suspected attachment region.
3. The method for detecting lens attachments according to claim 1, wherein in feature-extraction step S02 the statistical calculation of several sharpness evaluation indexes within the contour is expressed as:
statistically computing, over the segmented contours, one or more of image statistical features, shape and texture features, and sharpness evaluation features to obtain the different kinds of evaluation index values;
wherein the image statistical features are: gray level, gradient, second-order gradient, mean, variance, maximum and minimum;
the shape and texture features are: circularity and area, and wavelet transform operators;
the sharpness evaluation features are: variance, point sharpness, histogram and second-order gradient.
4. The method for detecting lens attachments according to claim 3, wherein when different kinds of evaluation index values are obtained by statistical calculation over the segmented contour regions in step S02, two evaluation modes are included: accumulated confidence-value evaluation, or classification of the contour region with a classifier.
5. The method for detecting lens attachments according to claim 4, wherein
when judging with the accumulated confidence-value evaluation mode, N evaluation indexes are set for a given contour region, each with a selection threshold expressing whether that evaluation index value allows the contour region to be considered a rain region;
each evaluation index is computed for the contour region, giving N evaluation index values; each value is compared with the selection threshold corresponding to its index, and if it exceeds that threshold a point is added to the confidence of the contour region; if it does not, the contour region is removed or no confidence point is given;
finally, the contour regions whose evaluation indexes exceed their selection thresholds are counted in the image, and their positions and area information are marked.
6. The method for detecting lens attachments according to claim 4, wherein
when the contour region is classified with a classifier, N evaluation index values are computed for a given contour region and combined into the feature vector of that region; the feature vectors of statistically gathered rain regions are used as training samples and fed into a classifier for training;
the classifier comprises a decision tree, an SVM or a BP network, and classifies whether a segmented contour region is a rain region.
7. A mobile terminal, characterized in that: the mobile terminal comprises a vehicle-mounted terminal or a mobile-phone terminal, which executes the method for detecting lens attachments according to any one of claims 1 to 6.
8. A computer storage medium, characterized by storing a computer program written in accordance with the method for detecting lens attachments according to any one of claims 1 to 6.
CN201910679991.3A 2019-07-26 2019-07-26 Method, system, terminal and storage medium for detecting lens attached object Active CN110544211B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910679991.3A CN110544211B (en) 2019-07-26 2019-07-26 Method, system, terminal and storage medium for detecting lens attached object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910679991.3A CN110544211B (en) 2019-07-26 2019-07-26 Method, system, terminal and storage medium for detecting lens attached object

Publications (2)

Publication Number Publication Date
CN110544211A CN110544211A (en) 2019-12-06
CN110544211B true CN110544211B (en) 2024-02-09

Family

ID=68709857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910679991.3A Active CN110544211B (en) 2019-07-26 2019-07-26 Method, system, terminal and storage medium for detecting lens attached object

Country Status (1)

Country Link
CN (1) CN110544211B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532876A * 2019-07-26 2019-12-03 纵目科技(上海)股份有限公司 Detection method, system, terminal and storage medium for night-mode lens attachments
CN111091097A (en) * 2019-12-20 2020-05-01 ***通信集团江苏有限公司 Method, device, equipment and storage medium for identifying remnants
CN111325715A (en) * 2020-01-21 2020-06-23 上海悦易网络信息技术有限公司 Camera color spot detection method and device
CN111932596B (en) * 2020-09-27 2021-01-22 深圳佑驾创新科技有限公司 Method, device and equipment for detecting camera occlusion area and storage medium
CN112288691A (en) * 2020-10-16 2021-01-29 国电大渡河枕头坝发电有限公司 Method for detecting water drops in hydraulic power plant based on image processing
CN112348784A (en) * 2020-10-28 2021-02-09 北京市商汤科技开发有限公司 Method, device and equipment for detecting state of camera lens and storage medium
WO2022198508A1 (en) * 2021-03-24 2022-09-29 深圳市大疆创新科技有限公司 Lens abnormality prompt method and apparatus, movable platform, and readable storage medium
CN114170226B (en) * 2022-01-24 2022-08-19 谱为科技(常州)有限公司 Linen detection method and device based on image enhancement and convolutional neural network
CN114589160B (en) * 2022-01-25 2023-05-16 深圳大方智能科技有限公司 Camera protection method for indoor construction
CN116071657B (en) * 2023-03-07 2023-07-25 青岛旭华建设集团有限公司 Intelligent early warning system for building construction video monitoring big data

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07262384A (en) * 1994-03-23 1995-10-13 Nippon Telegr & Teleph Corp <Ntt> Method and device for dividing image area
CN102111532A (en) * 2010-05-27 2011-06-29 周渝斌 Camera lens occlusion detecting system and method
WO2014003782A1 (en) * 2012-06-29 2014-01-03 Analogic Corporation Automatic occlusion region identification using radiation imaging modality
CN104601965A (en) * 2015-02-06 2015-05-06 巫立斌 Camera shielding detection method
CN105828068A (en) * 2016-05-06 2016-08-03 北京奇虎科技有限公司 Method and device for carrying out occlusion detection on camera and terminal device
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN109118498A (en) * 2018-08-22 2019-01-01 科大讯飞股份有限公司 A kind of camera head stain detection method, device, equipment and storage medium
JP2019061505A (en) * 2017-09-27 2019-04-18 株式会社デンソー Information processing system, control system, and learning method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8818109B2 (en) * 2006-07-04 2014-08-26 Hewlett-Packard Development Company, L.P. Context-aware image processing
US10678257B2 (en) * 2017-09-28 2020-06-09 Nec Corporation Generating occlusion-aware bird eye view representations of complex road scenes

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07262384A (en) * 1994-03-23 1995-10-13 Nippon Telegr & Teleph Corp <Ntt> Method and device for dividing image area
CN102111532A (en) * 2010-05-27 2011-06-29 周渝斌 Camera lens occlusion detecting system and method
WO2014003782A1 (en) * 2012-06-29 2014-01-03 Analogic Corporation Automatic occlusion region identification using radiation imaging modality
CN104601965A (en) * 2015-02-06 2015-05-06 巫立斌 Camera shielding detection method
CN105828068A (en) * 2016-05-06 2016-08-03 北京奇虎科技有限公司 Method and device for carrying out occlusion detection on camera and terminal device
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
JP2019061505A (en) * 2017-09-27 2019-04-18 株式会社デンソー Information processing system, control system, and learning method
CN109118498A (en) * 2018-08-22 2019-01-01 科大讯飞股份有限公司 A kind of camera head stain detection method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于模糊综合评判的遥感图像变化检测方法 (Remote sensing image change detection method based on fuzzy comprehensive evaluation); 全吉成, 刘一超, 薛峰; 现代电子技术 (Modern Electronics Technique), No. 08; full text *

Also Published As

Publication number Publication date
CN110544211A (en) 2019-12-06

Similar Documents

Publication Publication Date Title
CN110544211B (en) Method, system, terminal and storage medium for detecting lens attached object
CN110532875B (en) Night mode lens attachment detection system, terminal and storage medium
Wu et al. Lane-mark extraction for automobiles under complex conditions
US8797417B2 (en) Image restoration method in computer vision system, including method and apparatus for identifying raindrops on a windshield
TWI607901B (en) Image inpainting system area and method using the same
Negru et al. Image based fog detection and visibility estimation for driving assistance systems
CN111860120B (en) Automatic shielding detection method and device for vehicle-mounted camera
CN110532876A Detection method, system, terminal and storage medium for night-mode lens attachments
CN112446246B (en) Image occlusion detection method and vehicle-mounted terminal
CN109359593B (en) Rain and snow environment picture fuzzy monitoring and early warning method based on image local grid
Niksaz Automatic traffic estimation using image processing
CN110276318A (en) Nighttime road rains recognition methods, device, computer equipment and storage medium
Ghahremannezhad et al. Automatic road detection in traffic videos
Helala et al. Road boundary detection in challenging scenarios
FAN et al. Robust lane detection and tracking based on machine vision
CN110544232A (en) detection system, terminal and storage medium for lens attached object
CN110705553A (en) Scratch detection method suitable for vehicle distant view image
JP2020109542A (en) Deposition substance detection device and deposition substance detection method
JP7264428B2 (en) Road sign recognition device and its program
EP3392800A1 (en) Device for determining a weather state
Hsieh et al. A real-time mobile vehicle license plate detection and recognition for vehicle monitoring and management
Fung et al. Towards detection of moving cast shadows for visual traffic surveillance
CN112052768A (en) Urban illegal parking detection method and device based on unmanned aerial vehicle and storage medium
Hamzeh et al. Effect of adherent rain on vision-based object detection algorithms
US10650250B2 (en) Determination of low image quality of a vehicle camera caused by heavy rain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant