CN111369491B - Image stain detection method, device, system and storage medium


Info

Publication number
CN111369491B
CN111369491B
Authority
CN
China
Prior art keywords
map
image
luminance
brightness
segmentation map
Prior art date
Legal status
Active
Application number
CN201811590124.4A
Other languages
Chinese (zh)
Other versions
CN111369491A (en)
Inventor
吴高德
马江敏
黄宇
廖海龙
陈婉婷
Current Assignee
Ningbo Sunny Opotech Co Ltd
Original Assignee
Ningbo Sunny Opotech Co Ltd
Priority date
Filing date
Publication date
Application filed by Ningbo Sunny Opotech Co Ltd filed Critical Ningbo Sunny Opotech Co Ltd
Priority to CN201811590124.4A
Publication of CN111369491A
Application granted
Publication of CN111369491B
Legal status: Active
Anticipated expiration

Classifications

    • G06T7/0008 Industrial image inspection checking presence/absence (G06T7/00 Image analysis)
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 Denoising; Smoothing
    • G06T7/11 Region-based segmentation (G06T7/10 Segmentation; Edge detection)
    • G06T2207/10004 Still image; Photographic image (G06T2207/10 Image acquisition modality)


Abstract

The application provides an image stain detection method, device, system, and storage medium. The image stain detection method comprises the following steps: extracting a luminance map from the acquired image; obtaining a first segmentation map by homomorphic filtering of the luminance map; obtaining a second segmentation map by performing surface fitting on the luminance map; combining the first segmentation map and the second segmentation map into a final segmentation map; and identifying stain information based on the final segmentation map. The scheme provided by the application enables efficient and accurate stain detection.

Description

Image stain detection method, device, system and storage medium
Technical Field
The present application relates to the field of measurement testing, and in particular, to an image stain detection method, apparatus, system, and storage medium.
Background
There is a need in the industry to inspect images displayed on screens. Such a need may arise from a variety of objectives. For example, adverse factors in the screen production and assembly process may cause defects such as stains to appear on the screen display. As another example, imperfections in a camera module may result in defects in the captured image itself; in this case, the image displayed on the screen also reproduces such defects as various shadows. Therefore, a fast and stable image stain detection scheme is needed.
Disclosure of Invention
The application provides an image stain detection scheme.
An aspect of the present application provides an image stain detection method. The image stain detection method comprises the following steps: extracting a luminance map from the acquired image; obtaining a first segmentation map by homomorphic filtering of the luminance map; obtaining a second segmentation map by performing surface fitting on the luminance map; combining the first segmentation map and the second segmentation map into a final segmentation map; and identifying stain information based on the final segmentation map.
According to an embodiment of the present application, extracting a luminance map from an acquired image includes: extracting a luminance component map from the image, the luminance component map having the same size as the image; and reducing the dimension of the luminance component map to obtain a luminance map.
According to an embodiment of the present application, reducing the dimension of the luminance component map to obtain the luminance map includes: generating a blank image; determining a mapping position corresponding to each pixel point of the blank image in the brightness component diagram; acquiring image data of the mapping position based on pixel values of the luminance component map; and converting the blank image into a luminance map by assigning the image data to the blank image.
According to an embodiment of the present application, obtaining the first segmentation map by homomorphically filtering the luminance map includes: decomposing the luminance value of each pixel in the luminance map into an illumination component and a reflection component; applying high-pass filtering to the reflection component to obtain an enhanced reflection component; combining the illumination component and the enhanced reflection component to generate an enhanced luminance map; and binarizing the enhanced luminance map to obtain the first segmentation map.
According to an embodiment of the present application, obtaining the second segmentation map by performing surface fitting on the luminance map includes: obtaining the second segmentation map by performing a B-spline fitting on the luminance map.
According to an embodiment of the present application, obtaining the second segmentation map by performing B-spline fitting on the luminance map includes: selecting a first number of control points in the horizontal direction to perform a first B-spline fitting; selecting a second number of control points in the vertical direction to perform a second B-spline fitting; differencing the luminance surface map obtained through the first B-spline fitting and the second B-spline fitting with the luminance map to obtain a differential luminance map; and binarizing the differential luminance map to obtain the second segmentation map.
According to an embodiment of the present application, merging the first segmentation map and the second segmentation map into a final segmentation map includes: respectively carrying out noise reduction on the first segmentation map and the second segmentation map; and combining the first segmentation map and the second segmentation map after noise reduction to obtain a final segmentation map.
According to an embodiment of the present application, identifying stain information based on the final segmentation map includes: identifying the location and size of each stain region in the final segmentation map by a run-length method.
Another aspect of the present application provides an image stain detection device, characterized in that the image stain detection device includes: a luminance extractor that extracts a luminance map from the acquired image; a first image divider that obtains a first segmentation map by homomorphically filtering the luminance map; a second image divider that obtains a second segmentation map by performing surface fitting on the luminance map; a combiner that combines the first segmentation map and the second segmentation map into a final segmentation map; and an identifier that identifies stain information based on the final segmentation map.
According to an embodiment of the present application, the luminance extractor includes: a luminance component extractor that extracts a luminance component map from the image, the luminance component map having the same size as the image; and a dimension reducer that reduces the dimension of the luminance component map to obtain the luminance map.
According to an embodiment of the present application, the dimension reducer includes: a generator that generates a blank image; a mapper that determines, for each pixel point of the blank image, a corresponding mapping position in the luminance component map; an interpolator that acquires image data of the mapping position based on pixel values of the luminance component map; and an assigner that converts the blank image into the luminance map by assigning the image data to the blank image.
According to an embodiment of the present application, the first image divider includes: a decomposer that decomposes the luminance value of each pixel in the luminance map into an illumination component and a reflection component; a filter that applies high-pass filtering to the reflection component to obtain an enhanced reflection component; an enhancer that combines the illumination component and the enhanced reflection component to generate an enhanced luminance map; and a first binarizer that binarizes the enhanced luminance map to obtain the first segmentation map.
According to the embodiment of the application, the second image divider acquires the second segmentation map by performing B-spline fitting on the luminance map.
According to an embodiment of the present application, the second image divider includes: a first B-spline fitter that selects a first number of control points in the horizontal direction to perform a first B-spline fitting; a second B-spline fitter that selects a second number of control points in the vertical direction to perform a second B-spline fitting; a differencer that differences the luminance surface map obtained through the first B-spline fitting and the second B-spline fitting with the luminance map to obtain a differential luminance map; and a second binarizer that binarizes the differential luminance map to obtain the second segmentation map.
According to an embodiment of the present application, the combiner includes: a noise reducer that denoises the first segmentation map and the second segmentation map respectively; and an adder that combines the denoised first segmentation map and second segmentation map to obtain the final segmentation map.
According to an embodiment of the present application, the identifier identifies the position and size of each stain region in the final segmentation map by a run-length method.
Another aspect of the present application provides an image stain detection system, comprising: a processor; and a memory coupled to the processor and storing machine-readable instructions executable by the processor to perform operations comprising: extracting a luminance map from the acquired image; obtaining a first segmentation map by homomorphic filtering of the luminance map; obtaining a second segmentation map by performing surface fitting on the luminance map; combining the first segmentation map and the second segmentation map into a final segmentation map; and identifying stain information based on the final segmentation map.
Another aspect of the present application provides a non-transitory machine-readable storage medium having machine-readable instructions stored thereon, wherein the machine-readable instructions are executable by a processor to: extract a luminance map from the acquired image; obtain a first segmentation map by homomorphic filtering of the luminance map; obtain a second segmentation map by performing surface fitting on the luminance map; combine the first segmentation map and the second segmentation map into a final segmentation map; and identify stain information based on the final segmentation map.
The technical scheme provided by the application enables efficient and accurate detection of image stains.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
fig. 1 is a flowchart showing an image stain detection method according to an embodiment of the present application;
FIG. 2 is a flow chart illustrating an image dimension reduction method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating bilinear interpolation according to an embodiment of the present application;
FIG. 4 is a flow chart illustrating the acquisition of a first segmentation map according to an embodiment of the present application;
FIG. 5 is a flow chart illustrating the acquisition of a second segmentation map according to an embodiment of the present application;
FIG. 6 is a block diagram illustrating an image stain detection device according to an embodiment of the present application; and
fig. 7 is a schematic diagram showing an image stain detection system according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the related technical concept and not restrictive of it. It should be further noted that, for convenience of description, only the portions related to the technical concept of the present application are shown in the drawings. It will be appreciated that ordinal terms such as "first" and "second", as used herein, unless otherwise indicated, are merely used to distinguish one element from another and do not indicate importance or priority. For example, the first segmentation map and the second segmentation map merely denote different segmentation maps.
In addition, embodiments and features of embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a flowchart illustrating an image stain detection method 1000 according to an embodiment of the present application.
In step S1010, a luminance map is extracted from the acquired image. The acquired image is characterized in terms of image data, which may take a variety of forms, such as the RGB format or the YCbCr format. For ease of description, the following description uses the YCbCr format. However, it will be readily appreciated by those skilled in the art that any suitable image data representation may be employed without departing from the inventive concept of the present application. Since stains tend to cause a drop in image brightness, performing subsequent processing based on the luminance component is advantageous for the stain recognition task. According to the present application, a luminance component may be extracted from the acquired image, and a luminance map may be generated therefrom for subsequent processing.
In step S1020, a first segmentation map is obtained by homomorphically filtering the luminance map. Because production environments are complex and stains arise from different causes, stains differ in degree and kind. Among image stains there are both deep and shallow stains. Shallow stains cause a smaller drop in image brightness and are therefore harder to detect, so improving the contrast of the image benefits stain detection. According to the application, the luminance map is homomorphically filtered in one parallel branch to improve its contrast. A first segmentation map, e.g., a binarized image, is then acquired based on the homomorphically filtered image.
In step S1030, a second segmentation map is acquired by performing surface fitting on the luminance map. For example, a fitted luminance surface may be obtained according to a certain rule. The luminance surface may be regarded as a three-dimensional surface with the row and column indices as horizontal coordinates and the luminance value as the vertical coordinate. A second segmentation map, such as a binarized image, may then be obtained based on the difference between the fitted luminance surface and the luminance map extracted in step S1010. The second segmentation map likewise contains enhanced stain information. According to an embodiment of the present application, steps S1020 and S1030 may be performed in parallel to improve detection efficiency, and the two steps need not be synchronized in time. In other words, steps S1020 and S1030 may be performed in two mutually asynchronous threads, as sketched below.
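A minimal sketch of such parallel dispatch, assuming the two branch functions (one per segmentation step) are supplied by the caller:

import numpy as np
from concurrent.futures import ThreadPoolExecutor

def segment_in_parallel(luminance_map: np.ndarray, branch_a, branch_b):
    # Run the two segmentation branches concurrently; branch_a and
    # branch_b are callables standing in for steps S1020 and S1030,
    # each taking the luminance map and returning a segmentation map.
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_a = pool.submit(branch_a, luminance_map)   # homomorphic branch
        fut_b = pool.submit(branch_b, luminance_map)   # surface-fitting branch
        return fut_a.result(), fut_b.result()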
In step S1040, the first segmentation map and the second segmentation map are combined into a final segmentation map. The two maps are obtained from different image enhancement principles, each after stain-information enhancement. However, owing to the limitations of their respective processing principles, each map alone carries a certain risk of missing stains. Combining the first segmentation map and the second segmentation map therefore improves detection efficiency and reduces the risk of missed detections.
Finally, in step S1050, the stain information is identified based on the final segmentation map. For example, the location and size information of the stain may be obtained based on the final segmentation map by various algorithms.
In general, the luminance image obtained directly by extracting the Y component in step S1010 has the same size as the original image. Stain identification at that size tends to consume significant computing resources, which increases equipment cost and reduces detection efficiency. The present application therefore proposes performing dimension reduction on the acquired image to save computing resources. According to an embodiment of the present application, step S1010 may include: extracting a luminance component map from the image, the luminance component map having the same size as the image; and reducing the dimension of the luminance component map to obtain the luminance map.
More specifically, the dimensionality reduction of the luminance component map to obtain the luminance map may include: generating a blank image; determining a mapping position corresponding to each pixel point of the blank image in the brightness component diagram; acquiring image data of the mapping position based on pixel values of the luminance component map; and converting the blank image into a luminance map by assigning the image data to the blank image.
Fig. 2 is a flowchart illustrating an image dimension reduction method 2000 according to an embodiment of the present application.
For example, the dimension-reduction coefficients may first be determined according to the size of the luminance component map (i.e., the size of the acquired original image). Then, a blank image may be generated in step S2010. The pixel value of each pixel of the blank image may be set to be blank.
In step S2020, a mapping position corresponding to each pixel point of the blank image in the luminance component map may be determined. The luminance component map and the blank image may have a positional mapping relationship according to the down-maintenance coefficient. For example, it is assumed that the luminance component map and the blank image each have a (0, 0) point as the origin of coordinates, and that (i, j) and (x, y) are coordinates of the pixel point of the blank image and the corresponding map point of the luminance component map, respectively. In this case, the coordinates (i, j) and coordinates (x, y) may satisfy the following relation:
x=i×n; and
y=y×M。
in the above formula, N and M are dimension reduction coefficients in the horizontal direction and the vertical direction, respectively.
However, it should be noted that since the images before and after dimension reduction are each a discrete pixel matrix, each pixel of the reduced image is not necessarily mapped to a specific pixel of the image before reduction. For example, a pixel of the reduced image may fall between pixels of the image before reduction. In other words, the mapping point in the luminance component map corresponding to a pixel point of the blank image does not necessarily coincide with a pixel position of the luminance component map. To handle this, in step S2030 the image data of the mapping position may be acquired based on the pixel values of the luminance component map, for example using bilinear interpolation.
The process of obtaining image data by bilinear interpolation is described below with reference to fig. 3.
Fig. 3 shows the pixel space of a luminance component map, in which point P(x, y) represents the mapping position corresponding to a certain pixel of the blank image. As shown, P does not coincide with any pixel point of the luminance component map. The pixels nearest to P are I11(x1, y1), I12(x1, y2), I21(x2, y1), and I22(x2, y2). In this case, the pixel value at P may be acquired based on the pixel values of these four pixel points.
First, the first linear interpolation may be performed in the x direction to obtain image data of R1 and R2. The image data may be represented as luminance values.
Then, a second linear interpolation may be performed in the y direction to obtain the image data of P. The order of interpolation in the x direction and in the y direction may be reversed without affecting the image data finally obtained for P. Such an interpolation calculation may be performed for the mapping position corresponding to each pixel point of the blank image, thereby obtaining all of the mapped image data.
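Written out, with I11, I12, I21, and I22 denoting the pixel values at the four neighboring points and assuming x1 ≤ x ≤ x2 and y1 ≤ y ≤ y2, the two interpolation stages are:
R1 = ((x2 − x) / (x2 − x1)) · I11 + ((x − x1) / (x2 − x1)) · I21;
R2 = ((x2 − x) / (x2 − x1)) · I12 + ((x − x1) / (x2 − x1)) · I22; and
P = ((y2 − y) / (y2 − y1)) · R1 + ((y − y1) / (y2 − y1)) · R2.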
In step S2040, the blank image is converted into a luminance map by assigning image data to the blank image. In this case, the assigned image is the luminance map obtained after the dimension reduction described above.
As described above, the shallow stain causes a smaller drop in image brightness and is therefore harder to detect. Therefore, improving the contrast of an image is beneficial for stain detection. Accordingly, a solution is presented herein that implements homomorphic filtering to enhance image contrast.
The specific operation of step S1020 described above is described below with reference to fig. 4.
In operation S4100, the luminance value of each pixel in the luminance map is decomposed into an illumination component and a reflection component. In general, the data of the luminance map may be expressed as the product of an illumination component and a reflection component. This product cannot be separated directly, but after a logarithmic operation the two components become additive and can be decomposed. The two components can then be transformed into the frequency domain by Fourier transformation. The illumination component varies relatively little and can be regarded as a low-frequency component, while the reflection component varies greatly and can be regarded as a high-frequency component. Homomorphic filtering allows the illumination and reflection components to be controlled separately. This control requires specifying a filter function that affects the low-frequency and high-frequency components of the Fourier transform in different, controllable ways. For example, the filter function may attenuate the contribution of the low frequencies (illumination component) while enhancing the contribution of the high frequencies (reflection component). The end result is simultaneous dynamic-range compression and contrast enhancement.
In operation S4200, high-pass filtering is applied to the reflection component to obtain an enhanced reflection component. For example, a 7×7 high-pass filter transfer function may be applied to the reflection component to enhance its characteristics, so that details in the shadow regions of the restored luminance map become more apparent. Then, in operation S4300, the illumination component and the enhanced reflection component are combined to generate an enhanced luminance map. For example, the two components in logarithmic form may be added, converted back from the frequency domain, and then exponentiated to recover luminance data. Operations S4200 and S4300 may be described by the following formulas:
conv_image = exp(H_hp(u, v) * log(Y_image + eps));
(the second formula, which derives out_image from conv_image, appears in the source only as an unrecoverable image)
In the above formulas, Y_image is the luminance image, H_hp(u, v) is the 7×7 high-pass filter function, eps is a small constant that avoids taking the logarithm of zero, conv_image is the convolved image, and out_image is the enhanced luminance map.
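A minimal sketch of this branch is given below. The patent names a 7×7 high-pass transfer function H_hp but does not give its coefficients, so the sketch realizes the same attenuate-low/boost-high behavior with a Gaussian low-pass split and illustrative gains gamma_l = 0.5 and gamma_h = 2.0 (both assumptions):

import numpy as np
import cv2

def homomorphic_enhance(y_map: np.ndarray, gamma_l: float = 0.5,
                        gamma_h: float = 2.0) -> np.ndarray:
    # Take the logarithm so the multiplicative illumination/reflection
    # components become additive, split them with a 7x7 Gaussian blur
    # (low frequencies ~ illumination, residual ~ reflection),
    # attenuate the former, boost the latter, and exponentiate back.
    eps = 1e-6
    log_img = np.log(y_map.astype(np.float32) + eps)
    illumination = cv2.GaussianBlur(log_img, (7, 7), 1.5)
    reflection = log_img - illumination
    enhanced = np.exp(gamma_l * illumination + gamma_h * reflection)
    # Stretch to 8-bit range for the subsequent block-wise binarization.
    enhanced = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX)
    return enhanced.astype(np.uint8)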
Finally, in operation S4400, the enhanced luminance map is binarized to obtain the first segmentation map. The binarization may be performed block by block. For example, the enhanced luminance map is divided into a plurality of image blocks, and each image block is binarized separately. Through binarization, foreground stains can easily be separated from the background, which improves detection accuracy. The binarized segmentation image is also compactly encoded, which facilitates subsequent data analysis and processing.
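A sketch of such block-wise binarization; Otsu's method is used here as an assumed per-block threshold rule (the patent does not fix one), and dark stains are mapped to foreground:

import numpy as np
import cv2

def binarize_in_blocks(img: np.ndarray, block: int = 64) -> np.ndarray:
    # Threshold each tile independently so that slow background
    # variation across the image does not mask shallow stains.
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = img[y:y + block, x:x + block]
            _, bw = cv2.threshold(tile, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
            out[y:y + block, x:x + block] = bw
    return out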
The present application takes another approach to obtain a segmentation map containing stain information, as described in step S1030 above. According to an embodiment of the present application, the surface fitting in step S1030 may include at least one of polynomial surface fitting, Gaussian surface fitting, least-squares surface fitting, and B-spline fitting.
The specific operation of step S1030 described above when B-spline fitting is employed is described below with reference to fig. 5.
In step S5100, a first number of control points is selected in the horizontal direction for a first B-spline fitting. For example, 30 points may be selected in the horizontal direction as control points, and B-spline fitting may be performed with these control points as nodes. At these control points, the first and second derivatives of the curve equation are continuous.
In step S5200, a second number of control points is selected in the vertical direction for a second B-spline fitting. Similarly, 30 points may be selected in the vertical direction as control points, and B-spline fitting may be performed with these control points as nodes. The following formula exemplarily shows such a B-spline fit:
S_i(t) = ((1 − t)^2 · p_{i−1} + (−2t^2 + 2t + 1) · p_i + t^2 · p_{i+1}) / 2, t ∈ [0, 1]
In the above formula, p_{i−1}, p_i, and p_{i+1} are control points, i is the control-point number, and S_i(t) is the point obtained after fitting.
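A sketch of this one-directional fitting, implementing the quadratic formula above on a single row or column (boundary segments are extrapolated slightly; boundary handling is an implementation choice):

import numpy as np

def quad_bspline_smooth(values: np.ndarray, num_ctrl: int = 30) -> np.ndarray:
    # Subsample the profile to num_ctrl control points, then evaluate
    # the uniform quadratic B-spline S_i(t) between consecutive control
    # points to rebuild a smooth version of the profile.
    n = len(values)
    idx = np.linspace(0, n - 1, num_ctrl).astype(int)
    p = values[idx].astype(np.float32)               # control points
    out = np.empty(n, dtype=np.float32)
    for k in range(n):
        u = k * (num_ctrl - 1) / (n - 1)             # control-point coordinate
        i = min(max(int(u), 1), num_ctrl - 2)        # clamp to a valid segment
        t = u - i
        out[k] = ((1 - t) ** 2 * p[i - 1]
                  + (-2 * t ** 2 + 2 * t + 1) * p[i]
                  + t ** 2 * p[i + 1]) / 2.0
    return out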
In step S5300, the luminance surface map obtained by the first B-spline fitting and the second B-spline fitting is differenced with the luminance map to obtain a differential luminance map. Specifically, the luminance surface map obtained by the two fittings attempts to restore a clean, stain-free luminance map. Differencing this fitted luminance surface map with the luminance map extracted directly from the acquired image strengthens the stain features, so the differential luminance map characterizes stains better.
Finally, in step S5400, the differential luminance map is binarized to obtain the second segmentation map. As with the first segmentation map, the binarization may be performed block by block: the differential luminance map is divided into a plurality of image blocks, and each image block is binarized separately, which separates foreground stains from the background and improves detection accuracy.
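Combining the two fitting directions with the differencing step, a sketch of the whole branch (reusing quad_bspline_smooth from the sketch above; the global threshold of 10 grey levels is an illustrative assumption in place of the block-wise binarization):

import numpy as np

def surface_fit_branch(y_map: np.ndarray, num_ctrl: int = 30,
                       thresh: float = 10.0) -> np.ndarray:
    # Smooth every row, then every column, to approximate a stain-free
    # luminance surface; stains show up as positive differences since
    # they are darker than the fitted surface.
    smooth = np.apply_along_axis(quad_bspline_smooth, 1,
                                 y_map.astype(np.float32), num_ctrl)
    smooth = np.apply_along_axis(quad_bspline_smooth, 0, smooth, num_ctrl)
    diff = smooth - y_map.astype(np.float32)         # differential luminance map
    return (diff > thresh).astype(np.uint8) * 255    # second segmentation map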
Both the first segmentation map obtained in step S1020 and the second segmentation map obtained in step S1030 contain some noise points. To improve the image segmentation effect, the first segmentation map and the second segmentation map may each be denoised, and the denoised maps may then be combined to obtain the final segmentation map. Any suitable noise-reduction algorithm may be employed.
For example, the first segmentation map obtained after homomorphic filtering contains more noise and may be denoised by Minkowski addition and subtraction (dilation and erosion) followed by median filtering; noise-reduction processing such as erosion, dilation, opening, closing, Gaussian filtering, or mean filtering may also be employed. As another example, the second segmentation map obtained after B-spline fitting has fewer noise points, and noise reduction such as median filtering may be adopted.
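A sketch of this per-branch noise reduction (kernel sizes are assumptions):

import numpy as np
import cv2

def denoise_maps(seg1: np.ndarray, seg2: np.ndarray):
    # The homomorphic-branch map is noisier: open it (Minkowski
    # subtraction then addition, i.e. erosion then dilation) before the
    # median filter; the B-spline-branch map only needs a median filter.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    seg1 = cv2.morphologyEx(seg1, cv2.MORPH_OPEN, kernel)
    seg1 = cv2.medianBlur(seg1, 3)
    seg2 = cv2.medianBlur(seg2, 3)
    return seg1, seg2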
According to the embodiment of the application, the segmentation maps with enhanced stain features are obtained through homomorphic filtering and B-spline fitting, which may run asynchronously. However, owing to the difference in processing principles, stains may still be missed in the segmentation map obtained by either process alone. In addition, performing stain recognition on the two segmentation maps separately makes the search inefficient. To improve detection efficiency, the application therefore combines the two segmentation maps, which is particularly beneficial for detecting shallow stains. Mathematically, such a combination may be an addition, but the operation is not limited thereto; for example, weights may be set for the first segmentation map and the second segmentation map respectively to perform a weighted combination.
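A sketch of the weighted merge; equal weights reproduce the plain additive combination, and the weights themselves are illustrative assumptions:

import numpy as np
import cv2

def merge_maps(seg1: np.ndarray, seg2: np.ndarray,
               w1: float = 0.5, w2: float = 0.5) -> np.ndarray:
    # Weighted sum of the two denoised segmentation maps, then
    # re-binarize so any pixel flagged by either branch survives.
    combined = cv2.addWeighted(seg1, w1, seg2, w2, 0)
    _, final = cv2.threshold(combined, 0, 255, cv2.THRESH_BINARY)
    return final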
According to an embodiment of the present application, identifying stain information based on the final segmentation map may include: identifying the location and size of each stain region in the final segmentation map by a run-length method. A stain region may occupy a plurality of pixels, so adjacent (e.g., interconnected) pixels with stain features may be identified as one stain region.
For example, the final segmentation map may be scanned line by line, marking each sequence of consecutive foreground pixels (e.g., pixels of value 1) in a line as a group and noting its start point, end point, and line number. Then, for each group in every row other than the first: if it does not overlap any group of the previous row, it is given a new label; if it overlaps exactly one group of the previous row, it is given that group's label; and if it overlaps two or more groups of the previous row, it is given the smallest of their labels, and those labels are recorded as equivalent pairs, indicating that they belong to the same region. The pixels of each row are scanned in this traversal manner, and every newly found discrete region is given a new label. Finally, the label of each group is written back into the final segmentation map. In this way, the location and size of each stain region can be identified.
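A sketch of this two-pass run-length labeling; label bookkeeping uses a small union-find table, and each resolved region is reported as a bounding box:

import numpy as np

def label_runs(binary: np.ndarray) -> dict:
    # Two-pass run-length labeling: collect runs of foreground pixels
    # per row, link each run to overlapping runs of the previous row
    # through an equivalence table, then report one bounding box per
    # connected stain region (x_max/y_max exclusive).
    parent = {}                              # label -> parent label

    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a

    next_label, prev_runs, all_runs = 0, [], []
    for r, row in enumerate(binary):
        runs, c, w = [], 0, len(row)
        while c < w:
            if row[c]:                       # start of a run
                start = c
                while c < w and row[c]:
                    c += 1
                hits = [lbl for s, e, lbl in prev_runs if s < c and e > start]
                if hits:                     # inherit smallest overlapping label
                    roots = {find(l) for l in hits}
                    lbl = min(roots)
                    for rt in roots:         # record equivalence pairs
                        parent[rt] = lbl
                else:                        # a newly discovered region
                    lbl = next_label
                    parent[lbl] = lbl
                    next_label += 1
                runs.append((start, c, lbl))
            else:
                c += 1
        all_runs.extend((r, s, e, lbl) for s, e, lbl in runs)
        prev_runs = runs
    regions = {}                             # second pass: resolve labels
    for r, s, e, lbl in all_runs:
        root = find(lbl)
        x0, y0, x1, y1 = regions.get(root, (s, r, e, r + 1))
        regions[root] = (min(x0, s), min(y0, r), max(x1, e), max(y1, r + 1))
    return regions

In production, cv2.connectedComponentsWithStats returns equivalent region statistics in a single call.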
Fig. 6 is a block diagram illustrating an image stain detection device 6000 according to an embodiment of the present application.
The image stain detection device 6000 includes: a luminance extractor 6100 that extracts a luminance map from the acquired image; a first image divider 6200 that obtains a first segmentation map by homomorphically filtering the luminance map; a second image divider 6300 that obtains a second segmentation map by performing surface fitting on the luminance map; a combiner 6400 that combines the first segmentation map and the second segmentation map into a final segmentation map; and an identifier 6500 that identifies stain information based on the final segmentation map.
The luminance extractor 6100 may include: a luminance component extractor that extracts a luminance component map from the image, the luminance component map having the same size as the image; and a dimension reducer that reduces the dimension of the luminance component map to obtain the luminance map.
The dimension reducer may include: a generator that generates a blank image; a mapper that determines, for each pixel point of the blank image, a corresponding mapping position in the luminance component map; an interpolator that acquires image data of the mapping position based on pixel values of the luminance component map; and an assigner that converts the blank image into the luminance map by assigning the image data to the blank image.
The first image divider 6200 may include: a decomposer that decomposes the luminance value of each pixel in the luminance map into an illumination component and a reflection component; a filter that applies high-pass filtering to the reflection component to obtain an enhanced reflection component; an enhancer that combines the illumination component and the enhanced reflection component to generate an enhanced luminance map; and a first binarizer that binarizes the enhanced luminance map to obtain the first segmentation map.
The second image divider 6300 may obtain the second segmentation map by performing B-spline fitting on the luminance map. The second image divider 6300 may include: a first B-spline fitter that selects a first number of control points in the horizontal direction to perform a first B-spline fitting; a second B-spline fitter that selects a second number of control points in the vertical direction to perform a second B-spline fitting; a differencer that differences the luminance surface map obtained through the first B-spline fitting and the second B-spline fitting with the luminance map to obtain a differential luminance map; and a second binarizer that binarizes the differential luminance map to obtain the second segmentation map.
The combiner 6400 may include: a noise reducer that denoises the first segmentation map and the second segmentation map respectively; and an adder that combines the denoised first segmentation map and second segmentation map to obtain the final segmentation map.
The identifier 6500 may identify the positions and sizes of connected stain regions in the final segmentation map by a run-length method.
The present application also provides a computer system, which may be, for example, a mobile terminal, a personal computer (PC), a tablet computer, or a server. Referring now to FIG. 7, a schematic diagram of a computer system suitable for implementing the terminal device or server of the present application is illustrated. As shown in fig. 7, the computer system includes one or more processors, such as one or more central processing units (CPUs) 701 and/or one or more graphics processing units (GPUs) 713, which may perform various suitable actions and processes in accordance with executable instructions stored in a read-only memory (ROM) 702 or loaded from a storage section 708 into a random-access memory (RAM) 703. The communication portion 712 may include, but is not limited to, a network card, which may include, but is not limited to, an IB (InfiniBand) network card.
The processor may communicate with the ROM 702 and/or the RAM 703 to execute executable instructions; it is connected to the communication portion 712 through the bus 704 and communicates with other target devices through the communication portion 712, so as to perform operations corresponding to any of the methods provided in the embodiments of the present application, for example: extracting a luminance map from the acquired image; obtaining a first segmentation map by homomorphic filtering of the luminance map; obtaining a second segmentation map by performing surface fitting on the luminance map; combining the first segmentation map and the second segmentation map into a final segmentation map; and identifying stain information based on the final segmentation map. The scheme provided by the application enables efficient and accurate stain detection.
In addition, the RAM 703 may also store various programs and data necessary for the operation of the device. The CPU 701, the ROM 702, and the RAM 703 are connected to each other through the bus 704. Where RAM 703 is present, ROM 702 is an optional module. The RAM 703 stores executable instructions (or, at run time, executable instructions are written into the ROM 702) that cause the processor to perform the operations corresponding to the methods described above. An input/output (I/O) interface 705 is also connected to the bus 704. The communication portion 712 may be provided integrally, or may be provided with a plurality of sub-modules (for example, a plurality of IB network cards) connected to the bus link.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode-ray tube (CRT), a liquid-crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
It should be noted that the architecture shown in fig. 7 is only an alternative implementation, and in a specific practical process, the number and types of components in fig. 7 may be selected, deleted, added or replaced according to actual needs; in the setting of different functional components, implementation manners such as separation setting or integration setting can also be adopted, for example, the GPU and the CPU can be separated or the GPU can be integrated on the CPU, the communication part can be separated or the communication part can be integrated on the CPU or the GPU, and the like. These alternative embodiments all fall within the scope of the present disclosure.
In addition, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the present application provides a non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to perform operations corresponding to the method steps provided herein, such as: extracting a luminance map from the acquired image; obtaining a first segmentation map by homomorphic filtering of the luminance map; obtaining a second segmentation map by performing surface fitting on the luminance map; combining the first segmentation map and the second segmentation map into a final segmentation map; and identifying stain information based on the final segmentation map. The scheme provided by the application enables efficient and accurate stain detection. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 709 and/or installed from the removable medium 711. The above-described functions defined in the methods of the present application are performed when the computer program is executed by the central processing unit (CPU) 701.
The methods, apparatuses, and devices of the present application may be implemented in numerous ways, for example by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order of the steps of the methods is for illustration only, and the steps of the methods of the present application are not limited to the order specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present application may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present application. Thus, the present application also covers a recording medium storing a program for executing the methods according to the present application.
The description of the present application has been presented for purposes of illustration and description, and is not intended to be exhaustive or to limit the invention to the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen and described in order to best explain the principles of the application and its practical application, and to enable others of ordinary skill in the art to understand the application in its various embodiments and with the various modifications suited to the particular use contemplated.

Claims (18)

1. An image stain detection method, characterized in that the image stain detection method comprises:
extracting a luminance map from the acquired image;
obtaining a first segmentation map by homomorphic filtering and binarization of the luminance map;
performing surface fitting on the luminance map, and acquiring a binarized second segmentation map based on the difference between the fitted luminance surface and the luminance map;
combining the first segmentation map and the second segmentation map into a final segmentation map in a weighted manner; and
identifying stain information based on the final segmentation map.
2. The image stain detection method of claim 1, wherein extracting a luminance map from the acquired image comprises:
extracting a luminance component map from the image, the luminance component map having the same size as the image; and
reducing the dimension of the luminance component map to obtain the luminance map.
3. The image stain detection method of claim 2, wherein reducing the dimension of the luminance component map to obtain the luminance map comprises:
generating a blank image;
determining a mapping position corresponding to each pixel point of the blank image in the luminance component map;
acquiring image data of the mapping position based on pixel values of the luminance component map; and
converting the blank image into the luminance map by assigning the image data to the blank image.
4. The image stain detection method of claim 1, wherein acquiring a first segmentation map by homomorphically filtering and binarizing the luminance map comprises:
decomposing the luminance value of each pixel in the luminance map into an illumination component and a reflection component;
applying high-pass filtering to the reflection component to obtain an enhanced reflection component;
combining the illumination component and the enhanced reflection component to generate an enhanced luminance map; and
binarizing the enhanced luminance map to obtain the first segmentation map.
5. The image stain detection method of claim 1, wherein obtaining a binarized second segmentation map by performing surface fitting on the luminance map and based on a difference between a luminance surface after fitting and the luminance map comprises:
obtaining the second segmentation map by performing B-spline fitting on the luminance map.
6. The image stain detection method of claim 5, wherein obtaining a second segmentation map by B-spline fitting the luminance map comprises:
selecting a first number of control points in the horizontal direction to perform a first B-spline fitting;
selecting a second number of control points in the vertical direction to perform a second B-spline fitting;
differencing the luminance surface map obtained through the first B-spline fitting and the second B-spline fitting with the luminance map to obtain a differential luminance map; and
binarizing the differential luminance map to obtain the second segmentation map.
7. The image stain detection method of claim 1, wherein combining the first segmentation map and the second segmentation map into a final segmentation map comprises:
denoising the first segmentation map and the second segmentation map respectively; and
combining the denoised first segmentation map and second segmentation map to obtain the final segmentation map.
8. The image stain detection method according to claim 1, wherein identifying stain information based on the final segmentation map comprises:
identifying the positions and sizes of stain regions in the final segmentation map by a run-length method.
9. An image stain detection device, characterized in that the image stain detection device comprises:
a luminance extractor that extracts a luminance map from the acquired image;
a first image divider that obtains a first segmentation map by homomorphically filtering and binarizing the luminance map;
a second image divider that performs surface fitting on the luminance map and obtains a binarized second segmentation map based on the difference between the fitted luminance surface and the luminance map;
a combiner that combines the first segmentation map and the second segmentation map into a final segmentation map in a weighted manner; and
an identifier that identifies stain information based on the final segmentation map.
10. The image stain detection device of claim 9, wherein the luminance extractor comprises:
a luminance component extractor that extracts a luminance component map from the image, the luminance component map having the same size as the image; and
a dimension reducer that reduces the dimension of the luminance component map to obtain the luminance map.
11. The image stain detection device of claim 10, wherein the dimension reducer comprises:
a generator that generates a blank image;
a mapper that determines, for each pixel point of the blank image, a corresponding mapping position in the luminance component map;
an interpolator that acquires image data of the mapping position based on pixel values of the luminance component map; and
an assigner that converts the blank image into the luminance map by assigning the image data to the blank image.
12. The image stain detection device of claim 9, wherein the first image divider comprises:
a decomposer that decomposes the luminance value of each pixel in the luminance map into an illumination component and a reflection component;
a filter that applies high-pass filtering to the reflection component to obtain an enhanced reflection component;
an enhancer that combines the illumination component and the enhanced reflection component to generate an enhanced luminance map; and
a first binarizer that binarizes the enhanced luminance map to obtain the first segmentation map.
13. The image stain detection device of claim 9, wherein the second image divider obtains the second segmentation map by performing B-spline fitting on the luminance map.
14. The image stain detection device of claim 13, wherein the second image divider comprises:
a first B-spline fitter that selects a first number of control points in the horizontal direction to perform a first B-spline fitting;
a second B-spline fitter that selects a second number of control points in the vertical direction to perform a second B-spline fitting;
a differencer that differences the luminance surface map obtained through the first B-spline fitting and the second B-spline fitting with the luminance map to obtain a differential luminance map; and
a second binarizer that binarizes the differential luminance map to obtain the second segmentation map.
15. The image stain detection device of claim 9, wherein the combiner comprises:
a noise reducer that denoises the first segmentation map and the second segmentation map respectively; and
an adder that combines the denoised first segmentation map and second segmentation map to obtain the final segmentation map.
16. The image stain detection device of claim 9, wherein the identifier identifies the position and size of each stain region in the final segmentation map by a run-length method.
17. An image stain detection system, the image stain detection system comprising:
a processor; and
a memory coupled to the processor and storing machine-readable instructions executable by the processor to perform operations comprising:
extracting a luminance map from the acquired image;
obtaining a first segmentation map by homomorphic filtering and binarization of the luminance map;
performing surface fitting on the luminance map, and acquiring a binarized second segmentation map based on the difference between the fitted luminance surface and the luminance map;
combining the first segmentation map and the second segmentation map into a final segmentation map in a weighted manner; and
identifying stain information based on the final segmentation map.
18. A non-transitory machine-readable storage medium storing machine-readable instructions executable by a processor to perform operations comprising:
extracting a luminance map from the acquired image;
obtaining a first segmentation map by homomorphic filtering and binarization of the luminance map;
performing surface fitting on the luminance map, and acquiring a binarized second segmentation map based on the difference between the fitted luminance surface and the luminance map;
combining the first segmentation map and the second segmentation map into a final segmentation map in a weighted manner; and
identifying stain information based on the final segmentation map.
CN201811590124.4A 2018-12-25 2018-12-25 Image stain detection method, device, system and storage medium Active CN111369491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811590124.4A CN111369491B (en) 2018-12-25 2018-12-25 Image stain detection method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN111369491A CN111369491A (en) 2020-07-03
CN111369491B (en) 2023-06-30

Family

ID=71209936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811590124.4A Active CN111369491B (en) 2018-12-25 2018-12-25 Image stain detection method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN111369491B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2466289A1 (en) * 1996-08-23 1998-02-26 Her Majesty The Queen In Right Of Canada As Represented By The Department Of Agriculture And Agrifood Canada Method and apparatus for using image analysis to determine meat and carcass characteristics
JP2005165387A (en) * 2003-11-28 2005-06-23 Seiko Epson Corp Method and device for detecting stripe defective of picture and display device
CN102061517A (en) * 2010-12-13 2011-05-18 浙江长兴众成电子有限公司 Czochralski single crystal silicon diameter measurement method
CN105787921A (en) * 2015-08-19 2016-07-20 南京大学 Method for reconstructing large-scale complex flyover 3D model by using airborne LiDAR data
WO2018040118A1 (en) * 2016-08-29 2018-03-08 武汉精测电子集团股份有限公司 Gpu-based tft-lcd mura defect detection method
CN108710883A (en) * 2018-06-04 2018-10-26 国网辽宁省电力有限公司信息通信分公司 A kind of complete conspicuousness object detecting method using contour detecting

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5151814B2 (en) * 2008-08-28 2013-02-27 株式会社デンソー Traveling zone detection device and program for traveling zone detection device
CN101354785B (en) * 2008-09-04 2011-07-06 湖南大学 Method for accurately positioning vision in cleaning robot of condenser
CN103364410B (en) * 2013-07-23 2015-07-08 三峡大学 Crack detection method of hydraulic concrete structure underwater surface based on template search
CN103792699A (en) * 2013-09-09 2014-05-14 中华人民共和国四川出入境检验检疫局 TFT-LCD Mura defect machine vision detecting method based on B spline surface fitting
US10373312B2 (en) * 2016-11-06 2019-08-06 International Business Machines Corporation Automated skin lesion segmentation using deep side layers
CN108319973B (en) * 2018-01-18 2022-01-28 仲恺农业工程学院 Detection method for citrus fruits on tree

Also Published As

Publication number Publication date
CN111369491A (en) 2020-07-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant