CN110349132B - Fabric flaw detection method based on light field camera depth information extraction


Info

Publication number
CN110349132B
CN110349132B
Authority
CN
China
Prior art keywords
image
light field
depth
window
pixel
Prior art date
Legal status
Active
Application number
CN201910552640.6A
Other languages
Chinese (zh)
Other versions
CN110349132A (en)
Inventor
袁理 (Yuan Li)
程哲 (Cheng Zhe)
Current Assignee
Wuhan Textile University
Original Assignee
Wuhan Textile University
Priority date
Filing date
Publication date
Application filed by Wuhan Textile University
Priority to CN201910552640.6A
Publication of CN110349132A
Application granted
Publication of CN110349132B
Legal status: Active

Classifications

    • G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general
    • G06T 5/70 — Image enhancement or restoration: denoising; smoothing
    • G06T 5/90 — Image enhancement or restoration: dynamic range modification of images or parts thereof
    • G06T 7/0004 — Image analysis: inspection of images, e.g. flaw detection; industrial image inspection
    • G06T 7/11 — Image analysis: segmentation; region-based segmentation
    • G06T 7/136 — Image analysis: segmentation or edge detection involving thresholding
    • G06T 7/50 — Image analysis: depth or shape recovery
    • G06T 2207/10052 — Image acquisition modality: images from lightfield camera
    • G06T 2207/20112 — Special algorithmic details: image segmentation details
    • G06T 2207/20132 — Special algorithmic details: image cropping
    • G06T 2207/30124 — Subject of image: industrial image inspection; fabrics; textile; paper

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to a fabric flaw detection method based on depth information extracted with a light field camera. It can be used in the field of fabric inspection and detects fabric flaws in 3-dimensional space. A light field camera generates a multi-view image sequence of the target fabric, and a depth map is obtained by a sub-pixel shift method in which the shear slope replaces parallax as the depth estimate. To avoid interference from the vignetting of the light field camera's microlenses, a weakly affected region is extracted and denoised. Denoising uses an image-adaptive window filtering method, which effectively avoids the errors caused by median-filter windows that are too large or too small. Finally, the depth map is binarized to obtain a segmentation map. Applied to fabric, the method effectively detects flawed regions.

Description

Fabric flaw detection method based on light field camera depth information extraction
Technical Field
The invention relates to a method for detecting fabric flaws, and in particular to the processing and extraction of three-dimensional depth information of fabric for flaw detection.
Background
The earliest depth processing used a camera array to compute parallax; a light field camera replaces the complex camera array with a main lens and a microlens array, so that a single camera can perform the depth-recovery function of a whole camera array. Most existing light field depth processing techniques handle large objects or targets far from the imaging plane, such as depth processing of statues and Lego cars. Because of its microlens array, a light field camera produces a vignetting effect, which strongly affects close-range, high-precision, high-texture-complexity targets such as fabric textures. Existing light field depth map construction technology is well developed and clear three-dimensional reconstruction with a light field camera is possible, but the vignetting effect has not yet been handled well: current solutions estimate a correction from the specific properties of the object, and such estimates cannot be used in the inspection field.
Disclosure of Invention
The technical problem of the invention is mainly solved by the following technical scheme:
a fabric flaw detection method based on light field camera depth information extraction is characterized by comprising
Step 1: a set of multi-view images is acquired with a light field camera.
Step 1.1: the method comprises the steps of shooting a target fabric in a uniform light field by using a light field camera, extracting a RAW file and a white image file in the light field camera, decoding the extracted RAW file, and then correcting colors, wherein a Matlab light field tool pack is required to be used, the tool pack is developed by D.G. Dansereau and the like, two versions of a toolbox0.3 and a toolbox0.4 are provided at present, the toolbox0.4 is used in the embodiment, the white image file is used in image decoding, the white image file is carried in each light field camera, the whiteImageDataBase mapping table is read by the tool pack, and the most appropriate white image and microlens grid model are selected by the tool pack, so that a light field image in a Bayer format is obtained. And then, carrying out frequency domain filtering on the image so as to carry out demosaicing operation on the image to obtain an RGB color image, and carrying out color correction on the image obtained in the last step to obtain an image after color correction and five-dimensional light field data.
Step 1.2: decomposing 5-dimensional light field data, wherein the 5-dimensional light field data is represented as LF (x, y, row. col, channel), wherein x and y respectively represent the size of a graphic sequence, namely, xy images are shared, row and col represent the size of horizontal and vertical pixels of each image, and the channel stores color information and can be decomposed into a plurality of multi-view sub-image sequences after x and y pass.
Step 2: and performing sub-pixel offset on the multi-view image to obtain a depth map.
Step 2.1: as with the current theory of depth of construction, the disparity features are extracted first. In the description of the light field, 2 parallel planes Π Ω are usually used to describe a four-dimensional light field L (x, y, u, v), 2 points C1 and C2 are selected on the Ω plane, the object is connected with C1 and C2, the two planes are respectively intersected with the Π plane at P1 and P2, the C1.C2 distance is defined as B, C1, the distances from C2 to the projection point of the object are respectively B1 and B2, the distance between the two planes is f, the distance from P1 to C1 to the projection point of the Π plane is L1, the distance from P2 to the projection point of the C2 to the Π plane is L2, the parallax is L1-L2, and γ is the object distance, namely the final required depth, and the final required depth is obtained through similarity triangle calculation
γ = f·B / (l1 − l2)
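For instance, under the purely illustrative values f = 50 mm, B = 1 mm, and a measured parallax l1 − l2 = 0.1 mm, the relation gives γ = (50 × 1)/0.1 = 500 mm; the numbers are hypothetical and serve only to show the scale of the quantities involved.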
Step 2.2: in the multi-view image sequence, the same object point has different coordinates in the image due to different views, so a straight line is vertically inserted into the pixel point of the same depth layer of the image, and then sub-pixel shift is performed, at this time, a linear relationship is formed between the shift amount and the slope of the straight line, for example, a straight line is vertically inserted between the images of 2 connected sequences, then the object point is aligned through sub-pixel shift, at this time, the straight line has a certain slope, and the slope is in a linear relationship with the horizontal distance between the shifted 2 pictures and the intersection point of the straight line, and the distance is also the horizontal distance of the object point in the 2 pictures, namely, the parallax, so that the depth information can be estimated through the slope. M is the number of multi-view images, and M is √ M. According to the formula
L_si(x, y, u, v) = L(x + si·u, y + si·v, u, v)
where u and v are the coordinates of the lens in the array and x and y are pixel coordinates; the central lens has u = 0, v = 0; si is the preset slope, i = 1, …, ns, with ns the number of depth layers. From the M shifted views we can obtain the angular variance
V_si(x, y) = (1/M) Σ_(u,v) [L_si(x, y, u, v) − μ_si(x, y)]², where μ_si(x, y) is the mean of L_si(x, y, u, v) over all M views.
Step 2.3: after calculating the variance of all candidate slopes, selecting the slope with the minimum variance to recover the depth; to improve robustness, we calculate the mean standard deviation of the domain to represent the degree of blur
σ̄_si(x, y) = (1/|W_D|) Σ_((x′,y′)∈W_D) √V_si(x′, y′)
D(x, y) = argmin_si σ̄_si(x, y)
In the formula, W_D is a window centered at (x, y), |W_D| is the size of the window, i.e., the number of all pixels in the window, and D(x, y) is the estimated local disparity. According to the formula in step 2.1
γ = f·B / (l1 − l2)
It is possible to obtain an estimate of the depth,
γ(x, y) = f·B / D(x, y)
where γ is the object distance, i.e., the required depth, so the local depth is obtained; the image is then processed cyclically with a 3×3 window, and the depth map is obtained once the whole image has been processed.
And step 3: preprocessing a depth map of a fabric
Step 3.1: the light field camera generates a vignetting effect due to the use of the micro lens array, the vignetting effect is not particularly obvious when the depth of field exceeding 1m away from an imaging plane is processed, but for an object which needs to be shot at a short distance and needs a high-resolution image for extracting texture information, the vignetting effect of the light field camera has a particularly large influence on the processing result of the method, in order to weaken the influence, the depth image is firstly cut, the middle part is taken, and the influence of the gradually changed darkness at the periphery of a vignetting ring on the depth information is avoided;
Step 3.2: Smoothing, i.e., apply simple smoothing to the depth map;
Step 4: Adaptive window filtering
Step 4.1: a zero matrix M of the same size as the image is created to record the location of the noise.
μ = (1/n) Σ_(x,y) I(x, y)
n is the number of image pixel points, and x and y are the horizontal and vertical coordinates of the pixels respectively.
V = (1/n) Σ_(x,y) (I(x, y) − μ)²
V is the variance of the image.
If |I(x, y) − μ| > h·√V, then M(x, y) = 1.
Pixel points whose value differs from the mean by more than h times the standard deviation are marked as 1 in the M matrix; these are the noise points. In this embodiment h = 3 is selected.
Step 4.2: and circularly using the m-order window to judge the noise point of each pixel of the image, wherein m is 2n +1, and n is 1, 2, 3. If the difference between the gray value of the pixel and the mean value in the window is larger than h times of the standard deviation of the pixel in the window, the judgment method judges the pixel as a noise point, and marks the corresponding position in the M matrix as 1 in step 4.1.
Step 4.3: filtering the image, establishing a 3X3 cross window by taking a central point and upper, lower, left and right 4 points adjacent to the central point, retrieving the M matrix, if the number of 0 in the window is more than 1, namely representing that the number of effective pixels in the window is more than the number of noise pixels, performing mean filtering on the pixel points at the corresponding position of the depth map by using the window, if the side window is changed from 3X3 to 5X5, retrieving the M matrix, if the number of 0 in the window is more than 1, namely representing that the number of effective pixels in the window is more than the number of noise pixels, performing mean filtering on the pixel points at the corresponding position of the depth map by using the window, and otherwise, respectively adding 2 to the horizontal and vertical sizes of the window until the size of the window is 15X 15. .
Step 4.4: and performing M matrix construction and then filtering on the image for multiple times according to the filtering result so as to obtain the optimal result. In this embodiment, the filtering process is performed 2 times.
And 5: and (4) carrying out binarization processing on the depth map subjected to noise reduction processing, and selecting a proper threshold value to obtain a clear binarization image. The threshold is selected according to the depth of the image and the complexity of the scene depth, and the threshold is between 0.25 and 0.3.
Therefore, the invention has the following advantages: 3-dimensional fabric detection is performed with a light field camera, which is more convenient than a camera array; the vignetting effect of the light field camera is avoided; and the adaptive-window algorithm filters the depth map well.
Drawings
Fig. 1 is a schematic diagram illustrating a relationship between parallax and depth.
Fig. 2 is a schematic diagram of a slope and parallax relationship.
Fig. 3 is a flow chart of fabric flaw detection with the light field camera.
Fig. 4 is a flow chart of obtaining the depth map with the light field camera.
Fig. 5 is a flow chart of an adaptive filtering algorithm.
Fig. 6 is a 2-dimensional view of the fabric.
Figure 7 is a graph of the initial depth of the fabric.
Fig. 8 is a depth map after filtering by the adaptive algorithm.
Fig. 9 is a binarization defect detection diagram.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
Example:
the invention comprises the following steps:
step 1: a set of multi-view images is acquired with a light field camera.
Step 1.1: the method comprises the steps of shooting a target fabric in a uniform light field by using a light field camera, extracting a RAW file and a white image file in the light field camera, decoding the extracted RAW file, and then correcting colors, wherein a Matlab light field tool pack is required to be used, the tool pack is developed by D.G. Dansereau and the like, two versions of a toolbox0.3 and a toolbox0.4 are provided at present, the toolbox0.4 is used in the embodiment, the white image file is used in image decoding, the white image file is carried in each light field camera, the whiteImageDataBase mapping table is read by the tool pack, and the most appropriate white image and microlens grid model are selected by the tool pack, so that a light field image in a Bayer format is obtained. And then, carrying out frequency domain filtering on the image so as to carry out demosaicing operation on the image to obtain an RGB color image, and carrying out color correction on the image obtained in the last step to obtain an image after color correction and five-dimensional light field data.
Step 1.2: Decompose the 5-dimensional light field data, represented as LF(x, y, row, col, channel), where x and y index the image sequence (there are x·y images in total), row and col are the vertical and horizontal pixel dimensions of each image, and channel stores the color information. Traversing x and y decomposes the data into a sequence of multi-view sub-images.
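As an illustration of this decomposition, the following Python sketch splits a decoded light field into its sub-views; it assumes the 5-dimensional data has been exported from the Matlab toolbox as a NumPy array indexed LF[x, y, row, col, channel], and the function name is hypothetical:

```python
import numpy as np

def decompose_subviews(LF: np.ndarray) -> list:
    """Split LF(x, y, row, col, channel) into a list of x*y sub-view images."""
    nx, ny = LF.shape[0], LF.shape[1]
    views = []
    for u in range(nx):              # traverse the angular indices x, y
        for v in range(ny):
            views.append(LF[u, v])   # one (row, col, channel) image per view
    return views
```

Each list element is one multi-view sub-image; this sequence is what step 2 shears and compares.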
Step 2: and performing sub-pixel offset on the multi-view image to obtain a depth map.
Step 2.1: as with the current theory of depth of construction, the disparity features are extracted first. In the description of the light field, 2 parallel planes Π Ω are usually used to describe a four-dimensional light field L (x, y, u, v), 2 points C1 and C2 are selected on the Ω plane, the object is connected with C1 and C2, the two planes are respectively intersected with the Π plane at P1 and P2, the C1.C2 distance is defined as B, C1, the distances from C2 to the projection point of the object are respectively B1 and B2, the distance between the two planes is f, the distance from P1 to C1 to the projection point of the Π plane is L1, the distance from P2 to the projection point of the C2 to the Π plane is L2, the parallax is L1-L2, and γ is the object distance, namely the final required depth, and the final required depth is obtained through similarity triangle calculation
(l1 − l2) / f = B / γ
γ = f·B / (l1 − l2)
Step 2.2: in the multi-view image sequence, the same object point has different coordinates in the image due to different views, so a straight line is vertically inserted into the pixel point of the same depth layer of the image, and then sub-pixel shift is performed, at this time, a linear relationship is formed between the shift amount and the slope of the straight line, for example, a straight line is vertically inserted between the images of 2 connected sequences, then the object point is aligned through sub-pixel shift, at this time, the straight line has a certain slope, and the slope is in a linear relationship with the horizontal distance between the shifted 2 pictures and the intersection point of the straight line, and the distance is also the horizontal distance of the object point in the 2 pictures, namely, the parallax, so that the depth information can be estimated through the slope. In this embodiment, the slope preset range slope _ begin is 0, slope _ end is 2.5, M is the number of multi-view images, and M is √ M. According to the formula
L_si(x, y, u, v) = L(x + si·u, y + si·v, u, v)
where u and v are the coordinates of the lens in the array and x and y are pixel coordinates; the central lens has u = 0, v = 0; si is the preset slope, i = 1, …, ns, with ns the number of depth layers. From the M shifted views we can obtain the angular variance
V_si(x, y) = (1/M) Σ_(u,v) [L_si(x, y, u, v) − μ_si(x, y)]², where μ_si(x, y) is the mean of L_si(x, y, u, v) over all M views.
Step 2.3: after calculating the variance of all candidate slopes, selecting the slope with the minimum variance to recover the depth; to improve robustness, we calculate the mean standard deviation of the domain to represent the degree of blur
σ̄_si(x, y) = (1/|W_D|) Σ_((x′,y′)∈W_D) √V_si(x′, y′)
D(x, y) = argmin_si σ̄_si(x, y)
In the formula, W_D is a window centered at (x, y); in this embodiment the window size is set to 3×3. |W_D| is the size of the window, i.e., the number of all pixels in the window, and D(x, y) is the estimated local disparity. According to the formula in step 2.1
γ = f·B / (l1 − l2)
It is possible to obtain an estimate of the depth,
γ(x, y) = f·B / D(x, y)
where γ is the object distance, i.e., the required depth, so the local depth is obtained; the image is then processed cyclically with a 3×3 window, and the depth map is obtained once the whole image has been processed.
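To make steps 2.2–2.3 concrete, here is a Python sketch of the slope search under the embodiment's preset range (slope_begin = 0, slope_end = 2.5). The function name, the use of SciPy for the sub-pixel shift, the grayscale views, the default number of depth layers ns, and the constants f and B are all assumptions, and the winning slope is used directly as the local disparity estimate D(x, y):

```python
import numpy as np
from scipy.ndimage import shift, uniform_filter

def depth_from_slopes(views, coords, ns=64, s0=0.0, s1=2.5, f=1.0, B=1.0):
    """views: list of 2-D grayscale sub-views; coords: their (u, v) lens
    coordinates, with the central lens at (0, 0). Returns a depth map."""
    best_std = np.full(views[0].shape, np.inf)
    best_slope = np.zeros(views[0].shape)
    for s in np.linspace(s0, s1, ns):                # candidate slopes s_i
        # sub-pixel shift of every view by (s*u, s*v), linearly interpolated
        sheared = [shift(img, (s * u, s * v), order=1, mode='nearest')
                   for img, (u, v) in zip(views, coords)]
        std = np.std(np.stack(sheared), axis=0)      # angular std per pixel
        std = uniform_filter(std, size=3)            # mean std over a 3x3 window
        better = std < best_std                      # keep minimum-variance slope
        best_std[better] = std[better]
        best_slope[better] = s
    D = np.maximum(best_slope, 1e-6)                 # slope used as disparity D(x, y)
    return f * B / D                                 # gamma = f*B / parallax
```

In practice f and B would come from the camera geometry, and slopes near zero map to very large depths, which is why the division is guarded.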
And step 3: preprocessing a depth map of a fabric
Step 3.1: the light field camera generates a vignetting effect due to the use of the micro lens array, the vignetting effect is not particularly obvious when the depth of field exceeding 1m away from an imaging plane is processed, but for an object which needs to be shot at a short distance and needs a high-resolution image for extracting texture information, the vignetting effect of the light field camera has a particularly large influence on a processing result of the method, in order to weaken the influence, the depth image is firstly cut, the middle part is taken, and the influence of the gradually changed darkness at the periphery of a vignetting ring on the depth information is avoided. In general, 1/5 is taken when clipping, namely the clipping starting point is (2m/5,2n/5), the length and width are respectively m/5 and n/5, and if m/5 is not an integer, rounding is performed downwards, wherein m and n refer to the length and width pixels of the depth map.
Step 3.2: smoothing, i.e. simply smoothing the depth map
And 4, step 4: adaptive window filtering
Step 4.1: a zero matrix M of the same size as the image is created to record the location of the noise.
μ = (1/n) Σ_(x,y) I(x, y)
n is the number of image pixel points, and x and y are the horizontal and vertical coordinates of the pixels respectively.
V = (1/n) Σ_(x,y) (I(x, y) − μ)²
V is the variance of the image.
If |I(x, y) − μ| > h·√V, then M(x, y) = 1.
Pixel points whose value differs from the mean by more than h times the standard deviation are marked as 1 in the M matrix; these are the noise points. In this embodiment h = 3 is selected.
Step 4.2: and circularly using the m-order window to judge the noise point of each pixel of the image, wherein m is 2n +1, and n is 1, 2, 3. If the difference between the gray value of the pixel and the mean value in the window is larger than h times of the standard deviation of the pixel in the window, the judgment method judges the pixel as a noise point, and marks the corresponding position in the M matrix as 1 in step 4.1.
Step 4.3: filtering the image, establishing a 3X3 cross window by taking a central point and upper, lower, left and right 4 points adjacent to the central point, retrieving the M matrix, if the number of 0 in the window is more than 1, namely representing that the number of effective pixels in the window is more than the number of noise pixels, performing mean filtering on the pixel points at the corresponding position of the depth map by using the window, if the side window is changed from 3X3 to 5X5, retrieving the M matrix, if the number of 0 in the window is more than 1, namely representing that the number of effective pixels in the window is more than the number of noise pixels, performing mean filtering on the pixel points at the corresponding position of the depth map by using the window, and otherwise, respectively adding 2 to the horizontal and vertical sizes of the window until the size of the window is 15X 15. .
Step 4.4: and performing M matrix construction and then filtering on the image for multiple times according to the filtering result so as to obtain the optimal result. In this embodiment, the filtering process is performed 2 times.
And 5: and (4) carrying out binarization processing on the depth map subjected to noise reduction processing, and selecting a proper threshold value to obtain a clear binarization image. The threshold is selected according to the number of image depth layers and the complexity of the scene depth, and is generally between 0.25 and 0.3, and the threshold selected in this embodiment is 0.27.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute alternatives, without departing from the spirit or scope of the invention as defined in the appended claims.

Claims (1)

1. A fabric flaw detection method based on light field camera depth information extraction is characterized by comprising
Step 1: acquiring a set of multi-view images with a light field camera;
step 1.1: shooting a target fabric with a light field camera in a uniform light field, extracting a RAW file and a white-image file from the light field camera, decoding the extracted RAW file and then correcting the color, using the Matlab light field toolbox developed by D. G. Dansereau et al., of which two versions, toolbox0.3 and toolbox0.4, currently exist, toolbox0.4 being used here; the white-image file, carried by every light field camera, is used during image decoding, the toolbox reading the WhiteImageDataBase mapping table and selecting the most appropriate white image and microlens grid model, so as to obtain a light field image in Bayer format; performing frequency-domain filtering and demosaicing on the image to obtain an RGB color image, and performing color correction on the image obtained in the previous step to obtain a color-corrected image and five-dimensional light field data;
step 1.2: decomposing the 5-dimensional light field data, represented as LF(x, y, row, col, channel), wherein x and y index the image sequence, i.e., there are x·y images in total, row and col are the vertical and horizontal pixel dimensions of each image, and channel stores the color information; traversing x and y decomposes the data into a sequence of multi-view sub-images;
step 2: performing sub-pixel shifts on the multi-view images to obtain a depth map;
step 2.1: as in current depth-construction theory, extracting the parallax features first; in light field descriptions, two parallel planes Π and Ω are usually used to describe a four-dimensional light field L(x, y, u, v); two points C1 and C2 are selected on the Ω plane and the object point is connected to C1 and C2, the two rays intersecting the Π plane at P1 and P2 respectively; the C1–C2 distance is defined as B, the distances from C1 and C2 to the projection of the object point are b1 and b2, the distance between the two planes is f, the distance from P1 to the projection of C1 on the Π plane is l1, and the distance from P2 to the projection of C2 on the Π plane is l2; the parallax is l1 − l2 and γ is the object distance, namely the finally required depth, obtained by similar-triangle calculation as
γ = f·B / (l1 − l2)
Step 2.2: in the multi-view image sequence, because different views of the same object point have different coordinates in the image, a straight line is vertically inserted into the pixel point of the same depth layer of the image, then sub-pixel shift is carried out, at the moment, a linear relation is formed between the shift amount and the slope of the straight line, a straight line is vertically inserted between the images of 2 continuous sequences, then the object point is aligned through the sub-pixel shift, at the moment, the straight line has a certain slope, the slope is linearly related to the horizontal distance between the shifted 2 pictures and the intersection point of the straight line, the distance is also the horizontal distance of the object point in the 2 pictures, namely parallax, so that the depth information is estimated through the slope, M is the number of the multi-view image sequence,
m = √M
according to the formula
L_si(x, y, u, v) = L(x + si·u, y + si·v, u, v)
Wherein u and v are the coordinates of the lens in the array, x and y are pixel coordinates, and the central lens has u = 0, v = 0; si is a predetermined slope and ns is the number of depth layers, thus obtaining the angular variance
V_si(x, y) = (1/M) Σ_(u,v) [L_si(x, y, u, v) − μ_si(x, y)]², where μ_si(x, y) is the mean of L_si(x, y, u, v) over all M views
Step 2.3: after calculating the variance of all candidate slopes, selecting the slope with the minimum variance to recover the depth; to improve robustness, we calculate the mean standard deviation of the domain to represent the degree of blur
σ̄_si(x, y) = (1/|W_D|) Σ_((x′,y′)∈W_D) √V_si(x′, y′)
D(x, y) = argmin_si σ̄_si(x, y)
In the formula, W_D is a window centered at (x, y), |W_D| is the size of the window, namely the number of all pixels in the window, and D(x, y) is the estimated local parallax; according to the formula in step 2.1
γ = f·B / (l1 − l2)
It is possible to obtain an estimate of the depth,
γ(x, y) = f·B / D(x, y)
wherein γ is the object distance, i.e., the required depth, so that the local depth is obtained; the image is then processed cyclically with a 3×3 window, and the depth map is obtained after the whole image has been processed;
step 3: preprocessing the depth map of the fabric;
Step 3.1: the light field camera generates a vignetting effect due to the use of the micro lens array, the vignetting effect is not particularly obvious when the depth of field exceeding 1m away from an imaging plane is processed, but for an object which needs to be shot at a short distance and needs a high-resolution image for extracting texture information, the vignetting effect of the light field camera has a particularly large influence on the processing result of the method, in order to weaken the influence, the depth image is firstly cut, the middle part is taken, and the influence of the gradually changed darkness at the periphery of a vignetting ring on the depth information is avoided;
step 3.2: smoothing, namely applying simple smoothing to the depth map;
step 4: adaptive window filtering;
Step 4.1: creating a zero matrix M with the same size as the image for recording the position of a noise point;
μ = (1/n) Σ_(x,y) I(x, y)
n is the number of image pixel points, and x and y are respectively the horizontal and vertical coordinates of the pixels;
V = (1/n) Σ_(x,y) (I(x, y) − μ)²
V is the variance of the image;
if |I(x, y) − μ| > h·√V, then M(x, y) = 1
marking pixel points whose value differs from the mean by more than h times the standard deviation as 1 in the M matrix, these pixel points being the noise points, with h = 3;
step 4.2: cyclically using windows of order m to judge each pixel of the image for noise, where m = 2n + 1 and n = 1, 2, 3, ..., 7; if the difference between the gray value of a pixel and the mean within the window is greater than h times the standard deviation within the window, the pixel is judged to be a noise point and the corresponding position in the M matrix is marked as 1;
step 4.3: filtering the image: a 3×3 cross window is built from a central point and the 4 points adjacent to it above, below, left and right, and the M matrix is retrieved; if the number of 0 entries in the window is more than 1, i.e., the valid pixels in the window outnumber the noise pixels, mean filtering is applied with this window to the pixel at the corresponding position of the depth map; otherwise the window is enlarged from 3×3 to 5×5 and the M matrix is retrieved again; if the number of 0 entries is then more than 1, mean filtering is applied as above; otherwise 2 is added to each of the horizontal and vertical sizes of the window, up to a window size of 15×15;
step 4.4: according to the filtering result, the M-matrix construction and filtering may be applied to the image several times to obtain the best result; here the filtering process is performed twice;
step 5: performing binarization on the noise-reduced depth map and selecting a suitable threshold to obtain a clear binary image; the threshold is selected according to the number of depth layers in the image and the complexity of the scene depth, and lies between 0.25 and 0.3.
CN201910552640.6A 2019-06-25 2019-06-25 Fabric flaw detection method based on light field camera depth information extraction Active CN110349132B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910552640.6A CN110349132B (en) 2019-06-25 2019-06-25 Fabric flaw detection method based on light field camera depth information extraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910552640.6A CN110349132B (en) 2019-06-25 2019-06-25 Fabric flaw detection method based on light field camera depth information extraction

Publications (2)

Publication Number Publication Date
CN110349132A CN110349132A (en) 2019-10-18
CN110349132B true CN110349132B (en) 2021-06-08

Family

ID=68182951

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910552640.6A Active CN110349132B (en) 2019-06-25 2019-06-25 Fabric flaw detection method based on light field camera depth information extraction

Country Status (1)

Country Link
CN (1) CN110349132B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991082B (en) * 2019-12-19 2023-11-28 信利(仁寿)高端显示科技有限公司 Mura quantification method based on excimer laser annealing
CN112816493A (en) * 2020-05-15 2021-05-18 奕目(上海)科技有限公司 Chip routing defect detection method and device
CN114693583A (en) * 2020-12-15 2022-07-01 奕目(上海)科技有限公司 Defect layering detection method and system based on light field camera and detection production line
CN113298943A (en) * 2021-06-10 2021-08-24 西北工业大学 ESDF map construction method based on light field imaging
CN114189623B (en) * 2021-09-01 2023-03-24 深圳盛达同泽科技有限公司 Light field-based refraction pattern generation method, device, equipment and storage medium
CN114677577B (en) * 2022-03-23 2022-11-29 北京拙河科技有限公司 Motor vehicle detection method and system of light field camera
CN114511469B (en) * 2022-04-06 2022-06-21 江苏游隼微电子有限公司 Intelligent image noise reduction prior detection method
CN115953790B (en) * 2022-09-29 2024-04-02 江苏智联天地科技有限公司 Label detection and identification method and system
CN115790449B (en) * 2023-01-06 2023-04-18 威海晶合数字矿山技术有限公司 Three-dimensional shape measurement method for long and narrow space
CN116736783B (en) * 2023-08-16 2023-12-05 江苏德顺纺织有限公司 Intelligent remote control system and method for textile electrical equipment
CN116862917B (en) * 2023-09-05 2023-11-24 微山县振龙纺织品有限公司 Textile surface quality detection method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866427A (en) * 2010-07-06 2010-10-20 西安电子科技大学 Method for detecting and classifying fabric defects
CN105931246A (en) * 2016-05-05 2016-09-07 东华大学 Fabric flaw detection method based on wavelet transformation and genetic algorithm
CN106530288A (en) * 2016-11-03 2017-03-22 东华大学 Fabric defect detection method based on deep learning algorithm
CN107071233A (en) * 2015-12-15 2017-08-18 汤姆逊许可公司 The method and apparatus for correcting vignetting effect caused by the image of light-field camera capture
CN107135388A (en) * 2017-05-27 2017-09-05 东南大学 A kind of depth extraction method of light field image
CN107578437A (en) * 2017-08-31 2018-01-12 深圳岚锋创视网络科技有限公司 A kind of depth estimation method based on light-field camera, system and portable terminal
CN107787507A (en) * 2015-06-17 2018-03-09 汤姆逊许可公司 The apparatus and method for obtaining the registration error figure for the acutance rank for representing image
CN108289170A (en) * 2018-01-12 2018-07-17 深圳奥比中光科技有限公司 The camera arrangement and method of metering region can be detected
CN109410192A (en) * 2018-10-18 2019-03-01 首都师范大学 A kind of the fabric defect detection method and its device of multi-texturing level based adjustment
CN109858485A (en) * 2019-01-25 2019-06-07 东华大学 A kind of fabric defects detection method based on LBP and GLCM

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866427A (en) * 2010-07-06 2010-10-20 西安电子科技大学 Method for detecting and classifying fabric defects
CN107787507A (en) * 2015-06-17 2018-03-09 汤姆逊许可公司 The apparatus and method for obtaining the registration error figure for the acutance rank for representing image
CN107071233A (en) * 2015-12-15 2017-08-18 汤姆逊许可公司 The method and apparatus for correcting vignetting effect caused by the image of light-field camera capture
CN105931246A (en) * 2016-05-05 2016-09-07 东华大学 Fabric flaw detection method based on wavelet transformation and genetic algorithm
CN106530288A (en) * 2016-11-03 2017-03-22 东华大学 Fabric defect detection method based on deep learning algorithm
CN107135388A (en) * 2017-05-27 2017-09-05 东南大学 A kind of depth extraction method of light field image
CN107578437A (en) * 2017-08-31 2018-01-12 深圳岚锋创视网络科技有限公司 A kind of depth estimation method based on light-field camera, system and portable terminal
CN108289170A (en) * 2018-01-12 2018-07-17 深圳奥比中光科技有限公司 The camera arrangement and method of metering region can be detected
CN109410192A (en) * 2018-10-18 2019-03-01 首都师范大学 A kind of the fabric defect detection method and its device of multi-texturing level based adjustment
CN109858485A (en) * 2019-01-25 2019-06-07 东华大学 A kind of fabric defects detection method based on LBP and GLCM

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Using the Matlab light field toolbox: refocusing and multi-view demonstrations (Matlab光场工具包使用、重聚焦及多视角效果展示); 骑着蜗牛追梦想; CSDN; 2019-03-18; https://blog.csdn.net/suiyuemeng/article/details/88651222 *
Sparse Dictionary Reconstruction For Textile Defect Detection; Jian Zhou et al.; 2012 11th International Conference on Machine Learning and Applications; 2012; pp. 21-26 *
Textile Defect Detection for Fabric Material using Texture Feature Extraction; Rakesh J. Kadkol et al.; International Journal of Latest Trends in Engineering and Technology (IJLTET); 2013-03; pp. 173-176 *
Research on fabric flaw detection methods under different illumination (不同光照下织物瑕疵检测方法研究); 张国英 et al.; Computer Science and Application (计算机科学与应用); 2014-09; pp. 181-186 *
Research on depth estimation algorithms based on light field multi-view image sequences (基于光场多视角图像序列的深度估计算法研究); 张敏; China Masters' Theses Full-text Database, Information Science and Technology; 2018-10-15; I138-746 *
Textile flaw detection methods based on image processing technology (基于图像处理技术的纺织品瑕疵检测方法); 林如意; China Masters' Theses Full-text Database, Information Science and Technology; 2014-07-15; I138-605 *
Research on fabric flaw detection and intelligent recognition based on visual inspection technology (基于视觉检测技术的织物瑕疵检测与智能识别研究); 兰瑜洁; China Masters' Theses Full-text Database, Engineering Science and Technology I; 2018-07-15; B024-3 *

Also Published As

Publication number Publication date
CN110349132A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110349132B (en) Fabric flaw detection method based on light field camera depth information extraction
CN101588445B (en) Video area-of-interest exacting method based on depth
CN111066065B (en) System and method for hybrid depth regularization
US20180300937A1 (en) System and a method of restoring an occluded background region
CN102113015B (en) Use of inpainting techniques for image correction
US8805057B2 (en) Method and system for generating structured light with spatio-temporal patterns for 3D scene reconstruction
CN108088391B (en) Method and system for measuring three-dimensional morphology
TWI489418B (en) Parallax Estimation Depth Generation
CN109636732A (en) A kind of empty restorative procedure and image processing apparatus of depth image
CN107622480B (en) Kinect depth image enhancement method
KR20120068470A (en) Apparatus for matching stereo image and method thereof
CN102436671B (en) Virtual viewpoint drawing method based on depth value non-linear transformation
CN104933434A (en) Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method
CN110910431B (en) Multi-view three-dimensional point set recovery method based on monocular camera
CN104065947B (en) The depth map acquisition methods of a kind of integration imaging system
CN107689050B (en) Depth image up-sampling method based on color image edge guide
CN111563908B (en) Image processing method and related device
Sharma et al. A flexible architecture for multi-view 3DTV based on uncalibrated cameras
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method
CN112529773B (en) QPD image post-processing method and QPD camera
Zabulis et al. Multi-camera reconstruction based on surface normal estimation and best viewpoint selection
CN112637582B (en) Three-dimensional fuzzy surface synthesis method for monocular video virtual view driven by fuzzy edge
CN104537637B (en) A kind of single width still image depth estimation method and device
US6751345B2 (en) Method and apparatus for improving object boundaries extracted from stereoscopic images
Chari et al. Augmented reality using over-segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant