CN110349132A - A kind of fabric defects detection method based on light-field camera extraction of depth information - Google Patents
A kind of fabric defects detection method based on light-field camera extraction of depth information
- Publication number
- CN110349132A (application CN201910552640.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- window
- depth
- light
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10052—Images from lightfield camera
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30124—Fabrics; Textile; Paper
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Quality & Reliability (AREA)
- Image Processing (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The present invention relates to a fabric defect detection method based on depth information extracted with a light-field camera. The method can be applied in the field of fabric inspection and detects fabric defects from three-dimensional space. A light-field camera is used to generate a multi-view image sequence of the target fabric; depth values are estimated from slopes in place of disparities, and the depth map is computed by sub-pixel shifting. To avoid interference from the vignetting effect of the light-field camera's microlenses, the weakly affected region is extracted and subjected to noise reduction. The denoising uses an image-adaptive window filtering method, which effectively avoids the errors caused by median-filter windows that are too large or too small. Finally, binarization is performed to obtain the segmentation map. Processing fabric with the method of the invention can effectively detect the defective parts of the fabric.
Description
Technical field
The present invention relates to a method for detecting fabric defects, and in particular to the processing, extraction and inspection of three-dimensional fabric depth information.
Background technique
The earliest depth processing used a camera array to compute parallax. A light-field camera replaces such a complicated camera array with a main lens and a microlens array, so that a single camera can perform the depth estimation of a whole camera array. Most existing light-field depth processing handles relatively large objects, or targets far from the imaging plane, such as statues and LEGO carts, because the microlens array of a light-field camera produces a vignetting (halo) effect that strongly affects close-range, high-precision targets with complex textures, such as cloth. Light-field depth-map construction techniques are now well developed and allow clear three-dimensional reconstruction, but the vignetting effect of light-field cameras has not yet been well resolved. Existing solutions all estimate according to the characteristics of the object, and the estimated results cannot be used in the field of inspection.
Summary of the invention
The above technical problems of the invention are mainly addressed by the following technical solutions:
A fabric defect detection method based on depth information extraction with a light-field camera, characterized by comprising the following steps:
Step 1: acquire a group of multi-view images with a light-field camera.
Step 1.1: photograph the target fabric with the light-field camera under a uniform strong laser field, and extract the RAW file and the white-image file from the camera. The extracted RAW file is first decoded and then color-corrected. This uses the Matlab light field toolbox developed by D.G.Dansereau et al.; two versions, toolbox0.3 and toolbox0.4, are currently available, and toolbox0.4 is used in this embodiment. The white-image file, which every light-field camera carries, is required during decoding: the toolbox reads the WhiteImagesDataBase mapping table and selects the most suitable white image and microlens grid model, yielding a light field image in Bayer format. The image is then frequency-domain filtered and demosaiced to obtain an RGB color image, and color correction of that image yields the color-corrected image and the five-dimensional light field data.
Step 1.2: decompose the five-dimensional light field data, expressed as LF(x, y, row, col, channel), where x and y denote the size of the view grid, i.e. there are x·y images in total; row and col denote the vertical and horizontal pixel dimensions of each image; and channel stores the color information. Traversing x and y decomposes the data into a sequence of multi-view sub-images.
Step 2: apply sub-pixel shifts to the multi-view images to obtain the depth map.
Step 2.1: following the current theory of depth construction, first extract the parallax feature. In light field descriptions, a four-dimensional light field L(x, y, u, v) is usually described by two parallel planes Π and Ω. Choose two points C1 and C2 in the Ω plane; the lines connecting the object point with C1 and C2 intersect the Π plane at P1 and P2 respectively. Define the spacing between C1 and C2 as B, the distances from C1 and C2 to the object projection point as B1 and B2, the distance between the two planes as f, the distance from P1 to the projection point of C1 on Π as L1, and the distance from P2 to the projection point of C2 on Π as L2. The parallax is then L1−L2, and γ, the object distance, i.e. the depth finally sought, is calculated by similar triangles.
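The similar-triangles relation itself appears as an image in the original patent and did not survive extraction. Under the definitions above (baseline B between C1 and C2, plane separation f, projected distances L1 and L2), a hedged reconstruction of the standard two-plane form would be:

```latex
% Hedged reconstruction from the surrounding definitions,
% not the patent's own rendering of the formula.
\frac{L_1 - L_2}{B} = \frac{f}{\gamma}
\qquad\Longrightarrow\qquad
\gamma = \frac{f\,B}{L_1 - L_2}
```

so the object distance γ follows from the measured parallax L1−L2 once B and f are known.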
Step 2.2: in a multi-view image sequence, the same object point has different coordinates in different views. A straight line is therefore inserted vertically through the pixels of the same depth layer in the images, after which a sub-pixel shift is applied; the shift is then linearly related to the slope of the line. For example, insert a vertical straight line between two images of consecutive views and align the object point by a sub-pixel shift: the line then acquires a certain slope, and that slope is linearly related to the horizontal distance between the intersections of the line with the two pictures. This distance is also the horizontal distance, i.e. the parallax, of the object point in the two pictures, so depth information can be estimated from the slope. Let M be the number of multi-view images and m = √M. According to the formula, in which u and v are the coordinates of a lens in the array, x and y are pixel coordinates, the center lens has u = 0 and v = 0, s_i is a preset slope, ns is the number of depth layers, and M is the number of multi-view images, the angular variance can be obtained.
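The refocusing and variance formulas are likewise rendered as images in the original and are not reproduced in this text. A hedged reconstruction consistent with the symbols defined above (shear each view by the preset slope s_i, then take the mean and variance over the lens grid) would be:

```latex
% Hedged reconstruction; the patent's own formulas are not reproduced here.
\bar{L}_{s_i}(x,y) = \frac{1}{M}\sum_{u,v} L\bigl(x + u\,s_i,\; y + v\,s_i,\; u,\; v\bigr)
\qquad
\sigma_{s_i}^{2}(x,y) = \frac{1}{M}\sum_{u,v}\bigl(L(x + u\,s_i,\, y + v\,s_i,\, u,\, v) - \bar{L}_{s_i}(x,y)\bigr)^{2}
```

The candidate slope minimising this angular variance at a pixel is the one at which the sheared views agree, which is the selection rule of step 2.3.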
Step 2.3: after computing the variance for all candidate slopes, choose the slope with minimum variance to recover depth. To improve robustness, a neighborhood average difference is computed to express the degree of blur. In the formula, W_D is the window centered on (x, y), |W_D| denotes the size of the window, i.e. the number of pixels it contains, and D(x, y) is the estimated local parallax. The depth estimate then follows from the formula in step 2.1, where γ is the object distance, i.e. the required depth. This yields local depths; the image is then processed cyclically with a 3×3 window, traversing the whole image to obtain the depth map.
Step 3: preprocess the depth map of the fabric.
Step 3.1: because it uses a microlens array, a light-field camera produces a vignetting effect. When the scene is more than 1 m from the imaging plane, the vignetting effect is not particularly evident; but for an object like fabric, which must be photographed at close range and whose texture extraction requires high-resolution images, the vignetting effect has an especially large influence on the processing result of the method. To weaken this influence, the depth image is first cropped and its central portion is taken, so that the gradually changing shading around the periphery of the vignetting circle does not affect the depth information;
Step 3.2: smoothing: apply simple smoothing to the depth map;
Step 4: adaptive-window filtering.
Step 4.1: create a zero matrix M of the same size as the image to record the positions of noise. N is the number of pixels in the image, x and y are the horizontal and vertical coordinates of a pixel, and V is the variance of the image. Setting M(x, y) = 1 labels, in the matrix M, a pixel whose value deviates from the mean by more than h standard deviations as noise; h = 3 is chosen in this embodiment.
Step 4.2: noise is judged for each pixel of the image by cycling through windows of order m, m = 2n + 1, n = 1, 2, 3, …, 7. The judgment is as in step 4.1: if the difference between a pixel's gray value and the window mean exceeds h times the standard deviation of the pixels in the window, the point is judged to be noise and the corresponding position in the matrix M is labeled 1.
Step 4.3: filter the image. Take a cross-shaped window consisting of the central point and the four points adjacent to it above, below, left and right within a 3×3 neighborhood, and retrieve the matrix M. If the number of 0s in the window exceeds the number of 1s, i.e. the window contains more valid pixels than noise pixels, apply mean filtering to the pixel at the corresponding position of the depth map with that window; otherwise enlarge the window from 3×3 to 5×5 and retrieve M again. If the 0s then outnumber the 1s, mean-filter with that window; otherwise keep adding 2 to each window dimension until the window size reaches 15×15.
Step 4.4: according to the filtering result, the construction of the matrix M and the filtering can be repeated on the image until the most ideal result is obtained; two filtering passes are performed in this embodiment.
Step 5: binarize the noise-reduced depth map; choosing a suitable threshold yields a clear binary image. The choice of threshold depends on the number of depth layers in the image and the complexity of the scene depth; the threshold is taken between 0.25 and 0.3.
The present invention therefore has the advantage that three-dimensional fabric inspection with a light-field camera is more convenient than with a camera array; the method avoids the vignetting effect of the light-field camera, and the adaptive-window algorithm filters the depth map effectively.
Description of the drawings
Fig. 1 is a schematic diagram of the parallax-depth relationship.
Fig. 2 is a schematic diagram of the slope-parallax relationship.
Fig. 3 is the flow chart of fabric defect detection with a light-field camera.
Fig. 4 is the flow chart of depth-map computation with a light-field camera.
Fig. 5 is the flow chart of the adaptive filtering algorithm.
Fig. 6 is a two-dimensional image of the fabric.
Fig. 7 is the initial depth map of the fabric.
Fig. 8 is the depth map after adaptive filtering.
Fig. 9 is the binarized defect detection image.
Specific embodiment
The technical solutions of the present invention are further described below through an embodiment and with reference to the accompanying drawings.
Embodiment:
The present invention comprises the following steps:
Step 1: acquire a group of multi-view images with a light-field camera.
Step 1.1: photograph the target fabric with the light-field camera under a uniform strong laser field, and extract the RAW file and the white-image file from the camera. The extracted RAW file is first decoded and then color-corrected. This uses the Matlab light field toolbox developed by D.G.Dansereau et al.; two versions, toolbox0.3 and toolbox0.4, are currently available, and toolbox0.4 is used in this embodiment. The white-image file, which every light-field camera carries, is required during decoding: the toolbox reads the WhiteImagesDataBase mapping table and selects the most suitable white image and microlens grid model, yielding a light field image in Bayer format. The image is then frequency-domain filtered and demosaiced to obtain an RGB color image, and color correction of that image yields the color-corrected image and the five-dimensional light field data.
Step 1.2: decompose the five-dimensional light field data, expressed as LF(x, y, row, col, channel), where x and y denote the size of the view grid, i.e. there are x·y images in total; row and col denote the vertical and horizontal pixel dimensions of each image; and channel stores the color information. Traversing x and y decomposes the data into a sequence of multi-view sub-images.
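The decomposition of step 1.2 can be sketched in Python (an illustrative reading, not part of the patent; array names and shapes are assumptions, with a NumPy array standing in for the Matlab toolbox output):

```python
import numpy as np

def decompose_light_field(LF):
    """Split a decoded light field LF(x, y, row, col, channel) into the
    multi-view sub-image sequence by traversing the x, y view grid."""
    nx, ny, rows, cols, channels = LF.shape
    views = []
    for i in range(nx):          # traverse all x ...
        for j in range(ny):      # ... and all y angular coordinates
            views.append(LF[i, j])   # one (row, col, channel) sub-image
    return views

# Toy example: a 3x3 grid of 4x5 RGB views yields 9 sub-images.
LF = np.zeros((3, 3, 4, 5, 3))
views = decompose_light_field(LF)
```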
Step 2: apply sub-pixel shifts to the multi-view images to obtain the depth map.
Step 2.1: following the current theory of depth construction, first extract the parallax feature. In light field descriptions, a four-dimensional light field L(x, y, u, v) is usually described by two parallel planes Π and Ω. Choose two points C1 and C2 in the Ω plane; the lines connecting the object point with C1 and C2 intersect the Π plane at P1 and P2 respectively. Define the spacing between C1 and C2 as B, the distances from C1 and C2 to the object projection point as B1 and B2, the distance between the two planes as f, the distance from P1 to the projection point of C1 on Π as L1, and the distance from P2 to the projection point of C2 on Π as L2. The parallax is then L1−L2, and γ, the object distance, i.e. the depth finally sought, is calculated by similar triangles.
Step 2.2: in a multi-view image sequence, the same object point has different coordinates in different views. A straight line is therefore inserted vertically through the pixels of the same depth layer in the images, after which a sub-pixel shift is applied; the shift is then linearly related to the slope of the line. For example, insert a vertical straight line between two images of consecutive views and align the object point by a sub-pixel shift: the line then acquires a certain slope, and that slope is linearly related to the horizontal distance between the intersections of the line with the two pictures. This distance is also the horizontal distance, i.e. the parallax, of the object point in the two pictures, so depth information can be estimated from the slope. In this embodiment the number of depth layers is ns = 50 and the preset slope range is slope_begin = 0, slope_end = 2.5; M is the number of multi-view images and m = √M. According to the formula, in which u and v are the coordinates of a lens in the array, x and y are pixel coordinates, the center lens has u = 0 and v = 0, s_i is a preset slope, ns is the number of depth layers, and M is the number of multi-view images, the angular variance can be obtained.
Step 2.3: after computing the variance for all candidate slopes, choose the slope with minimum variance to recover depth. To improve robustness, a neighborhood average difference is computed to express the degree of blur. In the formula, W_D is the window centered on (x, y); in the present embodiment the window size is set to 3×3; |W_D| denotes the size of the window, i.e. the number of pixels it contains, and D(x, y) is the estimated local parallax. The depth estimate then follows from the formula in step 2.1, where γ is the object distance, i.e. the required depth. This yields local depths; the image is then processed cyclically with the 3×3 window, traversing the whole image to obtain the depth map.
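The shear-and-variance procedure of steps 2.2-2.3 can be sketched as follows (an illustrative reading, not the patent's implementation; function and variable names are ours, and integer `np.roll` shifts stand in for true sub-pixel offsets):

```python
import numpy as np

def depth_from_slopes(views, slopes):
    """views: (M, H, W) multi-view sequence on an m x m lens grid (M = m*m).
    For each candidate slope s_i, shear every view by (u*s_i, v*s_i) and
    compute the per-pixel angular variance; the slope with minimum
    variance serves as the depth proxy at that pixel."""
    M, H, W = views.shape
    m = int(round(np.sqrt(M)))
    us = np.arange(m) - m // 2              # lens coordinates, center lens u = v = 0
    best_var = np.full((H, W), np.inf)
    best_slope = np.zeros((H, W))
    for s in slopes:
        sheared = np.empty_like(views)
        k = 0
        for u in us:
            for v in us:
                # shift each view by (u*s, v*s): the (rounded) sub-pixel offset
                shift = (int(round(u * s)), int(round(v * s)))
                sheared[k] = np.roll(views[k], shift, axis=(0, 1))
                k += 1
        var = sheared.var(axis=0)           # angular variance per pixel
        better = var < best_var
        best_var[better] = var[better]
        best_slope[better] = s
    return best_slope

# Toy example: constant views agree under any shear, so the first slope wins.
depth = depth_from_slopes(np.ones((9, 8, 8)), [0.0, 1.0])
```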
Step 3: preprocess the depth map of the fabric.
Step 3.1: because it uses a microlens array, a light-field camera produces a vignetting effect. When the scene is more than 1 m from the imaging plane, the vignetting effect is not particularly evident; but for an object like fabric, which must be photographed at close range and whose texture extraction requires high-resolution images, the vignetting effect has an especially large influence on the processing result of the method. To weaken this influence, the depth image is first cropped and its central portion is taken, so that the gradually changing shading around the periphery of the vignetting circle does not affect the depth information. Generally one fifth is taken when cropping, i.e. the crop starts at (2m/5, 2n/5) and the length and width are m/5 and n/5 respectively; if m/5 is not an integer it is rounded down. Here m and n are the pixel height and width of the depth map.
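The cropping rule of step 3.1 can be sketched as (an illustrative helper; the function name is ours):

```python
import numpy as np

def central_crop(depth):
    """Crop the central region per step 3.1: start at (2m/5, 2n/5) and
    take m/5 x n/5 pixels, rounding down when m/5 or n/5 is not integer."""
    m, n = depth.shape                    # height and width of the depth map
    r0, c0 = (2 * m) // 5, (2 * n) // 5   # crop starting point
    return depth[r0:r0 + m // 5, c0:c0 + n // 5]

# Example: a 100 x 50 depth map is cropped to its central 20 x 10 region.
cropped = central_crop(np.zeros((100, 50)))
```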
Step 3.2: smoothing: apply simple smoothing to the depth map.
Step 4: adaptive-window filtering.
Step 4.1: create a zero matrix M of the same size as the image to record the positions of noise. N is the number of pixels in the image, x and y are the horizontal and vertical coordinates of a pixel, and V is the variance of the image. Setting M(x, y) = 1 labels, in the matrix M, a pixel whose value deviates from the mean by more than h standard deviations as noise; h = 3 is chosen in this embodiment.
Step 4.2: noise is judged for each pixel of the image by cycling through windows of order m, m = 2n + 1, n = 1, 2, 3, …, 7. The judgment is as in step 4.1: if the difference between a pixel's gray value and the window mean exceeds h times the standard deviation of the pixels in the window, the point is judged to be noise and the corresponding position in the matrix M is labeled 1.
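The global labeling of step 4.1 can be sketched as follows (a minimal sketch, assuming the deviation test |pixel − mean| > h·σ; names are ours, and step 4.2 applies the same test within each local window):

```python
import numpy as np

def noise_mask(img, h=3.0):
    """Mark pixels deviating from the image mean by more than h standard
    deviations (h = 3 in this embodiment) as noise: M(x, y) = 1."""
    mu, sigma = img.mean(), img.std()
    M = np.zeros(img.shape, dtype=int)
    M[np.abs(img - mu) > h * sigma] = 1
    return M

# Example: a single outlier pixel in an otherwise flat image is flagged.
img = np.zeros((10, 10))
img[0, 0] = 100.0
M = noise_mask(img)
```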
Step 4.3: filter the image. Take a cross-shaped window consisting of the central point and the four points adjacent to it above, below, left and right within a 3×3 neighborhood, and retrieve the matrix M. If the number of 0s in the window exceeds the number of 1s, i.e. the window contains more valid pixels than noise pixels, apply mean filtering to the pixel at the corresponding position of the depth map with that window; otherwise enlarge the window from 3×3 to 5×5 and retrieve M again. If the 0s then outnumber the 1s, mean-filter with that window; otherwise keep adding 2 to each window dimension until the window size reaches 15×15.
Step 4.4: according to the filtering result, the construction of the matrix M and the filtering can be repeated on the image until the most ideal result is obtained; two filtering passes are performed in this embodiment.
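One possible reading of step 4.3 in Python (a sketch under stated assumptions: square windows approximate the initial cross-shaped window, and the window mean includes the flagged pixel; the patent leaves both details open):

```python
import numpy as np

def adaptive_window_filter(depth, M):
    """For each pixel flagged as noise in M, grow a window from 3x3 up to
    15x15 until valid (M == 0) pixels outnumber noisy ones, then replace
    the pixel by the mean of the window."""
    out = depth.copy()
    H, W = depth.shape
    for y, x in zip(*np.nonzero(M)):
        for half in range(1, 8):                   # 3x3, 5x5, ..., 15x15
            y0, y1 = max(0, y - half), min(H, y + half + 1)
            x0, x1 = max(0, x - half), min(W, x + half + 1)
            win_M = M[y0:y1, x0:x1]
            if (win_M == 0).sum() > (win_M == 1).sum():
                out[y, x] = depth[y0:y1, x0:x1].mean()
                break
    return out

# Example: a lone noisy pixel is replaced by its 3x3 neighborhood mean.
depth = np.ones((7, 7))
depth[3, 3] = 10.0
mask = np.zeros((7, 7), dtype=int)
mask[3, 3] = 1
filtered = adaptive_window_filter(depth, mask)
```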
Step 5: binarize the noise-reduced depth map; choosing a suitable threshold yields a clear binary image. The choice of threshold depends on the number of depth layers in the image and the complexity of the scene depth, and generally lies between 0.25 and 0.3; the threshold chosen in this embodiment is 0.27.
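The final thresholding can be sketched as (illustrative; the comparison direction is an assumption, since the patent specifies only the threshold value):

```python
import numpy as np

def binarize(depth, threshold=0.27):
    """Step 5: threshold the filtered depth map (0.27 in this embodiment,
    within the suggested 0.25-0.3 range) to obtain the defect map."""
    return (depth > threshold).astype(np.uint8)

# Example: values above the threshold map to 1, the rest to 0.
seg = binarize(np.array([0.1, 0.26, 0.3]))
```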
The specific embodiment described herein is merely an illustration of the spirit of the present invention. Those skilled in the art to which the present invention belongs may make various modifications or additions to the described embodiment, or substitute it in a similar manner, without departing from the spirit of the present invention or exceeding the scope defined by the appended claims.
Claims (1)
1. A fabric defect detection method based on depth information extraction with a light-field camera, characterized by comprising:
Step 1: acquiring a group of multi-view images with a light-field camera;
Step 1.1: photographing the target fabric with the light-field camera under a uniform strong laser field, and extracting the RAW file and the white-image file from the camera; the extracted RAW file is first decoded and then color-corrected, using the Matlab light field toolbox developed by D.G.Dansereau et al., of which two versions, toolbox0.3 and toolbox0.4, are currently available, toolbox0.4 being used in this embodiment; the white-image file, carried by every light-field camera, is required during decoding: the toolbox reads the WhiteImagesDataBase mapping table and selects the most suitable white image and microlens grid model, yielding a light field image in Bayer format; the image is then frequency-domain filtered and demosaiced to obtain an RGB color image, and color correction of that image yields the color-corrected image and the five-dimensional light field data;
Step 1.2: decomposing the five-dimensional light field data, expressed as LF(x, y, row, col, channel), where x and y denote the size of the view grid, i.e. there are x·y images in total, row and col denote the vertical and horizontal pixel dimensions of each image, and channel stores the color information; traversing x and y decomposes the data into a sequence of multi-view sub-images;
Step 2: applying sub-pixel shifts to the multi-view images to obtain the depth map;
Step 2.1: following the current theory of depth construction, first extracting the parallax feature; in light field descriptions, a four-dimensional light field L(x, y, u, v) is usually described by two parallel planes Π and Ω; two points C1 and C2 are chosen in the Ω plane, and the lines connecting the object point with C1 and C2 intersect the Π plane at P1 and P2 respectively; the spacing between C1 and C2 is defined as B, the distances from C1 and C2 to the object projection point are B1 and B2 respectively, the distance between the two planes is f, the distance from P1 to the projection point of C1 on Π is L1, and the distance from P2 to the projection point of C2 on Π is L2; the parallax is then L1−L2, and γ, the object distance, i.e. the depth finally sought, is calculated by similar triangles;
Step 2.2: in a multi-view image sequence, the same object point has different coordinates in different views; a straight line is therefore inserted vertically through the pixels of the same depth layer of the image, after which a sub-pixel shift is applied, the shift then being linearly related to the slope of the line; for example, a vertical straight line is inserted between two images of consecutive views and the object point is aligned by a sub-pixel shift; the line then acquires a certain slope, and that slope is linearly related to the horizontal distance between the intersections of the line with the two pictures, this distance also being the horizontal distance, i.e. the parallax, of the object point in the two pictures, so depth information can be estimated from the slope; M is the number of multi-view images and m = √M; according to the formula, in which u and v are the coordinates of a lens in the array, x and y are pixel coordinates, the center lens has u = 0 and v = 0, s_i is a preset slope, ns is the number of depth layers, and M is the number of multi-view images, the angular variance can be obtained;
Step 2.3: after computing the variance for all candidate slopes, choosing the slope with minimum variance to recover depth; to improve robustness, a neighborhood average difference is computed to express the degree of blur; in the formula, W_D is the window centered on (x, y), |W_D| denotes the size of the window, i.e. the number of pixels it contains, and D(x, y) is the estimated local parallax; the depth estimate then follows from the formula in step 2.1, where γ is the object distance, i.e. the required depth; this yields local depths, and the image is then processed cyclically with a 3×3 window, traversing the whole image to obtain the depth map;
Step 3: preprocessing the depth map of the fabric;
Step 3.1: because it uses a microlens array, a light-field camera produces a vignetting effect; when the scene is more than 1 m from the imaging plane, the vignetting effect is not particularly evident, but for an object like fabric, which must be photographed at close range and whose texture extraction requires high-resolution images, the vignetting effect has an especially large influence on the processing result of the method; to weaken this influence, the depth image is first cropped and its central portion is taken, so that the gradually changing shading around the periphery of the vignetting circle does not affect the depth information;
Step 3.2: smoothing: applying simple smoothing to the depth map;
Step 4: adaptive-window filtering;
Step 4.1: creating a zero matrix M of the same size as the image to record the positions of noise; N is the number of pixels in the image, x and y are the horizontal and vertical coordinates of a pixel, and V is the variance of the image; setting M(x, y) = 1 labels, in the matrix M, a pixel whose value deviates from the mean by more than h standard deviations as noise, h = 3 being chosen in this embodiment;
Step 4.2: noise is judged for each pixel of the image by cycling through windows of order m, m = 2n + 1, n = 1, 2, 3, …, 7; the judgment is as in step 4.1: if the difference between a pixel's gray value and the window mean exceeds h times the standard deviation of the pixels in the window, the point is judged to be noise and the corresponding position in the matrix M is labeled 1;
Step 4.3: filtering the image: a cross-shaped window consisting of the central point and the four points adjacent to it above, below, left and right within a 3×3 neighborhood is taken, and the matrix M is retrieved; if the number of 0s in the window exceeds the number of 1s, i.e. the window contains more valid pixels than noise pixels, mean filtering is applied to the pixel at the corresponding position of the depth map with that window; otherwise the window is enlarged from 3×3 to 5×5 and M is retrieved again; if the 0s then outnumber the 1s, mean filtering is applied with that window; otherwise 2 is added to each window dimension until the window size reaches 15×15;
Step 4.4: according to the filtering result, the construction of the matrix M and the filtering can be repeated on the image to obtain the optimal result; two filtering passes are performed in this embodiment;
Step 5: binarizing the noise-reduced depth map; choosing a suitable threshold yields a clear binary image; the choice of threshold depends on the number of depth layers in the image and the complexity of the scene depth, and the threshold is taken between 0.25 and 0.3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910552640.6A CN110349132B (en) | 2019-06-25 | 2019-06-25 | Fabric flaw detection method based on light field camera depth information extraction |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910552640.6A CN110349132B (en) | 2019-06-25 | 2019-06-25 | Fabric flaw detection method based on light field camera depth information extraction |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110349132A true CN110349132A (en) | 2019-10-18 |
CN110349132B CN110349132B (en) | 2021-06-08 |
Family
ID=68182951
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910552640.6A Active CN110349132B (en) | 2019-06-25 | 2019-06-25 | Fabric flaw detection method based on light field camera depth information extraction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110349132B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101866427A (en) * | 2010-07-06 | 2010-10-20 | 西安电子科技大学 | Method for detecting and classifying fabric defects |
CN107787507A (en) * | 2015-06-17 | 2018-03-09 | 汤姆逊许可公司 | The apparatus and method for obtaining the registration error figure for the acutance rank for representing image |
CN107071233A (en) * | 2015-12-15 | 2017-08-18 | 汤姆逊许可公司 | The method and apparatus for correcting vignetting effect caused by the image of light-field camera capture |
CN105931246A (en) * | 2016-05-05 | 2016-09-07 | 东华大学 | Fabric flaw detection method based on wavelet transformation and genetic algorithm |
CN106530288A (en) * | 2016-11-03 | 2017-03-22 | 东华大学 | Fabric defect detection method based on deep learning algorithm |
CN107135388A (en) * | 2017-05-27 | 2017-09-05 | 东南大学 | A kind of depth extraction method of light field image |
CN107578437A (en) * | 2017-08-31 | 2018-01-12 | 深圳岚锋创视网络科技有限公司 | A kind of depth estimation method based on light-field camera, system and portable terminal |
CN108289170A (en) * | 2018-01-12 | 2018-07-17 | 深圳奥比中光科技有限公司 | The camera arrangement and method of metering region can be detected |
CN109410192A (en) * | 2018-10-18 | 2019-03-01 | 首都师范大学 | A kind of the fabric defect detection method and its device of multi-texturing level based adjustment |
CN109858485A (en) * | 2019-01-25 | 2019-06-07 | 东华大学 | A kind of fabric defects detection method based on LBP and GLCM |
Non-Patent Citations (7)
Title |
---|
JIAN ZHOU et al.: "Sparse Dictionary Reconstruction For Textile Defect Detection", 2012 11th International Conference on Machine Learning and Applications * |
RAKESH J. KADKOL et al.: "Textile Defect Detection for Fabric Material using Texture Feature Extraction", International Journal of Latest Trends in Engineering and Technology (IJLTET) * |
兰瑜洁: "Research on Fabric Defect Detection and Intelligent Recognition Based on Visual Inspection Technology", China Master's Theses Full-text Database, Engineering Science and Technology I * |
张国英 et al.: "Research on Fabric Defect Detection Methods under Different Illumination", Computer Science and Application * |
张敏: "Research on Depth Estimation Algorithms Based on Light-Field Multi-View Image Sequences", China Master's Theses Full-text Database, Information Science and Technology * |
林如意: "Textile Defect Detection Methods Based on Image Processing Technology", China Master's Theses Full-text Database, Information Science and Technology * |
骑着蜗牛追梦想: "Using the Matlab Light Field Toolbox: Refocusing and Multi-View Effect Demonstration", CSDN * |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110991082A (en) * | 2019-12-19 | 2020-04-10 | 信利(仁寿)高端显示科技有限公司 | Mura quantification method based on excimer laser annealing |
CN110991082B (en) * | 2019-12-19 | 2023-11-28 | 信利(仁寿)高端显示科技有限公司 | Mura quantification method based on excimer laser annealing |
CN112816493A (en) * | 2020-05-15 | 2021-05-18 | 奕目(上海)科技有限公司 | Chip routing defect detection method and device |
WO2022126871A1 (en) * | 2020-12-15 | 2022-06-23 | Vomma (Shanghai) Technology Co., Ltd. | Defect layer detection method and system based on light field camera and detection production line |
CN113298943A (en) * | 2021-06-10 | 2021-08-24 | 西北工业大学 | ESDF map construction method based on light field imaging |
CN114189623B (en) * | 2021-09-01 | 2023-03-24 | 深圳盛达同泽科技有限公司 | Light field-based refraction pattern generation method, device, equipment and storage medium |
CN114189623A (en) * | 2021-09-01 | 2022-03-15 | 深圳盛达同泽科技有限公司 | Light field-based refraction pattern generation method, device, equipment and storage medium |
CN114677577A (en) * | 2022-03-23 | 2022-06-28 | 北京拙河科技有限公司 | Motor vehicle detection method and system of light field camera |
CN114677577B (en) * | 2022-03-23 | 2022-11-29 | 北京拙河科技有限公司 | Motor vehicle detection method and system of light field camera |
CN114511469A (en) * | 2022-04-06 | 2022-05-17 | 江苏游隼微电子有限公司 | Intelligent image noise reduction prior detection method |
CN114511469B (en) * | 2022-04-06 | 2022-06-21 | 江苏游隼微电子有限公司 | Intelligent image noise reduction prior detection method |
CN115953790A (en) * | 2022-09-29 | 2023-04-11 | 江苏智联天地科技有限公司 | Label detection and identification method and system |
CN115953790B (en) * | 2022-09-29 | 2024-04-02 | 江苏智联天地科技有限公司 | Label detection and identification method and system |
CN115790449A (en) * | 2023-01-06 | 2023-03-14 | 威海晶合数字矿山技术有限公司 | Three-dimensional shape measurement method for long and narrow space |
CN116736783A (en) * | 2023-08-16 | 2023-09-12 | 江苏德顺纺织有限公司 | Intelligent remote control system and method for textile electrical equipment |
CN116736783B (en) * | 2023-08-16 | 2023-12-05 | 江苏德顺纺织有限公司 | Intelligent remote control system and method for textile electrical equipment |
CN116862917A (en) * | 2023-09-05 | 2023-10-10 | 微山县振龙纺织品有限公司 | Textile surface quality detection method and system |
CN116862917B (en) * | 2023-09-05 | 2023-11-24 | 微山县振龙纺织品有限公司 | Textile surface quality detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN110349132B (en) | 2021-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110349132A (en) | A kind of fabric defects detection method based on light-field camera extraction of depth information | |
CN106910242B (en) | Method and system for carrying out indoor complete scene three-dimensional reconstruction based on depth camera | |
KR102504246B1 (en) | Methods and Systems for Detecting and Combining Structural Features in 3D Reconstruction | |
CN102113015B (en) | Use of inpainting techniques for image correction | |
Wang et al. | Stereoscopic inpainting: Joint color and depth completion from stereo images | |
US20200380711A1 (en) | Method and device for joint segmentation and 3d reconstruction of a scene | |
CN107622480B (en) | Kinect depth image enhancement method | |
CN110910431B (en) | Multi-view three-dimensional point set recovery method based on monocular camera | |
CN113450410B (en) | Monocular depth and pose joint estimation method based on epipolar geometry | |
CN107369204B (en) | Method for recovering basic three-dimensional structure of scene from single photo | |
CN107689050B (en) | Depth image up-sampling method based on color image edge guide | |
CN111899295B (en) | Monocular scene depth prediction method based on deep learning | |
CN103996174A (en) | Method for performing hole repair on Kinect depth images | |
WO2018133119A1 (en) | Method and system for three-dimensional reconstruction of complete indoor scene based on depth camera | |
Luo et al. | Foreground removal approach for hole filling in 3D video and FVV synthesis | |
Sharma et al. | A flexible architecture for multi-view 3DTV based on uncalibrated cameras | |
CN104778673B (en) | A kind of improved gauss hybrid models depth image enhancement method | |
CN113538569A (en) | Weak texture object pose estimation method and system | |
KR101454780B1 (en) | Apparatus and method for generating texture for three dimensional model | |
Xu et al. | A method of hole-filling for the depth map generated by Kinect with moving objects detection | |
Reinert et al. | Animated 3D creatures from single-view video by skeletal sketching. | |
CN115713469A (en) | Underwater image enhancement method for generating countermeasure network based on channel attention and deformation | |
Schmeing et al. | Depth image based rendering | |
Zabulis et al. | Multi-camera reconstruction based on surface normal estimation and best viewpoint selection | |
Nouduri et al. | Deep realistic novel view generation for city-scale aerial images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||