CN107818552A - A binocular image reflection removal method
A binocular image reflection removal method
- Publication number: CN107818552A (application CN201711146926.1A)
- Authority: CN (China)
- Prior art keywords: reflective, image, pixel
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/00—Image enhancement or restoration
- G06T7/13—Edge detection
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T7/207—Analysis of motion for motion estimation over a hierarchy of resolutions
- G06T2207/10016—Video; Image sequence
- G06T2207/20024—Filtering details
Abstract
The invention discloses a reflection removal method for binocular images. Background points and reflection points are distinguished, and an optimization equation is then used to compute the reflection at the reflection points so that it can be removed. Edge detection is performed on the binocular image with the Sobel operator to obtain an edge image; the gray-scale score and the motion score of each pixel in the edge image are computed and combined into a composite score. A threshold for distinguishing background points from reflection points is set and compared with the composite score, so that each pixel is judged to be a background point or a reflection point; the optimization equation is then applied to the reflection points to obtain the background values after reflection removal. The beneficial effect of the invention is that reflection removal can be completed with a single pair of binocular images; this reduces the limitation that reflection removal needs multiple images, widens the applicability of reflection removal algorithms, and lays a solid foundation for subsequent binocular image processing algorithms.
Description
Technical field
The present invention relates to the field of image processing, and specifically to a reflection removal method for binocular images.
Background technology
Reflection removal decomposes an original input image (Image) into a background image (Background Image) and a reflection image (Reflection Image); the reflection image is typically discarded, and the resulting background image is the image with the reflections removed. This technology belongs to front-end processing in computer vision and image processing: it is applied to the image after image acquisition in order to reduce the influence of reflections on subsequent algorithms, such as image segmentation, recognition, and classification.
As quality requirements for video and images keep rising, binocular cameras and stereoscopic display devices have become very popular in recent years. In academia and industry, binocular image and video processing algorithms are an increasingly popular research direction, with applications such as panoramic stitching (Panorama) of binocular images, stabilization (Stabilization), 3D reconstruction (3D Reconstruction), and deblurring (Deblur). Up to now, however, no algorithm can effectively remove the reflections in binocular images.
Conventional reflection removal algorithms often require multiple images; this requirement significantly limits the application scenarios of reflection removal algorithms.
Summary of the invention
The object of the present invention is to provide a reflection removal method for binocular images that widens the application scenarios of reflection removal algorithms and reduces their usage limitations, while laying a solid foundation for subsequent binocular image processing algorithms.
The present invention is achieved through the following technical solution: a binocular image reflection removal method distinguishes background points from reflection points and then uses an optimization equation to compute and remove the reflection at the reflection points. Edge detection is performed on the binocular image with the Sobel operator to obtain an edge image; the gray-scale score and the motion score of each pixel in the edge image are computed and combined into a composite score; a threshold for distinguishing background points from reflection points is set and compared with the composite score, so that each pixel is judged to be a background point or a reflection point; the optimization equation is then applied to the reflection points to obtain the background values after reflection removal.
The method includes the following steps:
Step S1: initialize the relevant parameters, which include the threshold τ;
Step S2: compute the gray-scale image of the binocular image, detect the edges of the gray-scale image with the Sobel operator to obtain the edge image, and compute the pixel value of each edge pixel; the pixel value is used as the gray-scale score S_intensity(i);
Step S3: compute the motion vector matrix of each edge pixel in the edge image with the SIFT-flow method, compute from it the matrix of angles between the motion vectors and a reference vector, and use the angle matrix to compute the motion score S_motion(i) of each edge pixel;
Step S4: compute the composite score S_combine(i) from the gray-scale score S_intensity(i) and the motion score S_motion(i), and compare S_combine(i) with the threshold τ: if S_combine(i) is greater than or equal to τ the pixel is judged a background point, and if S_combine(i) is less than τ the pixel is judged a reflection point;
Step S5: apply the optimization equation to the reflection points and solve to obtain the image after reflection removal.
In step S2, the edge image is obtained by the following equations:
I_L = I_R + I_B (1);
G = f_sobel ⊗ I_L (2);
Wherein:
I_L is the original image;
I_R is the reflection image obtained after decomposition;
I_B is the background image obtained after decomposition;
G is the edge image;
f_sobel is the Sobel operator template;
⊗ represents convolution.
In step S1, the parameters also include the smoothness term λ. In step S3, under different values of the smoothness term λ, the motion vector of each pixel in the edge image is computed with the SIFT-flow method, and a hierarchical motion model is built for each pixel. Let V_i^k denote the motion vector at pixel i on level k of the hierarchical motion model, and let the reference vector be u = (0, 1). The angle between the motion vector V_i^k and the reference vector u is computed by the following equations:
α_i^k = arccos( (u · V_i^k) / (‖u‖ ‖V_i^k‖) ) (3);
α̂_i^k = α_i^k if u × V_i^k ≥ 0, and α̂_i^k = 360° − α_i^k if u × V_i^k < 0 (4);
The standard deviation of α̂_i^k over k for the same pixel gives the motion score S_motion(i).
A SuperPixel algorithm divides the edge image into several superpixel regions; the average of the per-pixel motion scores over all pixels in a region is computed, and this average is used as the motion score S_motion(i) of all pixels in the superpixel region.
The hierarchical motion model has 10 levels; the smoothness term λ takes 10 values, namely 10, 5, 1, 0.5, 0.2, 0.1, 0.01, 0.001, 0.0001, and 0.
In step S1, the parameters also include ε; the composite score S_combine(i) is computed by the following equation:
S_combine(i) = S_intensity(i) − ε·S_motion(i) (5);
Wherein:
ε = 0.5;
The computed composite score S_combine(i) is compared with the threshold τ:
E(i) = 1 if S_combine(i) ≥ τ, and E(i) = 0 if S_combine(i) < τ (6);
Wherein:
τ = 80;
E(i) = 1 indicates that the i-th pixel is classified as a background point;
E(i) = 0 indicates that the i-th pixel is classified as a reflection point.
The optimization equation in step S5 is:
J(I_B) = Σ_{x,n} [ |(f_n * I_B)(x)| + |((I_L − I_B) * f_n)(x)| ] + λ Σ_{x∈E_R,n} |(f_n * I_B)(x)| + Σ_{x∈E_B,n} |((I_L − I_B) * f_n)(x)| (7);
Wherein:
f_n denotes the n-th differential filter, and all differential filters are traversed during the calculation;
* represents convolution;
E_B denotes the set of edge pixels classified as background;
E_R denotes the set of edge pixels classified as reflection;
x denotes a pixel location;
minimizing J(I_B) yields the image I_B after reflection removal.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
This scheme can complete reflection removal with a single pair of binocular images, which reduces the limitation that reflection removal needs multiple images, widens the applicability of reflection removal algorithms, and provides a solid foundation for subsequent binocular image processing algorithms.
Brief description of the drawings
Fig. 1 is a schematic diagram of obtaining the edge image;
Fig. 2 is a schematic diagram of the SuperPixel partition of an image;
Fig. 3 is a schematic diagram of distinguishing background points from reflection points;
Fig. 4 is a schematic diagram of the reflection removal effect;
Fig. 5(a) is a schematic diagram of the vector direction angle calculation; Fig. 5(b) is a schematic diagram of the SuperPixel method.
Detailed description of the embodiments
The present invention is described in further detail below with reference to embodiments, but the implementation of the present invention is not limited to these.
Embodiment 1:
As shown in Fig. 1, in this embodiment a binocular image reflection removal method selects two images that have parallax and contain reflections; one of the two is designated the original image and the other the target image. Background points and reflection points are distinguished, and an optimization equation is then used to compute and remove the reflection at the reflection points. Edge detection is performed with the Sobel operator on the original image of the binocular pair: the original image is decomposed into a reflection image and a background image, and the Sobel operator yields the edge image. The gray-scale score and the motion score of each pixel in the edge image are computed and combined into a composite score; a threshold for distinguishing background points from reflection points is set and compared with the composite score, so that each pixel is judged to be a background point or a reflection point; the optimization equation is then applied to the reflection points to obtain the background values after reflection removal. By exchanging the roles of the original image and the target image, the same method yields the reflection-free version of the target image.
In the edge image, if a pixel contains reflection, i.e. the pixel is a reflection point, its value is a mixture of a foreground part and a background part, where the foreground part refers to the reflection. Applying the optimization equation to the reflection point separates the foreground part from the background part, thereby achieving reflection removal for that pixel. This optimization equation is a standard procedure in the art and comes from the paper: A. Levin and Y. Weiss, "User assisted separation of reflections from a single image using a sparsity prior", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 9, pp. 1647-1654, 2007.
Embodiment 2:
On the basis of the above embodiment, in this embodiment a binocular image reflection removal method includes the following steps:
Step S1: initialize the relevant parameters, which include the threshold τ.
Step S2: as shown in Fig. 1, compute the gray-scale image of the binocular image, detect the edges of the gray-scale image with the Sobel operator to obtain the edge image, and compute the pixel value of each edge pixel; the pixel value is used as the gray-scale score S_intensity(i). The algorithm for computing a gray-scale image is common knowledge and a customary means of those skilled in the art, who can realize the above effect from the content recorded in this scheme, so the specific process of computing the gray-scale image is not repeated here. For an image containing reflections, the gray-scale intensity of most reflection edges is lower than that of the background edges. Based on this feature, the gray value of an edge pixel can be used as the gray-scale score S_intensity(i) of that point, which reduces the computational load.
Step S3: as shown in Fig. 3, compute the motion vector matrix of each edge pixel in the edge image with the SIFT-flow method, compute from it the matrix of angles between the motion vectors and a reference vector, and use the angle matrix to compute the motion score S_motion(i) of each edge pixel. SIFT-flow is a method that establishes the motion information between two images; it computes the motion vectors (Motion Vector) of corresponding points in the two images. In the SIFT-flow method, adjusting the size of the smoothness term λ (Smooth Term) makes the motion vector of a point and the motion vectors around it tend to agree. Because the background occupies the dominant position in the image, when the smoothness term is large the motion vector directions of the background edges and of the reflection edges converge; as the smoothness term decreases, the motion vector directions of the background edges remain essentially unchanged, while the motion vector directions of the reflection edges gradually turn toward their own motion direction as the constraint weakens. Analyzing this change of the motion vectors determines whether a pixel in the edge image is a background point or a reflection point.
Step S4: compute the composite score S_combine(i) from the gray-scale score S_intensity(i) and the motion score S_motion(i), and compare S_combine(i) with the threshold τ: if S_combine(i) is greater than or equal to τ the pixel is judged a background point, and if S_combine(i) is less than τ the pixel is judged a reflection point. Judging whether a pixel is a background point or a reflection point with the composite score S_combine(i), computed from both S_intensity(i) and S_motion(i), improves the accuracy of the judgment and hence of the reflection removal.
Step S5: as shown in Fig. 4, apply the optimization equation to the reflection points and solve to obtain the image after reflection removal. In this embodiment, the parts not described are identical to the content of the above embodiment and are not repeated.
Embodiment 3:
On the basis of the above embodiment, in this embodiment, in step S2 the edge image is obtained by the following equations:
I_L = I_R + I_B (1);
G = f_sobel ⊗ I_L (2);
Wherein:
I_L is the original image.
I_R denotes the reflection image obtained after decomposition.
I_B denotes the background image obtained after decomposition.
G is the edge image.
f_sobel is the Sobel operator template. The Sobel operator is a classical operator in image processing, composed of two 3×3 templates; convolving the original image with these two templates gives the gradient images in the x direction and the y direction, and taking the Euclidean distance of the two gradient images gives the final edge image. In this embodiment, the Sobel operator and the Euclidean distance are common knowledge and customary means of those skilled in the art, who can realize the above effect from the content recorded in this scheme, so their specific content is not repeated here.
⊗ represents convolution. In this embodiment, convolution is common knowledge and a customary means of those skilled in the art, who can realize the above effect from the content recorded in this scheme, so the specific content of convolution is not repeated here. In this embodiment, the parts not described are identical to the content of the above embodiment and are not repeated.
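The Sobel step of equation (2) can be sketched in a few lines (a NumPy-only illustration; the 3×3 templates and the edge-replicating border handling are the usual Sobel convention and are assumptions, since the patent does not prescribe an implementation):

```python
import numpy as np

def sobel_edge_image(img):
    """Edge image G = f_sobel (x) I_L (eq. (2)): correlate with the two
    3x3 Sobel templates, then take the per-pixel Euclidean distance."""
    img = np.pad(np.asarray(img, dtype=float), 1, mode="edge")
    fx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])   # x-direction template
    fy = fx.T                           # y-direction template
    h, w = img.shape[0] - 2, img.shape[1] - 2
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):                 # unrolled 3x3 correlation
        for dx in range(3):
            patch = img[dy:dy + h, dx:dx + w]
            gx += fx[dy, dx] * patch
            gy += fy[dy, dx] * patch
    return np.hypot(gx, gy)             # Euclidean distance of the gradients

# A vertical step edge: G responds along the step and is zero elsewhere.
step = np.tile([0.0] * 4 + [100.0] * 4, (8, 1))
G = sobel_edge_image(step)
```

On a vertical step edge the response is confined to the columns around the step, which is the behavior the gray-scale score of step S2 relies on.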
Embodiment 4:
On the basis of the above embodiment, in this embodiment, in step S1 the parameters also include the smoothness term λ. In step S3, under different values of the smoothness term λ, the motion vector of each pixel in the edge image is computed with the SIFT-flow method, and a hierarchical motion model is built for each pixel. Let V_i^k denote the motion vector at pixel i on level k of the hierarchical motion model, and let the reference vector be u = (0, 1). The angle between the motion vector V_i^k and the reference vector u is computed by the following equations:
α_i^k = arccos( (u · V_i^k) / (‖u‖ ‖V_i^k‖) ) (3);
α̂_i^k = α_i^k if u × V_i^k ≥ 0, and α̂_i^k = 360° − α_i^k if u × V_i^k < 0 (4);
The range of α_i^k computed by equation (3) is 0-180°, so it cannot distinguish vectors that make the same angle on the two sides of the reference vector. As shown in Fig. 5(a), the sign of the outer product of the vectors is therefore used to spread the angle into the range 0-360°.
The standard deviation of α̂_i^k over k for the same pixel gives the motion score S_motion(i). In this embodiment, the parts not described are identical to the content of the above embodiment and are not repeated.
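Equations (3) and (4) reduce to a short routine (a NumPy sketch under the assumption that each motion vector is a 2-D (x, y) pair; the function names are illustrative, not from the patent):

```python
import numpy as np

U = np.array([0.0, 1.0])   # reference vector u = (0, 1)

def signed_angle(v, u=U):
    """Eqs. (3)-(4): angle between v and u, spread to [0, 360) degrees
    using the sign of the 2-D cross (outer) product."""
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    ang = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))  # 0..180
    cross = u[0] * v[1] - u[1] * v[0]                        # outer-product test
    return ang if cross >= 0 else 360.0 - ang

def motion_score(level_vectors):
    """Motion score S_motion(i): std of the per-level angles alpha_hat_i^k."""
    return float(np.std([signed_angle(v) for v in level_vectors]))

# The mirror vectors (-1, 1) and (1, 1) both make 45 deg with u in eq. (3);
# eq. (4) separates them into 45 deg and 315 deg.
a = signed_angle(np.array([-1.0, 1.0]))
b = signed_angle(np.array([1.0, 1.0]))
```

The outer-product test is exactly what lets the standard deviation over levels detect reflection edges whose vectors swing to the other side of u as λ shrinks.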
Embodiment 5:
As shown in Fig. 2 and Fig. 5(b), on the basis of embodiment 4 above: a motion score S_motion(i) obtained by computing the standard deviation of α̂_i^k ignores spatial continuity, i.e. the motion scores of the edge points are spatially independent, which can cause discontinuity errors when the edge pixels are divided into background points and reflection points. In this embodiment, a SuperPixel algorithm divides the edge image into several superpixel regions; the average of the per-pixel motion scores over all pixels in a region is computed, and this average is used as the motion score S_motion(i) of all pixels in the superpixel region. SuperPixel is a classical clustering algorithm for images: the pixels grouped into one class share a consistency feature. Introducing the SuperPixel algorithm resolves the discontinuity errors between background points and reflection points, thereby strengthening the spatial continuity of the method. In this embodiment, the SuperPixel algorithm is common knowledge and a customary means of those skilled in the art, who can realize the above effect from the content recorded in this scheme, so its specific content is not repeated here. In this embodiment, the parts not described are identical to the content of the above embodiment and are not repeated.
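The region averaging itself can be sketched as follows (NumPy only; the integer label map standing in for the SuperPixel partition is hand-made here, since any superpixel algorithm, e.g. SLIC, could supply it):

```python
import numpy as np

def smooth_scores_by_superpixel(scores, labels):
    """Replace each pixel's motion score with the mean score of its
    superpixel region (labels: one integer region id per pixel)."""
    out = np.empty_like(scores, dtype=float)
    for region in np.unique(labels):
        mask = labels == region
        out[mask] = scores[mask].mean()   # one shared score per region
    return out

# Two hand-made regions: left half (label 0) and right half (label 1).
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1]])
scores = np.array([[1.0, 3.0, 10.0, 30.0],
                   [1.0, 3.0, 10.0, 30.0]])
smoothed = smooth_scores_by_superpixel(scores, labels)
```

Every pixel in a region ends up with the same score, which removes the spatial discontinuities described above.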
Embodiment 6:
On the basis of the above embodiment, in this embodiment the hierarchical motion model has 10 levels; the smoothness term λ takes 10 values, namely 10, 5, 1, 0.5, 0.2, 0.1, 0.01, 0.001, 0.0001, and 0. In this embodiment, the parts not described are identical to the content of the above embodiment and are not repeated.
Embodiment 7:
On the basis of the above embodiment, in this embodiment, in step S1 the parameters also include ε; the composite score S_combine(i) is computed by the following equation:
S_combine(i) = S_intensity(i) − ε·S_motion(i) (5).
Wherein:
ε = 0.5. Because the motion score S_motion(i) and the gray-scale score S_intensity(i) have different units, introducing ε balances the different dimensions of S_motion(i) and S_intensity(i), yielding the composite score S_combine(i).
The computed composite score S_combine(i) is compared with the threshold τ:
E(i) = 1 if S_combine(i) ≥ τ, and E(i) = 0 if S_combine(i) < τ (6);
Wherein:
τ = 80.
E(i) = 1 indicates that the i-th pixel is classified as a background point.
E(i) = 0 indicates that the i-th pixel is classified as a reflection point. In this embodiment, the parts not described are identical to the content of the above embodiment and are not repeated.
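Equations (5) and (6) are a two-line computation; a sketch using the patent's constants ε = 0.5 and τ = 80 (the score arrays are made-up illustrations, not data from the patent):

```python
import numpy as np

EPS, TAU = 0.5, 80.0   # epsilon and threshold tau from the patent

def classify_edges(s_intensity, s_motion):
    """Eq. (5): S_combine = S_intensity - eps * S_motion.
    Eq. (6): E(i) = 1 (background) if S_combine >= tau, else 0 (reflection)."""
    s_combine = s_intensity - EPS * s_motion
    return (s_combine >= TAU).astype(int)

# Illustrative scores: a bright, stable edge vs. a dim, jittery edge.
s_int = np.array([200.0, 60.0])
s_mot = np.array([10.0, 40.0])
E = classify_edges(s_int, s_mot)
```

The bright, motion-stable edge scores 195 and is labeled background (E = 1); the dim, jittery edge scores 40 and is labeled reflection (E = 0).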
Embodiment 8:
On the basis of the above embodiment, in this embodiment the optimization equation in step S5 is:
J(I_B) = Σ_{x,n} [ |(f_n * I_B)(x)| + |((I_L − I_B) * f_n)(x)| ] + λ Σ_{x∈E_R,n} |(f_n * I_B)(x)| + Σ_{x∈E_B,n} |((I_L − I_B) * f_n)(x)| (7);
Wherein:
f_n denotes the n-th differential filter, and all differential filters are traversed during the calculation;
* represents convolution;
E_B denotes the set of edge pixels classified as background;
E_R denotes the set of edge pixels classified as reflection;
x denotes a pixel location;
minimizing J(I_B) yields the image I_B after reflection removal.
The first term on the right-hand side of the equal sign makes the edge responses of both the background and the reflection as small as possible, i.e. the background edge set and the reflection edge set are as sparse as possible.
The second term makes the edges at the positions E_R as consistent as possible with the edges of I_L − I_B, i.e. of the reflection image I_R;
The third term makes the edges at the positions E_B as consistent as possible with the edges of the background image I_B;
Equation (7) is solved by the least squares method with continually updated weights (iteratively reweighted least squares, IRLS). IRLS is a common optimization solver in mathematics; those skilled in the art can realize the above effect from the content recorded in this scheme combined with common knowledge, so the specific algorithm of the least squares method is not repeated here.
Equation (7) is a common optimization in reflection removal; through this optimization the edge images can be restored to the original images, so the reflection removal is carried out in the edge images. This scheme likewise performs the partition in the edge image, distinguishing reflection points from background points. After the reflection edge image and the background edge image are obtained, equation (7) restores the reflection image and the background image. Those skilled in the art can realize the above reflection removal effect from the content recorded in this scheme combined with equation (7), so the specific principle of equation (7) is not repeated here.
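The IRLS idea can be illustrated on a small generic L1 problem (a didactic sketch of iteratively reweighted least squares itself, not the patent's full edge-domain solver; the damping constant delta is an assumption to avoid division by zero):

```python
import numpy as np

def irls_l1(A, b, iters=50, delta=1e-6):
    """Minimize ||A x - b||_1 by iteratively reweighted least squares:
    each step solves a weighted normal equation with weights 1/(|r|+delta),
    so that w * r^2 approximates |r| near the current residual r."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain L2 initialization
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / (np.abs(r) + delta)          # reweighting
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x

# L1 fit of a constant to data with one outlier: the L1 minimizer is the
# median, so the outlier is ignored (unlike the L2 mean).
A = np.ones((5, 1))
b = np.array([1.0, 1.1, 0.9, 1.0, 100.0])
x = irls_l1(A, b)
```

Each iteration is an ordinary weighted least-squares solve; the weights 1/(|r| + δ) re-create the L1 penalty, which is why the same scheme applies to the absolute-value terms of equation (7).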
The above is only a preferred embodiment of the present invention and does not limit the present invention in any form. Any simple modification or equivalent variation made to the above embodiments according to the technical spirit of the present invention falls within the scope of protection of the present invention.
Claims (8)
1. A binocular image reflection removal method, which distinguishes background points from reflection points and then uses an optimization equation to compute and remove the reflection at the reflection points, characterized in that: edge detection is performed on the binocular image with the Sobel operator to obtain an edge image; the gray-scale score and the motion score of each pixel in the edge image are computed and combined into a composite score; a threshold for distinguishing background points from reflection points is set and compared with the composite score, so that each pixel is judged to be a background point or a reflection point; the optimization equation is applied to the reflection points to obtain the background values after reflection removal.
2. The binocular image reflection removal method according to claim 1, characterized by including the following steps:
Step S1: initialize the relevant parameters, which include the threshold τ;
Step S2: compute the gray-scale image of the binocular image, detect the edges of the gray-scale image with the Sobel operator to obtain the edge image, and compute the pixel value of each edge pixel; the pixel value is used as the gray-scale score S_intensity(i);
Step S3: compute the motion vector matrix of each edge pixel in the edge image with the SIFT-flow method, compute from it the matrix of angles between the motion vectors and a reference vector, and use the angle matrix to compute the motion score S_motion(i) of each edge pixel;
Step S4: compute the composite score S_combine(i) from the gray-scale score S_intensity(i) and the motion score S_motion(i), and compare S_combine(i) with the threshold τ: if S_combine(i) is greater than or equal to τ the pixel is judged a background point, and if S_combine(i) is less than τ the pixel is judged a reflection point;
Step S5: apply the optimization equation to the reflection points and solve to obtain the image after reflection removal.
3. The binocular image reflection removal method according to claim 2, characterized in that in step S2 the edge image is obtained by the following equations:
I_L = I_R + I_B (1);
G = f_sobel ⊗ I_L (2);
Wherein:
I_L is the original image;
I_R is the reflection image obtained after decomposition;
I_B is the background image obtained after decomposition;
G is the edge image;
f_sobel is the Sobel operator template;
⊗ represents convolution.
4. The binocular image reflection removal method according to claim 2 or 3, characterized in that: in step S1 the parameters also include the smoothness term λ; in step S3, under different values of the smoothness term λ, the motion vector of each pixel in the edge image is computed with the SIFT-flow method, and a hierarchical motion model is built for each pixel; let V_i^k denote the motion vector at pixel i on level k of the hierarchical motion model, and let the reference vector be u = (0, 1); the angle between the motion vector V_i^k and the reference vector u is computed by the following equations:
α_i^k = arccos( (u · V_i^k) / (‖u‖ ‖V_i^k‖) ) (3);
α̂_i^k = α_i^k if u × V_i^k ≥ 0, and α̂_i^k = 360° − α_i^k if u × V_i^k < 0 (4);
The standard deviation of α̂_i^k over k for the same pixel gives the motion score S_motion(i).
5. The binocular image reflection removal method according to claim 4, characterized in that: a SuperPixel algorithm divides the edge image into several superpixel regions; the average of the per-pixel motion scores over all pixels in a region is computed, and this average is used as the motion score S_motion(i) of all pixels in the superpixel region.
6. The binocular image reflection removal method according to claim 5, characterized in that: the hierarchical motion model has 10 levels; the smoothness term λ takes 10 values, namely 10, 5, 1, 0.5, 0.2, 0.1, 0.01, 0.001, 0.0001, and 0.
7. The binocular image reflection removal method according to any one of claims 2, 3, 5 and 6, characterized in that: in step S1 the parameters also include ε; the composite score S_combine(i) is computed by the following equation:
S_combine(i) = S_intensity(i) − ε·S_motion(i) (5);
Wherein:
ε = 0.5;
The computed composite score S_combine(i) is compared with the threshold τ:
E(i) = 1 if S_combine(i) ≥ τ, and E(i) = 0 if S_combine(i) < τ (6);
Wherein:
τ = 80;
E(i) = 1 indicates that the i-th pixel is classified as a background point;
E(i) = 0 indicates that the i-th pixel is classified as a reflection point.
8. The binocular image reflection removal method according to any one of claims 1, 2, 3, 5 and 6, characterized in that: the optimization equation in said step S5 is:
$$\begin{aligned} J(I_B) = {} & \sum_{x,\,n} \Big( \big| (f_n * I_B)(x) \big| + \big| ((I_L - I_B) * f_n)(x) \big| \Big) \\ & + \lambda \sum_{x \in E_R,\, n} \big| ((I_L - I_B) * f_n)(x) \big| + \sum_{x \in E_B,\, n} \big| (f_n * I_B)(x) \big| \end{aligned} \qquad (7)$$
wherein:
f_n denotes the n-th difference filter; all difference filters are traversed during the calculation;
* denotes convolution;
E_B denotes the set of edge pixels classified as background;
E_R denotes the set of edge pixels classified as reflection;
x denotes the pixel position;
I_B is the reflection-free image to be recovered, obtained by minimizing the objective J(I_B).
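A sketch of evaluating the objective of equation (7), under the assumption that the filter bank {f_n} consists of just the horizontal and vertical first-order differences; the mask and function names are illustrative, not from the patent:

```python
import numpy as np

def objective(I_B, I_L, mask_R, mask_B, lam):
    """J(I_B): gradient sparsity on both layers plus the edge terms of
    equation (7); mask_R / mask_B are boolean pixel masks for E_R / E_B."""
    I_R = I_L - I_B                       # reflection layer I_L - I_B
    J = 0.0
    for axis in (0, 1):                   # f_n: vertical / horizontal difference
        gB = np.abs(np.diff(I_B, axis=axis))
        gR = np.abs(np.diff(I_R, axis=axis))
        # crop the pixel masks to the difference-image grid
        mR = mask_R[1:, :] if axis == 0 else mask_R[:, 1:]
        mB = mask_B[1:, :] if axis == 0 else mask_B[:, 1:]
        J += (gB + gR).sum()              # sparsity term over all x, n
        J += lam * gR[mR].sum()           # reflection-edge term over E_R
        J += gB[mB].sum()                 # background-edge term over E_B
    return J

I_L = np.array([[0., 1.], [0., 1.]])
I_B = I_L.copy()                          # perfect separation: no reflection left
no_edges = np.zeros_like(I_L, dtype=bool)
print(objective(I_B, I_L, no_edges, no_edges, lam=1.0))  # 2.0
```

A real solver would minimize this cost over I_B (the claims mention an optimization step S5); the sketch only shows how one evaluation of the cost decomposes into its three terms.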
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711146926.1A CN107818552A (en) | 2017-11-17 | 2017-11-17 | A kind of binocular image goes reflective method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107818552A true CN107818552A (en) | 2018-03-20 |
Family
ID=61609317
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711146926.1A Pending CN107818552A (en) | 2017-11-17 | 2017-11-17 | A kind of binocular image goes reflective method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107818552A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130077829A1 (en) * | 2011-09-23 | 2013-03-28 | The Boeing Company | Reflection Removal System |
CN103824061A (en) * | 2014-03-03 | 2014-05-28 | 山东大学 | Light-source-reflection-region-based iris positioning method for detecting and improving Hough conversion |
CN105678240A (en) * | 2015-12-30 | 2016-06-15 | 哈尔滨工业大学 | Image processing method for removing the reflect light of roads |
CN106228168A (en) * | 2016-07-29 | 2016-12-14 | 北京小米移动软件有限公司 | The reflective detection method of card image and device |
- 2017-11-17 CN CN201711146926.1A patent/CN107818552A/en active Pending
Non-Patent Citations (1)
Title |
---|
CHAO SUN等: "Automatic Reflection Removal using Gradient Intensity and Motion Cues", 《PROCEEDINGS OF THE 24TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA》 * |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019201336A1 (en) * | 2018-04-19 | 2019-10-24 | Shanghaitech University | Light field based reflection removal |
US11880964B2 (en) | 2018-04-19 | 2024-01-23 | Shanghaitech University | Light field based reflection removal |
CN110827217A (en) * | 2019-10-30 | 2020-02-21 | 维沃移动通信有限公司 | Image processing method, electronic device, and computer-readable storage medium |
CN110827217B (en) * | 2019-10-30 | 2022-07-12 | 维沃移动通信有限公司 | Image processing method, electronic device, and computer-readable storage medium |
CN115082477A (en) * | 2022-08-23 | 2022-09-20 | 山东鲁芯之光半导体制造有限公司 | Semiconductor wafer processing quality detection method based on light reflection removing effect |
CN115082477B (en) * | 2022-08-23 | 2022-10-28 | 山东鲁芯之光半导体制造有限公司 | Semiconductor wafer processing quality detection method based on light reflection removing effect |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578418B (en) | Indoor scene contour detection method fusing color and depth information | |
CN109299274B (en) | Natural scene text detection method based on full convolution neural network | |
CN101443817B (en) | Method and device for determining correspondence, preferably for the three-dimensional reconstruction of a scene | |
CN109903331B (en) | Convolutional neural network target detection method based on RGB-D camera | |
CN105574527B (en) | A kind of quick object detecting method based on local feature learning | |
CN102708370B (en) | Method and device for extracting multi-view angle image foreground target | |
CN107292234B (en) | Indoor scene layout estimation method based on information edge and multi-modal features | |
CN104134200B (en) | Mobile scene image splicing method based on improved weighted fusion | |
CN107408211A (en) | Method for distinguishing is known again for object | |
CN111126412B (en) | Image key point detection method based on characteristic pyramid network | |
CN104156693B (en) | A kind of action identification method based on the fusion of multi-modal sequence | |
CN105975941A (en) | Multidirectional vehicle model detection recognition system based on deep learning | |
CN106354816A (en) | Video image processing method and video image processing device | |
CN105303615A (en) | Combination method of two-dimensional stitching and three-dimensional surface reconstruction of image | |
CN103903013A (en) | Optimization algorithm of unmarked flat object recognition | |
CN106156778A (en) | The apparatus and method of the known object in the visual field identifying three-dimensional machine vision system | |
CN105809173B (en) | A kind of image RSTN invariable attribute feature extraction and recognition methods based on bionical object visual transform | |
CN101945257A (en) | Synthesis method for extracting chassis image of vehicle based on monitoring video content | |
CN107818552A (en) | A kind of binocular image goes reflective method | |
CN111160373B (en) | Method for extracting, detecting and classifying defect image features of variable speed drum part | |
CN109685772B (en) | No-reference stereo image quality evaluation method based on registration distortion representation | |
CN110008833B (en) | Target ship detection method based on optical remote sensing image | |
CN104966054A (en) | Weak and small object detection method in visible image of unmanned plane | |
CN102779157A (en) | Method and device for searching images | |
CN105740751A (en) | Object detection and identification method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 2018-03-20