CN103164855B - A Bayesian-decision foreground extraction method combined with reflected illumination - Google Patents

A Bayesian-decision foreground extraction method combined with reflected illumination

Info

Publication number
CN103164855B
CN103164855B CN201310059707.5A CN201310059707A CN103164855B CN 103164855 B CN103164855 B CN 103164855B CN 201310059707 A CN201310059707 A CN 201310059707A CN 103164855 B CN103164855 B CN 103164855B
Authority
CN
China
Prior art keywords
image
function
foreground
point light source
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201310059707.5A
Other languages
Chinese (zh)
Other versions
CN103164855A (en)
Inventor
Wang Haoqian
Deng Bowen
Shao Hang
Dai Qionghai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen Graduate School Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Tsinghua University filed Critical Shenzhen Graduate School Tsinghua University
Priority to CN201310059707.5A priority Critical patent/CN103164855B/en
Publication of CN103164855A publication Critical patent/CN103164855A/en
Application granted granted Critical
Publication of CN103164855B publication Critical patent/CN103164855B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

A Bayesian-decision foreground extraction method combined with reflected illumination, comprising the steps of: a point light source located on the foreground object is specified by the user; a gray-scale matching transform is applied to the image to simulate illumination by the point light source, strengthening the image edge information, and an illumination function is obtained by comparing the image before and after the transform; after filtering and denoising, the image is segmented with a watershed algorithm; the matting parameters are computed with the Bayesian formula, and the α-value function curve is fitted with a multilayer perceptron; the illumination function and the color distribution function are integrated to complete the extraction of the foreground object. The user only needs to specify the position of the point light source and need not supply foreground/background edge information, which reduces the demand on user interaction; at the same time the algorithms used all have low time complexity, avoiding the heavy computation and slow processing of general matting algorithms. Because the illumination function is introduced and the α values are fitted by a perceptron, the method obtains accurate and complete extraction results even for foreground objects with complex edges, and in particular for foreground objects whose edges are close in color to the background.

Description

A Bayesian-decision foreground extraction method combined with reflected illumination
Technical field
The invention belongs to the field of computer image processing technology, and particularly relates to a Bayesian-decision foreground extraction method combined with reflected illumination.
Background technology
Foreground extraction is a technique in which the user specifies a small number of foreground and background regions in an image, and all foreground objects are then separated out automatically and accurately from these hints according to a decision rule.
Foreground extraction is an indispensable key technique in film and television production and is widely used in media production. Many different algorithms have been developed to date: the Rotoscoping, Autokey, Knockout, Ruzon-Tomasi, Hillman, Bayesian, Poisson, GrabCut and Lazy Snapping methods, matting methods based on perceptual color spaces, and so on. Foreground extraction in natural images can be divided into three steps: region division, color estimation and α estimation. First a trimap is constructed; then the foreground component and α value of each point in the unknown region are computed.
Based on the Bayesian framework, Chuang proposed a Bayesian matting method built on statistical principles. Unlike the earlier GrabCut and Lazy Snapping methods based on graph theory, it does not process pixels one by one in raster-scan order; instead it uses the Bayesian framework and statistical principles to construct a system of linear equations and solve for the most suitable solution. Its processing order resembles peeling an onion layer by layer from the outside in. In essence, for any point C in the unknown region it minimizes, in RGB space, the sum of squares of the Euclidean distance d1 from C to the line segment FB connecting a foreground point F and a background point B, the Mahalanobis distance d2 from C to F, and the Mahalanobis distance d3 from C to B, i.e. it seeks min(d1² + d2² + d3²). However, the method defines only the log-likelihoods L(C|F, B, α), L(F) and L(B) and does not define L(α); when the foreground and background colors are close, this assumption breaks down. Tan improved Bayesian matting and proposed a fast matting method: assuming the segment FB passes through C, so that d1 = 0, the Bayesian matting framework reduces to min(d2² + d3²), together with an approximate method for solving the minimum quickly. Although this method is fast, its results are unsatisfactory.
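The two kinds of distance that the Chuang formulation above minimizes can be sketched as follows (an illustrative Python sketch; the function names are ours):

```python
import numpy as np

def point_segment_dist(C, F, B):
    """Euclidean distance d1 in RGB space from colour C to the line
    segment joining foreground colour F and background colour B."""
    FB = B - F
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = np.clip(np.dot(C - F, FB) / max(np.dot(FB, FB), 1e-12), 0.0, 1.0)
    return float(np.linalg.norm(C - (F + t * FB)))

def mahalanobis(C, mean, cov):
    """Mahalanobis distance of colour C from a colour cluster with the
    given mean and covariance (used for d2 and d3)."""
    d = C - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
```

With an identity covariance the Mahalanobis distance reduces to the Euclidean one, which gives an easy sanity check.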
Summary of the invention
Based on the statistical principles of Bayesian matting, and addressing the shortcomings of the above foreground extraction methods, the present invention proposes a Bayesian-decision foreground extraction method combined with reflected illumination, so as to avoid the heavy computation and slow processing of general matting algorithms, reduce the demand on user interaction, and obtain accurate and complete foreground extraction results.
The Bayesian-decision foreground extraction method combined with reflected illumination of the present invention is an entirely new interactive foreground extraction technique. The method comprises the following steps:
S1, gray-scale matching transform
Input the original image; the user specifies a point light source located on the foreground object; the illumination effect of the point light source on the foreground and background objects is computed with the BRDF (bidirectional reflectance distribution function); a power transform is introduced and, by controlling the parameters of the transform function, the high tonal range is expanded and the low tonal range is compressed, accentuating the foreground/background edge information of the image;
Meanwhile, according to the different lighting effects the point light source produces on the foreground and background objects, the illumination function corresponding to each pixel is computed;
S2, filtering and denoising
A high-pass filter is used to filter and denoise the image after the matching transform of step S1, removing the influence of image noise and avoiding over-segmentation in the subsequent watershed segmentation;
S3, image segmentation
After the denoising of step S2 the edge information of the image is distinct; in particular, for images whose foreground and background colors are close, the foreground object and the fine burrs of its boundary are highlighted. On this basis the image is segmented with the watershed algorithm: the image is regarded as a topographic relief as in geodesy, the gray value of each pixel represents the altitude of that point, each local minimum and its zone of influence is regarded as a catchment basin, and the boundaries of the catchment basins form the watershed lines;
The watershed algorithm can be viewed as simulating an immersion process. The computation is divided into two steps, sorting and a flooding iteration with marking: a small hole is pierced at each local minimum of the surface, then the whole relief is slowly immersed in water; as the immersion deepens, the zone of influence of each local minimum slowly expands outward, and dams are built where two catchment basins meet, forming the watershed lines, i.e. the edges of the image. After the iterative marking of the whole image is completed, complete and continuous edge segmentation lines of the image are obtained, and the image smoothed by the preceding filtering does not suffer over-segmentation;
S4, computing the matting parameters (F, B and α)
After the image has been segmented into three regions, background, foreground and unknown, a Bayesian framework is defined to formulate the matting parameters, and a maximum a posteriori problem is solved to compute the F, B and α closest to C, where C is any color in the known image and F, B and α are the foreground color, background color and opacity respectively. The distributions of F and B are modelled by Gaussian distributions through a sliding window; the distribution of α takes the already computed unknown region and the specified regions as sample data, and its distribution curve is fitted with a multilayer perceptron;
S5, α-value reconstruction and foreground extraction
According to the matting parameters F, B and α obtained in step S4, the α-value map of the image is reconstructed; a Markov random field is introduced, the illumination function obtained in step S1 is fused in, and the extraction of the foreground layer is completed by minimizing the constructed energy function.
Preferably, step S2 uses a Butterworth high-pass filter, which has a smooth transition band between passband and stopband, sharpening image edges and highlighting boundary information; the processed image exhibits no ringing.
In step S1 the gray-scale matching transform may use the power function g = c(f + b)^γ, where f is the gray level of the original image, g is the gray level after the transform, and c, b and γ are control parameters.
For the gray-scale matching transform method of the above Bayesian-decision foreground extraction method combined with reflected illumination, the steps are as follows: input the original image; the user specifies a point light source located on the foreground object; in the gray-scale matching transform the illumination effect of the point light source on the foreground and background objects is computed with the BRDF; the power function g = c(f + b)^γ is introduced, where f is the original gray level, g the transformed gray level and c, b, γ control parameters; by controlling the parameters of the transform function, the high tonal range is expanded and the low tonal range compressed, accentuating the foreground/background edge information of the image.
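As an illustration, the power transform described above can be sketched as follows (the function name and default parameter values are ours, not the patent's):

```python
import numpy as np

def gray_power_transform(f, c=1.0, b=0.0, gamma=2.0):
    """Power transform g = c * (f + b) ** gamma on a grayscale image
    normalized to [0, 1].  With gamma > 1 the low tonal range is
    compressed and the high tonal range expanded, accentuating the
    brightly lit foreground against the background."""
    f = np.clip(np.asarray(f, dtype=np.float64), 0.0, 1.0)
    return np.clip(c * (f + b) ** gamma, 0.0, 1.0)
```

For example, with gamma = 2 a mid-gray of 0.5 maps to 0.25, while bright values near 1 are barely changed.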
On the basis of prior achievements, the inventive method exploits the fact that foreground and background objects in a natural image change differently under point-source illumination: a gray-scale matching transform is applied to the image to simulate point-source illumination, strengthening edge information, while an illumination-related energy function is obtained by comparing the image before and after the transform; after filtering and denoising, the watershed method is used to segment the image, obtaining accurate and complete edge information while avoiding over-segmentation; in the Bayesian matting process a multilayer perceptron fits the α-value function curve; finally the previously obtained illumination function and the color distribution function are integrated to complete the extraction of the foreground object.
With this method the user only needs to specify the position of the point light source, without providing foreground/background edge details, to complete the foreground extraction, which reduces the demand on user interaction; at the same time the algorithms used all have low time complexity, overcoming the heavy computation and slow processing of general matting algorithms.
Before segmentation, this method applies a power-function gray-scale matching transform that increases foreground brightness and decreases background brightness; the contrast information between the images before and after the transform generates an illumination-related energy function that is fed into the subsequent segmentation. At the same time the edge information becomes more distinct, so a satisfactory segmentation can be achieved even for images whose foreground and background colors are close.
The watershed algorithm is adopted for image segmentation: it determines the number of segmented regions by itself, produces continuous segmentation edges, is fast, yields closed contour lines and locates image edges accurately. The initial image is first filtered and denoised, which sharpens the edges and highlights boundary information while avoiding over-segmentation caused by noise.
A multilayer perceptron is trained to fit the α-value function curve, solving the α estimation problem and completing the α-value reconstruction in the matting process; at the same time a Markov random field is introduced and combined with the illumination function obtained in the earlier step, increasing the accuracy of the foreground extraction; minimizing the constructed energy function completes the extraction of the foreground layer.
Accompanying drawing explanation
Fig. 1 is the main flow chart of the present invention.
Embodiment
A preferred embodiment of the present invention is described in further detail below with reference to the accompanying drawing.
S1. Gray-scale matching transform
A) Input the image; the user specifies one or several point light sources of intensity L in the foreground layer. The illuminance produced at a point P on the object surface is E = Lρcosθ/r², where ρ is the surface BRDF (bidirectional reflectance distribution function) under the given illumination and viewing angle, r is the distance from the illumination point, and θ is the angle between the illumination point and point P. The foreground layer and the background layer lie at very different distances from the illumination point, so the foreground object changes considerably under the lighting while the background object changes little.
B) From the user input, compute the brightness changes and the histogram after the matching transform; for the specified histogram, assign the probability density of each corresponding gray level.
C) Compute the gray-level probability density statistics of the histogram array of the source image, and apply a gray-level equalization transform to its probability density array.
D) Apply the gray-level equalization transform to the probability density array of the specified histogram.
E) Determine the mapping correspondence and introduce the power transform g = c(f + b)^γ: f is the original gray level, g the transformed gray level, and c, b, γ control parameters. Looping one by one over the equalized histogram probability density arrays, build the gray-level mapping table for the original image; by adjusting the control parameters, expand the high tonal range and compress the low tonal range, accentuating the foreground/background edge information of the image. The illumination function is also generated in this step; the detailed process is described in step S5 below.
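Steps B) to E) amount to histogram specification: both histograms are equalized, then each source gray level is mapped through the equalized levels. A minimal sketch under our own naming (the patent's exact mapping-table construction may differ):

```python
import numpy as np

def match_histogram(src, ref_hist):
    """Histogram specification: equalize the source histogram and the
    specified histogram, then map every source gray level to the
    specified level whose equalized (CDF) value is closest."""
    src = np.asarray(src, dtype=np.uint8)
    h_src = np.bincount(src.ravel(), minlength=256) / src.size
    cdf_src = np.cumsum(h_src)                         # equalized source levels
    cdf_ref = np.cumsum(ref_hist / np.sum(ref_hist))   # equalized target levels
    # for each source level, find the reference level with matching CDF value
    mapping = np.searchsorted(cdf_ref, cdf_src).clip(0, 255).astype(np.uint8)
    return mapping[src]

# example: a flat 4-level source forced toward a histogram concentrated at 200
src = np.array([[0, 64], [128, 255]], dtype=np.uint8)
spec = np.zeros(256)
spec[200] = 1.0
matched = match_histogram(src, spec)
```

Because the specified histogram puts all its mass in one bin, every pixel of the toy source maps to gray level 200.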
S2. Filtering and denoising
A high-pass filter (a Butterworth high-pass filter) is used for filtering and denoising, which avoids over-segmentation in the subsequent watershed segmentation while sharpening the image edges and highlighting boundary information. The transfer function of the n-th order Butterworth high-pass filter with cut-off frequency D₀ is:
H(u, v) = 1 / (1 + [D₀ / D(u, v)]^(2n))
D(u, v) is the distance from the origin of the frequency plane to the point (u, v), i.e. D(u, v) = (u² + v²)^(1/2). The cut-off frequency point is usually taken where H(u, v) drops to half its maximum value, i.e. at D(u, v) = D₀. There is no sharp jump between passband and stopband; a smooth transition band lies between them, so the processed image exhibits no ringing.
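The Butterworth high-pass filtering of step S2 can be sketched in the frequency domain as follows (a minimal sketch; function names and default parameters are ours):

```python
import numpy as np

def butterworth_highpass(shape, d0, n):
    """n-th order Butterworth high-pass transfer function
    H(u, v) = 1 / (1 + (D0 / D(u, v))**(2n)), with D(u, v) the distance
    from the centre of the (shifted) frequency plane."""
    rows, cols = shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    return 1.0 / (1.0 + (d0 / np.maximum(D, 1e-12)) ** (2 * n))

def highpass_filter(img, d0=30, n=2):
    """Apply the filter in the frequency domain: FFT, shift the zero
    frequency to the centre, multiply by H, shift back, inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(img))
    H = butterworth_highpass(img.shape, d0, n)
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real
```

At D = D₀ the transfer function equals exactly 1/2, matching the half-power cut-off convention, and a constant image (pure DC) is suppressed almost entirely.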
S3. Image segmentation
A) Read the image and convert it to a gray-scale image; use the Sobel operator to find the image boundaries, filter for the edges in the x and y directions, and compute the gradient magnitude.
B) Using the foreground/background designation the user made in step S1, first apply a morphological opening to remove very small targets; then apply, in turn, erosion, morphological reconstruction, morphological closing, dilation and morphological reconstruction; invert the image and find its local maxima, setting the pixel values at the local maxima to 255; apply closing, erosion and opening; set the foreground locations to 255 and convert to a binary image, completing the marking of the foreground object and optimizing the segmentation.
C) Compute the segmentation function; the watershed segmentation algorithm is implemented as follows:
I. Count the connected regions, mark the initial regions and find the initial watershed lines;
The original image serves as the initial relief. A threshold, denoted Thre, is chosen, and water is added to all terrain below this height. The current state is represented with an array Seed: points with water are set, dry points are cleared;
According to the earlier marking, the initial connected regions are counted by the eight-connected neighborhood method; each pixel of Seed is scanned and tested in turn, and all marked initial catchment basins are assigned to their respective regions one by one, forming the outer loop. A temporary queue is created to process the connectivity of the current initial catchment basin: the growable points belonging to one specific initial catchment basin region, found by scanning one by one, are stored there temporarily, forming an inner loop;
When processing the current scan point, first judge whether it belongs to some initial catchment basin; if not, skip it; if so, check whether its eight-connected neighborhood contains non-growable points (a non-growable point is a point not marked in Seed): growable points in the scanned eight-connected neighborhood are added to the temporary queue, and if a non-growable point is present, the current point is added to the seed queue;
The two loops run until a complete record of every connected initial catchment basin is obtained, yielding the initial watershed lines; at the same time the region number and the gray levels of the points in each region are recorded, i.e. the set of points of each specific gray level within each specific region. From the region number of a watershed point, the information of the points of all gray levels in that region can be obtained;
II. Flooding process
This is realized with nested loops. The outer loop raises the water level (at most 256 iterations), starting from the initial threshold Thre. The inner loop scans the watershed points of each initial region and expands them according to the given water level: their four-connected neighborhoods are checked one by one; if a four-connected neighborhood contains an unmarked point (necessarily a point of higher gray level), judge whether it can grow at the current water level; if it can, it joins the seed queue and the inner loop runs again; if not, it joins the watershed set queue of that neighborhood point. This repeats until all watershed points of the corresponding regions have been scanned at the current water level, so that the regions expand simultaneously at each level and no jumping occurs (the water level rises globally over all regions);
When all regions have completed their expansion at every water level, the segmentation map is obtained.
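The sorting and flooding described above can be condensed into a priority-queue immersion, a common way to implement marker-driven watershed (our simplified sketch using 4-connectivity throughout; the patent mixes 8- and 4-connectivity as described):

```python
import heapq
import numpy as np

def marker_watershed(gray, markers):
    """Marker-driven watershed by immersion: pixels are flooded in
    order of increasing gray level starting from the labelled seed
    regions; a pixel first reached from two different basins becomes a
    watershed-line pixel (label -1).  `markers` holds a positive integer
    label at each seed pixel and 0 elsewhere."""
    labels = markers.astype(int).copy()
    rows, cols = gray.shape
    heap = []

    def push_neighbours(r, c, lab):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and labels[rr, cc] == 0:
                heapq.heappush(heap, (int(gray[rr, cc]), rr, cc, lab))

    for r in range(rows):
        for c in range(cols):
            if labels[r, c] > 0:
                push_neighbours(r, c, labels[r, c])

    while heap:
        _, r, c, lab = heapq.heappop(heap)
        if labels[r, c] != 0:
            continue                      # already flooded or already a dam
        # labels of the already-flooded 4-neighbours of this pixel
        near = {labels[r + dr, c + dc]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= r + dr < rows and 0 <= c + dc < cols}
        near -= {0, -1}
        if len(near) > 1:
            labels[r, c] = -1             # two basins meet: build a dam
        else:
            labels[r, c] = lab
            push_neighbours(r, c, lab)
    return labels
```

On a toy one-row relief with a ridge in the middle, the two seeded basins flood toward each other and a single dam pixel appears on the ridge.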
S4. Computing the matting parameters
A) Let F, B and α be the foreground color, background color and opacity to be solved, and let C be any color in the known image. The aim of the parameter computation is, given C, to find the F, B and α that maximize the probability P. The mathematical description is formula (1), where L denotes the log-likelihood, turning products into sums to simplify the computation:
arg max over F, B, α of L(C|F, B, α) + L(F) + L(B) + L(α)   (1)
B) L(C|F, B, α) corresponds to a Gaussian distribution with standard deviation σ_C centred at αF + (1−α)B, as in formula (2):
L(C|F, B, α) = −‖C − αF − (1−α)B‖² / σ_C²   (2)
C) L(F) also corresponds to a Gaussian distribution; from spatial coherence, i.e. the colors of neighbouring pixels, the weighted mean F̄ and covariance matrix Σ_F are computed, as in formula (3):
L(F) = −(F − F̄)ᵀ Σ_F⁻¹ (F − F̄) / 2   (3)
L(B) is analogous to L(F), except that in the weights w_i the factor α_i is replaced by 1 − α_i;
D) Sampling process: sample within a circle centred on the unknown point, continually enlarging the search radius until enough known background and foreground points have been gathered; the points already solved are also added to the sample set N, and the samples are clustered by color. Each sample point carries a weight, where α_i is the opacity and g_i a Gaussian attenuation function with distance as its parameter, as in formula (4):
w_i = α_i² g_i   (4);
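The sample weight of formula (4) can be written directly (the Gaussian width σ = 8 pixels follows Chuang et al.'s published matting work; the patent does not state a value):

```python
import numpy as np

def sample_weight(alpha_i, dist_i, sigma=8.0):
    """Weight of one sample point: w_i = alpha_i**2 * g_i, where g_i is
    a Gaussian attenuation in spatial distance.  sigma = 8 pixels is an
    assumption borrowed from Chuang et al., not stated in the patent."""
    g_i = np.exp(-dist_i ** 2 / (2.0 * sigma ** 2))
    return alpha_i ** 2 * g_i
```

A fully opaque sample at zero distance gets weight 1, and the weight decays smoothly with both lower opacity and larger distance.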
E) A multilayer perceptron is trained to obtain the distribution of the α-value function, solving the problem of L(α):
I. Determine the structure of the multilayer perceptron (a three-layer structure is chosen: input layer, hidden layer and output layer); initialize the weights with small random numbers (generally within ±0.3); set the training time t = 0;
II. Randomly select a training sample x from the sample set and denote its desired output by d;
III. Compute the actual output of the current perceptron for input x;
IV. Starting from the output layer, adjust the weights.
For layer l, correct the weights by w_ij(t+1) = w_ij(t) + Δw_ij, where the weight correction is Δw_ij = η·δ_j·o_i and η is the learning step given in advance (generally taken in 0.1 ~ 3);
For the output layer (l = L−1), δ_j is the derivative, with respect to the weights, of the error between the current output and the desired output;
For the hidden layer, δ_j is the derivative, with respect to the weights, of the output error back-propagated to that layer;
V. After all weights have been updated, recompute the outputs of all training samples and compute the error between the updated outputs and the desired outputs. Check the stopping conditions: the mean squared error between the actual and desired outputs in the most recent training round is below some threshold (generally < 0.1), or the changes of all weights in the most recent round are all below some threshold (generally < 0.1), or the total number of training rounds has reached a preset upper limit (generally 3). If a stopping condition is met, stop training and output the resulting function; otherwise set t = t + 1 and return to II.
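Steps I-V above can be condensed into a small training loop. The sketch below uses one sigmoid hidden layer with a linear output unit and batch updates; the network size, output unit, learning step and toy data are our simplifications for illustration, not the patent's exact design:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_alpha_curve(x, y, hidden=8, eta=0.5, epochs=5000):
    """Tiny three-layer perceptron (input, sigmoid hidden layer, linear
    output) trained by batch backpropagation: forward pass, output-layer
    error, error back-propagated to the hidden layer, weight updates."""
    W1 = rng.uniform(-0.3, 0.3, (hidden, 1)); b1 = np.zeros((hidden, 1))
    W2 = rng.uniform(-0.3, 0.3, (1, hidden)); b2 = np.zeros((1, 1))
    X, Y = x.reshape(1, -1), y.reshape(1, -1)
    n = X.shape[1]
    for _ in range(epochs):
        H = sigmoid(W1 @ X + b1)              # hidden activations
        out = W2 @ H + b2                     # linear output
        d_out = (out - Y) / n                 # output-layer error term
        d_hid = (W2.T @ d_out) * H * (1 - H)  # back-propagated hidden error
        W2 -= eta * d_out @ H.T;  b2 -= eta * d_out.sum(1, keepdims=True)
        W1 -= eta * d_hid @ X.T;  b1 -= eta * d_hid.sum(1, keepdims=True)
    return lambda t: (W2 @ sigmoid(W1 @ np.asarray(t, float).reshape(1, -1) + b1) + b2).ravel()

# fit a smooth step, a typical alpha profile across an object edge
x = np.linspace(-1.0, 1.0, 50)
y = (x > 0).astype(float)
f = fit_alpha_curve(x, y)
```

After training, the fitted curve follows the step: low on the background side, high on the foreground side.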
F) The solution proceeds in two steps: first F and B, then α. First assume α is fixed; take partial derivatives of the right side of formula (1) with respect to F and B and set them to zero, obtaining a 6 × 6 system of linear equations, formula (5), which converts the problem into solving linear equations for the F and B that maximize formula (1):
[ Σ_F⁻¹ + I·α²/σ_C²      I·α(1−α)/σ_C²       ] [F]   [ Σ_F⁻¹·F̄ + C·α/σ_C²     ]
[ I·α(1−α)/σ_C²          Σ_B⁻¹ + I·(1−α)²/σ_C² ] [B] = [ Σ_B⁻¹·B̄ + C·(1−α)/σ_C² ]   (5)
Then assume F and B are fixed, substitute the α function obtained above into formula (1), and solve for the α that maximizes formula (1).
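The two-step solve in F) — the 6 × 6 linear system for F and B with α fixed, then an α update with F and B fixed — can be sketched per pixel as follows (we use the closed-form projection α = (C−B)·(F−B)/‖F−B‖² from Chuang et al. in place of the perceptron-fitted α, purely for illustration):

```python
import numpy as np

def solve_pixel(C, Fbar, covF, Bbar, covB, sigma_C=0.01, iters=20):
    """Alternating MAP solve for one unknown pixel: with alpha fixed,
    F and B come from the 6x6 linear system of formula (5); with F and
    B fixed, alpha is the projection of C onto the segment FB."""
    I = np.eye(3)
    invF, invB = np.linalg.inv(covF), np.linalg.inv(covB)
    alpha, s2 = 0.5, sigma_C ** 2
    for _ in range(iters):
        A = np.block([[invF + I * alpha ** 2 / s2, I * alpha * (1 - alpha) / s2],
                      [I * alpha * (1 - alpha) / s2, invB + I * (1 - alpha) ** 2 / s2]])
        b = np.concatenate([invF @ Fbar + C * alpha / s2,
                            invB @ Bbar + C * (1 - alpha) / s2])
        x = np.linalg.solve(A, b)
        F, B = x[:3], x[3:]
        denom = float(np.dot(F - B, F - B))
        alpha = float(np.clip(np.dot(C - B, F - B) / max(denom, 1e-12), 0.0, 1.0))
    return F, B, alpha
```

For a color exactly halfway between two tight foreground and background clusters, the solve recovers the cluster means and α = 0.5.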
S5. α-value reconstruction and foreground extraction
A) A Markov random field is introduced, and the extraction of the foreground layer is completed by minimizing a constructed energy function composed of a data term and a smoothness term E_s:
E_s expresses the smoothness of two adjacent pixels p and q; generally ε takes a value in 20 ~ 40 and γ_f takes the value 10.
B) In step S1, let H_f = {h_f^k} and H_nf = {h_nf^k} be the RGB color histograms of the illumination-processed and non-illumination-processed images respectively, where h_f^k and h_nf^k are the numbers of pixels of the k-th color. If h_nf^k > h_f^k, some pixels counted in H_nf have been lit and shifted to other colors in H_f; these pixels are more likely to be foreground pixels. If h_nf^k < h_f^k, some pixels have been lit and reassigned to the k-th group in H_f. The illumination value of each pixel p is therefore defined accordingly:
The larger the illumination value, the more likely the pixel is foreground. An illumination function r_p is accordingly defined: if r_p > ζ, pixel p is labelled foreground, with ζ taking the value 0.2.
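One plausible reading of the histogram comparison in step S5 B) is sketched below; the patent does not reproduce its exact score r_p, so the normalized bin deficit used here, and the bin count, are our assumptions (only the threshold ζ = 0.2 comes from the text):

```python
import numpy as np

def illumination_scores(img_nf, img_f, bins=8, zeta=0.2):
    """Bin every pixel by colour in the non-illuminated image (img_nf)
    and in the point-lit image (img_f); bins that lose pixels under
    lighting (h_nf^k > h_f^k) held pixels the light shifted to other
    colours, i.e. likely foreground.  Each pixel is scored by the
    normalised deficit of its unlit bin and thresholded at zeta."""
    def bin_index(img):
        q = np.clip((np.asarray(img) * bins).astype(int), 0, bins - 1)
        return q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    k_nf = bin_index(img_nf)
    h_nf = np.bincount(k_nf.ravel(), minlength=bins ** 3)
    h_f = np.bincount(bin_index(img_f).ravel(), minlength=bins ** 3)
    deficit = np.maximum(h_nf - h_f, 0) / np.maximum(h_nf, 1)
    score = deficit[k_nf]          # per-pixel score through its unlit bin
    return score, score > zeta     # mask of likely-foreground pixels
```

In a toy pair where only the dark pixels brighten under the point light, exactly those pixels score high and are marked as likely foreground.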
C) The color distribution is a Gaussian mixture model, where N(·) denotes a Gaussian distribution. Find the point f₁ on the foreground contour line closest to point p, and denote this shortest distance by l_f; with f₁ as centre, draw a circle F of radius r₁·l_f (r₁ is a distance parameter, 1.0 < r₁ < 10.0). The weight of the nearest known point is set to 1, and the weight decreases as the distance increases. The weighting function of a foreground point f_i in image space is expressed in terms of ζ_fi, the distance between point p and the foreground point f_i in image space: as ζ_fi increases, the weight of the point drops rapidly, so the weighting function is influenced more by nearby points.

Claims (2)

1. A Bayesian-decision foreground extraction method combined with reflected illumination, characterized by comprising the following steps:
S1, a gray-scale matching transform: input the original image; a point light source located on the foreground object is specified by the user, and the illumination effect of the point light source on the foreground and background objects is computed with the BRDF; a power transform is introduced and, by controlling the parameters of the transform function, the foreground/background edge information of the image is accentuated; wherein the illuminance E produced by the point light source at a point P on the surface is expressed by E = Lρcosθ/r², where L is the intensity of the point light source, r the distance between the point light source and point P, θ the angle between the point light source and point P, and ρ the surface BRDF under the given illumination and viewing angle; the gray-scale matching transform uses the power function g = c(f + b)^γ, where f is the original gray level, g the transformed gray level, and c, b, γ control parameters;
Meanwhile, according to the different lighting effects the point light source produces on the foreground and background objects, the illumination function corresponding to each pixel is computed;
S2, filtering and denoising the image after the matching transform of step S1 with a high-pass filter;
S3, segmenting the denoised image of step S2 with the classical watershed algorithm, which comprises two steps, sorting and iterative flooding with marking; after the iterative marking of the whole denoised image is completed, complete and continuous edge segmentation lines of the denoised image are obtained; and
S4, computing the matting parameters: after the image has been segmented by the iterative marking of step S3 into three regions, background, foreground and unknown, a Bayesian framework is defined to formulate the matting parameters, and a maximum a posteriori problem is solved to compute the F, B and α closest to C; wherein C is any color in the known original image, and F, B and α are the foreground color, background color and opacity respectively; the distributions of F and B are modelled by Gaussian distributions through a sliding window, and the distribution of α takes the already computed unknown region and the specified regions as sample data, its distribution curve being fitted with a multilayer perceptron;
S5, according to the matting parameters F, B and α obtained in step S4, reconstructing the α-value map of the original image, introducing a Markov random field, fusing in the illumination function obtained in step S1, and completing the extraction of the foreground layer by minimizing the constructed energy function.
2. The method of claim 1, characterized in that: step S2 uses a Butterworth high-pass filter having a smooth transition band between passband and stopband.
CN201310059707.5A 2013-02-26 2013-02-26 A Bayesian-decision foreground extraction method combined with reflected illumination Active CN103164855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310059707.5A CN103164855B (en) 2013-02-26 2013-02-26 A Bayesian-decision foreground extraction method combined with reflected illumination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310059707.5A CN103164855B (en) 2013-02-26 2013-02-26 A Bayesian-decision foreground extraction method combined with reflected illumination

Publications (2)

Publication Number Publication Date
CN103164855A CN103164855A (en) 2013-06-19
CN103164855B true CN103164855B (en) 2016-04-27

Family

ID=48587911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310059707.5A Active CN103164855B (en) 2013-02-26 2013-02-26 A Bayesian-decision foreground extraction method combined with reflected illumination

Country Status (1)

Country Link
CN (1) CN103164855B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346806A (en) * 2013-08-09 2015-02-11 联想(北京)有限公司 Image processing method and device
CN103914843B (en) * 2014-04-04 2018-04-03 上海交通大学 The image partition method marked based on watershed algorithm and morphology
CN108156370A (en) * 2017-12-07 2018-06-12 Tcl移动通信科技(宁波)有限公司 By the use of local picture as the photographic method of background, storage medium and mobile terminal
CN110349189A (en) * 2019-05-31 2019-10-18 广州铁路职业技术学院(广州铁路机械学校) A kind of background image update method based on continuous inter-frame difference
CN110298861A (en) * 2019-07-04 2019-10-01 大连理工大学 A kind of quick three-dimensional image partition method based on shared sampling
CN110399851B (en) * 2019-07-30 2022-02-15 广东工业大学 Image processing device, method, equipment and readable storage medium
CN110728061B (en) * 2019-10-16 2020-12-11 沈纪云 Ceramic surface pore detection method based on Lambert body reflection modeling
CN111696188B (en) * 2020-04-26 2023-10-03 杭州群核信息技术有限公司 Rendering graph rapid illumination editing method and device and rendering method
CN112118394B (en) * 2020-08-27 2022-02-11 厦门亿联网络技术股份有限公司 Dim light video optimization method and device based on image fusion technology
CN112132848B (en) * 2020-09-01 2023-06-06 成都运达科技股份有限公司 Preprocessing method based on image layer segmentation and extraction
CN112348826B (en) * 2020-10-26 2023-04-07 陕西科技大学 Interactive liver segmentation method based on geodesic distance and V-net
CN114782667B (en) * 2022-04-24 2024-05-28 重庆邮电大学 Method for extracting apparent characteristics of fritillary bulb
CN117078838B (en) * 2023-07-07 2024-04-19 上海散爆信息技术有限公司 Object rendering method and device, storage medium and electronic equipment

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1941850A (en) * 2005-09-29 2007-04-04 中国科学院自动化研究所 Pedestrian tracting method based on principal axis marriage under multiple vedio cameras

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2395102B1 (en) * 2010-10-01 2013-10-18 Telefónica, S.A. METHOD AND SYSTEM FOR CLOSE-UP SEGMENTATION OF REAL-TIME IMAGES

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1941850A (en) * 2005-09-29 2007-04-04 中国科学院自动化研究所 Pedestrian tracting method based on principal axis marriage under multiple vedio cameras

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
New Models and Methods for Matting and Compositing; Yung-Yu Chuang; academic dissertation; 20040731; pp. 5-73 *
Research on image foreground extraction technology; Xie Rong et al.; Information Science; 20100123; p. 42 *
Moving object detection and localization based on power transform and region shrinking algorithms; Gu Jingzi et al.; Applied Science and Technology; 20110215; Vol. 38, No. 2, pp. 56-60, 66 *

Also Published As

Publication number Publication date
CN103164855A (en) 2013-06-19


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Wang Haoqian

Inventor after: Fang Lu

Inventor after: Deng Bowen

Inventor after: Wang Shengjin

Inventor after: Shao Hang

Inventor after: Dai Qionghai

Inventor after: Guo Yuchen

Inventor before: Wang Haoqian

Inventor before: Deng Bowen

Inventor before: Shao Hang

Inventor before: Dai Qionghai

CB03 Change of inventor or designer information
CP01 Change in the name or title of a patent holder

Address after: Tsinghua University Shenzhen Graduate School, University Town, Xili, Nanshan District, Shenzhen 518055, Guangdong Province

Patentee after: Tsinghua Shenzhen International Graduate School

Address before: Tsinghua University Shenzhen Graduate School, University Town, Xili, Nanshan District, Shenzhen 518055, Guangdong Province

Patentee before: Graduate School at Shenzhen, Tsinghua University

CP01 Change in the name or title of a patent holder