CN102324093A - Image synthesis method based on grouped object mixing - Google Patents

Image synthesis method based on grouped object mixing

Info

Publication number
CN102324093A
CN102324093A CN201110262737A
Authority
CN
China
Prior art keywords
groups
image
target image
zone
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201110262737A
Other languages
Chinese (zh)
Other versions
CN102324093B (en)
Inventor
胡事民 (Hu Shimin)
张方略 (Zhang Fanglue)
程明明 (Cheng Mingming)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN 201110262737 priority Critical patent/CN102324093B/en
Publication of CN102324093A publication Critical patent/CN102324093A/en
Application granted granted Critical
Publication of CN102324093B publication Critical patent/CN102324093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention relates to the technical field of image processing and discloses an image synthesis method based on grouped object mixing, comprising the following steps of: S1, carrying out divisibility analysis on grouped objects in a target image to obtain a divisible graph of the grouped objects, and extracting the positions of the grouped objects and a core region occupied by the grouped objects in the target image; S2, obtaining the boundary of the region occupied by each object in the grouped objects in the target image by using a method of expanding and then cutting; and S3, replacing the regions of the grouped objects appointed to be replaced in the target image with the proper regions of the grouped objects in an external image through finding the proper regions of the grouped objects from the external image. The image synthesis method based on grouped object mixing is accurate in extracting edges and high in speed and can be used for realizing an image synthesis effect for grouped object mixing with high sense of reality.

Description

Image synthesis method based on grouped object mixing
Technical field
The present invention relates to the technical field of image processing, and in particular to an image synthesis method based on grouped object mixing.
Background art
In the field of computer graphics, image analysis and intelligent editing techniques have important applications. With such techniques, a user can be freed from tedious, repetitive labor and obtain complex, highly realistic effects with only a few operations. In recent years, much research has focused on semi-automatic image analysis and intelligent editing methods that incorporate user interaction, producing a rich variety of editing effects. For example, in 2009, C. Barnes et al. proposed in "PatchMatch: a randomized correspondence algorithm for structural image editing" a fast method for finding similar image patches which, combined with user interaction, performs content editing and reconstruction of an image; in 2010, Cheng Mingming et al. proposed "RepFinder", a method for finding and editing objects of approximately similar shape in an image. These methods have advanced object-level image editing techniques and provide convenient editing tools for users of image processing software.
In the field of image synthesis, a large body of research has focused on how to make the composited result more consistent, given the image regions to be merged. Early work by Porter and Duff proposed alpha matting, which makes the edges of the merged region blend more naturally; the later Poisson blending method of P. Perez et al. adjusts the pixel values of the entire merged region by taking gradient-field information into account, improving overall consistency. On the side of selecting the content to merge, the "Photo Clip Art" technique of J.-F. Lalonde et al. and the "Sketch2photo" technique of Chen Tao et al. select content that is easy to merge from web images and large image libraries, obtaining very satisfactory image synthesis results.
In addition, some earlier related work synthesizes new images by combining visual features from image content of different sources. In pixel-level texture mixing methods such as "Texture synthesis from multiple sources" by L.Y. Wei, the statistics of similar pixels are used to synthesize new texture images. The "Graphcut Textures" method of V. Kwatra et al. operates at the level of texture blocks and accomplishes applications such as texture synthesis and seamless merging of image content, achieving effects that earlier methods could not. More recently, in their 2010 work "Image Hybrids", Risser et al. proposed a method that synthesizes, from several similar samples, a large number of new images with mixed visual features, while the result images still preserve the original geometric and structural features.
For detecting repeated objects in images, the main techniques are the "2.1D Texel" method proposed by Ahuja et al. in 2007 and the "RepFinder" technique of Cheng Mingming et al. Both use automatic or semi-automatic image segmentation to detect the repeated objects in an image. However, the former cannot provide sufficiently accurate object boundaries, and the latter requires the repeated elements to be more similar in shape.
The latest results in the above fields provide a solid technical foundation for developing an image synthesis method based on grouped object mixing. However, these techniques are still insufficient to achieve a realistic synthesis effect for mixing grouped objects.
Summary of the invention
(1) Technical problem to be solved
The technical problem to be solved by the present invention is how to achieve a highly realistic image synthesis effect for mixing grouped objects.
(2) Technical solution
To solve the above technical problem, the present invention provides an image synthesis method based on grouped object mixing, comprising the following steps:
S1: performing separability analysis on the grouped objects in a target image, obtaining a separability map of the grouped objects, and extracting the positions of the grouped objects and the core regions they occupy in the target image;
S2: obtaining the boundary of the region occupied by each object of the grouped objects in the target image by a method of first expanding and then cutting;
S3: designating regions of grouped objects in the target image to be replaced, finding suitable regions of grouped objects in an external image, and replacing the designated regions with the suitable regions of grouped objects found in the external image.
Preferably, in step S1, the separability map is obtained by computing, for each pixel in the region occupied by the grouped objects in the target image, a multi-scale self-similarity value and a robust curve-feature saliency value, and the positions of the grouped objects and the core regions they occupy in the target image are extracted by morphological image operations.
Preferably, step S2 specifically comprises: on the separability map, starting from a core region, first expanding, then removing from the expanded region the pixels whose robust curve-feature saliency value exceeds a dynamic threshold, and iterating this process, thereby extracting the boundary of the region occupied by each object of the grouped objects in the target image.
Preferably, in step S3, the BBM method is used to find the suitable regions of grouped objects in the external image.
Preferably, in step S3, when the designated regions are replaced with the suitable regions of grouped objects from the external image, a boundary portion having the visual characteristics of the grouped-object boundaries in the external image is synthesized at the boundary.
Preferably, the method comprises, before step S1, a step S0: inputting the target image containing the grouped objects and the external image, and setting in the target image a point that represents the key visual features of the grouped objects, to be used as the reference point when computing the multi-scale self-similarity value of each pixel in the region occupied by the grouped objects in the target image.
(3) beneficial effect
The beneficial effects of the present invention are as follows. Through very simple user interaction, the method can achieve complex image synthesis effects that mix grouped objects from different sources. It also proposes a fast and effective method for detecting and extracting grouped repeated objects which, compared with similar methods, extracts edges accurately and runs fast. In addition, the highly realistic synthesis effect for mixing grouped objects achieved by this method cannot be accomplished by the prior art. The present invention complements recent image editing and synthesis techniques based on multi-source images.
Description of drawings
Fig. 1 is a flowchart of the image synthesis method based on grouped object mixing according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of intermediate and final results of the image synthesis method based on grouped object mixing;
Fig. 3 is a schematic diagram of the "expand-cut" algorithm flow.
Embodiment
The image synthesis method based on grouped object mixing proposed by the present invention is described in detail below with reference to the accompanying drawings and an embodiment.
Referring to Fig. 1, which shows the flowchart of the image synthesis method based on grouped object mixing, the steps shown in the figure are:
S0: input the target image containing the grouped objects and the external image and, given the region containing the grouped objects, designate in the target image one point that represents the key visual features of the grouped objects, e.g. the center of one potato in a group of potatoes. This point serves as the reference point when computing the multi-scale self-similarity of each pixel in the region occupied by the grouped objects.
S1: obtain the separability map of the image by combining, for each pixel in the region occupied by the grouped objects in the target image, a multi-scale self-similarity computation and a robust curve-feature saliency computation. The multi-scale self-similarity is computed from the similarity of pixel values between a pixel and its neighborhood. To better capture the visual features at each position, the present invention uses a multi-scale neighborhood descriptor: in the image pyramid obtained by subsampling the image, a 5×5 neighborhood block of the target pixel is taken at each of levels 1 to 3, forming a description vector of 75 values, which is then reduced to 6 dimensions by PCA. The Euclidean distance between the 6-dimensional description vector of each pixel p and that of the reference point is then computed; the resulting value is the multi-scale self-similarity value, denoted S_a(p, p_0), where p_0 is the designated reference point.
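As a sketch of this descriptor computation, the following Python code builds a 75-value multi-scale descriptor for every pixel, reduces it to 6 dimensions with PCA, and takes the Euclidean distance to the reference point's descriptor as S_a(p, p_0). The function names, the simple 2× subsampling pyramid, and the border clamping are illustrative assumptions; the patent does not specify them.

```python
import numpy as np

def multiscale_self_similarity(image, ref_point, levels=3, patch=5, out_dim=6):
    """Sketch of the multi-scale self-similarity value S_a(p, p0).

    `image` is a 2-D grayscale array large enough that the coarsest
    pyramid level still fits a `patch` x `patch` block; `ref_point` is
    the user-chosen reference pixel (row, col).
    """
    h, w = image.shape
    r = patch // 2
    # Build a 3-level pyramid by simple 2x subsampling (an assumption).
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1][::2, ::2])

    def descriptor(pt):
        vec = []
        for lvl, img in enumerate(pyramid):
            y, x = pt[0] >> lvl, pt[1] >> lvl
            ih, iw = img.shape
            y = min(max(y, r), ih - r - 1)   # clamp near borders
            x = min(max(x, r), iw - r - 1)
            vec.append(img[y - r:y + r + 1, x - r:x + r + 1].ravel())
        return np.concatenate(vec)           # 3 levels x 25 = 75 values

    pts = [(y, x) for y in range(h) for x in range(w)]
    D = np.array([descriptor(p) for p in pts])      # (h*w, 75)

    # PCA to 6 dimensions via SVD of the centred descriptor matrix.
    Dc = D - D.mean(axis=0)
    _, _, Vt = np.linalg.svd(Dc, full_matrices=False)
    proj = Dc @ Vt[:out_dim].T                      # (h*w, 6)

    ref = proj[ref_point[0] * w + ref_point[1]]
    return np.linalg.norm(proj - ref, axis=1).reshape(h, w)
```

By construction the self-similarity value at the reference point itself is zero, and it grows for pixels whose multi-scale neighborhoods differ from the reference.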
For the curve-feature saliency of an image, the simplest computation is to take the gradient directly; the gradient magnitude then represents, to some extent, how prominent a curve feature is. However, in the human visual system (HVS), it is often not the gradient magnitude alone that determines whether something is judged an edge; the length of the edge and its average gradient magnitude also play an important role. Therefore, the robust curve-feature saliency computation adopted by the present invention is based on the length and gradient magnitude of each extracted curve; the saliency of each curve is:
S_α(C) = N((1/|C|)·Σ_{p∈C} m_p) · N(l_C)    (1)
where N denotes a normalization operation, m_p denotes the gradient magnitude, and l_C denotes the length of curve C. The same curve-feature saliency value is assigned to every pixel on a curve. Such a robust curve feature better reflects the visual characteristics of edges and curves.
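A minimal sketch of Eq. (1), assuming curves are given as lists of pixel coordinates and that the normalization N divides by the maximum over all curves (the patent does not fix the choice of N or how curves are extracted):

```python
import numpy as np

def curve_saliency(curves, grad_mag):
    """Robust curve-feature saliency of Eq. (1), one value per curve.

    `curves` is a list of pixel-coordinate lists, one per extracted
    curve; `grad_mag` is the gradient-magnitude image.
    """
    # Mean gradient magnitude along each curve: (1/|C|) * sum of m_p.
    mean_grads = np.array([np.mean([grad_mag[p] for p in c]) for c in curves])
    # Curve length l_C, taken here as the pixel count of the curve.
    lengths = np.array([len(c) for c in curves], dtype=float)

    def N(v):                       # normalise to [0, 1] (assumed form)
        return v / v.max() if v.max() > 0 else v

    return N(mean_grads) * N(lengths)
```

A long curve with strong average gradient thus gets saliency near 1, while short or faint curves are suppressed, matching the HVS motivation above.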
The separability map is finally obtained by the following formula:
S(p) = S_α(C) − ω·S_a(p, p_0)    (2)
where ω adjusts the influence of the similarity term and is generally set to 1.
This separability map (the original image is shown in Fig. 2a) is then binarized by thresholding (the threshold is generally 0.5 after normalization), followed by the morphological operations of dilation and then erosion. By extracting each independent connected component, the position and occupied core region of each individual object of the group are obtained. The extracted separability map is shown in Fig. 2b; the extracted objects and their core regions are shown in Fig. 2c.
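The thresholding, dilate-then-erode, and connected-component step can be sketched as follows. The cross-shaped structuring element and the 4-connected labelling are assumptions; the patent only names the morphological operations.

```python
import numpy as np
from collections import deque

def extract_cores(sep_map, thresh=0.5):
    """Threshold the normalised separability map, close it with a
    dilate-then-erode pass, and label connected components (the core
    regions of individual objects)."""
    b = sep_map >= thresh

    def dilate(m):                    # cross-shaped structuring element
        out = m.copy()
        out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
        return out

    def erode(m):                     # erosion as dual of dilation
        return ~dilate(~m)

    b = erode(dilate(b))              # dilate first, then erode

    # 4-connected component labelling by breadth-first search.
    labels = np.zeros(b.shape, dtype=int)
    cur = 0
    for y, x in zip(*np.nonzero(b)):
        if labels[y, x]:
            continue
        cur += 1
        labels[y, x] = cur
        q = deque([(y, x)])
        while q:
            cy, cx = q.popleft()
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < b.shape[0] and 0 <= nx < b.shape[1]
                        and b[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = cur
                    q.append((ny, nx))
    return labels, cur
```

Each label then gives one object's position; the labelled pixels form its core region.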
S2: for each object, operate on the separability map by the "expand-cut" (first expand, then cut) method to extract the accurate boundary of the region it occupies. The flow of this method is shown in Fig. 3. Starting from a core region, first expand; then remove from the expanded region the pixels whose curve-feature saliency value exceeds the dynamic threshold:
t(p) = S(p) / D(p)    (3)
and iterate this process until the algorithm converges (the core region no longer grows), thereby automatically extracting the accurate boundary. In the formula, D(p) denotes the distance of pixel p from the center of the region occupied by the object. Through this step, the region occupied by each object of the group is obtained.
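The "expand-cut" loop might be sketched as follows, with a cross-shaped dilation assumed and the dynamic threshold of Eq. (3) computed from the separability map S and the distance D to the region centre. The convergence test (no newly admitted pixels) follows the text's "core region no longer grows" criterion.

```python
import numpy as np

def expand_cut(saliency, sep_map, core, max_iter=100):
    """Iteratively grow `core` (a boolean mask for one object), cutting
    away newly added pixels whose curve saliency exceeds t(p) = S(p)/D(p),
    until the region stops growing."""
    h, w = sep_map.shape
    cy, cx = np.argwhere(core).mean(axis=0)          # region centre
    yy, xx = np.mgrid[0:h, 0:w]
    D = np.hypot(yy - cy, xx - cx) + 1e-6            # distance from centre
    t = sep_map / D                                  # dynamic threshold, Eq. (3)

    region = core.copy()
    for _ in range(max_iter):
        grown = region.copy()                        # expand: cross dilation
        grown[1:, :] |= region[:-1, :]; grown[:-1, :] |= region[1:, :]
        grown[:, 1:] |= region[:, :-1]; grown[:, :-1] |= region[:, 1:]
        # cut: keep only new pixels whose saliency does not exceed t(p)
        new = grown & ~region & (saliency <= t)
        if not new.any():                            # converged
            break
        region |= new
    return region
```

High-saliency curves act as barriers, so the region fills up to the object's edges and stops there.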
S3: randomly designate object regions to be replaced, replace them with suitable image regions from the external image, and synthesize the boundaries, so that objects from different sources are more consistent with each other. To find suitable replacement regions in the external image, the BBM (Boundary Band Map) method proposed by Cheng Mingming et al. in 2010 (the RepFinder technique, published at SIGGRAPH 2010) is adopted: taking the boundary of the object to be replaced as input, an image block with consistent curve features is sought in the external image, and the corresponding region of this image block is copied onto the target image to replace the original. To improve consistency, for each pixel at the boundary portion of this region whose gradient magnitude is below a certain threshold (0.5 after normalization), the pixel in the external image that maximizes the following formula is sought:
D(p, q) = ||N_q^E − N_p^A|| + α·||∇_q^E − ∇_p^A||    (4)
where N_q^E denotes the 25-dimensional vector formed by the pixel values of the 5×5 neighborhood block around pixel q in the external image E; N_p^A denotes the 25-dimensional vector formed by the pixel values of the 5×5 neighborhood block around pixel p in the target image A; α is a weight adjusting the relative influence of the two terms and is generally set to 0.5; ∇_q^E denotes the gradient magnitude at pixel q in image E; and ∇_p^A denotes the gradient magnitude at pixel p in image A.
Replacing the original pixel value with that of the pixel satisfying condition (4) achieves the synthesis of the boundary effect.
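Assuming simple forward-difference gradients (the text does not specify the gradient operator), the matching term of Eq. (4) for a single pixel pair can be sketched as:

```python
import numpy as np

def boundary_match_cost(A, B, p, q, alpha=0.5):
    """Eq. (4): 5x5 neighbourhood difference plus alpha times the
    gradient-magnitude difference, between pixel p in target image A
    and pixel q in external image B.  Pixels are (row, col) tuples at
    least 2 pixels from the border."""
    def patch(img, pt):
        y, x = pt
        return img[y-2:y+3, x-2:x+3].ravel()        # 25-dim vector N

    def grad_mag(img, pt):                           # forward differences
        y, x = pt
        return np.hypot(img[y+1, x] - img[y, x], img[y, x+1] - img[y, x])

    colour = np.linalg.norm(patch(B, q) - patch(A, p))
    grad = abs(grad_mag(B, q) - grad_mag(A, p))
    return colour + alpha * grad
```

In step S3 this cost would be evaluated over candidate pixels q in the external image for each low-gradient boundary pixel p.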
S4: by replacing, in turn, the randomly designated objects in the target image, the final result is synthesized. The final synthesis result is shown in Fig. 2d.
The above are the specific implementation steps of the image synthesis method based on grouped object mixing.
The above embodiment is only used to illustrate the present invention and is not a limitation of the present invention. Those of ordinary skill in the relevant technical field can make various changes and modifications without departing from the spirit and scope of the present invention; therefore all equivalent technical solutions also belong to the scope of the present invention, and the patent protection scope of the present invention shall be defined by the claims.

Claims (6)

1. An image synthesis method based on grouped object mixing, characterized by comprising the following steps:
S1: performing separability analysis on the grouped objects in a target image, obtaining a separability map of the grouped objects, and extracting the positions of the grouped objects and the core regions they occupy in the target image;
S2: obtaining the boundary of the region occupied by each object of the grouped objects in the target image by a method of first expanding and then cutting;
S3: designating regions of grouped objects in the target image to be replaced, finding suitable regions of grouped objects in an external image, and replacing said designated regions with the suitable regions of grouped objects in said external image.
2. The method of claim 1, characterized in that, in step S1, said separability map is obtained by computing, for each pixel in the region occupied by the grouped objects in the target image, a multi-scale self-similarity value and a robust curve-feature saliency value, and the positions of the grouped objects and the core regions they occupy in the target image are extracted by morphological image operations.
3. The method of claim 2, characterized in that step S2 specifically comprises: on said separability map, starting from said core region, first expanding, then removing from the expanded region the pixels whose robust curve-feature saliency value exceeds a dynamic threshold, and iterating this process, thereby extracting the boundary of the region occupied by each object of the grouped objects in the target image.
4. The method of claim 1, characterized in that, in step S3, the BBM method is used to find the suitable regions of grouped objects in the external image.
5. The method of claim 1, characterized in that, in step S3, when said designated regions are replaced with the suitable regions of grouped objects in said external image, a boundary portion having the visual characteristics of the grouped-object boundaries in the external image is synthesized at the boundary.
6. The method of any one of claims 1 to 5, characterized by comprising, before step S1, a step S0: inputting the target image containing the grouped objects and the external image, and setting in the target image a point that represents the key visual features of the grouped objects, to be used as the reference point when computing the multi-scale self-similarity value of each pixel in the region occupied by the grouped objects in the target image.
CN 201110262737 2011-09-06 2011-09-06 Image synthesis method based on grouped object mixing Active CN102324093B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110262737 CN102324093B (en) 2011-09-06 2011-09-06 Image synthesis method based on grouped object mixing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110262737 CN102324093B (en) 2011-09-06 2011-09-06 Image synthesis method based on grouped object mixing

Publications (2)

Publication Number Publication Date
CN102324093A true CN102324093A (en) 2012-01-18
CN102324093B CN102324093B (en) 2013-08-07

Family

ID=45451832

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110262737 Active CN102324093B (en) 2011-09-06 2011-09-06 Image synthesis method based on grouped object mixing

Country Status (1)

Country Link
CN (1) CN102324093B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852392A (en) * 2006-05-11 2006-10-25 上海交通大学 Printing net-point-image dividing method based on moveable contour
CN101551904A (en) * 2009-05-19 2009-10-07 清华大学 Image synthesis method and apparatus based on mixed gradient field and mixed boundary condition
CN102129687A (en) * 2010-01-19 2011-07-20 中国科学院自动化研究所 Self-adapting target tracking method based on local background subtraction under dynamic scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MING-MING CHENG, ET AL: "RepFinder: Finding Approximately Repeated Scene Elements for Image Editing", 《ACM TRANSACTIONS ON GRAPHICS》, vol. 29, no. 4, 31 July 2010 (2010-07-31) *
YUQIAN ZHAO, ET AL: "Edge Detection Based on Multi-Structure Elements Morphology", 《PROCEEDINGS OF THE 6TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION》, 23 June 2006 (2006-06-23), pages 9795 - 9798, XP010947008, DOI: doi:10.1109/WCICA.2006.1713908 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109255807A (en) * 2017-07-13 2019-01-22 腾讯科技(深圳)有限公司 A kind of image information processing method and server, computer storage medium
CN109255807B (en) * 2017-07-13 2023-02-03 腾讯科技(深圳)有限公司 Image information processing method, server and computer storage medium
CN110264546A (en) * 2019-06-24 2019-09-20 北京向上一心科技有限公司 Image synthetic method, device, computer readable storage medium and terminal
CN110264546B (en) * 2019-06-24 2023-03-21 北京向上一心科技有限公司 Image synthesis method and device, computer-readable storage medium and terminal

Also Published As

Publication number Publication date
CN102324093B (en) 2013-08-07

Similar Documents

Publication Publication Date Title
Wei et al. State of the art in example-based texture synthesis
Jobard et al. Unsteady flow visualization by animating evenly‐spaced streamlines
US8953872B2 (en) Method for editing terrain data created by procedural terrain method
CN101763657B (en) Three-dimensional terrain display method for video production
CN104167013B (en) Volume rendering method for highlighting target area in volume data
Zhang et al. Personal photograph enhancement using internet photo collections
Yu et al. Lagrangian texture advection: Preserving both spectrum and velocity field
CN103593825A (en) Image super-resolution method based on improved non-local restriction and local self similarity
CN110415284A (en) A kind of haplopia color image depth map preparation method and device
Gerl et al. Interactive example-based hatching
CN102129576B (en) Method for extracting duty ratio parameter of all-sky aurora image
US20110134128A1 (en) Visualization and Representation of Data Clusters and Relations
Paris et al. Terrain amplification with implicit 3D features
CN102324093B (en) Image synthesis method based on grouped object mixing
Jiao et al. A fast and effective deep learning approach for road extraction from historical maps by automatically generating training data with symbol reconstruction
CN113393546B (en) Fashion clothing image generation method based on clothing type and texture pattern control
Ni et al. Multi-scale line drawings from 3D meshes
CN101977311A (en) Multi-characteristic analysis-based CG animation video detecting method
CN102999660A (en) Method for converting design data into variable parquet data
Chen et al. iSlideShow: a content-aware slideshow system
Kuhn et al. Trajectory Density Projection for Vector Field Visualization.
CN111222528A (en) Improved SSD target detection algorithm with area amplification operation
Alliez et al. Efficient view-dependent refinement of 3D meshes using sqrt {3}-subdivision
Leonowicz et al. Automatic generation of hypsometric layers for small-scale maps
Pueyo et al. Shrinking city layouts

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant