CN110390657A - An image fusion method - Google Patents

An image fusion method

Info

Publication number
CN110390657A
CN110390657A (application CN201810358534.XA)
Authority
CN
China
Prior art keywords
image
target image
boundary
virtual pixel
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810358534.XA
Other languages
Chinese (zh)
Other versions
CN110390657B (en)
Inventor
刘畅
陈慧慧
韩雪
周一青
石晶林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Super Media Information Technology Co Ltd
Original Assignee
Beijing Zhongke Super Media Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Super Media Information Technology Co Ltd filed Critical Beijing Zhongke Super Media Information Technology Co Ltd
Priority to CN201810358534.XA priority Critical patent/CN110390657B/en
Publication of CN110390657A publication Critical patent/CN110390657A/en
Application granted granted Critical
Publication of CN110390657B publication Critical patent/CN110390657B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image fusion method. The method comprises: taking the boundary of the original target image to be fused as a reference, determining the range by which that boundary needs to be extended; performing virtual-pixel processing on the extended range to obtain the target image after virtual-pixel processing; and fusing the processed target image with the background image using Poisson image editing. The image fusion method of the invention eliminates the unnatural transition and boundary blur that otherwise appear at the target image boundary after image fusion, so that an ideal fusion result is finally obtained.

Description

An image fusion method
Technical field
The present invention relates to the technical field of image processing, and in particular to an image fusion method.
Background art
Image fusion synthesizes two or more images into a new image by a particular method. Its basic idea is to merge, by some method, the information of multiple images of the same scene — captured by imaging sensors operating in different wavelength ranges and with different imaging mechanisms — into a single new image, so that the fused image has higher credibility, less blur, and better comprehensibility, and is better suited either to human vision or to computer detection, classification, recognition, and understanding. Image fusion technology is widely used in fields such as remote-sensing image processing, computer vision, intelligent robotics, military surveillance, and medical scanning imaging.
The principle of image fusion is to embed a target object or target region from a source image into a background image to generate a new image, achieving a smooth transition and seamless fusion between the target image and the background image and thereby improving the visual effect of the fusion transition zone. At present, image fusion methods mainly include the weighted-average method, multi-resolution methods, and gradient-field-based fusion methods. The weighted-average method, also known as feathering, is simple and fast, but its fusion quality is poor and it has difficulty eliminating the ghosting caused by moving targets. Multi-resolution methods decompose an image into a series of sub-band images with different resolutions, fuse the different sub-bands using transition regions of different sizes, and then use a reconstruction algorithm to synthesize the overlapping region at the original resolution; however, they require multiple filtering passes, are computationally expensive, and tend to weaken the signal, blurring the image. Gradient-field-based fusion essentially solves a Poisson equation to transfer the gradients of the source image to the target image, while ensuring seamless boundary fusion and correcting luminance deviation to obtain the final fused image. Gradients reflect the most salient local brightness changes in an image, which makes this method better suited to the human visual system's high sensitivity to brightness changes.
At present, a widely used approach is to apply new techniques to gradient-field-based fusion; Poisson image editing is one of the most popular research directions. Poisson image editing, proposed by Pérez et al., is an image editing method based on the Poisson equation. It uses a gradient field to perform guided interpolation over the region to be fused, reducing the image fusion problem to minimizing the difference between the gradient field of the region to be synthesized and the guidance gradient field of the target image, and solves this variational problem via the Poisson equation. The method achieves good image fusion results.
However, when fusing a target image with a background image using Poisson image editing in the prior art, at least two problems arise: 1) when the target image is blended into the background image, the boundary of the target image becomes blurred; 2) when the target image is blended close to the boundary of the background image, the visual effect of the border region is unsatisfactory and blurring appears, and when the colors of the target image and the background image differ greatly, the Poisson image editing method usually cannot preserve the original color of the target image.
Therefore, the prior art needs improvement to raise the quality of fused images and meet the increasing demands on the manner and quality of image fusion.
Summary of the invention
The object of the present invention is to overcome the above defects of the prior art and to provide an image fusion method that improves the display effect after image fusion.
According to a first aspect of the invention, an image fusion method is provided, comprising the following steps:
Step 1: taking the boundary of the original target image to be fused as a reference, determine the range by which that boundary needs to be extended;
Step 2: perform virtual-pixel processing on the extended range to obtain the target image after virtual-pixel processing;
Step 3: fuse the target image after virtual-pixel processing with the background image using Poisson image editing.
In one embodiment, the range by which the original target image boundary needs to be extended is determined as follows:
Step 21: divide the background image into n sub-blocks, where n is an integer greater than or equal to 1;
Step 22: using the first M frames of the received original target image, determine for each sub-block the virtual-pixel pass-count threshold, labeled T_1, T_2, ..., T_n respectively, where M is an integer greater than or equal to 1;
Step 23: for subsequent frames of the received original target image, use the per-sub-block pass-count thresholds determined above to compute the range threshold T for virtual-pixel processing of the original target image.
In one embodiment, step 22 executes the following sub-steps for each sub-block b_k of the divided background image, over the first M frames of the received original target image:
Step 221: each time the boundary of the original target image appears in sub-block b_k, determine the number of virtual-pixel passes by iteratively minimizing the objective function; the counts are denoted C_1, C_2, ..., C_m respectively. The objective function is

    min_i |Q_i − Q_0|

where Q_i is the boundary average after the i-th virtual-pixel pass, Q_0 is the average on the boundary of the original target image without virtual-pixel processing, and Q_i is expressed as

    Q_i = (1/|Ω|) · Σ_{p∈Ω} |f_t(p) − f_s(p)|

with Ω the boundary of the target image in sub-block b_k after the i-th virtual-pixel pass, p a pixel on that boundary, f_t(p) the color value of p in the background image, and f_s(p) the color value of p in the target image;
Step 222: from the obtained C_1, C_2, ..., C_m, determine the virtual-pixel pass-count threshold T_k of sub-block b_k by the weighted-average method, for k from 1 to n.
In one embodiment, step 23 determines the range threshold T for virtual-pixel processing of the original target image from each received subsequent frame via the following sub-steps:
Step 231: determine which sub-blocks of the background image the boundary of the original target image in the subsequent frame falls in;
Step 232: look up the virtual-pixel pass-count thresholds of those sub-blocks and take the maximum among them as the range threshold T for virtual-pixel processing of the original target image.
In one embodiment, M ranges from 1000 to 2000 frames.
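The per-sub-block averaging of step 222 and the maximum-over-sub-blocks selection of steps 231-232 can be sketched in Python. The function names, the dict representation of thresholds, and the uniform default weights are illustrative assumptions; the patent does not fix them:

```python
import math

def subblock_threshold(learned_counts, weights=None):
    # Step 222 (sketch): weighted average of the learned virtual-pixel pass
    # counts C_1..C_m for one sub-block; a fractional result is rounded up,
    # as the description does for decimal thresholds. Uniform weights are
    # assumed when none are supplied.
    if weights is None:
        weights = [1.0] * len(learned_counts)
    mean = sum(c * w for c, w in zip(learned_counts, weights)) / sum(weights)
    return math.ceil(mean)

def range_threshold(boundary_subblocks, thresholds):
    # Steps 231-232 (sketch): the extension range T for a subsequent frame is
    # the largest per-sub-block threshold among the sub-blocks its boundary
    # intersects; sub-blocks never visited during learning carry threshold 0.
    return max(thresholds.get(k, 0) for k in boundary_subblocks)
```

For example, learned counts [1, 2] average to 1.5 and round up to a threshold of 2, and a frame whose boundary crosses two sub-blocks is extended by the larger of their two thresholds.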
In one embodiment, step 3 comprises:
Step 31: when the distance between the boundary of the target image after virtual-pixel processing and the boundary of the background image is smaller than a distance threshold, intercept the pixel values of a predetermined region from the processed target image;
Step 32: copy the pixel values of the intercepted region to the corresponding position of the background image, and then fuse using Poisson image editing.
In one embodiment, the distance threshold is 1 to 10 pixels.
In one embodiment, the boundary of the original target image is the minimal circumscribed rectangle enclosing the original target image.
Compared with the prior art, the advantages of the present invention are as follows: virtual-pixel processing of the boundary of the original target image to be fused prevents the target image boundary from blurring after image fusion; and when the target image boundary is close to the background image boundary, copying the pixel values of an appropriate target region to the corresponding position of the background image eliminates the unnatural transition and boundary blur that otherwise appear after fusion, so that an ideal image fusion result is finally obtained.
Detailed description of the invention
The following drawings only schematically describe and explain the present invention and are not intended to limit its scope:
Fig. 1(a) and Fig. 1(b) show schematic diagrams of the image fusion process;
Fig. 2 shows a flow chart of an image fusion method according to an embodiment of the invention;
Fig. 3 shows a schematic diagram of virtual-pixel processing of the original target image according to an embodiment of the invention;
Fig. 4 shows a schematic diagram of Poisson image fusion according to an embodiment of the invention;
Fig. 5 shows a flow chart of determining the virtual-pixel processing range by the self-learning method according to an embodiment of the invention;
Fig. 6(a) shows the result of Poisson image fusion without virtual-pixel processing of the original target image;
Fig. 6(b) shows the result of Poisson image fusion after virtual-pixel processing of the original target image;
Fig. 6(c) shows the result of Poisson image fusion after both virtual-pixel processing and border processing of the original target image.
Detailed description of embodiments
To make the purpose, technical solution, design method, and advantages of the present invention clearer, the invention is described in more detail below through specific embodiments in conjunction with the drawings. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it. The principle of image fusion according to the invention and its preferred embodiments are introduced below taking Poisson image editing as an example.
Image fusion embeds a target image from a source image into a background image to generate a new image. For example, the basic process of image fusion applied in video-surveillance cameras is as follows: first, under the same scene, the transmitting end captures a frame of target-free background image and a frame of background image containing the target; through preprocessing such as background subtraction on the two, the minimal circumscribed rectangle enclosing the target image is obtained; the result of this processing (i.e., the target image within the minimal circumscribed rectangle) is transmitted to the receiving end; and at the receiving end, the target image transmitted in real time is fused with the background image. See Fig. 1, where Fig. 1(a) shows the target-free background image S, and Fig. 1(b) shows the target image embedded into the background image, region I being the region after image fusion. In practice, the target image is usually a moving object such as a person or a vehicle, while the background image is usually static, e.g. a road or a building. Compared with transmitting the complete background image containing the target in real time, the above image-fusion-based method of presenting video surveillance saves transmission traffic and bandwidth. In the description herein, the target image region bounded by the minimal circumscribed rectangle is called the original target image.
Fig. 2 shows a flow chart of an image fusion method according to an embodiment of the invention. In brief, the method comprises: extending the boundary of the original target image outward and performing virtual-pixel processing on the expanded region to obtain the target image after virtual-pixel processing; then fusing the processed target image with the background image by the Poisson image editing method to obtain the fusion result. Optionally, before the Poisson image fusion, if the boundary of the target image after virtual-pixel processing is determined to be close to the boundary of the background image, the target image region close to the background boundary is intercepted and its pixel values are copied to the corresponding position of the background image before the Poisson image fusion is executed.
Specifically, the image fusion method of the invention comprises the following steps.
Step S210: determine the position at which the original target image is fused into the background image.
In this step, the target image and the background image are obtained and the coordinates at which the target image is fused into the background image are determined; the obtainable information includes the pixel values of the background image, the pixel values of the target image, and the positions of the target and background images. This step can be realized with the prior art, for example by extracting the target image with background subtraction: the image of the current frame in the video sequence is differenced against a preset background image, yielding information such as the position and size of the original target image.
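The background-subtraction extraction of step S210 can be sketched as follows. Single-channel images are represented as nested lists and the difference threshold is a hypothetical value; a real system would add denoising and morphological cleanup:

```python
def bounding_rect(frame, background, diff_thresh=10):
    # Minimal circumscribed rectangle (top, left, height, width) of the
    # pixels where the current frame differs from the stored background
    # by more than diff_thresh -- the target-extraction step S210.
    # frame/background: equal-sized 2D lists of gray values.
    ys, xs = [], []
    for y, (frow, brow) in enumerate(zip(frame, background)):
        for x, (fv, bv) in enumerate(zip(frow, brow)):
            if abs(fv - bv) > diff_thresh:
                ys.append(y)
                xs.append(x)
    if not ys:
        return None  # no target present in this frame
    return (min(ys), min(xs), max(ys) - min(ys) + 1, max(xs) - min(xs) + 1)
```

The returned rectangle gives exactly the position and size information the step describes.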
Step S220: taking the boundary of the original target image as a reference, determine the range by which it needs to be extended, so that virtual-pixel processing can be performed on the expanded region.
The boundary of the original target image refers to the minimal circumscribing figure containing the original target image. The minimal circumscribing figure may be a minimal circumscribed rectangle, a minimal circumscribed circle, or a minimal circumscribing figure of irregular shape determined by the shape of the target image; in the present invention the preferred minimal circumscribed rectangle is taken as the example.
The expanded region is obtained by extending the boundary of the original target image outward, producing a new target image whose bounds exceed those of the original; virtual-pixel processing is then performed on the extended region. See the illustration of Fig. 3, where the inner rectangle is the boundary of the original target image, the gray area is the expanded range (the range requiring virtual-pixel processing), and the outer rectangle is the boundary of the target image after virtual-pixel processing.
In one embodiment, the range by which the original target image needs to be extended can be expressed in pixels; for example, taking the boundary of the original target image as reference, it is extended outward by a predetermined threshold (e.g. 5-10 pixels).
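With the preferred rectangular boundary, extending by a fixed number of pixels reduces to growing the rectangle on all four sides, clamped to the image edges. A minimal sketch, assuming a (top, left, height, width) representation not specified in the patent:

```python
def extend_rect(rect, t, img_h, img_w):
    # Grow the minimal circumscribed rectangle outward by t pixels on every
    # side (the gray band of Fig. 3), clamping at the background image edges.
    top, left, h, w = rect
    new_top, new_left = max(0, top - t), max(0, left - t)
    new_bot = min(img_h, top + h + t)
    new_right = min(img_w, left + w + t)
    return (new_top, new_left, new_bot - new_top, new_right - new_left)
```

The clamping matters near the background border, where the extension range would otherwise fall outside the image.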
In a preferred embodiment, the number of virtual-pixel passes the original target image needs is determined by initialization and self-learning (also called the self-learning method herein), thereby determining the range requiring virtual-pixel processing. In brief, the self-learning process is: the required number of virtual-pixel passes is determined from the first M received frames of the target image by the method of initialization and self-learning; then, target images in subsequent frames undergo virtual-pixel processing using the learned value. The detailed self-learning process is introduced below.
Step S230: perform virtual-pixel processing on the expanded region.
In this step, virtual-pixel processing is performed on the expanded region to obtain the target image after virtual-pixel processing; herein this process is also referred to as performing virtual-pixel processing on the original target image.
In this patent, virtual-pixel processing means radiating outward from the boundary of the original target image. For example, in the initialization method over the first M frames, each virtual-pixel pass radiates the boundary of the original target image outward by one pixel; in the self-learning method for subsequent frames (after frame M), only a single virtual-pixel pass is needed, with range threshold T pixels, so the boundary of the original target image is radiated outward by T pixels at once, becoming a new target image. The result of virtual-pixel processing is that the region of the target image is enlarged.
Step S240: judge whether the boundary of the target image after virtual-pixel processing is close to the boundary of the background image.
Optionally, after virtual-pixel processing, the relative position of the (extended) boundary of the processed target image and the background image boundary is further judged. If the boundary of the processed target image is close to the boundary of the background image, step S250 is executed; otherwise, the Poisson fusion operation is performed directly on the processed target image, i.e. step S260 is executed.
In one embodiment, a predetermined pixel threshold is used to judge whether the boundary of the processed target image is close to the boundary of the background image; for example, the two boundaries are considered close when they are 1 to 10 rows of pixels apart, and preferably when they are 1-6 rows of pixels apart.
Step S250: intercept the target image region close to the background image boundary and copy the pixel values of the intercepted region to the corresponding position of the background image.
When the boundary of the target image after virtual-pixel processing is judged to be close to the background image boundary, several rows of pixel values near the background boundary (for example, 6 rows) are intercepted from the processed target image region, and the intercepted region is copied directly to the corresponding position in the background image. In this way, the unnatural transition and boundary blur of the target image after Poisson fusion are eliminated, and an ideal fusion result is obtained.
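Step S250 amounts to overwriting a few background rows with the corresponding target rows before fusion. A toy sketch with nested-list "images"; the six-row default mirrors the example in the text, and all names and the choice of which rows to copy are illustrative:

```python
def copy_edge_rows(background, target, top, left, rows=6):
    # Copy `rows` rows of pixel values from the (virtually extended) target
    # onto the background at the corresponding position, so the subsequent
    # Poisson fusion's fixed boundary no longer produces a blurred edge.
    out = [row[:] for row in background]
    for i in range(min(rows, len(target))):
        for j, v in enumerate(target[i]):
            out[top + i][left + j] = v
    return out
```

The copy is made on a fresh list so the original background frame is left untouched for reuse with later frames.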
Step S260: perform image fusion using the Poisson image editing method.
The target image after virtual-pixel processing is fused with the background image using the Poisson image editing method.
The idea of Poisson image editing is, while ensuring the boundary (the boundary of the background image region) stays fixed, to use a specified gradient map as guidance and solve for the fused part of the image so that the gradient variation trend of the fusion region is as close as possible to that of the corresponding pixels of the source image (in this application, the source image is the target image after virtual-pixel processing). Through the above virtual-pixel processing, the application eliminates the splicing trace of image fusion and achieves a visually seamless fusion.
Specifically, with reference to Fig. 4, the principle of Poisson image editing is to introduce a guidance vector field V so that the difference between the gradient field of the solution and the guidance field is minimized, and to solve for the unknown scalar f accordingly:

    min_f ∬_Ω |∇f − V|²  with  f|_∂Ω = f*|_∂Ω     (1)

where Ω is a closed subset of the background image, i.e. the fusion region; ∂Ω is the boundary of the fusion region Ω; ∇f is the first-order gradient of f (the gradient of the image to be solved); f is the pixel value of the fused image inside Ω (the unknown scalar function); f* is the pixel value outside the fusion region Ω, a known scalar function defined on the boundary ∂Ω; ∇ is the gradient operator; (x, y) are pixel coordinates; g is the target image; and V is the guidance field of the target image (image g in Fig. 4).
The minimizer of formula (1) satisfies a Poisson equation with Dirichlet boundary conditions:

    Δf = div V over Ω,  with  f|_∂Ω = f*|_∂Ω     (2)

where V = ∇g = (∂g/∂x, ∂g/∂y) is the guidance field, i.e. the gradient fields of the target image in the x and y directions; Δ is the Laplace operator; and div V = Δg is the divergence of V. Formula (2) is solved separately on each of the three RGB color channels.
Discretizing formula (1) by finite differences, let f_p be the value of the function f at pixel p; the target is then to solve f|_Ω = {f_p, p ∈ Ω}. The optimal solution of formula (1) satisfies the following equation (3):

    |N_p| · f_p − Σ_{q∈N_p∩Ω} f_q = Σ_{q∈N_p∩∂Ω} f*_q + Σ_{q∈N_p} v_pq     (3)

where |N_p| is the number of elements in the 4-connected neighbor set N_p of pixel p, |N_p| ∈ [1, 4]; <p, q> denotes a pair of pixels with q ∈ N_p; and v_pq = g_p − g_q is the projection of V on the directed edge [p, q].
Formula (3) is a linear system; solving it yields the pixel values in Ω, i.e. the fused image. For example, f_p can be solved with the successive over-relaxation Gauss-Seidel iteration or a multigrid method; the resulting f_p is the fused pixel value at point p. The solution procedure belongs to the prior art and is not detailed here.
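A plain Gauss-Seidel iteration of the discrete system (3) on one color channel can be sketched as follows. Nested lists are used for clarity; a real implementation would use SOR or multigrid as the text notes, and this is a sketch of the standard Pérez formulation rather than the patent's exact code:

```python
def poisson_blend_channel(src, dst, mask, iters=400):
    # Gauss-Seidel sweeps of equation (3):
    #   |N_p| f_p - sum_{q in N_p ∩ Ω} f_q
    #     = sum_{q in N_p ∩ ∂Ω} f*_q + sum_{q in N_p} v_pq,  v_pq = g_p - g_q.
    # src (g): guidance image, dst (f*): background, mask: True inside Ω.
    h, w = len(dst), len(dst[0])
    f = [row[:] for row in dst]          # initialize with the background
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:
                    continue
                acc, n = 0.0, 0
                for a, b in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= a < h and 0 <= b < w):
                        continue
                    n += 1
                    acc += f[a][b] if mask[a][b] else dst[a][b]
                    acc += src[y][x] - src[a][b]     # v_pq term
                f[y][x] = acc / n
    return f
```

Each interior pixel is repeatedly set to the average of its neighbors plus the source-gradient term, which is exactly the per-pixel rearrangement of equation (3).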
It should be emphasized once more that in the Poisson fusion operation of step S260, the target image involved refers to the target image after virtual-pixel processing according to the invention; and when steps S240 and S250 are included, the background image involved in the Poisson image fusion refers to the background image obtained after the processing of step S250.
The process of determining the range of virtual-pixel processing of the original target image by initialization and self-learning is described below, taking as an example partitioning the background image into blocks and processing one pixel per pass in the initialization phase.
Referring to Fig. 5, the method of determining the virtual-pixel processing range based on initialization and self-learning according to an embodiment of the invention is, in brief: divide the background image acquired under the same scene into multiple sub-blocks; initialize the original target image boundary in each sub-block, so as to count, for each sub-block where the boundary of the first M frames of the original target image falls, the learned number C of virtual-pixel passes required; obtain each sub-block's fixed threshold T_k by weighted-averaging its learned values; after frame M, when a target image of a subsequent frame is received, directly analyze which sub-blocks of the background image the boundary of that frame's target image is distributed in, and use the historically learned thresholds T_k of those sub-blocks to determine the range of virtual-pixel processing required for the original target image.
Specifically, the embodiment of Fig. 5 comprises the following steps.
Step S510: divide the background image into n sub-blocks.
The background image acquired under the same scene is divided into multiple sub-blocks, labeled b_1, b_2, ..., b_n. For example, the number of divided sub-blocks n may be determined according to the size of the background image and the quality requirement on the fused image. In theory n can be any integer greater than or equal to 1; n = 1 means the background image is not partitioned. The larger the value of n, the more accurate the self-learned thresholds, and hence the higher the quality of the fused image.
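Partitioning into a grid of equal sub-blocks and locating a pixel's sub-block reduces to integer arithmetic. A sketch assuming a rows × cols grid with 0-based row-major numbering, a layout the patent does not fix:

```python
def subblock_of(y, x, img_h, img_w, rows, cols):
    # 0-based row-major index of the sub-block containing pixel (y, x)
    # when the background image is split into rows * cols equal blocks
    # (n = rows * cols in the patent's notation).
    return (y * rows // img_h) * cols + (x * cols // img_w)
```

With this, checking which sub-blocks a target boundary falls in is just a matter of mapping each boundary pixel through `subblock_of` and collecting the distinct indices.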
Step S520: for the first M frames of the original target image, compute by self-learning the learned value C of virtual-pixel processing for the original target image located in each sub-block.
In this step, M can in theory be any integer greater than or equal to 1; the larger M is, the more accurate the self-learning result, but the slower the self-learning. In a preferred embodiment, to balance the accuracy and speed of self-learning, M is set to 1000-2000 frames.
Specifically, step S520 comprises the following sub-steps.
Step S521: initialization phase.
Take sub-block b_1 of the background image as an example. Within the first M frames, when the boundary of the original target image appears in sub-block b_1 for the first time, each pixel on the boundary of the original target image in b_1 is extended in turn to the boundary through the neighboring pixels; each extension (e.g. by one pixel) counts as one virtual-pixel pass. Before virtual-pixel processing and after each pass, the average pixel value over the current boundary is computed as:

    Q_i = (1/|Ω|) · Σ_{p∈Ω} |f_t(p) − f_s(p)|     (4)

where Q_i denotes the boundary average of the target image after the i-th virtual-pixel pass; Ω is the current boundary of the target image in the sub-block (i.e. after the i-th virtual-pixel pass); p is a pixel on the boundary after the i-th pass; f_t(p) is the color value of p in the background image; f_s(p) is the color value of p in the target image; and i is the number of virtual-pixel passes performed outward on the original target image, which can be any integer greater than or equal to 1. In this embodiment, i = 20 is taken as the example.
To eliminate the boundary blur of the fused target image and reduce boundary color variation, the virtual-pixel processing of sub-block b_1 must minimize the absolute difference between the boundary average before processing and that after the i-th virtual-pixel pass, i.e.:

    min_i |Q_i − Q_0|     (5)

where Q_0 denotes the average on the boundary of the original target image without virtual-pixel processing, and Q_i denotes the boundary average after the i-th virtual-pixel pass.
It should be understood that although sub-block b_1 is used as the example in this embodiment, the same procedure applies and needs to be executed for any sub-block b_k of the divided background image (k from 1 to n).
Step S522, iterative optimization procedure
I), by sub-block b1In without virtual pixel processing boundary as initial boundary, by the boundary and corresponding position Each point pixel value on background image substitutes into formula (4), calculates borderline average pixel value Q at this time0
ii) Extend the current boundary to the positions of its neighboring pixels, i.e., perform one round of virtual pixel processing. In the initialization stage, one round of virtual pixel processing means the original target image has radiated outward by one pixel distance. Substitute the processed boundary and the pixel values of the corresponding positions in the background image into formula (4) to compute the average pixel value Q_1 on the boundary at this point.
iii) Substitute the results obtained in steps i and ii into formula (5) above, and record the result |Q_i − Q_0|.
iv) Return to steps ii and iii, find the minimum of |Q_i − Q_0| over the 20 iterations, and record the corresponding value of i. Radiating outward by i pixel distances from the initial boundary is just sufficient to make the blur around the target image boundary disappear after fusion. Set this i as the learned value C_1.
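The four sub-steps above amount to a small search loop over expansion rounds. The following is an illustrative reconstruction, not the patent's own code; `q_of(i)` stands for evaluating formula (4) after i rounds of expansion:

```python
def learn_expansion(q0, q_of, max_iters=20):
    """Steps i-iv: expand the boundary one pixel per iteration and return
    the iteration count i that minimizes |Q_i - Q_0| (the learned value C).
    q_of(i) returns the boundary average Q_i after i virtual-pixel rounds;
    q0 is the average on the unexpanded boundary (Q_0)."""
    best_i, best_gap = 1, abs(q_of(1) - q0)
    for i in range(2, max_iters + 1):
        gap = abs(q_of(i) - q0)
        if gap < best_gap:
            best_i, best_gap = i, gap
    return best_i

# Toy model of Q_i: the gap to Q_0 shrinks until i = 7, then grows again,
# so the learned value should be 7 (hypothetical numbers).
learned = learn_expansion(10.0, lambda i: 10.0 + abs(i - 7))
```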
Step S523: self-learning stage
For the first M frames of the target image, each time the boundary of the target image of another frame independently appears in sub-block b_1 (the 2nd time, the 3rd time, ..., the m-th time), a learned value is obtained by updating in real time according to the procedure of step S522 above; these values are denoted C_2, C_3, ..., C_m respectively.
Next, the range threshold T_1 for virtual pixel processing in sub-block b_1 is determined from the obtained learned values. For example, the learned values obtained over the m iterations can be combined by a weighted mean to obtain threshold T_1; if T_1 is fractional, it is rounded up. T_1 is the range by which the initial boundary must be extended outward for virtual pixel processing, just sufficient to make the target image boundary blur disappear, whenever the boundary of any of the first M frames of the target image falls in sub-block b_1 during Poisson fusion of the target image with the background image.
Similarly, the thresholds T_2, T_3, ..., T_n are computed by the method above for the boundaries of the received first M target frames appearing in sub-blocks b_2, b_3, ..., b_n respectively. The number of times each frame's target image boundary appears in each sub-block is independent; if the target image boundary never appears in a given sub-block, the threshold of that sub-block is set to 0.
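The per-sub-block threshold computation described above can be sketched as follows. The weights of the weighted mean are not given in the text, so uniform weights are assumed here:

```python
import math

def subblock_threshold(learned_values, weights=None):
    """Combine the learned values C_1..C_m for one sub-block into its
    threshold T_k by a weighted mean, rounding any fraction up as the
    text specifies. Uniform weights are an assumption (the patent does
    not state them); a sub-block the boundary never entered gets 0."""
    if not learned_values:
        return 0
    if weights is None:
        weights = [1.0] * len(learned_values)
    mean = sum(c * w for c, w in zip(learned_values, weights)) / sum(weights)
    return math.ceil(mean)

T1 = subblock_threshold([5, 6, 8])  # mean 6.33, rounded up
T4 = subblock_threshold([])         # boundary never appeared
```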
To further illustrate the above self-learning process, take the case where the background image is partitioned into 25 sub-blocks, i.e., n = 25, with the threshold of each sub-block labeled as follows.
T1 T6 T11 T16 T21
T2 T7 T12 T17 T22
T3 T8 T13 T18 T23
T4 T9 T14 T19 T24
T5 T10 T15 T20 T25
After initialization and the self-learning stage, the threshold T_k corresponding to each sub-block of the background image is obtained. Assume the thresholds T_k are as follows:
5 7 8 0 5
4 4 6 5 4
6 3 5 7 8
5 6 3 5 6
3 5 5 6 7
Step S530: determine the range of virtual pixel processing for subsequent target frames from the self-learned values.
After the initialization and self-learning processes on the first M frames of the target image, the virtual pixel processing threshold T_k of each sub-block is obtained. When a subsequent frame is received, the method directly analyzes which sub-blocks of the background image the boundary of the original target image in that frame falls in, collects the learned values T_k of those sub-blocks, and takes the maximum among them, denoted T. In the example below, the rectangular frame denotes the minimum rectangle enclosing the target image; the boundary of the target image falls in the sub-blocks of T_7, T_8, T_9, T_12, T_14, T_17, T_18, T_19, whose thresholds T_k are 4, 3, 6, 6, 3, 5, 7, 5 respectively, so the maximum T among these values is 7. Virtual pixel processing is then carried out over a range of 7 pixels around the minimum bounding rectangle (i.e., the rectangle is extended outward directly by 7 pixel distances). That is, for a subsequent frame, virtual pixel processing over a range of T pixels is applied around the boundary of the original target image, to obtain the virtual-pixel-processed target image.
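Under the description above, a subsequent frame's expansion range is simply the maximum threshold over the sub-blocks its boundary touches. A sketch using the worked example's numbers:

```python
def expansion_for_frame(boundary_subblocks, thresholds):
    """For a subsequent frame, look up the threshold T_k of every
    sub-block the target boundary touches and expand the minimum
    bounding rectangle by their maximum, as the text describes."""
    touched = [thresholds[k] for k in boundary_subblocks]
    return max(touched) if touched else 0

# The example from the text: the boundary lies in sub-blocks
# 7, 8, 9, 12, 14, 17, 18, 19 with thresholds 4, 3, 6, 6, 3, 5, 7, 5.
thresholds = {7: 4, 8: 3, 9: 6, 12: 6, 14: 3, 17: 5, 18: 7, 19: 5}
T = expansion_for_frame([7, 8, 9, 12, 14, 17, 18, 19], thresholds)
```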
It should be understood that, in this embodiment of initialization and self-learning of the invention, the frame number of the original target image is compared with M. When the frame number of the original target image is less than or equal to M, virtual pixel processing is performed by executing steps S521 and S522, one pixel distance per round. When the frame number of the original target image is greater than M, virtual pixel processing is performed over the range obtained by self-learning. For example, if the obtained range T is 7 pixels, the boundary of the original target image is taken as the reference and extended outward by 7 pixels at once, so that one round of virtual pixel processing covers 7 pixels. This improves the efficiency of virtual pixel processing for subsequent frames.
Compared with the prior art, the present invention significantly improves the quality of the fused image by applying virtual pixel processing to the original target image and further refining the result when the boundary of the processed target image is close to the boundary of the background image. Referring to Fig. 6(a) to Fig. 6(c): Fig. 6(a) shows the result of Poisson image fusion without virtual pixel processing of the original target image; Fig. 6(b) shows the result of Poisson image fusion after virtual pixel processing of the original target image; and Fig. 6(c) shows the result of Poisson image fusion after both virtual pixel processing and boundary processing of the original target image. The human body is the target image. In Fig. 6(a), fused without the method of the invention, the boundary of the target image, for example the head and leg regions, shows blurring. In the fused image of Fig. 6(b), obtained with the virtual pixel processing of the invention, the blur in the head region essentially disappears, while the leg region near the boundary of the background image still shows some blur. After the further boundary refinement of the invention (see Fig. 6(c)), the blur in the leg region also nearly disappears, so that the target image blends into the background image with a natural transition and no blurring.
It should be noted that, although the steps are described above in a particular order, this does not mean they must be executed in that order; in fact, some of these steps may be executed concurrently, or even reordered, so long as the required functions can be realized.
The present invention may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to implement aspects of the present invention.
A computer-readable storage medium may be a tangible device capable of holding and storing instructions for use by an instruction execution device. The computer-readable storage medium may include, but is not limited to, an electric storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
Embodiments of the present invention have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or the technical improvement over the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image fusion method, comprising the following steps:
Step 1: taking the boundary of an original target image to be fused as a reference, determining the range by which the boundary of the original target image needs to be extended;
Step 2: performing virtual pixel processing on the extended range, to obtain a virtual-pixel-processed target image;
Step 3: fusing the virtual-pixel-processed target image with a background image using Poisson image editing.
2. The image fusion method according to claim 1, wherein the range by which the boundary of the original target image needs to be extended is determined according to the following steps:
Step 21: partitioning the background image evenly into n sub-blocks, wherein n is an integer greater than or equal to 1;
Step 22: using M received frames of the original target image, determining a virtual pixel processing count threshold for each sub-block, labeled T_1, T_2, ..., T_n respectively, wherein M is an integer greater than or equal to 1;
Step 23: for a received subsequent frame of the original target image, calculating a range threshold T for virtual pixel processing of the original target image using the determined virtual pixel processing count thresholds of the sub-blocks.
3. The image fusion method according to claim 2, wherein in step 22, for the M received frames of the original target image and for each sub-block b_k of the partitioned background image, the following sub-steps are executed:
Step 221: determining, by iteratively optimizing an objective function, the number of rounds of virtual pixel processing performed each time the boundary of the original target image appears in sub-block b_k, denoted C_1, C_2, ..., C_m respectively, the objective function being expressed as:
min_i |Q_i − Q_0|
wherein Q_i is the average pixel value on the target image boundary after the i-th round of virtual pixel processing, Q_0 denotes the average pixel value on the boundary of the original target image without virtual pixel processing, Q_i is expressed as Q_i = (1/|Ω|) Σ_{p∈Ω} |f_t(p) − f_s(p)|, Ω is the boundary of the target image in sub-block k after the i-th round of virtual pixel processing, p is a pixel on the boundary, f_t(p) is the color value of p in the background image, and f_s(p) is the color value of p in the target image;
Step 222: determining, from the obtained C_1, C_2, ..., C_m, the virtual pixel processing count threshold T_k of sub-block b_k using a weighted mean, k ranging from 1 to n.
4. The image fusion method according to claim 2, wherein in step 23, for a received subsequent frame of the original target image, the range threshold T for virtual pixel processing of the original target image is determined according to the following sub-steps:
Step 231: determining the numbers of the sub-blocks of the background image in which the boundary of the original target image in the subsequent frame is located;
Step 232: obtaining the virtual pixel processing count thresholds corresponding to those sub-block numbers of the background image, and taking the maximum among them as the range threshold T for virtual pixel processing of the original target image.
5. The image fusion method according to claim 2, wherein M ranges from 1000 to 2000 frames.
6. The image fusion method according to any one of claims 1 to 4, wherein step 3 comprises:
Step 31: when the distance between the boundary of the virtual-pixel-processed target image and the boundary of the background image is less than a distance threshold, intercepting pixel values of a predetermined region from the virtual-pixel-processed target image;
Step 32: copying the pixel values of the intercepted region to the corresponding position of the background image, and then performing fusion using Poisson image editing.
7. The image fusion method according to claim 6, wherein the distance threshold is 1 to 10 pixel distances.
8. The image fusion method according to any one of claims 1 to 4, wherein the boundary of the original target image is the minimum circumscribed rectangle enclosing the original target image.
9. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
10. A computer device comprising a memory and a processor, the memory storing a computer program runnable on the processor, characterized in that the processor, when executing the program, implements the steps of the method according to any one of claims 1 to 8.
CN201810358534.XA 2018-04-20 2018-04-20 Image fusion method Active CN110390657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810358534.XA CN110390657B (en) 2018-04-20 2018-04-20 Image fusion method


Publications (2)

Publication Number Publication Date
CN110390657A true CN110390657A (en) 2019-10-29
CN110390657B CN110390657B (en) 2021-10-15

Family

ID=68283561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810358534.XA Active CN110390657B (en) 2018-04-20 2018-04-20 Image fusion method

Country Status (1)

Country Link
CN (1) CN110390657B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111445408A (en) * 2020-03-25 2020-07-24 浙江大华技术股份有限公司 Method, device and storage medium for performing differentiation processing on image
CN111524100A (en) * 2020-04-09 2020-08-11 武汉精立电子技术有限公司 Defect image sample generation method and device and panel defect detection method
CN112288666A (en) * 2020-10-28 2021-01-29 维沃移动通信有限公司 Image processing method and device
CN112804505A (en) * 2020-12-31 2021-05-14 上海丹诺西诚智能科技有限公司 Projection pattern splicing method and system

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101472162A (en) * 2007-12-25 2009-07-01 北京大学 Method and device for embedding and recovering prime image from image with visible watermark
CN101770649A (en) * 2008-12-30 2010-07-07 中国科学院自动化研究所 Automatic synthesis method for facial image
CN101945223A (en) * 2010-09-06 2011-01-12 浙江大学 Video consistent fusion processing method
CN102663766A (en) * 2012-05-04 2012-09-12 云南大学 Non-photorealistic based art illustration effect drawing method
CN102903093A (en) * 2012-09-28 2013-01-30 中国航天科工集团第三研究院第八三五八研究所 Poisson image fusion method based on chain code mask
CN104717574A (en) * 2015-03-17 2015-06-17 华中科技大学 Method for fusing events in video summarization and backgrounds
CN105096287A (en) * 2015-08-11 2015-11-25 电子科技大学 Improved multi-time Poisson image fusion method
CN105608716A (en) * 2015-12-21 2016-05-25 联想(北京)有限公司 Information processing method and electronic equipment
CN106056537A (en) * 2016-05-20 2016-10-26 沈阳东软医疗***有限公司 Medical image splicing method and device
CN106530265A (en) * 2016-11-08 2017-03-22 河海大学 Adaptive image fusion method based on chromaticity coordinates
CN106846241A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method of image co-registration, device and equipment


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PATRICK PÉREZ et al.: "Poisson image editing", ACM Transactions on Graphics *
张满满: "Research on seamless image stitching technology based on the Poisson image editing method", Journal of Guangdong Polytechnic Normal University *
谌明: "Research on key technologies of image fusion and restoration processing", China Doctoral Dissertations Full-text Database, Information Science and Technology *


Also Published As

Publication number Publication date
CN110390657B (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN111062905B (en) Infrared and visible light fusion method based on saliency map enhancement
KR102003015B1 (en) Creating an intermediate view using an optical flow
JP5645842B2 (en) Image processing apparatus and method using scale space
CN101883291B (en) Method for drawing viewpoints by reinforcing interested region
CN104318569B (en) Space salient region extraction method based on depth variation model
CN111080724A (en) Infrared and visible light fusion method
CN109829930A (en) Face image processing process, device, computer equipment and readable storage medium storing program for executing
CN110381268B (en) Method, device, storage medium and electronic equipment for generating video
CN113012172A (en) AS-UNet-based medical image segmentation method and system
CN104252700B (en) A kind of histogram equalization method of infrared image
CN106709878B (en) A kind of rapid image fusion method
CN109685732A (en) A kind of depth image high-precision restorative procedure captured based on boundary
CN105761234A (en) Structure sparse representation-based remote sensing image fusion method
CN110390657A (en) A kind of image interfusion method
CN111553841B (en) Real-time video splicing method based on optimal suture line updating
CN103440662A (en) Kinect depth image acquisition method and device
CN108280804A (en) A kind of multi-frame image super-resolution reconstruction method
CN109697696B (en) Benefit blind method for panoramic video
CN114049464A (en) Reconstruction method and device of three-dimensional model
CN116681636A (en) Light infrared and visible light image fusion method based on convolutional neural network
CN114565508B (en) Virtual reloading method and device
CN113436130B (en) Intelligent sensing system and device for unstructured light field
CN107862732B (en) Real-time three-dimensional eyelid reconstruction method and device
Yang et al. Enhancing foreground boundaries for medical image segmentation
CN116958927A (en) Method and device for identifying short column based on BEV (binary image) graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant