CN106910159A - Video-splicing method and device - Google Patents

Video-splicing method and device

Info

Publication number
CN106910159A
Authority
CN
China
Prior art keywords
image
splicing seams
original image
splicing
reference picture
Prior art date
Legal status
Pending
Application number
CN201610891406.2A
Other languages
Chinese (zh)
Inventor
王玲
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201610891406.2A (filed 2016-10-12)
Publication of CN106910159A (2017-06-30)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00: Indexing scheme for image data processing or generation, in general
    • G06T2200/32: Indexing scheme for image data processing or generation, in general, involving image mosaicing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of image processing and discloses a video-splicing method and device. The video-splicing method of the invention comprises the following steps: preprocessing of the images; registration of the images; establishment of a transformation model and unification of coordinates; and fusion of the images. The fusion of the images specifically includes: establishing an optimal splicing seam in the overlapping region of the original image and the reference image; constructing a Laplacian pyramid for each of the original image and the reference image within the overlapping region; constructing a Gaussian pyramid for the image corresponding to the optimal splicing seam; performing a global weighted-average fusion of the Laplacian pyramids, using the Gaussian pyramid as the weights; and finally performing interpolation expansion to obtain the final image. The video-splicing method of the present invention corrects the gray-scale or color values of the images to be spliced near the splicing seam, so that the gray scale or color transitions smoothly across the seam, and markedly improves the splicing effect of the image.

Description

Video-splicing method and device
Technical field
The present invention relates to the field of image information processing, and in particular to a video-splicing method and device.
Background technology
Augmented reality (AR) is a technology that computes the position and orientation of the camera image in real time and overlays corresponding imagery on it; the goal of this technology is to superimpose the virtual world on the real world on a screen and to allow the two to interact. The technology was proposed around 1990. As the computing power of electronic products increases, the uses of augmented reality are expected to become ever broader. In practical AR applications, mapping real-world images onto a display screen requires more advanced video-splicing methods and places higher demands in particular on the fusion of the several images to be spliced and on illumination and color balance.
The relative position between images can be obtained by a matching algorithm, and the image pixels are mapped according to this positional relationship to obtain their coordinates in the final spliced full image. However, clearly visible stripes often appear at the borders of the original images; such a stripe is called a splicing seam, and differences in the gray scale or color of the spliced images near the borders are almost unavoidable.
Summary of the invention
In view of the shortcomings of the prior art, the present invention aims to provide a video-splicing method and device that can better eliminate splicing seams.
To achieve the above object, the present invention adopts the following technical solution:
A video-splicing method, the method comprising the following steps:
preprocessing the original image, the preprocessing including histogram matching of the original image, geometric correction, smoothing filtering and noise processing;
searching for the positions of feature points in the preprocessed original image and in the reference image, establishing a model transformation between the original image and the reference image according to the positions of the feature points, and determining the overlapping region between the preprocessed original image and the reference image;
transforming the preprocessed original image into the coordinate system of the reference image according to the correspondence of the overlapping region between the preprocessed original image and the reference image;
splicing the information of the overlapping region into one complete image according to the transformation relation between the preprocessed original image and the reference image;
wherein splicing the information of the overlapping region into one complete image according to the transformation relation between the preprocessed original image and the reference image specifically includes:
establishing an optimal splicing seam in the overlapping region of the preprocessed original image and the reference image;
constructing a Laplacian pyramid for each of the preprocessed original image and the reference image within the overlapping region;
constructing a Gaussian pyramid for the image corresponding to the optimal splicing seam in the overlapping region;
performing a global weighted-average fusion of the Laplacian pyramids, using the Gaussian pyramid as the weights;
performing interpolation expansion on the images after the global weighted-average fusion and then adding the results to obtain the final image.
As a further improvement of the present invention, the weighting strength in the global weighted-average fusion is proportional to the distance from the pixel to the original image.
As a further improvement of the present invention, establishing the optimal splicing seam in the overlapping region of the original image and the reference image specifically includes: taking each column pixel of the first row of the overlapping region as the start of one splicing seam, initializing the strength value of each splicing seam to the criterion value of its starting point, the current point of the splicing seam being the column at which it lies; comparing the criterion values of the pixels adjacent to the current point of each splicing seam, taking the pixel with the minimum strength value as the direction in which the splicing seam is extended, updating the minimum strength value of the splicing seam, and updating the current point of the splicing seam to the column of the adjacent pixel that yielded the minimum strength value; extending the computed splicing-seam strengths downwards row by row until the last row is reached; and selecting, from all the splicing seams, the seam with the minimum strength value as the optimal splicing seam.
A video-splicing device, comprising the following modules:
a preprocessing module for preprocessing the original image, specifically including histogram matching of the original image, geometric correction, smoothing filtering and noise processing;
a registration module for searching for the positions of feature points in the preprocessed original image and in the reference image, then establishing a model transformation between the preprocessed original image and the reference image, and determining the overlapping region between the preprocessed original image and the reference image;
a conversion module for transforming the preprocessed original image into the coordinate system of the reference image;
a fusion module for splicing the information of the overlapping region into one complete image according to the transformation relation between the two images;
wherein the fusion module specifically includes:
an establishing module for establishing an optimal splicing seam in the overlapping region of the preprocessed original image and the reference image;
a first constructing module for constructing a Laplacian pyramid for each of the preprocessed original image and the reference image within the overlapping region;
a second constructing module for constructing a Gaussian pyramid for the image corresponding to the optimal splicing seam in the overlapping region;
a global weighted-average module for performing a global weighted-average fusion of the Laplacian pyramids, using the Gaussian pyramid as the weights;
an interpolation expansion module for performing interpolation expansion on the images after the global weighted-average fusion and then adding the results to obtain the final image.
As a further improvement of the present invention, the weighting strength in the global weighted-average fusion is proportional to the distance from the pixel to the original image.
As a further improvement of the present invention, the establishing module is specifically configured to: take each column pixel of the first row of the overlapping region as the start of one splicing seam, initialize the strength value of each splicing seam to the criterion value of its starting point, the current point of the splicing seam being the column at which it lies; compare the criterion values of the pixels adjacent to the current point of each splicing seam, take the pixel with the minimum strength value as the direction in which the splicing seam is extended, update the minimum strength value of the splicing seam, and update the current point of the splicing seam to the column of the adjacent pixel that yielded the minimum strength value; extend the computed splicing-seam strengths downwards row by row until the last row is reached; and select, from all the splicing seams, the seam with the minimum strength value as the optimal splicing seam.
Compared with the prior art, the present invention corrects the gray-scale or color values of the images to be spliced near the splicing seam, so that the gray scale or color transitions smoothly across the seam, and markedly improves the splicing effect of the image.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be understood more clearly and carried out in accordance with the contents of the specification, and in order that the above and other objects, features and advantages of the invention may become more apparent, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a flowchart of the video-splicing method provided by an embodiment of the present invention.
Fig. 2 is a flowchart of the image fusion in the video-splicing method provided by an embodiment of the present invention.
Fig. 3 is a schematic diagram of the global weight coefficient curve in the video-splicing method provided by an embodiment of the present invention.
Fig. 4 is a schematic diagram of the pixel-value mapping curve in the video-splicing method provided by an embodiment of the present invention.
Fig. 5 is a schematic diagram of the video-splicing device provided by an embodiment of the present invention.
Fig. 6 is a schematic diagram of the fusion module in the video-splicing device provided by an embodiment of the present invention.
Specific embodiment
The present invention is described further below with reference to the accompanying drawings and specific embodiments:
Fig. 1 is a flowchart of the video-splicing method provided by an embodiment of the present invention; the method includes the following steps:
Step 110: preprocessing of the images;
Preferably, the original image is processed with the following techniques: histogram matching of the original image, geometric correction, smoothing filtering and noise processing. These operations improve the quality of the picture so that the requirements of image registration can be met, without causing mismatches that would geometrically deform the image to be spliced.
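By way of illustration, the preprocessing step could be sketched as follows, assuming OpenCV and NumPy are available. The particular operations chosen here (CLAHE as a stand-in for the histogram adjustment, Gaussian smoothing, non-local-means denoising) and the function name preprocess are illustrative assumptions; the patent does not prescribe specific algorithms, and geometric correction (e.g. lens undistortion) is omitted for brevity.

```python
import cv2

def preprocess(frame_bgr):
    """Illustrative preprocessing: histogram adjustment, smoothing, denoising."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)                                 # histogram adjustment
    gray = cv2.GaussianBlur(gray, (5, 5), 0)                 # smoothing filter
    gray = cv2.fastNlMeansDenoising(gray, None, 10, 7, 21)   # noise suppression
    return gray
```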
Step 120: registration of the images;
Preferably, the positions of feature points in the preprocessed original image and in the reference image are searched, a model transformation is established between the original image and the reference image according to the positions of the feature points, and the overlapping region between the preprocessed original image and the reference image is determined.
Specifically: two pictures taken from different angles are collected, namely the original image and the reference image, denoted I1 and I2, and two functions expressing their gray values, I1(x, y) and I2(x, y), are defined. The registration relation between the pictures is expressed as I2(x, y) = g(I1(h(x, y))), where g is a gray-value (amplitude) transformation and h is a two-dimensional geometric coordinate transformation. The relation between the position coordinates of the two images is then calculated, from which the matching coordinates of the two pictures and the transformation matrix are obtained.
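A minimal sketch of this registration step is given below, assuming OpenCV: feature points are detected and matched, and the coordinate transformation h is estimated as a homography with RANSAC. The names estimate_homography, img_src and img_ref are illustrative; the patent does not fix a particular feature detector or transformation model.

```python
import cv2
import numpy as np

def estimate_homography(img_src, img_ref, min_matches=10):
    """Estimate the 3x3 transform mapping img_src coordinates into img_ref."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_src, None)
    k2, d2 = orb.detectAndCompute(img_ref, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough feature matches for registration")
    src_pts = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    ref_pts = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src_pts, ref_pts, cv2.RANSAC, 5.0)
    return H

# Step 130 then warps the original image into the reference coordinate system:
# H = estimate_homography(img_src, img_ref)
# warped = cv2.warpPerspective(img_src, H, (img_ref.shape[1], img_ref.shape[0]))
```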
Step 130: establishing the transformation model and unifying the coordinates;
Specifically: according to the correspondence of the overlapping region between the preprocessed original image and the reference image, the preprocessed original image is transformed into the coordinate system of the reference image.
Step 140: fusion of the images;
Specifically: according to the transformation relation between the preprocessed original image and the reference image, the information of the overlapping region is spliced into one complete image.
Preferably, after the parameters of the mathematical transformation have been determined in step 130, the information of the overlapping region is spliced into one complete image according to the transformation relation between the two images. Since some matching error may remain and photometric differences may have an influence, the spliced image also needs to be blended and adjusted according to the photometric differences in order to reduce distortion in the overlapping region.
Fig. 2 is the flowchart of the image fusion in the video-splicing method of the embodiment of the present invention.
Preferably, the image fusion of step 140 specifically includes the following steps:
Step 141: establishing the optimal splicing seam;
Specifically: the optimal splicing seam is established in the overlapping region of the preprocessed original image and the reference image.
Preferably, each column pixel of the first row of the overlapping region is taken as the start of one splicing seam; the strength value of each splicing seam is initialized to the criterion value of its starting point, and the current point of the splicing seam is the column at which it lies. The criterion values of the pixels adjacent to the current point of each splicing seam are compared, the pixel with the minimum strength value is taken as the direction in which the splicing seam is extended, the minimum strength value of the splicing seam is updated, and the current point of the splicing seam is updated to the column of the adjacent pixel that yielded the minimum strength value. The computed splicing-seam strengths are extended downwards row by row until the last row is reached. Finally, the seam with the minimum strength value is selected from all the splicing seams as the optimal splicing seam.
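A sketch of this seam search is shown below, assuming NumPy. The criterion value is taken here as the squared gray-level (or summed color) difference between the two overlapped images, which is one common choice; the patent itself does not fix the criterion, and the function name find_optimal_seam is illustrative.

```python
import numpy as np

def find_optimal_seam(overlap_a, overlap_b):
    """Return the seam column for each row of the overlap region."""
    cost = (overlap_a.astype(np.float64) - overlap_b.astype(np.float64)) ** 2
    if cost.ndim == 3:                       # color images: sum over channels
        cost = cost.sum(axis=2)
    rows, cols = cost.shape
    best_path, best_strength = None, np.inf
    for start in range(cols):                # one candidate seam per column of row 0
        col, strength, path = start, cost[0, start], [start]
        for r in range(1, rows):             # extend the seam downwards row by row
            lo, hi = max(col - 1, 0), min(col + 2, cols)
            col = lo + int(np.argmin(cost[r, lo:hi]))   # neighbour with minimum criterion
            strength += cost[r, col]
            path.append(col)
        if strength < best_strength:         # keep the seam with minimum total strength
            best_strength, best_path = strength, path
    return np.array(best_path)
```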
Step 142: constructing the Laplacian pyramids;
Specifically: a Laplacian pyramid is constructed for each of the original image and the reference image within the overlapping region. An image of size (M+1) × (N+1) is expanded by interpolation to an image of size (2M+1) × (2N+1). Let G_{l,k} be the result of interpolating the image G_l k times; these images satisfy
G_{l,0}(x, y) = G_l(x, y),
G_{l,k}(x, y) = 4 · Σ_{m=-2}^{2} Σ_{n=-2}^{2} w(m, n) · G_{l,k-1}((x+m)/2, (y+n)/2),
where w(m, n) is the generating kernel and only the terms for which (x+m)/2 and (y+n)/2 are integers take part in the calculation. It can be seen that the interpolated image G_{l,1} has the same size as the low-pass image G_{l-1}, and G_{l,l} has the same size as the original image. Denote the band-pass images by L_0, L_1, ..., L_N; they are obtained by
L_l = G_l − G_{l+1,1},  0 ≤ l < N.
Since G_N is the topmost low-pass image and there is no higher layer to subtract from it, L_N = G_N is defined. After a band-pass image has been interpolated and expanded, it can be added to the band-pass image of the next lower layer, so that the original image is restored step by step.
Preferably, the topmost image L_N is interpolated and then added to the image L_{N-1}, which restores the image G_{N-1}; G_{N-1} is then expanded by interpolation and added to L_{N-2}, which restores the image G_{N-2}, and so on. Let L_{l,k} be the result of interpolating the band-pass image L_l k times; the process of restoring the original image is then
G_0 = Σ_{l=0}^{N} L_{l,l},
which is equivalent to the recursion G_N = L_N and G_l = L_l + G_{l+1,1} for 0 ≤ l < N.
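The reduce/expand operations written out above can be sketched with OpenCV's built-in pyramid functions, whose 5×5 smoothing kernel plays the role of w(m, n); the function names below are illustrative, and the images are converted to float so that the band-pass levels may hold negative values.

```python
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    """G_0 .. G_N obtained by repeated low-pass filtering and subsampling."""
    gp = [img.astype(np.float32)]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))
    return gp

def laplacian_pyramid(img, levels):
    """L_l = G_l - expand(G_{l+1}) for l < N, and L_N = G_N."""
    gp = gaussian_pyramid(img, levels)
    lp = [gp[l] - cv2.pyrUp(gp[l + 1], dstsize=(gp[l].shape[1], gp[l].shape[0]))
          for l in range(levels)]
    lp.append(gp[levels])
    return lp

def collapse(lp):
    """Restore the image: expand the top level and add the next band-pass level."""
    img = lp[-1]
    for l in range(len(lp) - 2, -1, -1):
        img = cv2.pyrUp(img, dstsize=(lp[l].shape[1], lp[l].shape[0])) + lp[l]
    return img
```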
Step 143: constructing the Gaussian pyramid;
Preferably, a Gaussian pyramid is constructed for the image corresponding to the optimal splicing seam in the overlapping region. The images obtained by successive low-pass filtering of the original image are denoted G_0, G_1, ..., G_N. G_0 represents the original image; the pixels of image G_1 are generated by the action of a template on the neighbourhoods of G_0, the pixels of image G_2 are generated by the action of the template on the neighbourhoods of G_1, and so on up to G_N. Rows and columns are sampled at intervals of one pixel, so each level is 1/4 the size of the level above it, forming a pyramid structure. Expressed as formulas:
G_0(x, y) = I(x, y),
G_l(x, y) = Σ_{m=-2}^{2} Σ_{n=-2}^{2} w(m, n) · G_{l-1}(2x+m, 2y+n),  1 ≤ l ≤ N,
where w(m, n) is the generating kernel.
Preferably, w(m, n) is separable, w(m, n) = ŵ(m)·ŵ(n), with the weights ŵ chosen so that the resulting function is similar to a Gaussian function; the set of images formed in this way is therefore also called a Gaussian pyramid.
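One way to realise the "image corresponding to the optimal splicing seam" is as a binary mask that is 1 on the original-image side of the seam and 0 on the other side; its Gaussian pyramid then provides smooth per-level blending weights. This reading, and the function names below, are assumptions consistent with the fusion step that follows.

```python
import cv2
import numpy as np

def seam_mask(shape, seam_cols):
    """Binary mask: 1 to the left of the seam column in each row, 0 to the right."""
    rows, cols = shape[:2]
    mask = np.zeros((rows, cols), dtype=np.float32)
    for r in range(rows):
        mask[r, :seam_cols[r]] = 1.0
    return mask

def mask_gaussian_pyramid(mask, levels):
    """Gaussian pyramid of the seam mask, used as blending weights."""
    gp = [mask]
    for _ in range(levels):
        gp.append(cv2.pyrDown(gp[-1]))      # each level low-pass filters the mask further
    return gp
```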
Step 144: performing the global weighted average;
The global weighted-average method takes the global effect into account more effectively: in the overlapping part, the original image is blended into the reference image according to the overall distribution of the image, and each pixel value of the splicing result is obtained by globally weighting the pixels of the original images, with the weighting strength proportional to the distance from the pixel to the original image.
Preferably, let I_1(x, y) and I_2(x, y) be the pixel values of the preprocessed original image and of the reference image at (x, y), and let I_L(x, y) be the pixel value of the splicing result at (x, y); the pixel value is then determined by
I_L(x, y) = I_1(x, y),                  (x, y) ∈ R_1,
I_L(x, y) = α·I_1(x, y) + β·I_2(x, y),  (x, y) ∈ R_12,
I_L(x, y) = I_2(x, y),                  (x, y) ∈ R_2,
where R_1 and R_2 denote the non-overlapping regions of the preprocessed original image and of the reference image respectively, and R_12 denotes the overlapping region. α and β are weight coefficients. Let x_min be the starting position and x_max the ending position of the overlap; let β_1 be the proportion of I_1(x, y) made up of pixels whose difference from the adjacent overlap pixels is non-zero, and β_2 the corresponding proportion for I_2(x, y). α and β are computed from the position of x between x_min and x_max together with β_1 and β_2, so that α decreases and β increases across the overlap.
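A minimal sketch of this distance-weighted blend is given below, assuming a horizontal overlap between columns x_min and x_max and two images already placed in a common frame; the published text does not spell out how β_1 and β_2 enter the weights, so the sketch uses the plain linear ramp only.

```python
import numpy as np

def blend_overlap(img1, img2, x_min, x_max):
    """Blend the overlap columns [x_min, x_max); img1 keeps R1, img2 keeps R2."""
    out = img2.astype(np.float32).copy()
    out[:, :x_min] = img1[:, :x_min]                   # region R1: original image only
    for x in range(x_min, x_max):                      # region R12: weighted average
        alpha = (x_max - x) / float(x_max - x_min)     # weight of the original image
        out[:, x] = alpha * img1[:, x] + (1.0 - alpha) * img2[:, x]
    return out                                         # columns >= x_max stay I2 (region R2)
```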
Fig. 3 is a schematic diagram of the global weight coefficient curve in the video-splicing method of the embodiment of the present invention. In the figure, a is the weighting coefficient and the interval D is the interval over which the weighting function acts. It can be seen that the fusion effect is improved to a certain extent; however, if an object has a slight positional difference within the splicing region, a "ghost" appears, i.e. the same object appears twice in the splicing result. In addition, the image near the splicing seam often becomes blurred and detail information is lost. Therefore, the spliced image still needs an overall adjustment in terms of illumination, color balance and so on.
In order to eliminate such "ghost" problems, illumination balancing must be applied to the two images so that their illumination is at the same level. According to the characteristics of the spliced images, the histogram relation of the two images in the overlapping region is found; for each image to be spliced, the density of pixels whose difference from the adjacent pixel values is non-zero is used as a weight, with denser regions given a higher color contrast (a square or an exponential function may be used, depending on the application); the darker image is then corrected according to this relation so that the illumination of the two images is brought to the same level.
Preferably, the adjustment of the pixel values can be expressed as a mapping of the pixel values by a function, for example I′(x, y) = f(I(x, y)).
Fig. 4 is a schematic diagram of the pixel-value mapping curve in the video-splicing method of the embodiment of the present invention. In the figure, the horizontal coordinate represents the input pixel value and the vertical coordinate the output pixel value. Under the function curve, dark parts of the image become darker and bright parts become brighter, so the contrast of the image is strengthened, most noticeably in regions with high weights. After each image to be spliced has been mapped by this function, its pixel values become approximately uniform, which achieves the purpose of illumination balancing.
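An S-shaped mapping is one plausible choice for the curve of Fig. 4: it darkens dark pixels and brightens bright ones. The sigmoid form and the parameter values below are assumptions for illustration only; the patent describes the shape of the curve, not a formula.

```python
import numpy as np

def contrast_curve(img, gain=8.0, midpoint=0.5):
    """Map an 8-bit image through an S-shaped curve I'(x, y) = f(I(x, y))."""
    x = img.astype(np.float32) / 255.0
    y = 1.0 / (1.0 + np.exp(-gain * (x - midpoint)))          # S-shaped (sigmoid) curve
    y = (y - y.min()) / max(float(y.max() - y.min()), 1e-6)   # renormalise to [0, 1]
    return (y * 255.0).astype(np.uint8)
```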
Step 145: performing interpolation expansion.
Preferably, interpolation expansion is applied to the images after the global weighting of step 144, and the results are then added to obtain the final image.
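Steps 141 to 145 can be tied together as sketched below, reusing the helpers laplacian_pyramid, collapse, seam_mask and mask_gaussian_pyramid from the earlier sketches; weighting the two Laplacian pyramids by the mask's Gaussian pyramid and then collapsing the result (repeated expansion and addition) corresponds to the global weighted-average fusion and interpolation expansion described above. Three-channel color overlap images are assumed.

```python
def fuse_overlap(overlap_a, overlap_b, seam_cols, levels=4):
    """Seam-weighted multi-band fusion of the two overlap images."""
    lp_a = laplacian_pyramid(overlap_a, levels)
    lp_b = laplacian_pyramid(overlap_b, levels)
    gp_m = mask_gaussian_pyramid(seam_mask(overlap_a.shape, seam_cols), levels)
    blended = [gp_m[l][..., None] * lp_a[l] + (1.0 - gp_m[l][..., None]) * lp_b[l]
               for l in range(levels + 1)]             # weight each band-pass level
    return collapse(blended)                           # expand and add, level by level
```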
Fig. 5 is a schematic diagram of the video-splicing device provided by an embodiment of the present invention. The device includes the following modules: a preprocessing module 210, a registration module 220, a conversion module 230 and a fusion module 240.
The preprocessing module 210 preprocesses the images, specifically including histogram matching of the original image, geometric correction, smoothing filtering and noise processing.
The registration module 220 is used to register the images, specifically including: searching for the positions of feature points in the preprocessed original image and in the reference image, then establishing a model transformation between the preprocessed original image and the reference image, and determining the overlapping region between the preprocessed original image and the reference image.
The conversion module 230 is used to establish the transformation model and unify the coordinates, specifically including: transforming the preprocessed original image into the coordinate system of the reference image according to the correspondence of the overlapping region between the preprocessed original image and the reference image.
The fusion module 240 is used for the fusion of the images, specifically including: splicing the information of the overlapping region into one complete image according to the transformation relation between the preprocessed original image and the reference image.
Fig. 6 is a schematic diagram of the fusion module in the video-splicing device provided by an embodiment of the present invention. The fusion module 240 specifically includes: an establishing module 241, a first constructing module 242, a second constructing module 243, a global weighted-average module 244 and an interpolation expansion module 245.
The establishing module 241 is used to establish the optimal splicing seam, specifically: establishing the optimal splicing seam in the overlapping region of the preprocessed original image and the reference image.
Preferably, each column pixel of the first row of the overlapping region is taken as the start of one splicing seam; the strength value of each splicing seam is initialized to the criterion value of its starting point, and the current point of the splicing seam is the column at which it lies. The criterion values of the pixels adjacent to the current point of each splicing seam are compared, the pixel with the minimum strength value is taken as the direction in which the splicing seam is extended, the minimum strength value of the splicing seam is updated, and the current point of the splicing seam is updated to the column of the adjacent pixel that yielded the minimum strength value. The computed splicing-seam strengths are extended downwards row by row until the last row is reached. Finally, the seam with the minimum strength value is selected from all the splicing seams as the optimal splicing seam.
The first constructing module 242 constructs a Laplacian pyramid for each of the preprocessed original image and the reference image within the overlapping region.
The second constructing module 243 is used to construct a Gaussian pyramid for the image corresponding to the optimal splicing seam in the overlapping region.
The global weighted-average module 244 is used to perform a global weighted-average fusion of the Laplacian pyramids, using the Gaussian pyramid as the weights, specifically including: in the overlapping part, blending the original image into the reference image according to the overall distribution of the image, each pixel value of the splicing result being obtained by globally weighting the pixels of the original images, with the weighting strength proportional to the distance from the pixel to the original image.
The interpolation expansion module 245 is used to perform interpolation expansion, specifically: applying interpolation expansion to the images after the global weighted-average fusion and then adding the results to obtain the final image.
The present invention corrects the gray-scale or color values of the images to be spliced near the splicing seam, so that the gray scale or color transitions smoothly across the seam, and markedly improves the splicing effect of the image.
For those skilled in the art, various corresponding changes and variations can be made in accordance with the technical solution and concept described above, and all such changes and variations shall fall within the scope of protection of the claims of the present invention.

Claims (6)

1. A video-splicing method, characterised in that it comprises the following steps:
preprocessing the original image, the preprocessing including histogram matching of the original image, geometric correction, smoothing filtering and noise processing;
searching for the positions of feature points in the preprocessed original image and in the reference image, establishing a model transformation between the original image and the reference image according to the positions of the feature points, and determining the overlapping region between the preprocessed original image and the reference image;
transforming the preprocessed original image into the coordinate system of the reference image according to the correspondence of the overlapping region between the preprocessed original image and the reference image;
splicing the information of the overlapping region into one complete image according to the transformation relation between the preprocessed original image and the reference image;
wherein splicing the information of the overlapping region into one complete image according to the transformation relation between the preprocessed original image and the reference image specifically includes:
establishing an optimal splicing seam in the overlapping region of the preprocessed original image and the reference image;
constructing a Laplacian pyramid for each of the preprocessed original image and the reference image within the overlapping region;
constructing a Gaussian pyramid for the image corresponding to the optimal splicing seam in the overlapping region;
performing a global weighted-average fusion of the Laplacian pyramids, using the Gaussian pyramid as the weights;
performing interpolation expansion on the images after the global weighted-average fusion and then adding the results to obtain the final image.
2. The video-splicing method as claimed in claim 1, characterised in that the weighting strength in the global weighted-average fusion is proportional to the distance from the pixel to the original image.
3. The video-splicing method as claimed in claim 1, characterised in that establishing the optimal splicing seam in the overlapping region of the original image and the reference image specifically includes: taking each column pixel of the first row of the overlapping region as the start of one splicing seam, initializing the strength value of each splicing seam to the criterion value of its starting point, the current point of the splicing seam being the column at which it lies; comparing the criterion values of the pixels adjacent to the current point of each splicing seam, taking the pixel with the minimum strength value as the direction in which the splicing seam is extended, updating the minimum strength value of the splicing seam, and updating the current point of the splicing seam to the column of the adjacent pixel that yielded the minimum strength value; extending the computed splicing-seam strengths downwards row by row until the last row is reached; and selecting, from all the splicing seams, the seam with the minimum strength value as the optimal splicing seam.
4. A video-splicing device, characterised in that it comprises the following modules:
a preprocessing module for preprocessing the original image, specifically including histogram matching of the original image, geometric correction, smoothing filtering and noise processing;
a registration module for searching for the positions of feature points in the preprocessed original image and in the reference image, then establishing a model transformation between the preprocessed original image and the reference image, and determining the overlapping region between the preprocessed original image and the reference image;
a conversion module for transforming the preprocessed original image into the coordinate system of the reference image;
a fusion module for splicing the information of the overlapping region into one complete image according to the transformation relation between the two images;
wherein the fusion module specifically includes:
an establishing module for establishing an optimal splicing seam in the overlapping region of the preprocessed original image and the reference image;
a first constructing module for constructing a Laplacian pyramid for each of the preprocessed original image and the reference image within the overlapping region;
a second constructing module for constructing a Gaussian pyramid for the image corresponding to the optimal splicing seam in the overlapping region;
a global weighted-average module for performing a global weighted-average fusion of the Laplacian pyramids, using the Gaussian pyramid as the weights;
an interpolation expansion module for performing interpolation expansion on the images after the global weighted-average fusion and then adding the results to obtain the final image.
5. The video-splicing device as claimed in claim 4, characterised in that the weighting strength in the global weighted-average fusion is proportional to the distance from the pixel to the original image.
6. The video-splicing device as claimed in claim 4, characterised in that the establishing module is specifically configured to: take each column pixel of the first row of the overlapping region as the start of one splicing seam, initialize the strength value of each splicing seam to the criterion value of its starting point, the current point of the splicing seam being the column at which it lies; compare the criterion values of the pixels adjacent to the current point of each splicing seam, take the pixel with the minimum strength value as the direction in which the splicing seam is extended, update the minimum strength value of the splicing seam, and update the current point of the splicing seam to the column of the adjacent pixel that yielded the minimum strength value; extend the computed splicing-seam strengths downwards row by row until the last row is reached; and select, from all the splicing seams, the seam with the minimum strength value as the optimal splicing seam.
CN201610891406.2A 2016-10-12 2016-10-12 Video-splicing method and device Pending CN106910159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610891406.2A CN106910159A (en) 2016-10-12 2016-10-12 Video-splicing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610891406.2A CN106910159A (en) 2016-10-12 2016-10-12 Video-splicing method and device

Publications (1)

Publication Number Publication Date
CN106910159A true CN106910159A (en) 2017-06-30

Family

ID=59207388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610891406.2A Pending CN106910159A (en) 2016-10-12 2016-10-12 Video-splicing method and device

Country Status (1)

Country Link
CN (1) CN106910159A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160267695A1 (en) * 2015-03-13 2016-09-15 Trimble Navigation Limited Acceleration of exposure fusion with pixel shaders
CN105631811A (en) * 2016-02-25 2016-06-01 科盾科技股份有限公司 Image stitching method and device
CN105915804A (en) * 2016-06-16 2016-08-31 恒业智能信息技术(深圳)有限公司 Video stitching method and system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Li Ou et al., "Expert Techniques of Three-Dimensional Panoramic Photography", 30 September 2009, China Computer News Electronic Audio-Video Press *
Qu Zhong et al., "A fast stitching algorithm for eliminating image splicing seams and ghosting", Computer Science *
Miao Qiguang et al., "Multi-Sensor Image Fusion Technology and Applications", 30 April 2014, Xidian University Press *
Chu Bin, "Research on parallel mosaicking technology for large-scale remote sensing images based on task-tree scheduling", China Master's Theses Full-text Database, Information Science and Technology *
Zhao Yili et al., "Multi-focus image capture and fusion for macro photography", Journal of Image and Graphics *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509146A (en) * 2017-09-15 2019-03-22 腾讯科技(深圳)有限公司 Image split-joint method and device, storage medium
CN109509146B (en) * 2017-09-15 2023-03-24 腾讯科技(深圳)有限公司 Image splicing method and device and storage medium
CN109034038A (en) * 2018-07-19 2018-12-18 东华大学 A kind of fire identification device based on multi-feature fusion
CN109034038B (en) * 2018-07-19 2021-05-04 东华大学 Fire identification device based on multi-feature fusion
CN109847306A (en) * 2019-01-11 2019-06-07 衡阳师范学院 Training shuttlecock pace detection method and system based on image operation
CN110097504A (en) * 2019-05-13 2019-08-06 招商局重庆交通科研设计院有限公司 A kind of image vision acquisition system for tunnel crusing robot
CN111179173A (en) * 2019-12-26 2020-05-19 福州大学 Image splicing method based on discrete wavelet transform and gradient fusion algorithm
CN111292267A (en) * 2020-02-04 2020-06-16 北京锐影医疗技术有限公司 Image subjective visual effect enhancement method based on Laplacian pyramid
CN113723500A (en) * 2021-08-27 2021-11-30 四川启睿克科技有限公司 Image data expansion method based on feature similarity and linear smoothing combination
CN113723500B (en) * 2021-08-27 2023-06-16 四川启睿克科技有限公司 Image data expansion method based on combination of feature similarity and linear smoothing

Similar Documents

Publication Publication Date Title
CN106910159A (en) Video-splicing method and device
CN111080724B (en) Fusion method of infrared light and visible light
Jian et al. Multi-scale image fusion through rolling guidance filter
CN109754377B (en) Multi-exposure image fusion method
CN105453134A (en) A method and apparatus for dynamic range enhancement of an image
CN110211043A (en) A kind of method for registering based on grid optimization for Panorama Mosaic
US10810707B2 (en) Depth-of-field blur effects generating techniques
Wang et al. Variational single nighttime image haze removal with a gray haze-line prior
JP2012511760A (en) Image processing apparatus and method using scale space
CN108416754A (en) A kind of more exposure image fusion methods automatically removing ghost
CN108053363A (en) Background blurring processing method, device and equipment
CN108876753A (en) Optional enhancing is carried out using navigational figure pairing growth exposure image
CN108305232B (en) A kind of single frames high dynamic range images generation method
CN106815869A (en) The photocentre of fisheye camera determines method and device
CN109472752B (en) Multi-exposure fusion system based on aerial images
US8488899B2 (en) Image processing apparatus, method and recording medium
WO2011031331A1 (en) Interactive tone mapping for high dynamic range video
CN112712485A (en) Image fusion method and device
CN107767339A (en) A kind of binocular stereo image joining method
CN104751407A (en) Method and device used for blurring image
CN109711268A (en) A kind of facial image screening technique and equipment
CN104751406A (en) Method and device used for blurring image
CN107516302A (en) A kind of method of the mixed image enhancing based on OpenCV
CN106327437A (en) Color text image correction method and system
CN109934787B (en) Image splicing method based on high dynamic range

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20170630)