CN106791623A - Panoramic video stitching method and device - Google Patents

Panoramic video stitching method and device

Info

Publication number
CN106791623A
CN106791623A (application CN201611127613.7A)
Authority
CN
China
Prior art keywords
segment
video
overlay region
default
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611127613.7A
Other languages
Chinese (zh)
Inventor
董康 (Dong Kang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Original Assignee
SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd filed Critical SHENZHEN YUNZHOU MULTIMEDIA TECHNOLOGY Co Ltd
Priority to CN201611127613.7A priority Critical patent/CN106791623A/en
Publication of CN106791623A publication Critical patent/CN106791623A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179Video signal processing therefor

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An embodiment of the invention discloses a panoramic video stitching method and device. The method includes: acquiring multiple channels of video source data to be stitched; projecting, in parallel and according to preset projection parameters, the video source images corresponding to the same moment onto a preset panoramic projection surface, to obtain a panoramic projection video image composed of multiple segments; performing, in parallel and according to preset fusion parameters, strip-type fusion along the preset seam lines located in the segment overlap regions, and outputting the fused panoramic video image. By adopting the above technical solution, the embodiment of the invention balances the stitching efficiency and image quality of panoramic video, ensuring that the output panoramic video has both low latency and good image quality, making it suitable for broadcast-grade live streaming.

Description

Panoramic video stitching method and device
Technical field
Embodiments of the present invention relate to the technical field of video processing, and in particular to a panoramic video stitching method and device.
Background art
With the rapid development of video processing technology, the emergence of panoramic video has brought people a brand-new visual experience and satisfied the demand for scene information over a wider visual range. At present, panoramic imaging systems fall broadly into two categories: single-camera imaging systems and multi-camera imaging systems. In a single-camera imaging system the panoramic image requires no stitching, but its resolution and definition are relatively low, so its scope of application is limited. A multi-camera imaging system typically forms a panoramic video by stitching together video images of different orientations captured by multiple cameras; compared with a single-camera imaging system it offers higher resolution and definition and a wider scope of application.
At present, panoramic video is applied in scenarios such as surveillance and live streaming, which place high demands on its real-time performance and definition. In existing panoramic video stitching schemes, however, the volume of data to be processed is large and the algorithms are complex; to guarantee acceptable real-time performance, definition is often sacrificed, and the image quality of the panoramic video is poor.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a panoramic video stitching method and device that balance the stitching efficiency and image quality of panoramic video.
In one aspect, an embodiment of the invention provides a panoramic video stitching method, including:
acquiring multiple channels of video source data to be stitched;
projecting, in parallel and according to preset projection parameters, the video source images corresponding to the same moment onto a preset panoramic projection surface, to obtain a panoramic projection video image composed of multiple segments, where each segment corresponds to one video source image, and the overlapping part between every two segments constitutes a segment overlap region;
performing, in parallel and according to preset fusion parameters, strip-type fusion along the preset seam lines located in the segment overlap regions, and outputting the fused panoramic video image.
In another aspect, an embodiment of the invention provides a panoramic video stitching device, including:
a video source data acquisition module, configured to acquire multiple channels of video source data to be stitched;
a projection module, configured to project, in parallel and according to preset projection parameters, the video source images corresponding to the same moment onto a preset panoramic projection surface, to obtain a panoramic projection video image composed of multiple segments, where each segment corresponds to one video source image, and the overlapping part between every two segments constitutes a segment overlap region;
a fusion module, configured to perform, in parallel and according to preset fusion parameters, strip-type fusion along the preset seam lines located in the segment overlap regions;
a panoramic video output module, configured to output the fused panoramic video image.
In the panoramic video stitching scheme provided by the embodiments of the invention, after the multiple channels of video source data to be stitched are acquired, the video source images used to stitch the panoramic video image are projected, fused, and otherwise processed in parallel, which guarantees stitching efficiency and enhances the real-time performance of the panoramic video output. In addition, performing the relevant processing with a precomputed projection parameter set, panoramic projection surface, fusion parameters, and seam lines reduces the amount of computation during video stitching and further improves stitching efficiency; at the same time, because these contents are precomputed, high image definition is guaranteed. The technical solution provided by the embodiments of the invention can therefore balance the stitching efficiency and image quality of panoramic video, ensuring that the output panoramic video has both low latency and good image quality, and is suitable for broadcast-grade live streaming.
Brief description of the drawings
Fig. 1 is a flow diagram of a panoramic video stitching method provided by Embodiment 1 of the present invention;
Fig. 2 is a flow diagram of a panoramic video stitching method provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic diagram of an optimal seam line provided by Embodiment 2 of the present invention;
Fig. 4 is a schematic diagram of a seam band provided by Embodiment 2 of the present invention;
Fig. 5 is a schematic diagram of a seam processing effect provided by Embodiment 2 of the present invention;
Fig. 6 is a flow diagram of a panoramic video stitching method provided by Embodiment 3 of the present invention;
Fig. 7 is a schematic diagram of the curve corresponding to the translation parameter provided by Embodiment 3 of the present invention;
Fig. 8 is a structural block diagram of a panoramic video stitching device provided by Embodiment 4 of the present invention.
Detailed description of the embodiments
The technical solutions of the present invention are further described below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it. It should further be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
It should be mentioned that, before the exemplary embodiments are discussed in greater detail, some of them are described as processes depicted as flowcharts. Although a flowchart describes the steps as a sequential process, many of the steps may be performed in parallel, concurrently, or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, and may also have additional steps not included in the drawings. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, and so on.
Embodiment 1
Fig. 1 is a flow diagram of a panoramic video stitching method provided by Embodiment 1 of the present invention. The method may be performed by a panoramic video stitching device, which may be implemented in software and/or hardware and is typically integrated in video stitching equipment. As shown in Fig. 1, the method includes the following steps:
Step 110: acquire multiple channels of video source data to be stitched.
Illustratively, the video stitching equipment in this embodiment may specifically be a terminal such as a computer, containing a central processing unit (CPU) and a graphics processing unit (GPU, also known as a graphics card). Multiple buffers are provided at the CPU side and the GPU side to support the parallel processing involved in the embodiments of the invention. In addition, a result buffer may be provided at each of the CPU side and the GPU side for storing panoramic video images.
Preferably, the embodiments of the invention are implemented on the basis of CUDA (Compute Unified Device Architecture) or OpenCL (implemented on Intel/AMD graphics cards). CUDA is a parallel computing architecture proposed by NVIDIA; it is GPU-based, executes concurrently at high speed on the GPU, and greatly increases the running speed of program algorithms. A CUDA or OpenCL programming environment mainly consists of a CPU part and a GPU part: the CPU acts as the host (Host side) and the GPU as the device (Device side). The CPU and GPU communicate data over a dedicated channel; the CPU is responsible for handling logic and controlling serialized computation, while the GPU executes large-scale parallelized processing tasks. It should be understood that the embodiments of the invention can also be implemented on other kinds of parallel computing frameworks and are not limited to CUDA or OpenCL.
The description below uses CUDA functions and concepts. Briefly, the correspondence between the two frameworks is as follows: a CUDA stream corresponds to an OpenCL command queue (command_queue); a CUDA grid and block correspond respectively to an OpenCL work group and work item. CUDA's copy functions split into the synchronous cudaMemcpy and the asynchronous cudaMemcpyAsync, while OpenCL's copy functions clEnqueueWriteBuffer and clEnqueueReadBuffer simply take a bool parameter that decides whether to block: blocking is synchronous, non-blocking is asynchronous. The algorithm of the invention does not rely on functions specific to CUDA or OpenCL.
Illustratively, the multiple channels of video source data may be obtained by multiple video capture devices (such as cameras with capture cards) performing video acquisition; each video capture device can send the video source data it collects to the video stitching equipment in real time. To guarantee the definition of the panoramic video, the video source data is preferably high-definition, such as at least a 2K (2048x1536) HD video source. In general, each video capture device corresponds to one channel of video source, and the number of video capture devices is typically 3 to 10.
Further, among the video source images corresponding to the same moment collected by the video capture devices, some video frames may be unchanged compared with the previous moment. To reduce the amount of transmitted data, this step preferably includes: acquiring the channels of video source data that have had a frame update within a preset time interval compared with the previous moment. The preset time interval can be determined according to actual requirements; for example, it can be implemented with a timer (Timer), with the preset time interval corresponding to the timer's interval. When the Timer expires, the data frames updated within the past preset time interval are passed to the GPU, and the projection operation and seam processing of each frame are started. Here the GPU's asynchronous copy function is called to place the operations initiated for the GPU into the GPU's operation queue; the time T at which they were enqueued is recorded, and at the same time an element is added to the stitching result queue Q to indicate that a new data frame is available.
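The timer-driven update collection described above can be sketched in plain Python. This is only an illustrative analogy of the scheme, not the patent's implementation; the function name `collect_updates` and the ticket structure are invented here, and the GPU transfer is omitted.

```python
import queue
import time

def collect_updates(sources, last_frames, q):
    """On each timer tick, forward only the channels whose frame changed
    since the previous tick, and push a ticket onto the stitching result
    queue Q to signal that a new panoramic frame is pending."""
    updated = {name: frame for name, frame in sources.items()
               if last_frames.get(name) != frame}
    if updated:
        q.put({"enqueue_time": time.monotonic(),   # the recorded time T
               "channels": sorted(updated)})
        last_frames.update(updated)
    return updated
```

A second tick with unchanged frames would enqueue nothing, which is exactly the data-volume saving the text aims at.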
Step 120: project, in parallel and according to the preset projection parameters, the video source images corresponding to the same moment onto the preset panoramic projection surface, to obtain a panoramic projection video image composed of multiple segments.
Here, each segment corresponds to one video source image, and the overlapping part between every two segments constitutes a segment overlap region.
Illustratively, the parallel mode in this embodiment may specifically include processing multiple video source images in parallel, for example processing them with different functions, each function handling at least one video source image; it may also include processing multiple pixels of the same video source image in parallel, for example processing them with different threads, each thread handling at least one pixel. Preferably, to improve stitching efficiency, the parallel mode in this embodiment includes both: different video source images are processed in parallel by different functions, and different pixels within the same video source image are processed in parallel by different threads. Specifically, this can be realized using the asynchronous operation characteristics of the GPU. More specifically, because data transfer and the projection operation are executed with CUDA's asynchronous functions, multiple buffers can be opened in the GPU for each channel of video, with one stream per channel; the video source data is copied from host memory to GPU memory on that stream with cudaMemcpyAsync, and the GPU kernel is launched as reproj<<<grids,blks,0,stream>>>(parameters). At the same time this operation is recorded as an event, ev_stream=cudaEventRecord(stream). The event is used in the function the user calls to fetch a panoramic video frame: simply put, the caller waits until the operations up to the time point of ev_stream have completed, which achieves synchronization. Essentially, Peripheral Component Interconnect (PCI) bus data transfer is serial, but different streams can execute in parallel, and within a stream the kernel processes each pixel in parallel. With this asynchronous approach, the projection operation can be completed while data is being transferred, because the projection operation sits in the queue and the user does not need to call a CUDA function to initiate it; over long-term use this time difference is significant. Once the projection operation of each channel's video source image has been performed, the result can be written into the GPU-side stitching buffer G_S_BUF. Meanwhile, when the multiple channels of video source data to be stitched include only the channels that have had a frame update within the preset time interval, i.e., only video frames with data updates are processed, the combination of multiple video frame buffers and a single projection buffer (namely the panoramic video frame buffer) achieves minimal data processing.
Illustratively, in a specific implementation, the configuration (such as focal length), number, and relative positions of the video capture devices are fixed. A panoramic image can therefore be stitched in advance from this information and captured video screenshots, and the projection plane, projection parameters, optimal seam line, and fusion parameters of that stitching process recorded, yielding the preset panoramic projection surface, preset projection parameters, preset seam line, and preset fusion parameters. When video stitching is then performed, these preset parameters are used directly, which effectively reduces the amount of computation and further improves stitching efficiency; at the same time, because these contents are precomputed, high image definition is guaranteed. It should be understood that there are many methods for stitching several pictures into a panoramic image: for example, on a PC (personal computer), the paid software PTGui or the open-source Hugin can achieve seamless stitching. This embodiment does not limit the specific stitching method; it only requires that the preset panoramic projection surface, preset projection parameters, preset seam line, and preset fusion parameters be obtained by some stitching method. It should be noted that, to ensure a good transition effect at the stitch, strip-type fusion is applied to the seam line in this embodiment.
In this step, the video source images corresponding to the same moment are projected onto the preset panoramic projection surface according to the preset projection parameters, to obtain a panoramic projection video image composed of multiple segments. The value of each pixel of the panoramic projection video image comes from a determined position (e.g., by interpolation near a determined pixel) of the video source image collected by a determined video capture device. That is, each pixel (dx, dy) on the panoramic projection video image corresponds to a position (sx, sy) in a video source image; this correspondence is contained in the preset projection parameters, where (dx, dy) is an integer coordinate and (sx, sy) is a floating-point coordinate used for bilinear interpolation. Computing, with the GPU's parallel computing power, the value of every pixel on the panoramic projection video image by interpolation at the determined position of the determined video source image is the key factor that lets the GPU achieve a high real-time frame rate (e.g., above 25 fps).
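The bilinear interpolation mentioned above, sampling a source image at the floating-point coordinate (sx, sy) that an integer panorama pixel maps to, can be sketched as follows. This is a minimal single-channel version for illustration; the function name is invented here.

```python
def bilinear_sample(img, sx, sy):
    """Sample image `img` (list of rows of grayscale values) at the
    floating-point source coordinate (sx, sy), as when mapping an integer
    panorama pixel (dx, dy) to its source position via the preset
    projection parameters."""
    x0, y0 = int(sx), int(sy)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = sx - x0, sy - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx    # blend along x, row y0
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx    # blend along x, row y1
    return top * (1 - fy) + bot * fy                   # blend along y
```

On the GPU, one thread would evaluate this per panorama pixel, which is why the mapping being precomputed matters for the 25 fps target.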
Step 130: perform, in parallel and according to the preset fusion parameters, strip-type fusion along the preset seam lines located in the segment overlap regions, and output the fused panoramic video image.
Illustratively, this step still performs the fusion in parallel and with preset parameters, to further guarantee stitching efficiency and quality.
Preferably, the fusion of the seam lines must wait for the current panoramic projection video image to be completed, and the number of pixels involved in the fusion is small, so it can be placed in the output operation invoked by the R_Thread thread, i.e., the seam processing is started before the data is read. The R_Thread thread tries to obtain an available data frame (i.e., a panoramic projection video image) from queue Q, then waits for that data frame to finish being processed by the GPU: calling the GPU-provided function F waits until the operations before time T have all completed, after which the GPU's non-asynchronous copy function copies the result from G_S_BUF to the CPU-side stitching result buffer C_S_BUF. In general, when no data frame is available, R_Thread is blocked. Once the fusion is complete, the panoramic video image can be output.
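The blocking consumer role of R_Thread can be sketched with a standard-library queue. This is a hypothetical analogy only: `output_worker` and the sentinel-based shutdown are invented here, and `copy_to_host` stands in for the non-asynchronous GPU copy into C_S_BUF.

```python
import queue
import threading

def output_worker(q, results, copy_to_host):
    """Hypothetical analogue of R_Thread: block until a stitched frame is
    announced on Q, then copy it to the CPU-side result buffer."""
    while True:
        ticket = q.get()          # blocks while no data frame is available
        if ticket is None:        # sentinel used here to stop the thread
            break
        results.append(copy_to_host(ticket))
```

The blocking `q.get()` mirrors the statement that R_Thread is blocked when no data frame is available.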
In the panoramic video stitching method provided by this embodiment of the invention, after the multiple channels of video source data to be stitched are acquired, the video source images used to stitch the panoramic video image are projected, fused, and otherwise processed in parallel, which guarantees stitching efficiency and enhances the real-time performance of the panoramic video output. In addition, performing the relevant processing with a precomputed projection parameter set, panoramic projection surface, fusion parameters, and seam lines reduces the amount of computation during video stitching and further improves stitching efficiency; at the same time, because these contents are precomputed, high image definition is guaranteed. The technical solution provided by the embodiments of the invention can therefore balance the stitching efficiency and image quality of panoramic video, ensuring that the output panoramic video has both low latency and good image quality, and is suitable for broadcast-grade live streaming.
Embodiment 2
Fig. 2 is a flow diagram of a panoramic video stitching method provided by Embodiment 2 of the present invention. This embodiment is optimized on the basis of the above embodiment. In this embodiment, before the step of acquiring multiple channels of video source data to be stitched, the method further includes: obtaining device information of the multiple video capture devices and the video screenshots, corresponding to the same moment, collected by the multiple video capture devices; and stitching the acquired video screenshots into a panoramic image according to the device information, and recording the projection plane, projection parameters, optimal seam line, and fusion parameters of the stitching process, to obtain the preset panoramic projection surface, preset projection parameters, preset seam line, and preset fusion parameters.
Further, the step of stitching the acquired video screenshots into a panoramic image according to the device information is further optimized.
Accordingly, the method of this embodiment includes the following steps:
Step 210: obtain device information of the multiple video capture devices and the video screenshots, corresponding to the same moment, collected by the multiple video capture devices.
The device information includes focal length and device position.
Step 220: stitch the acquired video screenshots into a panoramic image according to the device information, and record the projection plane, projection parameters, optimal seam line, and fusion parameters of the stitching process, to obtain the preset panoramic projection surface, preset projection parameters, preset seam line, and preset fusion parameters.
Preferably, this step may specifically include: performing feature point matching on the acquired video screenshots; projecting the acquired video screenshots into the same coordinate system and unrolling them according to the device information and the feature point matching result, to obtain a panoramic projection image composed of multiple segments, where each segment corresponds to one video screenshot, and the overlapping part between every two segments constitutes a segment overlap region; computing the optimal seam line between every two segments using the max-flow min-cut theorem, with the gradient values of the segment overlap region as the weights of the pixels in the overlap region; and performing strip-type fusion on the optimal seam line.
Illustratively, feature point matching may use algorithms such as SIFT/SURF combined with the RANSAC algorithm to remove mismatched points, and then compute the homography matrices between the cameras, i.e., the conversion parameters between the three-dimensional coordinate systems centered on each camera. The video screenshot of each camera is projected into the coordinate system of one of the cameras and then unrolled into a single picture according to a cylindrical projection, i.e., the panoramic projection image. This panoramic projection image is not unique: it can be anchored to a particular camera and can also be offset up or down. Once this panoramic projection image is determined, the position (fx, fy) onto which each camera pixel projects on the panorama is determined, and interpolation is performed at this floating-point coordinate. The output panoramic video image of 4096x2048 has more than 8 million pixels, and the GPU's parallel computing power can achieve a frame rate above 25 fps. The data volume of each video frame is not small: taking YUYV as an example, 2048 x 1536 x 2 bytes = 6 MB, so 4 cameras amount to 24 MB, and in the worst case 24 MB is passed to the GPU each time. This is not a problem, however, because the PCI-X bus of a current mid-range desktop computer already reaches a bus data throughput of 533 Mb/s, i.e., 66.625 MB/s, at the lowest clock frequency of 66 MHz. Data transfer is not a bottleneck. Moreover, in most cases not every video frame of a live scene is updated continuously, so for each panoramic video frame only the updated video frames need to be transmitted and computed.
After the video screenshots are projected into the coordinate system of the panorama, since there are overlap regions between the cameras (necessary for feature point matching), the segments obtained after projection also have overlap regions. When deciding which pixels of an overlap region to keep, a graph-cut method is needed to route around large objects and avoid visually obvious misalignment. The misalignment here is caused by calculation error, which comes from the estimation of the focal length and camera distortion parameters and from the optical accuracy of the camera device itself, and cannot be completely eliminated. In this embodiment an elaborate graph-cut algorithm is used to search for the optimal seam line, and the seam line is fused hierarchically, to solve this problem.
In this embodiment, with the gradient values of the segment overlap region as the pixel weights, the optimal seam line is converted, using the max-flow min-cut theorem of graph theory, into a minimum-cut problem on a graph: the seam line found is the minimum cut line of the graph. The point of using the gradient value as the weight is that the edges of large objects have large gradient values and are thus effectively marked as object edge positions; the algorithm can thereby avoid passing through large objects, and the seam position settles in flat regions such as the ground, a lawn, or a wall, where a misalignment at the seam is hard to notice. Fig. 3 is a schematic diagram of an optimal seam line provided by Embodiment 2 of the present invention; as shown in Fig. 3, the optimal seam line 31 goes around the green signboard 32 (color not shown in the figure), and the specific seam position lies in a flat grayscale region.
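The patent formulates the seam as a min-cut over gradient-magnitude weights. A full max-flow solver is beyond a short sketch, so the following substitutes a simpler dynamic-programming seam (seam-carving style) over the same weights to illustrate the idea of avoiding high-gradient object edges; the function name and the one-column-shift constraint are simplifications introduced here, not the patent's algorithm.

```python
def min_gradient_seam(weights):
    """Per-row column of a top-to-bottom seam minimizing the accumulated
    gradient-magnitude weight (the column may shift by at most 1 per row).
    A DP simplification of the min-cut formulation described in the text."""
    rows, cols = len(weights), len(weights[0])
    cost = [list(weights[0])]
    for r in range(1, rows):
        prev = cost[-1]
        cost.append([weights[r][c] + min(prev[max(c - 1, 0):min(c + 2, cols)])
                     for c in range(cols)])
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(c - 1, 0), min(c + 2, cols)
        seam.append(min(range(lo, hi), key=lambda cc: cost[r][cc]))
    seam.reverse()
    return seam
```

With large-gradient columns marking object edges, the cheapest seam naturally runs through the flat (low-gradient) region, which is the behavior Fig. 3 illustrates.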
Preferably, in this embodiment the strip-type fusion of the optimal seam line may include: widening the optimal seam line according to a preset pixel width to obtain a seam band; and fusing the seam band using a multilayer Laplacian pyramid. The preset pixel width can be configured according to actual requirements. Widening the seam line by the specified pixel width yields a band, and the fusion operation is done within this region. The pixels in this band are removed from the projection operation: their values are still computed from the preset projection parameters, but they are not added into the panoramic projection video image, which avoids the flicker that the parallelism would otherwise bring.
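Widening the seam into a band can be sketched as building a boolean mask around the seam's per-row column. This is a minimal illustration under the assumption of one seam column per row; the function name and half-width parameterization are invented here.

```python
def seam_band_mask(seam, cols, half_width):
    """Widen a seam (one column index per row) into a band mask:
    mask[r][c] is True when pixel (r, c) lies within `half_width` columns
    of the seam - the region in which the fusion operation is performed."""
    return [[abs(c - sc) <= half_width for c in range(cols)]
            for sc in seam]
```

Pixels where the mask is True would be handled by the fusion path rather than the plain projection path, matching the text's separation of the band from the projection output.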
Fig. 4 is a schematic diagram of a seam band provided by Embodiment 2 of the present invention; it shows the stitching of images acquired by 7 video capture devices. The gray value in the figure represents the distance from the seam line: the smaller the value, the closer to the seam. The usual processing in existing schemes is to use this pixel value as the weight for the fusion, P = P1*wei/255.0 + P2*(1.0 - wei/255.0); but such weighting is a low-pass filter, loses high-frequency information, and blurs the band.
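The conventional distance-weighted blend criticized above is just the formula from the text expressed directly; the sketch below is only to make its behavior concrete.

```python
def linear_blend(p1, p2, wei):
    """The conventional distance-weighted blend from the text:
    P = P1*wei/255.0 + P2*(1.0 - wei/255.0).
    Smooth, but low-pass in nature: it averages the two sources and
    therefore blurs high-frequency detail inside the band."""
    return p1 * wei / 255.0 + p2 * (1.0 - wei / 255.0)
```

At wei = 255 only P1 survives, at wei = 0 only P2, and in between the averaging is exactly the loss of high frequencies that motivates the pyramid fusion below.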
In this embodiment, the seam band is fused with an improved Laplacian pyramid, which preserves GPU real-time performance. The essence of pyramid fusion is to first decompose the two images into different frequency bands by band-pass filtering (letting a certain range of frequencies pass while filtering out the others), perform the fusion operation on each band with an appropriate operator, and then restore the image by pyramid reconstruction. Considering that a high frame rate must be guaranteed, the conventional Laplacian pyramid approach of decomposing the entire image is not suitable here; instead, the gradient magnitude of each pixel in each layer (the gradient weighted over a specified neighborhood) is used as the fusion criterion (for corresponding pixels P1 and P2, if gradient G1 > G2 the value of P1 is retained, otherwise the value of P2 is retained). The following optimizations have therefore been made to the existing Laplacian pyramid fusion scheme in this embodiment:
Only the pixels near the seam line are fused. A multilayer Laplacian pyramid is used (four layers, as preferred below). In the first layer, the two sides of the seam are exactly 0 and 255, with no fusion. In the second layer, 1-3 pixels (in width) at the seam position have values that are neither 0 nor 255; these values serve as the weighting weights. In the third layer, 3-5 pixels at the seam position have values that are neither 0 nor 255, used as weighting weights. In the fourth layer, 4-6 pixels at the seam position have values that are neither 0 nor 255, used as weighting weights.
Because each Laplacian pyramid layer is half the width and height of the layer above it, a fusion width of 4-6 pixels in the fourth layer affects roughly (4~7) x 2^3 = 32~56 pixels around the seam position in the final result; taking the influence of the 5x5 Gaussian template into account, the affected width reaches at most about 60 pixels. Since the seam line is rather curved and the average weighted width at the seam position is about 5 pixels, the seam band width can be taken as 40 pixels. This means the fusion operation only has to process a strip of 40-60 pixels around the seam position, which greatly reduces the amount of computation and makes GPU parallel processing feasible.
Exemplarily, the blending weights near the seam line are computed as follows:
The first-layer weight is 0 on the left of the seam line and 255 on the right. That is, the left and right images (the first layers of their Laplacian pyramids) are simply separated along the seam line without fusion.
The second-layer weight map is the result of filtering the first-layer weight map with the 5x5 Gaussian kernel. The filtering produces values between 0 and 255 near the seam line, and these values are the blending weights of the two images.
The third-layer weight map is the result of filtering the second-layer weight map with the 5x5 Gaussian kernel.
The fourth-layer weight map is the result of filtering the third-layer weight map with the 5x5 Gaussian kernel.
It should be noted that only the weights near the seam line are kept; the remaining positions are discarded (widening the seam line by 10 pixels yields a mask used to retain the weight values near the seam line).
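The weight-map derivation above can be sketched as repeated 5x5 Gaussian smoothing of the hard 0/255 seam mask. This is a simplified sketch: it keeps every level at full resolution and omits the per-level downsampling and the 10-pixel retention mask, and all function names are ours:

```python
import numpy as np

# Separable 1-D binomial kernel of the 5x5 Gaussian template, normalized.
KERNEL_1D = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0

def gauss5(img):
    """Separable 5x5 binomial (Gaussian) filter with edge replication."""
    pad = np.pad(img, 2, mode="edge")
    tmp = sum(KERNEL_1D[k] * pad[:, k:k + img.shape[1]] for k in range(5))
    return sum(KERNEL_1D[k] * tmp[k:k + img.shape[0], :] for k in range(5))

def layer_weights(seam_mask, levels=4):
    """Per-level weight maps: level 0 is the hard 0/255 seam mask (no
    fusion); each further level smooths the previous map with the 5x5
    kernel, producing a progressively wider 0..255 transition strip."""
    weights = [seam_mask.astype(np.float64)]
    for _ in range(levels - 1):
        weights.append(gauss5(weights[-1]))
    return weights
```

The transition strip widens by a couple of pixels per level, matching the 1-3 / 3-5 / 4-6 pixel widths described for layers 2-4.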
Fig. 5 is a schematic diagram of the seam processing effect provided by Embodiment 2 of the present invention, showing the result after the above processing. Fig. 5 shows, for layers 2, 3 and 4, the values near the seam line that are neither 0 nor 255. The darkness in the figure represents the weight value between 0 and 255; near the border of layer 1 there are no values between 0 and 255, while layers 2-4 do have them near the border. The three images are scaled to roughly the same region for easy comparison. The pixel widths of layers 2-4 differ little, by 1-2 pixels.
The Laplacian pyramid is constructed as follows:
Gaussian pyramid G0 = original image Y0.
Gaussian pyramid G1 = G0 downsampled with the 5x5 Gaussian template; width and height are half of G0.
Gaussian pyramid G2 = G1 downsampled with the 5x5 Gaussian template; width and height are half of G1.
Gaussian pyramid G3 = G2 downsampled with the 5x5 Gaussian template; width and height are half of G2.
Laplacian pyramid L3 = G3.
Laplacian pyramid L2 = G2 - G3 upsampled with the 5x5 Gaussian template (to the width and height of G2).
Laplacian pyramid L1 = G1 - G2 upsampled with the 5x5 Gaussian template (to the width and height of G1).
Laplacian pyramid L0 = G0 - G1 upsampled with the 5x5 Gaussian template (to the width and height of G0).
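Under the construction just described, a minimal NumPy sketch of the Gaussian/Laplacian pyramid build follows (helper names are ours and power-of-two image sizes are assumed; the patent does not prescribe an implementation):

```python
import numpy as np

K = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0  # 5x5 separable kernel

def blur(img):
    """Separable 5x5 Gaussian-template filter with edge replication."""
    pad = np.pad(img, 2, mode="edge")
    t = sum(K[i] * pad[:, i:i + img.shape[1]] for i in range(5))
    return sum(K[i] * t[i:i + img.shape[0], :] for i in range(5))

def downsample(img):
    """Blur, then keep every second row/column (half width and height)."""
    return blur(img)[::2, ::2]

def upsample(img, shape):
    """Insert zeros, blur, and scale by 4 to compensate for the zeros."""
    up = np.zeros(shape)
    up[::2, ::2] = img
    return blur(up) * 4.0

def build_pyramids(y, levels=4):
    """G0..G3 by repeated downsampling; Li = Gi - upsample(G{i+1}), L3 = G3."""
    g = [y.astype(np.float64)]
    for _ in range(levels - 1):
        g.append(downsample(g[-1]))
    lap = [g[i] - upsample(g[i + 1], g[i].shape) for i in range(levels - 1)]
    lap.append(g[-1])
    return g, lap
```

Because each Laplacian level stores exactly the residual against the upsampled next level, collapsing the pyramid from coarse to fine reconstructs the original image exactly.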
The above operations are performed on both the left and right images P1 and P2, giving the Gaussian and Laplacian pyramids of each. Next, the Laplacian pyramid levels L0~L3 are blended according to the weight maps above, producing a new Laplacian pyramid Px_L0~Px_L3.
The pseudocode is as follows; essentially each pixel is weighted according to the weight map of its layer:
Px_L0=P1_L0* (1.0-wei/255.0)+P2_L0*wei/255.0
Px_L1=P1_L1* (1.0-wei/255.0)+P2_L1*wei/255.0
Px_L2=P1_L2* (1.0-wei/255.0)+P2_L2*wei/255.0
Px_L3=P1_L3* (1.0-wei/255.0)+P2_L3*wei/255.0
Then a Laplacian reconstruction is performed to obtain the final fused result. The reconstruction proceeds as follows:
Px_L2 += Px_L3 upsampled with the 5x5 Gaussian template (to the width and height of Px_L2).
Px_L1 += Px_L2 upsampled with the 5x5 Gaussian template (to the width and height of Px_L1).
Px_L0 += Px_L1 upsampled with the 5x5 Gaussian template (to the width and height of Px_L0).
The 5x5 Gaussian template used above is the following (the outer product of the binomial row [1, 4, 6, 4, 1], so the matrix is symmetric):
int pdKernel[25] =
{1,  4,  6,  4, 1,
 4, 16, 24, 16, 4,
 6, 24, 36, 24, 6,
 4, 16, 24, 16, 4,
 1,  4,  6,  4, 1};
In the upsampling operation the output is twice the input width and height, and downsampling is analogous in reverse. The final result is Px_L0; the valid pixels in this image are the fused band, which is copied into the buffer holding the panoramic video image.
It should be noted that the number of pixels in the seam band (about 600,000) is small compared with the whole panoramic video frame (about 8.3 million), so one cannot simply launch one thread per frame pixel that processes the pixel if it needs processing and returns otherwise. Because the total pixel count is huge, the more threads are launched, the more parameter copying the GPU must perform for each thread block (a thread block holds at most 1024 threads, varying with the GPU model), and each thread must also read data from GPU global memory, causing considerable time overhead; maintaining a frame rate of 25 fps then becomes very difficult. In practice it is necessary to record the position of each band pixel to be fused, its corresponding segment, the pixel positions each pixel obtains through down/up sampling, and the position of the Gaussian template used, and to launch GPU threads only for these pixels. All of this can be determined during the above parameter calculation. Therefore, preferably, the default fusion parameters include: the position and corresponding segment of each pixel to be fused, the pixel positions each pixel to be fused obtains through downsampling and upsampling respectively, and the position of the Gaussian template used.
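One way to picture the precomputed per-pixel fusion parameters is as a plain record, one per seam-band pixel, so that GPU threads are launched only for pixels that actually need blending. The field names below are ours; the patent only enumerates the contents of the record:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class FusionPixel:
    """One precomputed record per seam-band pixel (hypothetical layout)."""
    x: int                        # position of the pixel in the panorama
    y: int
    segment_id: int               # which segment the pixel belongs to
    down_xy: Tuple[int, int]      # pixel position obtained through downsampling
    up_xy: Tuple[int, int]        # pixel position obtained through upsampling
    kernel_pos: Tuple[int, int]   # top-left of the 5x5 Gaussian template window
```

A list of such records, built once during parameter calculation, is what allows the fusion kernel to touch only the 40-60-pixel-wide band instead of the full frame.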
It should be noted that using more Laplacian pyramid layers (e.g. 8 layers with a 4-pixel weighting width at the top layer, affecting about 5 x 2^7 = 640 pixels around the seam line in the final panorama) gives a better result, but since the layered processing involves many steps, more layers directly multiply the computation and severely hurt real-time performance. Through experiments the inventors found that with a 4-layer Laplacian pyramid and a 40-pixel seam band involving 600,000 pixels, the frame rate drops from 32 to 27; with more layers (and the necessarily wider band) the frame rate cannot stay above 25 fps. The inventors also found that the fusion effect of 4 layers is good enough to meet the demands of broadcast-level live streaming, so 4 layers are preferably used in embodiments of the present invention.
Preferably, the band fusion operations in the embodiments of the present invention are performed on the Y component in the YUV color space. Here "Y" denotes luminance (Luminance or Luma), i.e. the gray value, while "U" and "V" denote chrominance (Chrominance or Chroma), which describes the color and saturation of the image and specifies the color of a pixel. The U and V components show no difference across the seam line, and once the difference of the Y component is below 3-5 gray levels the human eye can no longer distinguish it and the seam becomes invisible. With the above operations, a seamless transition at the seam position is achieved, with neither blur nor abrupt brightness change.
Step 230: obtain multiple channels of video source data to be stitched.
Step 240: in a parallel manner, project the video source images corresponding to the same moment onto the default panoramic projection surface according to the default projection parameters, obtaining a panoramic projection video image composed of multiple segments.
Step 250: in a parallel manner, perform band fusion on the default seam lines located in the segment overlap regions according to the default fusion parameters, and output the fused panoramic video image.
In this embodiment, band fusion is performed on the default seam lines in the segment overlap regions according to the default fusion parameters, using the fusion scheme corresponding to step 220.
This embodiment of the present invention has described in detail concrete schemes for obtaining the default panoramic projection surface, default projection parameters, default seam lines and default fusion parameters; the optimal seam line between every two segments is computed using the max-flow min-cut theorem, and the seam band is further fused with the improved Laplacian pyramid, achieving a seamless transition at the seam position and effectively improving the stitching quality of the panoramic video image.
Embodiment three
Fig. 6 is a schematic flowchart of a panoramic video stitching method provided by Embodiment 3 of the present invention. This embodiment is optimized on the basis of the above embodiments: before the step of performing band fusion in a parallel manner on the default seam lines located in the segment overlap regions according to the default fusion parameters, the method further includes performing a translation adjustment on the luminance histogram of each segment under the principle of minimizing the pixel difference of the segment overlap regions, so as to reduce the luminance differences between segments.
Accordingly, the method for the present embodiment comprises the following steps:
Step 610: obtain the device information of multiple video capture devices and video screenshots, captured by the multiple video capture devices, corresponding to the same moment.
The device information includes focal length and device position.
Step 620: stitch the acquired video screenshots into a panoramic image according to the device information, and record the projection plane, projection parameters, optimal seam lines and fusion parameters of the stitching process, obtaining the default panoramic projection surface, default projection parameters, default seam lines and default fusion parameters.
Step 630: obtain multiple channels of video source data to be stitched.
Step 640: in a parallel manner, project the video source images corresponding to the same moment onto the default panoramic projection surface according to the default projection parameters, obtaining a panoramic projection video image composed of multiple segments.
Step 650: perform a translation adjustment on the luminance histogram of each segment under the principle of minimizing the pixel difference of the segment overlap regions, so as to reduce the luminance differences between segments.
The multi-layer Laplacian pyramid of the above embodiments already achieves a seamless transition at the seam position, without blur or abrupt brightness change. One problem remains: the overall brightness difference between segments may still be obvious, and a transition of 36-48 pixels cannot compensate for a too obvious luminance difference. The brightness of each segment must therefore be adjusted to reduce the luminance differences between segments.
Common brightness adjustment methods include histogram matching and dodging, but these are generally suitable only for large-scale scenes such as remote sensing images. For small-scale scenes such as live broadcasting, they cause gray levels to be lost, resulting in poor image quality. For example, a desk that appears to be a single color actually contains 3-5 gray levels; if gray levels are lost there may be only 1-2 left, and the desk becomes patches of flat color with no transition, which looks very unnatural.
A brand-new brightness adjustment scheme is adopted in this embodiment. Specifically, this step may include: for a reference segment and a segment to be adjusted, computing the luminance histograms of the segment overlap region separately; adjusting the color mapping table of the segment to be adjusted using a candidate translation parameter value to obtain a first color mapping table; updating the luminance histogram of the overlap region of the segment to be adjusted according to the first color mapping table; recording segments corresponding to the same overlap region as a matched pair, and for each matched pair taking, as the luminance difference evaluation value, the absolute value of the difference between the total brightness of the overlap-region histogram of the first segment and that of the second segment; obtaining the target translation parameter value of each segment to be adjusted under the principle of minimizing the sum of the luminance difference evaluation values of all matched pairs; adjusting the color mapping table of the segment to be adjusted with the target translation parameter value to obtain a second color mapping table; and mapping every pixel of the segment to be adjusted through the second color mapping table, thereby realizing the translation adjustment of its luminance histogram.
For each translation adjustment there may be one or more reference segments and one or more segments to be adjusted, determined specifically by the positional relations between segments. Adjusting the brightness of multiple segments requires considering the differences jointly. Take, for example, the overlap regions 1-2, 1-3, 1-4, 2-4, 2-3, 2-5, 4-6 and 5-6, where the digits are segment numbers and "-" indicates that two segments overlap. With segment 1 as the reference, segments 2, 3 and 4 are adjusted simultaneously, with 2-4 and 2-3 as constraints; the overall luminance difference evaluation value is computed, and the adjustment values of segments 2, 3 and 4 at which this value is minimal are taken as the adjustment result. Then segments 5 and 6 are adjusted against segments 2 and 4, with 5-6 as a constraint. Which segments' brightness is adjusted against which can be specified.
In the above brightness adjustment scheme, the expression of the translation parameter is:
Y=1/ (1-X/255) -1
Here Y is the translation parameter and X is the brightness adjustment factor, whose value range is [-100, 100].
Fig. 7 is a schematic diagram of the curve corresponding to the translation parameter provided by Embodiment 3 of the present invention. X characterizes the adjustment range: the larger the value, the brighter; the smaller, the darker. Y is a monotonically increasing function with range [-0.2817, 0.6452], a branch of a hyperbola.
For a given brightness value X, a value Y is computed with this formula. The concrete way of adjusting the color mapping table with the translation parameter value is to multiply the table Z = [0:1:255] by Y, giving the new table nZ = [0:1:255] + Y*[0:1:255]. Mapping every pixel of the segment as Px = nZ[Px] makes the segment brighter or darker. The characteristics of this curve are that the brightness at X = 0 does not change, the brightness change is suppressed for X in [-50, 50] (fine adjustment), and the remaining interval is close to linear. The benefit of this adjustment is that in scenes with uncomplicated illumination, such as daytime outdoor scenes and sports venues, the luminance differences between video frames shot by the cameras are small and only fine adjustment is needed, which reduces the computation and guarantees real-time output of the panoramic video image.
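The translation parameter and the mapping-table adjustment just described can be sketched directly (function names are ours; the formula and the nZ = Z + Y*Z construction follow the text):

```python
import numpy as np

def translation_parameter(x):
    """Y = 1/(1 - X/255) - 1 for the brightness factor X in [-100, 100].
    Monotonically increasing; Y(0) = 0, and Y spans roughly
    [-0.2817, 0.6452] over the stated range of X."""
    return 1.0 / (1.0 - x / 255.0) - 1.0

def adjusted_mapping_table(x):
    """New color mapping table nZ = Z + Y*Z with Z = [0, 1, ..., 255].
    Mapping a pixel as Px = nZ[Px] brightens (X > 0) or darkens (X < 0)
    the segment; values are clipped to the valid 8-bit range."""
    z = np.arange(256, dtype=np.float64)
    return np.clip(z + translation_parameter(x) * z, 0.0, 255.0)
```

Because Y(0) = 0, the table at X = 0 is the identity mapping, so an unadjusted segment is left untouched.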
Preferably, before computing the luminance histograms of the segment overlap region for the reference segment and the segment to be adjusted, the method further includes: matching the segment overlap regions using feature point matching and/or color matching, and screening out the valid segment overlap regions according to preset conditions.
To guarantee the validity of the overlap regions, it must be considered that misalignment may leave some overlap regions with few actually matching valid pixels; the validity of an overlap region is therefore evaluated with both feature point matching and color matching. Fast feature matching (e.g. Harris or ORB feature point matching combined with the RANSAC algorithm to reject mismatched points) is applied to the overlap region; the regions are sorted by match count, and the pairs whose match count is not less than 0.9 times the median are kept (pairs close to the median are not excluded). Color matching is used in addition because Harris or ORB feature points rarely exist in low-gradient regions, yet such large regions often consist of corresponding pixels. The concrete practice is to partition the overlapping parts of A and B respectively with a graph-based segmentation method (each resulting region is called a sub-block), and to set a pixel threshold, e.g. 5% of the overlap-region pixels, to reject sub-blocks that are too small. For the remaining sub-blocks the color histograms of the U and V components are computed; such sub-blocks are few, generally under 100. The Bhattacharyya distance between every two histograms is computed and the pairs are sorted by distance; the matched pairs whose distance is not more than 0.9 times the median d_mid are kept, and pairs with a great disparity in pixel count are rejected. The total number of pixels involved in these matched pairs, sum_pix, is then computed.
For the histograms Ha and Hb of the U or V component, two Bhattacharyya measures batt_U and batt_V are computed from the U and V components, and the two values are multiplied to obtain a single value batt = batt_U * batt_V. The reason for multiplying is that only when the measures on both components are large can the match be considered good.
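Since the text treats a larger value as a better match, the measure intended is presumably the Bhattacharyya coefficient (1.0 for identical normalized distributions) rather than the Bhattacharyya distance proper; the following sketch makes that assumption, and the function names are ours:

```python
import numpy as np

def bhattacharyya_coeff(ha, hb):
    """Bhattacharyya coefficient of two histograms: 1.0 for identical
    normalized distributions, 0.0 for disjoint ones (larger = better)."""
    pa = ha / np.sum(ha)
    pb = hb / np.sum(hb)
    return float(np.sum(np.sqrt(pa * pb)))

def uv_similarity(hu_a, hu_b, hv_a, hv_b):
    """Combined score batt = batt_U * batt_V: the product is high only
    when BOTH chroma components match, as the text requires."""
    return bhattacharyya_coeff(hu_a, hu_b) * bhattacharyya_coeff(hv_a, hv_b)
```

Multiplying rather than averaging means a poor match on either the U or the V component alone is enough to push the combined score down.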
For every overlap region a feature match count N1 and a total matched pixel count N2 are computed. Since N1 characterizes the matches in high-gradient regions and N2 represents the matches in low-gradient regions, the two are added as the evaluation value Nx = N1 + N2.
The Nx values of all overlap regions are sorted, and the overlap regions whose value is not less than 0.9 times the median m_Nx proceed to the next step, segment brightness adjustment.
In the concrete operation of segment brightness adjustment, the overlapping pixels between two segments A and B are found first (Pa and Pb, both at position (x, y) of the panorama). Then the respective color histograms are computed (here the histograms Ha and Hb of the Y component of the overlap region). Considering that errors cause misalignment, the difference of the values of Pa and Pb should not be used directly as the luminance difference evaluation function; a more suitable method is to use the difference of the total brightness of A and B over the overlap region. The advantage is that the brightness values of truly corresponding pixels are still effectively compared. For example, with segment A as the reference segment, the brightness of B (the segment to be adjusted) is adjusted: a color mapping table is computed for B from the value of X, the new color histogram nHb is computed accordingly, and the absolute value of the difference between the total brightness of Ha and that of nHb is used as the luminance difference evaluation value.
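The evaluation value just described can be sketched as follows: total brightness is read off a Y-component histogram as the sum of (gray value x count), the candidate mapping table is applied to the histogram of the segment to be adjusted, and the absolute difference of the two totals is returned (a simplified sketch under those assumptions; function names are ours):

```python
import numpy as np

def brightness_sum(hist):
    """Total luminance of a region from its Y histogram: sum(v * count)."""
    return float(np.dot(np.arange(len(hist)), hist))

def luminance_diff(hist_ref, hist_adj, mapping):
    """Luminance difference evaluation value |sum(Ha) - sum(nHb)|:
    remap the adjustable segment's overlap histogram through the
    candidate color mapping table, then compare total brightness
    against the reference segment's overlap histogram."""
    remapped = np.zeros(len(hist_adj), dtype=np.float64)
    idx = np.clip(np.round(mapping).astype(int), 0, len(hist_adj) - 1)
    np.add.at(remapped, idx, hist_adj.astype(np.float64))  # build nHb
    return abs(brightness_sum(hist_ref) - brightness_sum(remapped))
```

Scanning candidate X values and keeping the one that minimizes this value (or the sum of such values over all matched pairs) yields the target translation parameter.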
As another example, with A as the reference, the brightness of B and C is adjusted while the overlap region between B and C serves as a constraint. Note that the overlap region between A and B, the overlap region between A and C, and the overlap region between B and C are all different, and so are the corresponding histogram pairs, denoted Hab_A and Hab_B, Hac_A and Hac_C, Hbc_B and Hbc_C. The absolute values of the total-brightness differences of the three matched pairs are computed respectively, and their sum is used as the luminance difference evaluation value.
If B, C and D adjust their brightness against A simultaneously, the overlap regions between B and C, between C and D, and between B and D serve as constraints, handled in the same way as above.
Following the same logic, suppose B, C and D have been adjusted with reference to A, and E has overlap regions with B, C and D; then the brightness of E is adjusted with reference to B, C and D. Within the adjustable brightness range, Heb_E, Hec_E and Hed_E are adjusted simultaneously so that the sum of their evaluation values against Heb_B, Hec_C and Hed_D respectively is minimal.
Similarly, if E and F both have overlap regions with B, C and D, then E and F adjust their brightness against B, C and D simultaneously, minimizing the sum of the luminance difference evaluation values. If E and F also overlap each other, that overlap region serves as a constraint: while adjusting E and F at the same time, the luminance difference evaluation value of the overlap region between them is taken into account, handled just like adjusting the brightness of B and C with reference to A with the overlap region between B and C as a constraint.
It can be understood that a panoramic camera generally consists of at most about 8-10 high-definition cameras with 2K video output, producing 4K to 6K output; more cameras are unnecessary, because 8-10 wide-angle cameras suffice to cover 360 degrees horizontally and vertically. Meanwhile, the video frame size a GPU can process at 25 fps or above is basically within 6K, and the data throughput of the mainboard PCI bus is also nearly at its limit there. So although the above algorithm is relatively complex, given these hardware limitations, adjusting the brightness of the other cameras' video frames with reference to one camera requires roughly 2-3 adjustment levels to cover all cameras.
For example, the first level takes channel 0 as the reference and adjusts channels 1, 2 and 3, with the overlap regions among 1, 2 and 3 as constraints. The second level takes channels 1, 2 and 3 as the reference and adjusts channels 4, 5, 6 and 7, with the overlap regions among channels 4~7 as constraints. It may also happen that the second level, with channels 1, 2 and 3 as the reference, adjusts only channels 4 and 5, with the 4-5 overlap region as a constraint; then some 1~2 of channels 1, 2 and 3 serve as the reference to adjust channels 6 and 7, with the overlap regions between channels 4, 5 and channels 6, 7 as constraints. When taking channels 1~3 as the reference, priority goes to whichever shares more overlap regions. The second level might also not adjust channels 4~7 at all but only channels 4~5; a third adjustment level then appears, taking channels 4~5 as the reference and adjusting channels 6~7, with the overlap region between channels 6~7 as a constraint.
Subsequent levels proceed likewise, and the process exits once all segments have been adjusted. In general, for segments that can be combined into a complete panorama, each segment is adjacent to at least one other segment, and most segments have two or more neighbors; so even after the feature-matching and color-matching screening above, the case of some segment never being adjusted does not occur.
In short, segments whose brightness has been adjusted either serve as constraints while the other segments are adjusted, or their overlap regions with the segments being adjusted are treated as constraints, striving to minimize the overall luminance difference. On this basis, the pyramid fusion of the seam band can transition smoothly across the seam position, realizing real-time seamless stitching.
Step 660: in a parallel manner, perform band fusion on the default seam lines located in the segment overlap regions according to the default fusion parameters, and output the fused panoramic video image.
In the panoramic video stitching method provided by this embodiment of the present invention, before band fusion is performed on the default seam lines, a translation adjustment is applied to the luminance histogram of each segment under the principle of minimizing the pixel difference of the segment overlap regions. This reduces the luminance differences between segments, makes the brightness of the whole panoramic video frame uniform and consistent, realizes seamless stitching, further guarantees real-time high-quality panoramic video output, and better meets broadcast-level application demands.
Embodiment four
Fig. 8 is a structural block diagram of a panoramic video stitching apparatus provided by Embodiment 4 of the present invention. The apparatus may be implemented in software and/or hardware and is typically integrated in video stitching equipment; it can stitch panoramic video by performing the panoramic video stitching method. As shown in Fig. 8, the apparatus includes a video source data acquisition module 81, a projection module 82, a fusion module 83 and a panoramic video output module 84.
The video source data acquisition module 81 is configured to obtain multiple channels of video source data to be stitched. The projection module 82 is configured to project, in a parallel manner, the video source images corresponding to the same moment onto the default panoramic projection surface according to the default projection parameters, obtaining a panoramic projection video image composed of multiple segments, where each segment corresponds to one video source image and the overlapping part between every two segments constitutes a segment overlap region. The fusion module 83 is configured to perform, in a parallel manner, band fusion on the default seam lines located in the segment overlap regions according to the default fusion parameters. The panoramic video output module 84 is configured to output the fused panoramic video image.
With the panoramic video stitching apparatus provided by this embodiment of the present invention, after the multiple channels of video source data to be stitched are obtained, the video source images used for stitching the panoramic video image are projected and fused in a parallel manner, which guarantees stitching efficiency and enhances the real-time performance of the panoramic video output. In addition, performing the processing with the pre-computed projection parameters, panoramic projection surface, fusion parameters and seam lines reduces the computation during video stitching and further improves stitching efficiency; and since these parameters are pre-computed, the high definition of the image is guaranteed. The technical scheme provided by this embodiment of the present invention can therefore balance the stitching efficiency and image quality of panoramic video, ensuring that the output panoramic video possesses high timeliness and good image quality and is applicable to broadcast-level live streaming.
On the basis of the above embodiments, the apparatus further includes:
a data acquisition module, configured to obtain, before the multiple channels of video source data to be stitched are obtained, the device information of multiple video capture devices and video screenshots, captured by the multiple video capture devices, corresponding to the same moment, where the device information includes focal length and device position;
an image stitching module, configured to stitch the acquired video screenshots into a panoramic image according to the device information, and record the projection plane, projection parameters, optimal seam lines and fusion parameters of the stitching process, obtaining the default panoramic projection surface, default projection parameters, default seam lines and default fusion parameters.
On the basis of the above embodiments, obtaining the multiple channels of video source data to be stitched includes: obtaining the multiple channels of video source data that have a frame update within a preset time interval compared with the previous moment.
On the basis of the above embodiments, the parallel manner includes: processing different video source images in parallel with different functions, and processing different pixels of the same video source image in parallel with different threads.
On the basis of the above embodiments, stitching the acquired video screenshots into a panoramic image according to the device information includes: performing feature point matching on the acquired video screenshots; projecting the acquired video screenshots into the same coordinate system and unfolding them according to the device information and the feature point matching result, obtaining a panoramic projection image composed of multiple segments, where each segment corresponds to one video screenshot and the overlapping part between every two segments constitutes a segment overlap region; computing the optimal seam line between every two segments using the max-flow min-cut theorem, with the gradient values of the segment overlap region as the weights of the pixels in the overlap region; and performing band fusion on the optimal seam lines.
On the basis of the above embodiments, performing band fusion on the optimal seam lines includes: widening the optimal seam line by a preset pixel width to obtain a seam band, and fusing the seam band with a multi-layer Laplacian pyramid.
On the basis of the above embodiments, the default fusion parameters include: the position and corresponding segment of each pixel to be fused, the pixel positions each pixel to be fused obtains through downsampling and upsampling respectively, and the position of the Gaussian template used.
On the basis of the above embodiments, the apparatus further includes:
a brightness adjustment module, configured to perform, before band fusion is performed in a parallel manner on the default seam lines located in the segment overlap regions according to the default fusion parameters, a translation adjustment on the luminance histogram of each segment under the principle of minimizing the pixel difference of the segment overlap regions, so as to reduce the luminance differences between segments.
On the basis of the above embodiments, the brightness adjustment module is specifically configured to: before band fusion is performed in a parallel manner on the default seam lines located in the segment overlap regions according to the default fusion parameters, for a reference segment and a segment to be adjusted, compute the luminance histograms of the segment overlap region separately; adjust the color mapping table of the segment to be adjusted using a candidate translation parameter value to obtain a first color mapping table; update the luminance histogram of the overlap region of the segment to be adjusted according to the first color mapping table; record segments corresponding to the same overlap region as a matched pair, and for each matched pair take, as the luminance difference evaluation value, the absolute value of the difference between the total brightness of the overlap-region histogram of the first segment and that of the second segment; obtain the target translation parameter value of each segment to be adjusted under the principle of minimizing the sum of the luminance difference evaluation values of all matched pairs; adjust the color mapping table of the segment to be adjusted with the target translation parameter value to obtain a second color mapping table; and map every pixel of the segment to be adjusted through the second color mapping table, thereby realizing the translation adjustment of its luminance histogram.
On the basis of the above embodiment, the translation parameter is expressed as:

Y = 1/(1 − X/255) − 1

wherein Y is the translation parameter, and X is the luminance adjustment factor, whose value range is [−100, 100].
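As a small numeric check of the expression above (an illustrative sketch only; the mapping from the factor X to the parameter Y is non-linear, with X = 0 yielding Y = 0):

```python
def translation_parameter(x):
    # Y = 1/(1 - X/255) - 1, with the luminance adjustment factor X
    # restricted to the stated range [-100, 100].
    assert -100 <= x <= 100
    return 1.0 / (1.0 - x / 255.0) - 1.0

# Sample the endpoints and the neutral point of the range.
samples = {x: translation_parameter(x) for x in (-100, 0, 100)}
```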
On the basis of the above embodiment, the luminance adjustment module is further configured to:

for the reference segment and the segment to be adjusted, before calculating the luminance histograms of the segment overlap regions respectively, perform a matching operation on each segment overlap region using feature point matching and/or color matching, and filter out the valid segment overlap regions according to preset conditions.
The panoramic video stitching apparatus provided in the above embodiments can execute the panoramic video stitching method provided by any embodiment of the present invention, and possesses the corresponding functional modules and beneficial effects for executing that method. For technical details not described in detail in the above embodiments, reference may be made to the panoramic video stitching method provided by any embodiment of the present invention.

Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to the above embodiments and may include other equivalent embodiments without departing from the inventive concept; the scope of the present invention is determined by the scope of the appended claims.

Claims (20)

1. A panoramic video stitching method, characterized by comprising:
acquiring multi-channel video source data to be stitched;
projecting, in parallel and according to preset projection parameters, the video source images corresponding to the same moment onto a preset panoramic projection surface, to obtain a panoramic projection video image composed of multiple segments, wherein each segment corresponds to one video source image, and the overlapping portion between every two segments constitutes a segment overlap region;
performing, in parallel and according to preset fusion parameters, strip-type fusion processing on the preset seam lines located in the segment overlap regions, and outputting the fused panoramic video image.
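The three claimed steps (acquire, project with precomputed parameters, fuse along the seam) can be sketched on toy 1-D "images". This is an illustrative sketch only, not the patented implementation: the preset projection is reduced to a precomputed pixel-index remap, and fusion is simplified to a linear cross-fade across a seam band.

```python
def project(frame, remap):
    # Apply a precomputed projection: output pixel i is taken from
    # input pixel remap[i] (standing in for the preset projection parameters).
    return [frame[j] for j in remap]

def fuse(left, right, seam, band):
    # Cross-fade inside the seam band [seam-band, seam+band); outside it,
    # copy pixels from a single segment.
    out = []
    for i in range(len(left)):
        if i < seam - band:
            out.append(left[i])
        elif i >= seam + band:
            out.append(right[i])
        else:
            w = (i - (seam - band)) / (2 * band)   # 0 -> left, 1 -> right
            out.append((1 - w) * left[i] + w * right[i])
    return out

remap = list(range(8))            # identity projection for the sketch
a = project([10.0] * 8, remap)    # segment from camera 1
b = project([20.0] * 8, remap)    # segment from camera 2
pano = fuse(a, b, seam=4, band=2)
```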
2. The method according to claim 1, characterized in that acquiring the multi-channel video source data to be stitched comprises:
acquiring the multi-channel video source data that, within a preset time interval, has a frame update relative to the previous moment.
3. The method according to claim 1, characterized in that the parallel mode comprises:
processing different video source images in parallel with different functions, and processing different pixels within the same video source image in parallel with different threads.
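One hedged way to organize the two claimed levels of parallelism is sketched below. The claim does not name concurrency primitives (on a GPU the inner level would be hardware threads; the outer level might be processes or kernels); here both levels use `ThreadPoolExecutor` simply to keep the sketch self-contained, and `process_pixel` is a hypothetical placeholder for per-pixel work.

```python
from concurrent.futures import ThreadPoolExecutor

def process_pixel(v):
    # Placeholder per-pixel work (e.g. a projection lookup would go here).
    return v + 1

def process_image(image):
    # Inner level: different pixels of one image handled by different threads.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(process_pixel, image))

def process_sources(images):
    # Outer level: different video source images handled by different workers.
    with ThreadPoolExecutor(max_workers=len(images)) as pool:
        return list(pool.map(process_image, images))

result = process_sources([[1, 2, 3], [10, 20, 30]])
```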
4. The method according to claim 1, characterized by, before acquiring the multi-channel video source data to be stitched, further comprising:
acquiring device information of multiple video capture devices and the video screenshots corresponding to the same moment captured by the multiple video capture devices, wherein the device information includes focal length and device position;
stitching the acquired video screenshots into a panoramic image according to the device information, and recording the projection plane, projection parameters, optimal seam lines and fusion parameters corresponding to the stitching process, to obtain the preset panoramic projection surface, preset projection parameters, preset seam lines and preset fusion parameters.
5. The method according to claim 4, characterized in that stitching the acquired video screenshots into a panoramic image according to the device information comprises:
performing feature point matching on the acquired video screenshots;
projecting and unfolding the acquired video screenshots into the same coordinate system according to the device information and the feature point matching results, to obtain a panoramic projection image composed of multiple segments, wherein each segment corresponds to one video screenshot, and the overlapping portion between every two segments constitutes a segment overlap region;
calculating the optimal seam line between every two segments using the max-flow min-cut theorem, wherein the gradient values of the segment overlap region serve as the weights of the pixels in the segment overlap region;
performing strip-type fusion processing on the optimal seam lines.
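The claim computes the optimal seam by max-flow/min-cut with overlap-region gradient values as pixel weights. A full graph-cut is beyond a short sketch; the dynamic-programming seam below solves the restricted case of a single top-to-bottom seam over the same gradient-weight cost map, which illustrates the shared objective of routing the seam through low-gradient pixels. This is an illustrative stand-in, not the claimed min-cut construction.

```python
def min_cost_seam(weights):
    """weights[r][c]: gradient magnitude of the overlap region.
    Returns one column index per row for a minimum-cost vertical seam
    that may move at most one column between consecutive rows."""
    rows, cols = len(weights), len(weights[0])
    cost = [weights[0][:]]                      # cumulative cost table
    for r in range(1, rows):
        prev = cost[-1]
        row = []
        for c in range(cols):
            best_prev = min(prev[max(0, c - 1):min(cols, c + 2)])
            row.append(weights[r][c] + best_prev)
        cost.append(row)
    # Backtrack from the cheapest bottom cell.
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        window = range(max(0, c - 1), min(cols, c + 2))
        seam.append(min(window, key=lambda c2: cost[r][c2]))
    return seam[::-1]

# Low-gradient corridor in column 1: the seam should follow it.
g = [[5, 0, 5],
     [5, 0, 5],
     [5, 0, 5]]
seam = min_cost_seam(g)
```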
6. The method according to claim 5, characterized in that performing strip-type fusion processing on the optimal seam line comprises:
widening the optimal seam line according to a preset pixel width, to obtain a seam band;
fusing the seam band using a multi-layer Laplacian pyramid.
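Laplacian-pyramid (multi-band) fusion blends low frequencies over a wide region and high frequencies over a narrow one. The two-level 1-D sketch below illustrates the principle only, under simplifying assumptions not taken from the patent: downsampling is pairwise averaging, upsampling is sample repetition, and the mask is blended at each level using its own low-pass copy at the coarse level.

```python
def down(x):
    # Halve resolution by averaging adjacent pairs (even length assumed).
    return [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]

def up(x, n):
    # Double resolution back to length n by sample repetition.
    out = []
    for v in x:
        out += [v, v]
    return out[:n]

def blend_pyramid(a, b, mask):
    """Two-level Laplacian-pyramid blend of equal-length signals a and b.
    mask[i] in [0, 1] weights signal b; the coarse level uses the
    low-passed mask, as in multi-band blending."""
    la = [a[i] - up(down(a), len(a))[i] for i in range(len(a))]   # detail band of a
    lb = [b[i] - up(down(b), len(b))[i] for i in range(len(b))]   # detail band of b
    base = [(1 - m) * x + m * y for x, y, m in zip(up(down(a), len(a)),
                                                   up(down(b), len(b)),
                                                   up(down(mask), len(mask)))]
    detail = [(1 - m) * x + m * y for x, y, m in zip(la, lb, mask)]
    return [s + d for s, d in zip(base, detail)]

a = [10.0] * 8
b = [20.0] * 8
mask = [0, 0, 0, 0.5, 1, 1, 1, 1]   # seam band around the middle
pano = blend_pyramid(a, b, mask)
```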
7. The method according to claim 6, characterized in that the preset fusion parameters include: the position of each pixel to be fused and its corresponding segment, the pixel positions obtained for each pixel to be fused by down-sampling and up-sampling respectively, and the position of the Gaussian template used.
8. The method according to claim 1, characterized by, before performing strip-type fusion processing in parallel on the preset seam lines located in the segment overlap regions according to the preset fusion parameters, further comprising:
performing a translation adjustment on the luminance histogram of each segment on the principle of minimizing the pixel differences in the segment overlap regions, so as to reduce the luminance differences between segments.
9. The method according to claim 8, characterized in that performing a translation adjustment on the luminance histogram of each segment on the principle of minimizing the pixel differences in the segment overlap regions, so as to reduce the luminance differences between segments, comprises:
for a reference segment and a segment to be adjusted, calculating the luminance histogram of the segment overlap region respectively;
adjusting the color mapping table of the segment to be adjusted using a candidate translation parameter value, to obtain a first color mapping table;
updating the luminance histogram of the overlap region of the segment to be adjusted according to the first color mapping table;
recording the segments corresponding to the same segment overlap region as a matched pair, and, for each matched pair, taking the absolute value of the difference between the luminance sum of the overlap-region luminance histogram corresponding to the first segment and the luminance sum of the overlap-region luminance histogram corresponding to the second segment as the luminance difference evaluation function value;
obtaining the target translation parameter value corresponding to each segment to be adjusted on the principle of minimizing the sum of the luminance difference evaluation function values of all matched pairs;
adjusting the color mapping table of the segment to be adjusted using the target translation parameter value, to obtain a second color mapping table;
mapping each pixel in the segment to be adjusted using the second color mapping table, to implement the translation adjustment of the luminance histogram of the segment to be adjusted.
10. The method according to claim 9, characterized in that the translation parameter is expressed as:
Y = 1/(1 − X/255) − 1
wherein Y is the translation parameter, and X is the luminance adjustment factor, whose value range is [−100, 100].
11. The method according to claim 9, characterized by, for the reference segment and the segment to be adjusted, before calculating the luminance histograms of the segment overlap regions respectively, further comprising:
performing a matching operation on each segment overlap region using feature point matching and/or color matching, and filtering out the valid segment overlap regions according to preset conditions.
12. A panoramic video stitching apparatus, characterized by comprising:
a video source data acquisition module, configured to acquire multi-channel video source data to be stitched;
a projection module, configured to project, in parallel and according to preset projection parameters, the video source images corresponding to the same moment onto a preset panoramic projection surface, to obtain a panoramic projection video image composed of multiple segments, wherein each segment corresponds to one video source image, and the overlapping portion between every two segments constitutes a segment overlap region;
a fusion module, configured to perform, in parallel and according to preset fusion parameters, strip-type fusion processing on the preset seam lines located in the segment overlap regions;
a panoramic video output module, configured to output the fused panoramic video image.
13. The apparatus according to claim 12, characterized by further comprising:
a data acquisition module, configured to, before the multi-channel video source data to be stitched is acquired, acquire device information of multiple video capture devices and the video screenshots corresponding to the same moment captured by the multiple video capture devices, wherein the device information includes focal length and device position;
an image stitching module, configured to stitch the acquired video screenshots into a panoramic image according to the device information, and record the projection plane, projection parameters, optimal seam lines and fusion parameters corresponding to the stitching process, to obtain the preset panoramic projection surface, preset projection parameters, preset seam lines and preset fusion parameters.
14. The apparatus according to claim 13, characterized in that stitching the acquired video screenshots into a panoramic image according to the device information comprises:
performing feature point matching on the acquired video screenshots;
projecting and unfolding the acquired video screenshots into the same coordinate system according to the device information and the feature point matching results, to obtain a panoramic projection image composed of multiple segments, wherein each segment corresponds to one video screenshot, and the overlapping portion between every two segments constitutes a segment overlap region;
calculating the optimal seam line between every two segments using the max-flow min-cut theorem, wherein the gradient values of the segment overlap region serve as the weights of the pixels in the segment overlap region;
performing strip-type fusion processing on the optimal seam lines.
15. The apparatus according to claim 14, characterized in that performing strip-type fusion processing on the optimal seam line comprises:
widening the optimal seam line according to a preset pixel width, to obtain a seam band;
fusing the seam band using a multi-layer Laplacian pyramid.
16. The apparatus according to claim 15, characterized in that the preset fusion parameters include: the position of each pixel to be fused and its corresponding segment, the pixel positions obtained for each pixel to be fused by down-sampling and up-sampling respectively, and the position of the Gaussian template used.
17. The apparatus according to claim 12, characterized by further comprising:
a luminance adjustment module, configured to, before strip-type fusion processing is performed in parallel on the preset seam lines in the segment overlap regions according to the preset fusion parameters, perform a translation adjustment on the luminance histogram of each segment on the principle of minimizing the pixel differences in the segment overlap regions, so as to reduce the luminance differences between segments.
18. The apparatus according to claim 17, characterized in that the luminance adjustment module is specifically configured to:
before strip-type fusion processing is performed in parallel on the preset seam lines in the segment overlap regions according to the preset fusion parameters, calculate, for a reference segment and a segment to be adjusted, the luminance histogram of the segment overlap region respectively;
adjust the color mapping table of the segment to be adjusted using a candidate translation parameter value, to obtain a first color mapping table;
update the luminance histogram of the overlap region of the segment to be adjusted according to the first color mapping table;
record the segments corresponding to the same segment overlap region as a matched pair, and, for each matched pair, take the absolute value of the difference between the luminance sum of the overlap-region luminance histogram corresponding to the first segment and the luminance sum of the overlap-region luminance histogram corresponding to the second segment as the luminance difference evaluation function value;
obtain the target translation parameter value corresponding to each segment to be adjusted on the principle of minimizing the sum of the luminance difference evaluation function values of all matched pairs;
adjust the color mapping table of the segment to be adjusted using the target translation parameter value, to obtain a second color mapping table;
map each pixel in the segment to be adjusted using the second color mapping table, to implement the translation adjustment of the luminance histogram of the segment to be adjusted.
19. The apparatus according to claim 18, characterized in that the translation parameter is expressed as:
Y = 1/(1 − X/255) − 1
wherein Y is the translation parameter, and X is the luminance adjustment factor, whose value range is [−100, 100].
20. The apparatus according to claim 19, characterized in that the luminance adjustment module is further configured to:
for the reference segment and the segment to be adjusted, before calculating the luminance histograms of the segment overlap regions respectively, perform a matching operation on each segment overlap region using feature point matching and/or color matching, and filter out the valid segment overlap regions according to preset conditions.
CN201611127613.7A 2016-12-09 2016-12-09 A panoramic video stitching method and device Pending CN106791623A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611127613.7A CN106791623A (en) 2016-12-09 2016-12-09 A panoramic video stitching method and device


Publications (1)

Publication Number Publication Date
CN106791623A true CN106791623A (en) 2017-05-31

Family

ID=58877742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611127613.7A Pending CN106791623A (en) 2016-12-09 2016-12-09 A kind of panoramic video joining method and device

Country Status (1)

Country Link
CN (1) CN106791623A (en)


Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109218604B (en) * 2017-06-29 2020-12-29 佳能企业股份有限公司 Image capturing device, image brightness modulation method and image processing device
CN109218604A (en) * 2017-06-29 2019-01-15 佳能企业股份有限公司 Image capture unit, image brilliance modulating method and image processor
CN107205122A (en) * 2017-08-03 2017-09-26 哈尔滨市舍科技有限公司 The live camera system of multiresolution panoramic video and method
CN107820067B (en) * 2017-10-29 2019-09-20 苏州佳世达光电有限公司 The joining method and splicing apparatus of more projected pictures
CN107820067A (en) * 2017-10-29 2018-03-20 苏州佳世达光电有限公司 The joining method and splicing apparatus of more projected pictures
CN108090904A (en) * 2018-01-03 2018-05-29 深圳北航新兴产业技术研究院 A kind of medical image example dividing method and device
CN107968949A (en) * 2018-01-22 2018-04-27 盎锐(上海)信息科技有限公司 Dynamic data processing method and panoramic shooting system based on full-view image
CN108495060A (en) * 2018-03-26 2018-09-04 浙江大学 A kind of real-time joining method of HD video
CN108846803A (en) * 2018-04-23 2018-11-20 遵义师范学院 A kind of color rendition method based on yuv space
CN108769578A (en) * 2018-05-17 2018-11-06 南京理工大学 A kind of real-time omnidirectional imaging system and method based on multi-path camera
CN110246081B (en) * 2018-11-07 2023-03-17 浙江大华技术股份有限公司 Image splicing method and device and readable storage medium
CN110246081A (en) * 2018-11-07 2019-09-17 浙江大华技术股份有限公司 A kind of image split-joint method, device and readable storage medium storing program for executing
CN109523468A (en) * 2018-11-15 2019-03-26 深圳市道通智能航空技术有限公司 Image split-joint method, device, equipment and unmanned plane
CN109523468B (en) * 2018-11-15 2023-10-20 深圳市道通智能航空技术股份有限公司 Image stitching method, device, equipment and unmanned aerial vehicle
CN109583458A (en) * 2018-12-04 2019-04-05 中国兵器装备集团上海电控研究所 Space situation awareness method and computer readable storage medium
CN109583458B (en) * 2018-12-04 2020-11-17 中国兵器装备集团上海电控研究所 Spatial situation awareness method and computer-readable storage medium
CN109598678B (en) * 2018-12-25 2023-12-12 维沃移动通信有限公司 Image processing method and device and terminal equipment
CN109598678A (en) * 2018-12-25 2019-04-09 维沃移动通信有限公司 A kind of image processing method, device and terminal device
CN112308984A (en) * 2020-11-03 2021-02-02 豪威科技(武汉)有限公司 Vehicle-mounted image splicing method, system and device
CN112308984B (en) * 2020-11-03 2024-02-02 豪威科技(武汉)有限公司 Vehicle-mounted image stitching method, system and device
CN113766273A (en) * 2021-01-05 2021-12-07 北京沃东天骏信息技术有限公司 Method and device for processing video data
CN113099248A (en) * 2021-03-19 2021-07-09 广州方硅信息技术有限公司 Panoramic video filling method, device, equipment and storage medium
CN113962859A (en) * 2021-10-26 2022-01-21 北京有竹居网络技术有限公司 Panorama generation method, device, equipment and medium
CN117670667A (en) * 2023-11-08 2024-03-08 广州成至智能机器科技有限公司 Unmanned aerial vehicle real-time infrared image panorama stitching method
CN117670667B (en) * 2023-11-08 2024-05-28 广州成至智能机器科技有限公司 Unmanned aerial vehicle real-time infrared image panorama stitching method


Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20170531