CN109658366A - Real-time video stitching method based on improved RANSAC and dynamic fusion - Google Patents

Real-time video stitching method based on improved RANSAC and dynamic fusion

Info

Publication number
CN109658366A
CN109658366A (application CN201811238125.2A)
Authority
CN
China
Prior art keywords
point
matching
points
region
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811238125.2A
Other languages
Chinese (zh)
Inventor
林健
杨建伟
黄波
史二厅
杨坤
候跃强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Pingdingshan Tianan Coal Mining Co Ltd
Original Assignee
Pingdingshan Tianan Coal Mining Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pingdingshan Tianan Coal Mining Co Ltd filed Critical Pingdingshan Tianan Coal Mining Co Ltd
Priority to CN201811238125.2A priority Critical patent/CN109658366A/en
Publication of CN109658366A publication Critical patent/CN109658366A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/77 Retouching; Inpainting; Scratch removal
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a real-time video stitching method based on improved RANSAC and dynamic fusion, in the field of video stitching technology. The method sequentially selects four pairs of coarsely matched points, no three of which are collinear, and computes their spatial similarity; a pair is retained when its similarity exceeds a set threshold and discarded otherwise, until all matched pairs have been checked. From the pairs that survive this filter, four are then chosen at random and substituted into the transformation model to compute a homography matrix H; the resulting model is tested against all the remaining data, and any point that fits the estimated model is also considered an inlier and is marked. This step is repeated N times, the iteration with the most marked inliers is extracted as the final matching point set, and the homography is recomputed from the matches in that set to obtain the final parameter estimate and the optimal homography matrix.

Description

Real-time video stitching method based on improved RANSAC and dynamic fusion
Technical field
The present invention relates to the field of video stitching technology, and specifically to a real-time video stitching method based on improved RANSAC and dynamic fusion.
Background technique
A panoramic image is composed of multiple images and has a field of view much larger than any single one; it is applied in video surveillance, virtual reality, environmental monitoring, and other fields. The development of digital surveillance systems places ever higher demands on real-time stitching of panoramic dynamic video.
The first step in constructing a panoramic image is image registration. Current registration methods fall mainly into pixel-based, transform-domain, and feature-based approaches. Pixel-based registration can deliver accurate results, but its enormous computational cost makes it impractical for stitching systems with strict real-time requirements. Feature-based registration, being robust and adaptable, is widely used in stitching implementations.
Feature-based registration consists mainly of feature extraction and feature matching. The mainstream feature extraction algorithms currently include SIFT and SURF. Studies show that SURF is at least three times faster than SIFT and outperforms it overall, so it is widely applied where real-time requirements are high. After matched point pairs are extracted with SURF, the RANSAC algorithm is needed to further eliminate mismatches and improve the quality of the stitched result.
After the panorama is generated, the overlap region exhibits visible seams, blur, and ghosting. Two kinds of methods are currently used to address this. One fuses the entire overlap region, mainly by simple linear blending or gradient-domain blending. The other searches for an optimal seam between the images, e.g. by dynamic programming or graph-cut methods. Some approaches combine the two and achieve good results. Ghosting is another common and hard problem in panorama stitching; it falls into two classes, one caused by a moving target crossing the overlap region and one caused by parallax between the two lenses. Many methods address the first class, such as multi-band fusion and gradient-domain methods. Existing video stitching methods have the following shortcomings:
Problem 1: in the registration stage, pixel-based and transform-domain methods are impractical for stitching systems with high real-time demands because of their huge computational cost; a processing method with better real-time performance is required.
Problem 2: differences in sensor physical characteristics, external conditions, and camera viewing angle easily produce exposure differences between the captured images, leaving visible seams at the panorama junction; to eliminate the seams, the images must be smoothed by some method.
Problem 3: registration and geometric-transformation errors, or visible object displacement between adjacent images, cause ghosting in the fused result and greatly degrade the visual quality of the panorama; to obtain a better stitched result, existing fusion algorithms must be improved.
Summary of the invention
To overcome the above shortcomings of the prior art, the present invention proposes a real-time video stitching method based on improved RANSAC and dynamic fusion.
The present invention is realized by the following technical solution: a real-time video stitching method based on improved RANSAC and dynamic fusion, with the following specific steps:
(1) Extract key-frame SURF features: the two cameras are first set at the same height with an overlap region of about 30%. SURF feature detection and matching are performed on the first frame to compute a stable and accurate projective transformation matrix, which is then applied to the stitching of subsequent frames. At a fixed time interval, feature extraction, matching, and estimation of the projective transformation matrix are repeated, and the new matrix is applied to the following stretch of frames, yielding a final real-time dynamic stitched video of higher accuracy;
(2) Obtain the optimal homography matrix with improved RANSAC: after the SURF feature points are extracted, the improved RANSAC algorithm purifies them and computes the optimal homography matrix H, as follows: sequentially select four pairs of coarsely matched points, no three of which are collinear, and compute their spatial similarity; keep a pair when its similarity exceeds the set threshold, otherwise discard it, until all matched pairs have been checked. From the surviving pairs, randomly select four, substitute them into the transformation model to compute a homography matrix H, then test all the remaining data against this model; any point that fits the estimated model is also considered an inlier and is marked. Repeat this step N times, extract the iteration with the most marked inliers as the final matching point set, and recompute the homography from the matches in that set to obtain the final parameter estimate and the optimal homography matrix H;
(3) Dynamic image fusion: first extract the moving target with a background-elimination algorithm and record its position. Then, within the overlap region, prefer pixels from one of the two images over the other, and place a buffer zone at the target's left and right edges to avoid the brightness jump this method would otherwise cause. Because the target region still differs noticeably from its surroundings, it is further blended with the dynamic fusion method.
Preferably, the spatial similarity is computed as follows: let (P1,Q1), (P2,Q2), (P3,Q3), (P4,Q4) be four matched point pairs, where Pi and Qi (i = 1, 2, 3, 4) denote corresponding feature points in the two frames and no three of the Pi or Qi are collinear. Let d(P1,P2), d(P1,P3), d(P1,P4) denote the distances from P1 to P2, P3, P4, and likewise d(Q1,Q2), d(Q1,Q3), d(Q1,Q4) the distances from Q1 to Q2, Q3, Q4. A spatial similarity λ is computed from the consistency of the corresponding distance ratios. When λ < δ, the match (P1,Q1) is judged too weak and is rejected as a mismatch; otherwise (P1,Q1) is accepted as a correct match, where δ is the set threshold. Every matched pair in the two video frames is traversed in this way.
Preferably, the dynamic fusion method is as follows: I1(x, y) denotes a pixel of the left image, I2(x, y) a pixel of the right image, and f(x, y) the corresponding pixel of the fused image; w is the width from the left edge of the overlap region to the left edge of the target, and i is the column of the current pixel within the overlap region, taking values 0 to w-1.
Because the color components of regions d1 and d2 differ from those of the moving target, after the above steps regions d1 and d2 must be fused further, with the following fusion formula:
where j indexes the current pixel within the overlap region; when fusing region d1 it takes values 0 to d1-1, and when fusing region d2 it runs from d2-1 down to 0.
Beneficial effects of the present invention: 1. To address the high time cost of the registration stage, the faster SURF algorithm is used and a time interval is introduced: SURF detection and matching are performed on the first frame to compute a stable and accurate projective transformation matrix, and the operation is repeated at fixed time intervals, yielding a final real-time dynamic stitched video of higher accuracy.
2. The improved RANSAC algorithm rejects more mismatched points than the original RANSAC algorithm and yields inliers of higher accuracy, which helps produce a more precise homography matrix.
3. The dynamic fusion method largely eliminates the ghosting produced by moving targets in the overlap region and improves the quality of the stitched video.
Detailed description of the invention
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic diagram of stitching with a moving target present.
Specific embodiment
1. Principle of the SURF algorithm
SURF is another image-invariant feature detection algorithm that appeared after SIFT. Besides retaining SIFT's stability and effectiveness, it greatly reduces SIFT's computational complexity and substantially improves the real-time performance of feature detection and matching, which matches the demanding real-time requirements of a video stitching system; SURF is therefore used here for feature point detection and matching.
The SURF algorithm consists of five main steps:
(1) Construct the integral image.
The integral image is a key feature of the SURF algorithm. It is defined so that its value at a point x is the sum of the gray values of all pixels inside the rectangle whose diagonal corners are the origin and x, i.e.

IΣ(x, y) = Σ(i <= x) Σ(j <= y) I(i, j)

where I(x, y) is the pixel value of the image and IΣ is the integral image of I.
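The integral-image construction and the constant-time box sum it enables can be sketched in plain Python (function names are illustrative, not from the patent):

```python
def integral_image(img):
    """Summed-area table: S[y][x] = sum of img[0..y][0..x], inclusive."""
    h, w = len(img), len(img[0])
    S = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            S[y][x] = row_sum + (S[y - 1][x] if y > 0 else 0)
    return S

def box_sum(S, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0, y0)-(x1, y1) in four lookups,
    which is what makes SURF's box filters cheap at any scale."""
    total = S[y1][x1]
    if x0 > 0:
        total -= S[y1][x0 - 1]
    if y0 > 0:
        total -= S[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += S[y0 - 1][x0 - 1]
    return total
```

With this table a box filter of any size costs the same four lookups, which is why SURF can enlarge the filter instead of resampling the image.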
(2) Build the image pyramid.
SURF builds the scale space by convolving the image with box filters; images at different scales are obtained by changing the box-filter size, which builds the image pyramid.
(3) Detect extreme points with the Hessian matrix.
The response of every point is computed with the Hessian matrix. Each detected point is then compared, within a 3x3x3 cube, against the remaining 8 points of its own scale layer and the 9 points of each of the two adjacent scale layers, 26 points in all, and the extrema are retained as feature points. The Hessian matrix formula is as follows:
Det(Hs) = DxxDyy - (0.9 × Dxy)^2
where Det(Hs) is the Hessian response of the pixel, and Dxx, Dyy, Dxy are the results of convolving the image with the box filters in the x, y, and xy directions respectively.
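The response and the 26-neighbour extremum test above can be sketched as follows (illustrative names; the cube is indexed [scale][y][x]):

```python
def hessian_response(dxx, dyy, dxy):
    """SURF's approximated Hessian determinant at one pixel; the 0.9 factor
    compensates for using box filters instead of true Gaussian derivatives."""
    return dxx * dyy - (0.9 * dxy) ** 2

def is_extremum(cube):
    """cube: 3x3x3 nested list of responses indexed [scale][y][x]; the centre
    is kept as a feature point only if it beats all 26 neighbours."""
    centre = cube[1][1][1]
    return all(centre > cube[s][y][x]
               for s in range(3) for y in range(3) for x in range(3)
               if not (s == 1 and y == 1 and x == 1))
```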
(4) Determine the dominant orientation of the feature point.
Within a circular neighbourhood centred on the feature point, a 60-degree sector is swept around the circle in fixed angular steps, and at each step the Haar wavelet responses in the x and y directions are computed for every point inside the sector; the Haar wavelet size is 4s, where s is the scale of the feature point. After the whole circular region has been traversed, the direction of the longest summed vector is taken as the dominant orientation of the feature point.
(5) Generate the feature descriptor.
Taking the dominant orientation as the reference axis of a rotated image frame, a square region of side 20s centred on the feature point is selected and divided into 4x4 sub-blocks. Within each sub-block, Gaussian-weighted Haar wavelet responses are computed (weighted according to distance from the feature point), and the sums Σdx, Σdy, Σ|dx|, Σ|dy| of the responses in the x and y directions are collected, giving a 4-dimensional vector per sub-block; dx and dy denote the Haar responses in the x and y directions, and |dx|, |dy| carry their strength information. Processing each sub-block in turn finally yields a 64-dimensional vector, which serves as the unique descriptor of the feature point.
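The sub-block accumulation that yields the 64-dimensional vector can be sketched as follows, assuming the 20s-by-20s region has already been rotated into the dominant orientation and resampled to a 20x20 grid of Haar responses (the Gaussian weighting is omitted for brevity; names are illustrative):

```python
def surf_descriptor(dx, dy):
    """Build the 64-d vector: for each of the 4x4 sub-blocks of 5x5 samples,
    accumulate (sum dx, sum dy, sum |dx|, sum |dy|), then normalise."""
    desc = []
    for by in range(4):
        for bx in range(4):
            sdx = sdy = sadx = sady = 0.0
            for y in range(by * 5, by * 5 + 5):
                for x in range(bx * 5, bx * 5 + 5):
                    sdx += dx[y][x]
                    sdy += dy[y][x]
                    sadx += abs(dx[y][x])
                    sady += abs(dy[y][x])
            desc.extend([sdx, sdy, sadx, sady])
    norm = sum(v * v for v in desc) ** 0.5 or 1.0  # unit length for contrast invariance
    return [v / norm for v in desc]
```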
2. Principle of the RANSAC algorithm
RANSAC is the abbreviation of "RANdom SAmple Consensus". It estimates the parameters of a mathematical model iteratively from a set of observed data that contains outliers. It is a non-deterministic algorithm: it produces a reasonable result only with a certain probability, and the probability is raised by running more iterations. The algorithm was first proposed by Fischler and Bolles in 1981.
The basic assumptions of RANSAC are:
(1) the data consist of "inliers", i.e. data whose distribution can be explained by some set of model parameters;
(2) "outliers" are data that do not fit the model;
(3) all remaining data are noise.
Outliers arise, for example, from extreme noise values, erroneous measurement methods, or false assumptions about the data.
RANSAC also makes the following assumption: given a (usually small) set of inliers, there exists a procedure that can estimate the parameters of a model, and that model can explain or fit the inliers.
The input to RANSAC is a set of observed data, a parameterized model that can explain or fit the observations, and some confidence parameters. RANSAC reaches its goal by repeatedly selecting a random subset of the data. The selected subset is hypothesized to consist of inliers and is verified as follows:
(1) A model is fitted to the hypothesized inliers, i.e. all unknown parameters are computed from them.
(2) All other data are tested against the model obtained in (1); any point that fits the estimated model is also considered an inlier.
(3) If enough points are classified as hypothesized inliers, the estimated model is considered reasonable.
(4) The model is then re-estimated from all hypothesized inliers, because it was estimated only from the initial subset.
(5) Finally, the model is evaluated by the error of the inliers with respect to it.
This procedure is repeated a fixed number of times; each generated model is either discarded because it has too few inliers or selected because it is better than the current best model.
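The loop in steps (1)-(5) can be illustrated on the simplest useful model, a 2-D line rather than a homography; a sketch with illustrative names and thresholds:

```python
import random

def ransac_line(points, n_iters=200, threshold=0.1, min_inliers=8, seed=0):
    """Generic RANSAC: fit y = a*x + b to a random minimal sample (2 points),
    collect the consensus set of points within `threshold` of the line, keep
    the largest consensus found, then re-estimate from all of its inliers."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                      # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(a * x + b - y) < threshold]
        if len(inliers) >= min_inliers and len(inliers) > len(best_inliers):
            best_inliers = inliers
    if not best_inliers:
        return None
    # Re-estimate by least squares on the full consensus set (step (4))
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b, best_inliers
```

The stitching method below runs the same loop with a 4-pair minimal sample and a homography in place of the line.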
As shown in Fig. 1, a real-time video stitching method based on improved RANSAC and dynamic fusion has the following specific steps:
(1) Extract key-frame SURF features
The two cameras are first set at the same height with an overlap region of about 30%. However, the feature detection stage is still time-consuming relative to the frame rate: if every frame underwent detection before fusion and stitching, real-time operation would be impossible. A time interval is therefore introduced. SURF feature detection and matching are performed on the first frame to compute a stable and accurate projective transformation matrix, and this projective relation is applied to the stitching of subsequent frames. To counteract uncertainties during shooting and the accumulated error from an imprecise first-frame estimate, feature extraction, matching, and estimation of the projective transformation matrix are repeated at fixed time intervals and applied to the next stretch of frames, yielding a final real-time dynamic stitched video of higher accuracy.
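The interval scheme above reduces to a small scheduling loop; a sketch in Python, where `estimate_homography` stands in for the SURF-plus-improved-RANSAC step, the warp-and-blend step is elided, and all names are illustrative:

```python
def stitch_stream(frames, interval=30, estimate_homography=None):
    """Run feature detection/matching only on frame 0 and every `interval`-th
    frame thereafter; intermediate frames reuse the cached transformation."""
    H = None
    recomputed = 0
    stitched = []
    for idx, frame in enumerate(frames):
        if idx % interval == 0:          # key frame: refresh the projective model
            H = estimate_homography(frame)
            recomputed += 1
        stitched.append((frame, H))      # the warp-and-blend step would go here
    return stitched, recomputed
```

`interval` trades accuracy against speed: small values track accumulated error more closely, large values spend less time in the costly detection stage.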
(2) Obtain the optimal homography matrix with improved RANSAC
After the SURF feature points are extracted, the improved RANSAC algorithm purifies them and computes the optimal homography matrix H. For RANSAC, detecting and rejecting outlier data is the key to improving accuracy, so the algorithm is improved here in how outliers are detected and rejected.
The improved RANSAC algorithm sequentially selects matched pairs from the coarse matches and computes their spatial similarity; a pair is kept when its similarity exceeds the set threshold and discarded otherwise, until all matched pairs have been checked. The specific steps are as follows.
By the imaging geometry of the cameras, any correctly matched pair in two related images should preserve a similar spatial relationship. In a planar figure, fixing the unique position of a point requires at least three other non-collinear points. Suppose (P1,Q1), (P2,Q2), (P3,Q3), (P4,Q4) are four matched pairs, with Pi and Qi (i = 1, 2, 3, 4) the corresponding feature points in the two frames and no three of the Pi or Qi collinear. Let d(P1,P2), d(P1,P3), d(P1,P4) denote the distances from P1 to P2, P3, P4, and likewise d(Q1,Q2), d(Q1,Q3), d(Q1,Q4) the distances from Q1 to Q2, Q3, Q4. A spatial similarity λ is computed from the consistency of the corresponding distance ratios. When λ < δ, the match (P1,Q1) is considered too weak and is rejected as a mismatch; otherwise (P1,Q1) is accepted as a correct match, where δ is the set threshold. Every matched pair in the two video frames is traversed in this way.
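The exact formula for λ appears only as an image in the patent, so the sketch below uses one plausible instantiation: λ is the ratio of the smallest to the largest of the three distance ratios d(P1,Pi)/d(Q1,Qi), which is 1 for perfectly consistent geometry and falls toward 0 when the first pair is mismatched (both the function names and the choice of λ are assumptions):

```python
def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def spatial_similarity(P, Q):
    """One plausible spatial-similarity score for four matched pairs
    (P[i], Q[i]): compare the three ratios d(P1,Pi)/d(Q1,Qi), i = 2..4.
    Consistent matches give near-equal ratios, so min/max is close to 1."""
    ratios = [dist(P[0], P[i]) / dist(Q[0], Q[i]) for i in range(1, 4)]
    return min(ratios) / max(ratios)

def keep_pair(P, Q, delta=0.9):
    """Keep (P1, Q1) only when the similarity reaches the threshold delta."""
    return spatial_similarity(P, Q) >= delta
```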
For binocular video stitching, the transformation between the two images can be represented by the matrix

        [ m0  m1  m2 ]
    H = [ m3  m4  m5 ]
        [ m6  m7  1  ]

where (x', y') and (x, y) represent corresponding points in the two video frames, m0, m1, m3, m4 encode scale and rotation, m2 represents the horizontal displacement, m5 the vertical displacement, and m6, m7 the horizontal and vertical perspective deflection. The homography matrix therefore has 8 degrees of freedom, so at least 4 point pairs are needed to determine its parameters; that is, the projective transformation matrix can be obtained once 4 pairs of matched points are available.
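Applying the 8-degree-of-freedom matrix to a point makes the 4-pair requirement concrete: each matched pair contributes two equations in (m0, ..., m7). A minimal sketch:

```python
def apply_homography(H, x, y):
    """Map (x, y) through H = [[m0, m1, m2], [m3, m4, m5], [m6, m7, 1]];
    the projective divide by w is what distinguishes a homography from an
    affine transform."""
    m0, m1, m2 = H[0]
    m3, m4, m5 = H[1]
    m6, m7, _ = H[2]
    w = m6 * x + m7 * y + 1.0
    return (m0 * x + m1 * y + m2) / w, (m3 * x + m4 * y + m5) / w
```

In practice H is solved from the linear system built from the four pairs; libraries such as OpenCV expose this (together with RANSAC) as `findHomography`.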
Accordingly, four pairs are randomly selected from the matched pairs that survive the filter above and substituted into the transformation model to compute a homography matrix H; the model obtained in this step is then tested against all the remaining data, and any point that fits the estimated model is also considered an inlier and is marked.
The step described above is repeated N times. The iteration with the most marked inliers is extracted as the final matching point set, and the homography is recomputed from the matches in that set to obtain the final parameter estimate and the optimal homography matrix.
(3) Dynamic image fusion
In traditional image fusion, an optimal-seam search is usually performed before the fusion step to obtain an optimal or locally optimal seam. In the present work, however, that step would conflict with the subsequent fusion process, so the optimal-seam search is skipped and image fusion is performed directly.
Common image fusion methods include the maximum-value method, simple averaging, weighted averaging, multi-resolution methods, and the gradual-in/gradual-out method. The gradual-in/gradual-out method is chosen here because it runs fast and produces smooth transitions. Its formula is

f(x, y) = d · I1(x, y) + (1 - d) · I2(x, y)

where I1(x, y) denotes a pixel of the left image, I2(x, y) a pixel of the right image, f(x, y) the corresponding pixel of the fused image, and d is the ratio of the distance from the current pixel to the right edge of the overlap region to the overlap width. The formula is applied separately in the R, G, and B channels. Experiments show that it normally yields good fusion, but when a moving target crosses the overlap region, ghosting still appears.
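The gradual-in/gradual-out blend over one overlap strip, sketched per channel in plain Python (nested lists stand in for an image array):

```python
def fade_blend(left_overlap, right_overlap):
    """f = d*I1 + (1-d)*I2, with d falling linearly from 1 at the left edge
    of the overlap (pure left image) to 0 at the right edge (pure right)."""
    height, width = len(left_overlap), len(left_overlap[0])
    fused = []
    for y in range(height):
        row = []
        for x in range(width):
            d = 1.0 - x / (width - 1)    # normalised distance to the right edge
            row.append(d * left_overlap[y][x] + (1.0 - d) * right_overlap[y][x])
        fused.append(row)
    return fused
```

Run once per R, G, B channel; with a moving target in the overlap this still double-exposes the target, which is what the dynamic fusion step addresses.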
To solve the moving-target problem in the overlap region, this work proposes a dynamic fusion algorithm. First, the moving target is extracted with a background-elimination algorithm and its position is recorded. Second, within the overlap region, pixels from one of the two images are preferred over the other, which reduces ghosting, and a buffer zone is placed at the target's left and right edges to avoid the brightness jump this method would otherwise cause. Because the target region still differs noticeably from its surroundings, it is further blended with the dynamic fusion method, which largely eliminates the ghosting produced by the moving target in the overlap region. The moving-target case and the improved blending algorithm are as follows:
where w is the width from the left edge of the overlap region to the left edge of the target, and i is the column of the current pixel within the overlap region, taking values 0 to w-1.
Because the color components of regions d1 and d2 differ from those of the moving target, after the above steps regions d1 and d2 must be fused further. The fusion formula is as follows:
where j indexes the current pixel within the overlap region; when fusing region d1 it takes values 0 to d1-1, and when fusing region d2 it runs from d2-1 down to 0.
This method eliminates the ghosting phenomenon effectively and quickly.
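A minimal sketch of the dynamic-fusion idea, under loudly stated assumptions: the patent does not name its background-elimination algorithm, so plain frame differencing stands in for it; the buffer zone and the d1/d2 refinement are omitted; inside the detected moving-target pixels one image is used exclusively, and a 50/50 average stands in for the distance-weighted blend elsewhere:

```python
def moving_mask(prev_frame, frame, thresh=25):
    """Stand-in background elimination: flag pixels that changed noticeably
    between consecutive frames as belonging to the moving target."""
    return [[abs(a - b) > thresh for a, b in zip(r1, r2)]
            for r1, r2 in zip(prev_frame, frame)]

def dynamic_blend(left, right, mask):
    """Where the mask marks the moving target, copy from one image only
    (here the left), which removes the double image; elsewhere blend."""
    return [[l if m else 0.5 * l + 0.5 * r
             for l, r, m in zip(lrow, rrow, mrow)]
            for lrow, rrow, mrow in zip(left, right, mask)]
```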

Claims (3)

1. A real-time video stitching method based on improved RANSAC and dynamic fusion, characterized in that the specific steps are as follows:
(1) Extract key-frame SURF features: the two cameras are first set at the same height with an overlap region of about 30%. SURF feature detection and matching are performed on the first frame to compute a stable and accurate projective transformation matrix, which is then applied to the stitching of subsequent frames. At a fixed time interval, feature extraction, matching, and estimation of the projective transformation matrix are repeated, and the new matrix is applied to the following stretch of frames, yielding a final real-time dynamic stitched video of higher accuracy;
(2) Obtain the optimal homography matrix with improved RANSAC: after the SURF feature points are extracted, the improved RANSAC algorithm purifies them and computes the optimal homography matrix H, as follows: sequentially select four pairs of coarsely matched points, no three of which are collinear, and compute their spatial similarity; keep a pair when its similarity exceeds the set threshold, otherwise discard it, until all matched pairs have been checked. From the surviving pairs, randomly select four, substitute them into the transformation model to compute a homography matrix H, then test all the remaining data against this model; any point that fits the estimated model is also considered an inlier and is marked. Repeat this step N times, extract the iteration with the most marked inliers as the final matching point set, and recompute the homography from the matches in that set to obtain the final parameter estimate and the optimal homography matrix H;
(3) Dynamic image fusion: first extract the moving target with a background-elimination algorithm and record its position. Then, within the overlap region, prefer pixels from one of the two images over the other, and place a buffer zone at the target's left and right edges to avoid the brightness jump this method would otherwise cause. Because the target region still differs noticeably from its surroundings, it is further blended with the dynamic fusion method.
2. The real-time video stitching method based on improved RANSAC and dynamic fusion according to claim 1, characterized in that the spatial similarity is computed as follows: let (P1,Q1), (P2,Q2), (P3,Q3), (P4,Q4) be four matched point pairs, where Pi and Qi (i = 1, 2, 3, 4) denote corresponding feature points in the two frames and no three of the Pi or Qi are collinear; let d(P1,P2), d(P1,P3), d(P1,P4) denote the distances from P1 to P2, P3, P4, and likewise d(Q1,Q2), d(Q1,Q3), d(Q1,Q4) the distances from Q1 to Q2, Q3, Q4; a spatial similarity λ is computed from the consistency of the corresponding distance ratios; when λ < δ, the match (P1,Q1) is judged too weak and is rejected as a mismatch, and otherwise (P1,Q1) is accepted as a correct match, where δ is the set threshold; every matched pair in the two video frames is traversed in this way.
3. The real-time video stitching method based on improved RANSAC and dynamic fusion according to claim 1, characterized in that:
the dynamic fusion method is as follows: I1(x, y) denotes a pixel of the left image, I2(x, y) a pixel of the right image, and f(x, y) the corresponding pixel of the fused image; w is the width from the left edge of the overlap region to the left edge of the target, and i is the column of the current pixel within the overlap region, taking values 0 to w-1.
Because the color components of regions d1 and d2 differ from those of the moving target, after the above steps regions d1 and d2 must be fused further, with the following fusion formula:
where j indexes the current pixel within the overlap region; when fusing region d1 it takes values 0 to d1-1, and when fusing region d2 it runs from d2-1 down to 0.
CN201811238125.2A 2018-10-23 2018-10-23 Real-time video stitching method based on improved RANSAC and dynamic fusion Pending CN109658366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811238125.2A CN109658366A (en) 2018-10-23 2018-10-23 Real-time video stitching method based on improved RANSAC and dynamic fusion

Publications (1)

Publication Number Publication Date
CN109658366A true CN109658366A (en) 2019-04-19

Family

ID=66110703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811238125.2A Pending CN109658366A (en) 2018-10-23 2018-10-23 Real-time video stitching method based on improved RANSAC and dynamic fusion

Country Status (1)

Country Link
CN (1) CN109658366A (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHENG, Deqiang et al.: "Video stitching algorithm based on improved random sample consensus algorithm", Industry and Mine Automation *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120012A (en) * 2019-05-13 2019-08-13 广西师范大学 Video stitching method for synchronous key frame extraction based on binocular camera
CN110120012B (en) * 2019-05-13 2022-07-08 广西师范大学 Video stitching method for synchronous key frame extraction based on binocular camera
CN111382782A (en) * 2020-02-23 2020-07-07 华为技术有限公司 Method and device for training classifier
CN111382782B (en) * 2020-02-23 2024-04-26 华为技术有限公司 Method and device for training classifier
CN111461013A (en) * 2020-04-01 2020-07-28 深圳市科卫泰实业发展有限公司 Real-time fire scene situation sensing method based on unmanned aerial vehicle
CN111461013B (en) * 2020-04-01 2023-11-03 深圳市科卫泰实业发展有限公司 Unmanned aerial vehicle-based real-time fire scene situation awareness method
CN113327198A (en) * 2021-06-04 2021-08-31 武汉卓目科技有限公司 Remote binocular video splicing method and system
CN116258769A (en) * 2023-05-06 2023-06-13 亿咖通(湖北)技术有限公司 Positioning verification method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN109658366A (en) Real-time video stitching method based on improved RANSAC and dynamic fusion
CN105245841B (en) Panoramic video monitoring system based on CUDA
CN107533763B (en) Image processing apparatus, image processing method, and program
CN105957007B (en) Image stitching method based on feature point plane similarity
JP3539788B2 (en) Image matching method
CN111738314B (en) Deep learning method of multi-modal image visibility detection model based on shallow fusion
CN104599258B (en) Image stitching method based on anisotropic feature descriptors
CN108876749A (en) Robust lens distortion calibration method
CN104685513B (en) Feature-based high-resolution estimation from low-resolution images captured using an array source
CN103679636B (en) Fast image stitching method based on combined point and line features
CN104134200B (en) Mobile scene image splicing method based on improved weighted fusion
CN107424181A (en) Improved rapid key frame extraction method for image stitching
CN110992263B (en) Image stitching method and system
CN104933434A (en) Image matching method combining LBP feature extraction and SURF feature extraction
CN107274336A (en) Panoramic image stitching method for vehicle environments
CN111445389A (en) Wide-view-angle rapid splicing method for high-resolution images
CN107945111A (en) Image stitching method based on SURF feature extraction combined with CS-LBP descriptors
CN109544635B (en) Camera automatic calibration method based on enumeration heuristic
CN103955888A (en) High-definition video image mosaic method and device based on SIFT
CN111768332A (en) Stitching method for vehicle-mounted all-around real-time 3D panoramic images and image acquisition device
CN109711321A (en) Structure-adaptive viewpoint-invariant line feature matching method for wide-baseline images
CN107133986A (en) Camera calibration method based on a two-dimensional calibration object
CN111783834B (en) Heterogeneous image matching method based on joint graph spectrum feature analysis
CN113887624A (en) Improved feature stereo matching method based on binocular vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190419