CN106210447B - Video image stabilization method based on background feature point matching - Google Patents

Video image stabilization method based on background feature point matching

Info

Publication number
CN106210447B
CN106210447B · CN201610578037.1A
Authority
CN
China
Prior art keywords: point, matching, background, feature, point set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610578037.1A
Other languages
Chinese (zh)
Other versions
CN106210447A (en)
Inventor
吉淑娇 (Ji Shu-jiao)
雷艳敏 (Lei Yan-min)
冯志彬 (Feng Zhi-bin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changchun University
Original Assignee
Changchun University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changchun University filed Critical Changchun University
Priority to CN201610578037.1A
Publication of CN106210447A
Application granted
Publication of CN106210447B
Expired - Fee Related
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681 Motion detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/144 Movement detection
    • H04N5/145 Movement estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a video image stabilization method based on background feature point matching, carried out according to the following steps. Step 1: track feature points based on KLT, designate a background feature point set and a foreground feature point set through the MSAC algorithm, arbitrarily take matching point pairs from the background feature point set, and calculate the motion vector using SVD. Step 2: update the feature points, assigning the feature points of the next frame to the background and foreground feature point sets according to the set thresholds, using the Euclidean distance. Step 3: calculate a new transformation matrix using the background feature points. Step 4: filter the matrix group formed by the transformation matrices of step 3. Comparing the results of the algorithm with a reference stabilization algorithm in terms of mean DITF shows that the video stabilized by the invention is smoother and its undefined region is significantly reduced, which is more beneficial to human visual perception.

Description

Video image stabilization method based on background feature point matching
Technical Field
The invention belongs to the field of video images, and particularly relates to a video image stabilization method based on background feature point matching.
Background
Video stabilization has been an active research topic at home and abroad in recent years, especially for video captured by a moving camera in a changing scene. The difficulty of video stabilization against a moving background is how to effectively eliminate the influence of foreground moving objects. In feature-point-based algorithms, feature points may fall on moving foreground objects, which introduces errors into the global motion estimation; many publications purify the feature points with the RANSAC algorithm, but the results are poor, and scholars at home and abroad have made various attempts.
Y. G. Ryu and M. J. Chung proposed an algorithm (Robust online digital image stabilization based on point-feature trajectory without accumulative global motion estimation [J]. IEEE Signal Processing Letters, 2012, 19(4): 223-226) that directly estimates the original and smoothed feature point trajectories and corrects the motion loss, realizing online video stabilization; the feature point trajectories are obtained by a feature tracking algorithm, so the method is robust and does not accumulate motion estimation errors.
Ji Shu-jiao, Zhu Ming and Lei Yan-min proposed image stabilization with an improved motion vector estimation method (Video stabilization with improved motion vector estimation [J]. Opt. Precision Eng., 2015, 23(5): 254-261).
Disclosure of Invention
1. Object of the invention.
The invention provides a video image stabilization method based on background feature point matching, aiming at the problem that the accuracy of the motion vector is severely affected when feature points fall on foreground moving objects, i.e., how to effectively eliminate the influence caused by foreground moving objects.
2. Technical solution adopted by the invention.
The video image stabilization method based on background feature point matching is carried out according to the following steps:
step 1, tracking feature points based on KLT, respectively appointing a background feature point set and a foreground feature point set through an MSAC algorithm, randomly selecting matching point pairs in the background feature point set, and calculating a motion vector by using SVD;
step 2, updating the feature points, namely updating the background feature point set and the foreground feature point set of the feature points of the next frame according to the set background and foreground thresholds, using the Euclidean distance;
step 3, calculating a new transformation matrix by using the background feature points;
step 4, filtering the matrix group formed by the transformation matrices in step 3.
in a further specific embodiment, the KLT feature point tracking in step 1: the characteristic point extraction realizes the positioning of a characteristic window by checking the characteristic value of the n multiplied by n symmetric gradient matrix delta and setting a threshold value to screen the characteristic point from the reference frame.
In a further specific embodiment, the MSAC algorithm in step 1 of the video image stabilization method based on background feature point matching includes the following specific steps:
(1) using the matching point pair set obtained by the KLT tracking algorithm, arbitrarily selecting n pairs of feature points, and calculating the motion estimation parameters by using a 2n-parameter model;
(2) calculating, for the remaining feature points, the distance between the corresponding matching point obtained by affine transformation with the motion estimation parameter matrix and the feature point obtained by KLT matching;
(3) if the distance is smaller than a set threshold, the candidate feature point is an inlier, and a new matching point set is formed;
(4) repeating steps (1)-(3) a fixed number of times;
(5) determining the final affine matrix from the matching point pairs in the new matching point set to obtain the motion vector.
In a further specific embodiment, the specific steps of updating the feature points in step 2 of the video image stabilization method based on the background feature point matching are as follows:
(1) the image is divided into a foreground area and a background area which are not overlapped with each other;
(2) updating by using the motion vector obtained by the background feature point;
(3) updating all the feature points according to the distances between the tracked feature points and the matching points obtained from the affine matrix, to obtain new background feature points and foreground feature points.
In a further specific embodiment, in step 4 of the video image stabilization method based on background feature point matching, Kalman filtering is performed on a matrix group formed by the transformation matrices.
In a further specific embodiment, the video image stabilization method based on the background feature point matching further includes a step 5 of motion compensation.
3. Beneficial effects of the invention.
The invention provides a method that classifies the feature points: KLT tracks the feature points, the distance between each tracked feature point and the matching point obtained by motion estimation is calculated, the foreground and background feature point sets are continuously updated by the MSAC algorithm, and finally only the feature points on the background are used for global motion estimation, thereby stabilizing the image. Comparing the results of the algorithm with the mean DITF of the reference image stabilization algorithm shows that the video is smoother after stabilization, the undefined region of the algorithm is obviously reduced, and the result is more beneficial to human visual perception.
Drawings
FIG. 1 is a self-contained test video from the MATLAB library.
Fig. 2 shows videos in the united states air force video database VIRAT.
Fig. 3 is a schematic diagram of feature point extraction by using KLT, and the right image is a stable frame after motion compensation.
FIG. 4 is a diagram illustrating background feature point matching according to the present invention, and the right diagram shows a motion-compensated stable frame.
FIG. 5 is a schematic diagram showing comparison of image stabilization results of self-contained test videos in MATLAB library.
where the first row shows the original video sequence, the second row the image stabilization result of the feature-point-based MATLAB algorithm, and the third row the image stabilization result of the algorithm of the invention.
Fig. 6 is a comparison diagram of video image stabilization results in the united states air force video database VIRAT.
where the first row shows the original video sequence, the second row the image stabilization result of the feature-point-based MATLAB algorithm, and the third row the image stabilization result of the algorithm of the invention.
FIG. 7 is a flow chart of an embodiment of the present invention.
Detailed Description
Examples
1.1 KLT feature extraction
The KLT algorithm is a typical feature point tracking method; tracking mainly exploits the continuity information between frames. The KLT feature point extraction algorithm locates the feature window by examining the eigenvalues of a 2 × 2 symmetric gradient matrix δ, given by equation (1):

δ = Σ_(x,y)∈w [ Dx²  DxDy ; DxDy  Dy² ]    (1)

where Dx and Dy are the first-order partial derivatives of the image in the x and y directions, and w is a selected small region in which feature points are expected to be found.
Feature points are determined from the two eigenvalues λ1 and λ2 of δ; writing δ = [ a  b ; b  c ], they are given by equation (2):

λ1,2 = ( (a + c) ± √((a − c)² + 4b²) ) / 2    (2)

If λ1 and λ2 are both small, the image window has a relatively constant gray-level distribution; if one is small and the other large, the window contains a unidirectional texture pattern; if λ1 and λ2 are both large, the window contains a corner, salt-and-pepper texture, or another pattern that can be tracked reliably [7]. Accordingly, a threshold T is set, and λ1 and λ2 must satisfy:

min(λ1, λ2) > T    (3)

Normally, the threshold T is set to:

T = r·λmax, 0 < r < 1    (4)

where λmax is the largest eigenvalue of δ. With this method, good feature points can be selected from the reference frame.
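As an illustration, the selection rule of equations (3) and (4) is exactly what the Shi-Tomasi detector in OpenCV implements, with qualityLevel playing the role of r. The following minimal Python sketch is illustrative only; the file name and all parameter values are assumptions, not values taken from the patent:

```python
import cv2

# Minimal sketch of the rule min(lambda1, lambda2) > T with T = r * lambda_max
# (equations (3)-(4)). goodFeaturesToTrack with the default (non-Harris)
# detector keeps windows whose minimal eigenvalue exceeds
# qualityLevel * (best minimal eigenvalue in the image).
ref = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # illustrative file name

pts = cv2.goodFeaturesToTrack(
    ref,
    maxCorners=200,     # keep at most 200 feature points
    qualityLevel=0.05,  # plays the role of r in T = r * lambda_max
    minDistance=8,      # spread the points out; value is an assumption
    blockSize=5,        # window w over which the gradient matrix is summed
)
pts = pts.reshape(-1, 2)  # N x 2 array of (x, y) feature locations
```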
1.2 KLT feature matching
The KLT matching algorithm uses the sum of squared gray-level differences (SSD) as the matching criterion for feature points and matches them with an optimal-estimation strategy; the algorithm is relatively simple, needs no search, takes little time, and can effectively improve the real-time performance of electronic image stabilization.
For a gray-level image, assume a feature window w containing texture information, and use a translation model to represent the change between the pixels in the window. Let the image frame at time t be I(x, y, t) and the frame at time t + τ be I(x, y, t + τ); corresponding positions satisfy:

I(x, y, t + τ) = I(x − Δx, y − Δy, t)    (5)

that is, each pixel in I(x, y, t + τ) can be obtained by shifting the pixels of the feature window w in I(x, y, t) by d = (Δx, Δy).
The final goal of the KLT algorithm is to find the d that minimizes the SSD. Denoting the SSD value by ε, it is obtained from equation (6):

ε = Σ_(x,y)∈w [ I(x + dx, y + dy, t + τ) − I(x, y, t) ]²    (6)

When the displacement vector is small, I(x + dx, y + dy, t + τ) can be expanded as a first-order Taylor series:

I(x + dx, y + dy, t + τ) ≈ I(x, y, t) + (∂I/∂x)dx + (∂I/∂y)dy + (∂I/∂t)τ    (7)

or, in matrix form:

I(x + dx, y + dy, t + τ) ≈ I(x, y, t) + gᵀd + It τ    (8)

where g = [ ∂I/∂x, ∂I/∂y ]ᵀ is the spatial gradient and It = ∂I/∂t is the temporal derivative. Equation (6) is therefore equivalent to:

ε = Σ_(x,y)∈w ( gᵀd + It τ )²    (9)

Substituting (8), discarding the higher-order terms, differentiating with respect to d, and simplifying finally gives:

Z d = e    (10)

where

Z = Σ_(x,y)∈w g gᵀ,   e = −τ Σ_(x,y)∈w It g.

Solving equation (10) yields the offset d.
By utilizing the KLT matching algorithm, the searching range in the matching process can be reduced, the matching time is shortened, and the matching precision is higher.
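As a concrete illustration, the pyramidal Lucas-Kanade tracker in OpenCV iteratively solves Z d = e (equation (10)) inside each feature window. The sketch below shows how the matching point pairs used in the rest of the method can be obtained; the frame file names and parameter values are assumptions:

```python
import cv2
import numpy as np

# KLT matching between two consecutive frames (illustrative file names).
prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
p0 = cv2.goodFeaturesToTrack(prev_gray, 200, 0.05, 8)  # N x 1 x 2, float32

# Pyramidal LK solves Z d = e per window; maxLevel handles larger motion.
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev_gray, next_gray, p0, None, winSize=(15, 15), maxLevel=3)

ok = status.ravel() == 1                     # keep successfully tracked points
src = p0.reshape(-1, 2)[ok]                  # P_i(j)
dst = p1.reshape(-1, 2)[ok]                  # P_{i+1}(j) = Phi(P_i(j))
```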
The invention adopts KLT to extract feature points and combines it with MSAC to extract the background feature points. First, some notation needed for extracting background feature points is defined. The feature point set extracted by the KLT algorithm in frame i is defined as P_i = { P_i(j) = (x_j, y_j), j = 1, …, N }. P_i^B and P_i^F denote the background and foreground feature point subsets of the feature point set, respectively, related by P_i = P_i^B ∪ P_i^F. Besides the feature points, the KLT algorithm simultaneously yields matched feature point pairs, defined as the matching point pair set:

C_i = { c_i(j) = (P_i(j), P_{i+1}(j)), j = 1, …, N }    (11)

where P_{i+1}(j) = Φ(P_i(j)) is the matching point in frame i + 1 obtained by the tracking algorithm for the feature point P_i(j) of frame i. From the matching point pairs, a motion estimation parameter matrix M can be obtained.
During tracking, some feature points may dwindle after several frames or even disappear because of the motion of the foreground object or the camera; such feature points are eliminated from P_i and P_{i+1}. Thus, as long as C_i contains enough feature matching pairs, an accurate global motion estimate can be obtained. If the scene changes severely, corresponding matching points cannot be found for the feature points of the reference frame; that is, when the number of feature points in frame i falls below a preset threshold, the matching point pairs decrease accordingly. In that case a new reference frame is selected, feature points are re-extracted, and the foreground and background feature points are redefined.
2 Background feature point acquisition
2.1 MSAC
Many publications adopt random sample consensus (RANSAC) to purify the feature points, but when the foreground object occupies a large part of the image the method performs poorly. The invention uses MSAC (M-estimator sample consensus) combined with the Euclidean distance to acquire the background feature points and compute an accurate motion vector. MSAC is an optimized variant of the RANSAC algorithm. RANSAC is a typical matching-pair purification algorithm based on feature matching and removes mismatched point pairs well; its cost function is computed as:

C = Σ_{j=1…n} ρ(d_j²),  with ρ(d²) = 0 if d² ≤ T² and ρ(d²) = constant otherwise    (12)

where n is the number of matching points in the initial matching set and σ is the probability that the current matching point is an inlier; in practice σ is set to a conservative value according to the accuracy of the matching method, and together with n it determines the number of random samples drawn.
MSAC optimizes the performance of RANSAC by modifying its cost function [11]. MSAC adopts a bounded M-estimation strategy: outliers are still all given the same value, while inliers are scored according to how well they fit the data instead of uniformly receiving 0 as in RANSAC. The cost function is:

C = Σ_{j=1…n} ρ(d_j²),  with ρ(d²) = d² if d² ≤ T² and ρ(d²) = T² otherwise    (13)

Comparing the MSAC and RANSAC algorithms, MSAC requires no additional computation: its cost function simply saturates each residual at the threshold, which, compared with least squares, suppresses the influence of extreme outliers.
Taking the 6-parameter affine transformation as an example, the steps of the MSAC algorithm are:
(1) from the matching point pair set C_i = { c_i(j) = (P_i(j), P_{i+1}(j)), j = 1, …, N } obtained by the KLT tracking algorithm, randomly select 3 pairs of feature points and calculate the parameters of the motion estimation matrix M with the 6-parameter model;
(2) for the remaining K − 3 feature points, calculate the distance d(P_i(j)) between the corresponding matching point obtained by affine transformation with M and the feature point obtained by KLT matching:

d(P_i(j)) = || T_M(P_i(j)) − Φ(P_i(j)) ||    (16)

where T_M(·) applies the geometric transformation matrix M obtained in step (1), Φ(P_i(j)) is the matching point obtained by KLT tracking, and ||·|| is the L2 norm;
(3) if the distance is smaller than a set threshold, the candidate feature point is an inlier and joins a new matching point set C_i′;
(4) repeat steps (1)-(3) a fixed number of times;
(5) use the matching point pairs of the new matching point set C_i′ to determine the final affine matrix.
2.2 Feature point update
A W × H image I(x, y) is divided into two non-overlapping regions, the foreground region R_F and the background region R_B, with I = R_F ∪ R_B. R_F is the set of pixels satisfying the following condition:

R_F = { (x, y) | αW ≤ x ≤ (1 − α)W, αH ≤ y ≤ (1 − α)H }    (14)

where 0 < α < 0.5 determines the sizes of the foreground and background regions.
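Under this central-window reading of equation (14), the first-frame split of the KLT feature points can be sketched as follows (the function name and default α are illustrative; α = 0.2 is the value used later in the algorithm summary):

```python
import numpy as np

def split_by_region(pts, W, H, alpha=0.2):
    """First-frame split per equation (14): points inside the central window
    are foreground candidates, the rest are background. pts is N x 2."""
    x, y = pts[:, 0], pts[:, 1]
    center = ((alpha * W <= x) & (x <= (1 - alpha) * W) &
              (alpha * H <= y) & (y <= (1 - alpha) * H))
    return pts[~center], pts[center]   # (P_B background, P_F foreground)
```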
Table 1 Classification of feature points in the next frame

Current state of P_i(j)      | Distance test        | State in frame i + 1
P_i(j) ∈ P_i^B (background)  | d(P_i(j)) ≤ τ1       | background
P_i(j) ∈ P_i^B (background)  | d(P_i(j)) > τ1       | foreground
P_i(j) ∈ P_i^F (foreground)  | d(P_i(j)) ≤ τ2       | background
P_i(j) ∈ P_i^F (foreground)  | d(P_i(j)) > τ2       | foreground
Assume that the feature points in the background region R_B of the first frame are the background feature points, since the foreground object is usually at the center of the image. In the remaining frames the background region changes continuously with the motion of the foreground object, so the background feature points must be updated continuously. Therefore, to obtain the background feature points of the following frames, the motion vector obtained from the background feature points only is used as the update strategy: once the distance d(P_i(j)) between each tracked feature point and the matching point obtained from the transformation parameters has been computed, all feature points enter the update phase and are reassigned to the foreground and background sets. Specifically, in frame i + 1, points with a small distance value are classified as background feature points P_{i+1}^B, and points with a large distance value as foreground feature points P_{i+1}^F.
The classification of a feature point as foreground or background in the next frame is shown in Table 1. In the update, two different thresholds τ1 and τ2 are adopted to update the state of the feature points in the next frame and determine the classification result. Because the state of a feature point rarely changes drastically from the current frame to the next, the distance threshold is chosen according to the current state: for background feature points the smaller value τ1 is selected, and for foreground feature points the larger value τ2. Thus, if a feature point is classified accurately in the current frame, most feature points keep their state in the next frame. In summary, the state classification is expressed by equation (18):

P_{i+1}(j) ∈ P_{i+1}^B if d(P_i(j)) ≤ τ, otherwise P_{i+1}(j) ∈ P_{i+1}^F, where τ = τ1 if P_i(j) ∈ P_i^B and τ = τ2 if P_i(j) ∈ P_i^F    (18)
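A minimal sketch of this state-dependent update rule (Table 1 / equation (18)) is given below; the threshold values are assumptions:

```python
import numpy as np

def update_labels(is_background, d, tau1=1.5, tau2=4.0):
    """is_background: boolean mask of current states; d: distances d(P_i(j)).
    Returns the next-frame mask: True = background, False = foreground."""
    tau = np.where(is_background, tau1, tau2)   # tau1 for P_B, tau2 for P_F
    return d <= tau
```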
3 Summary of the algorithm
Following the description in the sections above, the algorithm of the invention is summarized as follows:
Step 1: extract the reference frame feature points P_i with the KLT operator; according to equation (14), let α = 0.2 and designate the background feature point set P_i^B and the foreground feature point set P_i^F.
Step 2: obtain the matching point pair set C_i by KLT matching; arbitrarily take 4 pairs of matching points from P_i^B and calculate the motion vector M using SVD (singular value decomposition).
Step 3: update the feature points: compute the Euclidean distance d with equation (16) and, with the set thresholds τ1 and τ2, update the feature points of the next frame according to equation (18).
Step 4: use the background feature points P_{i+1}^B to compute a new, accurate transformation matrix M′.
Step 5: perform Kalman filtering on the matrix group formed by the transformation matrices.
Step 6: motion compensation.
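Two pieces that this summary leaves implicit are the SVD motion estimate of step 2 and the Kalman filtering of step 5. The sketch below shows one standard way to realize both, assuming a rigid (rotation plus translation) model and illustrative noise parameters; whether the patent uses exactly this Procrustes-style decomposition is an assumption:

```python
import numpy as np

def motion_by_svd(src, dst):
    """Rigid motion (R, t) between matched background points via SVD
    (Procrustes); src, dst are K x 2 arrays with K >= 2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

class ScalarKalman:
    """1-D constant-value Kalman filter applied independently to each
    transform parameter (step 5); q, r are assumed noise variances."""
    def __init__(self, q=1e-3, r=0.25):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r

    def step(self, z):
        self.p += self.q                 # predict
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct with measurement z
        self.p *= 1.0 - k
        return self.x

# Usage: smooth one parameter (e.g. translation x) of the M' sequence;
# motion compensation then warps each frame by (smoothed - raw).
kf = ScalarKalman()
raw_tx = [0.8, 1.1, -0.4, 0.9, 1.3]      # illustrative raw parameters
smooth_tx = [kf.step(z) for z in raw_tx]
```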
4 Results of the experiment
The experiments were run on a computer with an Intel Pentium processor (2.90 GHz CPU) and 4 GB of memory. One test video comes with the MATLAB library (162 frames of 320 × 240 pixels); the other, from the U.S. Air Force video database VIRAT, has 720 × 480 pixels. Both videos contain foreground moving vehicles against complex backgrounds, as shown in Figs. 1 and 2.
Experiment one: background feature point extraction
Fig. 3 shows feature points extracted with KLT but without MSAC filtering of background feature points; many feature points are distributed on the vehicle body, which inevitably affects the accuracy of the motion vector calculation.
The right image of Fig. 3 shows the stabilized frame after motion compensation; although the frames before and after stabilization can be matched completely, the connecting lines of the matched point pairs are slightly inclined, indicating that the two frames still contain a motion component. Fig. 4 shows that with the algorithm of the invention, the feature points on the vehicle body are essentially filtered out. The right image of Fig. 4 is the stabilized frame after motion compensation; it now matches the original video frame completely, and the connecting lines of the matched point pairs are straight, indicating a good stabilization effect.
Experiment two: image stabilization results
The invention compares corresponding frames of the video before and after stabilization, and compares the algorithm of the invention with the feature-point-based image stabilization algorithm bundled with MATLAB (method 1 for short); the effect is shown in Figs. 5 and 6. With method 1, the undefined region of the stabilized sequence is large, and only gray-level images are processed.
Experiment three: effect evaluation
The common evaluation method is PSNR; on this basis, DITF is defined as the absolute difference between the PSNR values of adjacent frames after stabilization. It is suitable for evaluating video stabilization results against a dynamic background. The DITF formula is given in equation (19); the smaller the DITF, the better the stabilization effect.
DITF(i) = | PSNR(i+1) − PSNR(i) |    (19)
Comparing the mean DITF of the MATLAB feature-point-based algorithm (method 1) with that of the proposed method, as shown in Table 2, the proposed method attains the smallest DITF value, and the video after stabilization is smoother.
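For reference, the inter-frame PSNR and the mean DITF of equation (19) can be computed as in the following sketch, assuming the stabilized frames are 8-bit grayscale NumPy arrays:

```python
import numpy as np

def psnr(a, b):
    # PSNR between two 8-bit frames.
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

def mean_ditf(frames):
    # PSNR(i) of each adjacent pair, then DITF(i) = |PSNR(i+1) - PSNR(i)|.
    p = [psnr(frames[i], frames[i + 1]) for i in range(len(frames) - 1)]
    return float(np.mean(np.abs(np.diff(p))))
```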
Table 2 Comparison of mean DITF

Claims (2)

1. A video image stabilization method based on background feature point matching is characterized by comprising the following steps:
step 1, tracking feature points based on KLT, respectively designating a background feature point set and a foreground feature point set through the MSAC algorithm, randomly selecting matching point pairs in the background feature point set, and calculating a motion vector by using SVD;
the MSAC algorithm comprises the following specific steps:
(1) using the matching point pair set obtained by the KLT tracking algorithm, selecting n pairs of feature points, and calculating the motion estimation parameters by using a 2n-parameter model;
the feature point extraction locates the feature window by examining the eigenvalues of the n × n symmetric gradient matrix δ and screens the feature points from the reference frame with a set threshold;
(2) calculating, for the remaining feature points, the distance between the corresponding matching point obtained by affine transformation with the motion estimation parameter matrix and the feature point obtained by KLT matching;
(3) if the distance is smaller than a set threshold, the candidate feature point is an inlier, and a new matching point set is formed;
(4) repeating steps (1) - (3) for a fixed number of times;
(5) determining the final affine matrix by using the matching point pairs in the new matching point set, to obtain the motion vector;
step 2, updating the feature points, namely using the Euclidean distance to update, according to the set background and foreground feature point set thresholds, the background feature point set and the foreground feature point set of the feature points of the next frame, which specifically comprises:
(1) the image is divided into a foreground area and a background area which are not overlapped with each other;
(2) updating by using the motion vector obtained by the background feature point;
(3) updating all the feature points according to the distances between the tracked feature points and the matching point pairs obtained from the affine matrix, to obtain new background feature points and foreground feature points;
step 3, calculating a new transformation matrix by using the background feature points;
step 4, filtering the matrix group formed by the transformation matrices in step 3;
and 5, motion compensation.
2. The video image stabilization method based on background feature point matching according to claim 1, characterized in that: in step 4, Kalman filtering is performed on the matrix group formed by the transformation matrices.
CN201610578037.1A 2016-09-09 2016-09-09 Video image stabilization method based on background feature point matching Expired - Fee Related CN106210447B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610578037.1A CN106210447B (en) 2016-09-09 2016-09-09 Video image stabilization method based on background feature point matching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610578037.1A CN106210447B (en) 2016-09-09 2016-09-09 Video image stabilization method based on background feature point matching

Publications (2)

Publication Number Publication Date
CN106210447A CN106210447A (en) 2016-12-07
CN106210447B (en) 2019-05-14

Family

ID=57491329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610578037.1A Expired - Fee Related CN106210447B (en) 2016-09-09 2016-09-09 Video image stabilization method based on background feature point matching

Country Status (1)

Country Link
CN (1) CN106210447B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107968916A (en) * 2017-12-04 2018-04-27 国网山东省电力公司电力科学研究院 A fast video digital image stabilization method suitable for non-fixed scenes
CN109102013B (en) * 2018-08-01 2022-03-15 重庆大学 Improved FREAK characteristic point matching image stabilization method suitable for tunnel environment characteristics
CN110047091B (en) * 2019-03-14 2022-09-06 河海大学 Image stabilization method based on camera track estimation and feature block matching
CN110046555A (en) * 2019-03-26 2019-07-23 合肥工业大学 Endoscopic system video image stabilization method and device
CN110401796B (en) * 2019-07-05 2020-09-29 浙江大华技术股份有限公司 Jitter compensation method and device of image acquisition device
CN110796010B (en) * 2019-09-29 2023-06-06 湖北工业大学 Video image stabilizing method combining optical flow method and Kalman filtering
CN114827473B (en) * 2022-04-29 2024-02-09 北京达佳互联信息技术有限公司 Video processing method and device
CN114842013B (en) * 2022-07-04 2022-09-02 南通艾果纺织品有限公司 Textile fiber strength detection method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102231792A (en) * 2011-06-29 2011-11-02 南京大学 Electronic image stabilization method based on feature matching
CN103685866A (en) * 2012-09-05 2014-03-26 杭州海康威视数字技术股份有限公司 Video image stabilization method and device
CN103426182A (en) * 2013-07-09 2013-12-04 西安电子科技大学 Electronic image stabilization method based on visual attention mechanism

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Electronic image stabilization technology based on feature point matching; Ji Shu-jiao et al.; Chinese Optics; 2013-12-31; Vol. 6, No. 6; full text
Robust greedy estimation algorithm for the fundamental matrix; Xiang Chang-bo et al.; Journal of Computer-Aided Design & Computer Graphics; 2007-05-31; Vol. 19, No. 5; p. 2

Also Published As

Publication number Publication date
CN106210447A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106210447B (en) Video image stabilization method based on background feature point matching
CN107358623B (en) Relevant filtering tracking method based on significance detection and robustness scale estimation
CN104820996B (en) A kind of method for tracking target of the adaptive piecemeal based on video
CN102098440B (en) Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN110175649B (en) Rapid multi-scale estimation target tracking method for re-detection
CN102542571B (en) Moving target detecting method and device
CN105574891B (en) The method and system of moving target in detection image
CN108805832B (en) Improved gray projection image stabilizing method suitable for tunnel environment characteristics
CN109102013B (en) Improved FREAK characteristic point matching image stabilization method suitable for tunnel environment characteristics
CN110910421A (en) Weak and small moving object detection method based on block characterization and variable neighborhood clustering
Katramados et al. Real-time visual saliency by division of gaussians
CN111563849A (en) Observation image denoising method and system
CN107360377B (en) Vehicle-mounted video image stabilization method
CN112598708A (en) Hyperspectral target tracking method based on four-feature fusion and weight coefficient
CN109102528A (en) A kind of ship tracking method and system
CN109462748B (en) Stereo video color correction algorithm based on homography matrix
CN110910425B (en) Target tracking method for approaching flight process
Brockers Cooperative stereo matching with color-based adaptive local support
CN107564029B (en) Moving target detection method based on Gaussian extreme value filtering and group sparse RPCA
CN106845448B (en) Infrared weak and small target detection method based on non-negative constraint 2D variational modal decomposition
CN106934818B (en) Hand motion tracking method and system
CN112884817B (en) Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium
CN113470074A (en) Self-adaptive space-time regularization target tracking algorithm based on block discrimination
Hu et al. Digital video stabilization based on multilayer gray projection
CN112801903A (en) Target tracking method and device based on video noise reduction and computer equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190514

Termination date: 20210909