CN101561932A - Method and device for detecting real-time movement target under dynamic and complicated background

Method and device for detecting real-time movement target under dynamic and complicated background

Info

Publication number
CN101561932A
CN101561932A, CN200910084007A, CNA2009100840075A
Authority
CN
China
Prior art keywords
pixel
point
frame image
value
covariance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2009100840075A
Other languages
Chinese (zh)
Other versions
CN101561932B (en)
Inventor
王涛
须德
郎从妍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Jiaotong University
Original Assignee
Beijing Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Jiaotong University
Priority to CN2009100840075A
Publication of CN101561932A
Application granted
Publication of CN101561932B
Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a method and a device for detecting a real-time moving target against a dynamic, complicated background. The method comprises: acquiring a frame image from a video, determining the covariance of the colors in the current frame image, and acquiring the gray image of the frame image; acquiring the gray value and the gradient value of a pixel in the gray image, determining the pixel value from the gray value, the gradient value and a preset parameter, and judging whether the pixel value is less than a preset multiple of the covariance of the colors determined in the previous frame image; if so, the pixel is taken as a background point, otherwise it is taken as a foreground point, and the collection of the foreground points is the real-time moving target. By converting the video image into a gray image, the method and the device suppress the influence of sudden illumination changes on the detection result, so that the moving target is detected more precisely.

Description

Method and device for detecting a real-time moving object against a dynamic, complex background
Technical field
The present invention relates to video analysis and surveillance technology, and in particular to a method and a device for detecting a real-time moving object against a dynamic, complex background.
Background technology
Moving-object detection is the basis of digital video analysis and video surveillance systems, and accurate detection of moving objects greatly benefits subsequent processing. At present there are three main approaches to moving-object detection: the frame-difference method, the optical-flow method and the background-subtraction method. Background subtraction is currently the most widely used and most effective method: it uses scene information to build a background model that contains no moving object and compares the model with the current frame to detect moving objects. However, it cannot adapt to sudden changes of illumination; if the background illumination changes abruptly, the color of most background points changes and they are wrongly identified as foreground points. It also cannot recognize irregular background motion such as waving branches or flowing water.
Summary of the invention
The embodiment of the invention provides a method and a device for detecting a real-time moving object against a dynamic, complex background; they adapt to sudden changes of illumination and can therefore detect moving objects more accurately.
The method for detecting a real-time moving object against a dynamic, complex background provided by the embodiment of the invention comprises:
obtaining a frame image from a video, determining the covariance of the colors in the current frame image, and obtaining the gray image of the frame image;
obtaining the gray value and the gradient value of a pixel in the gray image, determining the pixel value from the obtained gray value, gradient value and a preset parameter, and judging whether the pixel value is less than a predetermined multiple of the covariance of the color determined in the previous frame image; if so, the pixel is determined to be a background point; if not, the pixel is determined to be a foreground point, and the set of the foreground points is the real-time moving object.
The real-time moving-object detection device against a dynamic, complex background provided by the embodiment of the invention comprises:
a covariance acquisition module, configured to obtain a frame image from a video and determine the covariance of the colors in the current frame image;
a pixel acquisition module, configured to convert the frame image in the covariance acquisition module into a gray image, obtain the gray value and the gradient value of a pixel in the gray image, and determine the pixel value from the obtained gray value, gradient value and a preset parameter;
a first dynamic-object determination module, configured to judge whether the pixel value determined by the pixel acquisition module is less than a predetermined multiple of the covariance of the color determined by the covariance acquisition module for the previous frame image; if so, the pixel is determined to be a background point; if not, the pixel is determined to be a foreground point, and the set of the foreground points is the real-time moving object.
The method and the device provided by the embodiment of the invention suppress the influence of sudden illumination changes on the detection result by converting the video image into a gray image, and can therefore detect moving objects more accurately.
Description of drawings
Fig. 1 is a schematic structural diagram of the real-time moving-object detection device against a dynamic, complex background provided by the embodiment of the invention;
Fig. 2 is a schematic structural diagram further describing, in an embodiment, the real-time moving-object detection device against a dynamic, complex background provided by the embodiment of the invention.
Embodiment
In the scheme of the method for detecting a real-time moving object against a dynamic, complex background provided by the embodiment of the invention, a frame image is first obtained from the video, the covariance of the colors in the current frame image is determined, and the gray image of the frame image is obtained; then the gray value and the gradient value of a pixel in the gray image are obtained, the pixel value is determined from the obtained gray value, gradient value and a preset parameter, and it is judged whether the pixel value is less than a predetermined multiple of the covariance of the color determined in the previous frame image; if so, the pixel is determined to be a background point; if not, the pixel is determined to be a foreground point, and the set of the foreground points is the real-time moving object.
The above scheme may further comprise adjusting the foreground points determined in the current frame according to the foreground points determined in the previous frame image. The concrete adjustment may be: extract the current-frame pixel value of a point that is a foreground point in the previous frame image, and determine, from the extracted current-frame pixel value and the neighborhood points of the pixel in the previous frame image, the probability that the pixel of the current frame is a true foreground point; if the determined probability is less than a specified value, the foreground point determined in the current frame is adjusted to a background point; if the determined probability is greater than or equal to the specified value, the determined foreground point is not adjusted.
Combining the above, the concrete process may be:
Step S1: obtain a frame image from the video;
Step S2: determine the weight $w_k$, mean $\mu_k$ and covariance $\sigma_k^2$ of color $k$ in the frame image, where $k$ is a preset natural number, generally chosen as 3 or 5;
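For illustration only, a per-pixel mixture of $k$ Gaussian components over the two-dimensional pixel value could be initialized as in the following Python/NumPy sketch; the array layout and the initial values (equal weights, zero means, standard deviation 30) are assumptions made for the example, not values fixed by the patent at this step:

```python
import numpy as np

def init_mixture(height, width, k=3, init_sigma=30.0):
    """Per-pixel mixture of k Gaussian components over the 2-D pixel value X_ij.
    Shapes: weights (H, W, k), means (H, W, k, 2), sigmas (H, W, k)."""
    w = np.full((height, width, k), 1.0 / k)         # equal initial weights (assumption)
    mu = np.zeros((height, width, k, 2))             # means of (lambda*A, (1-lambda)*M)
    sigma = np.full((height, width, k), init_sigma)  # initial standard deviation (assumption)
    return w, mu, sigma
```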
Step S3: convert the frame image into a gray image and obtain the gray value $A_{ij}$ of each pixel in the gray image, where $i$ is the horizontal coordinate and $j$ is the vertical coordinate; from the obtained gray values compute the gradient value of the pixel
$$M_{ij} = \sqrt{(A_{ij}-A_{i+1,j})^2 + (A_{ij}-A_{i,j+1})^2}$$
and the gradient direction
$$R_{ij} = \arctan 2\,(A_{ij}-A_{i+1,j},\; A_{ij}-A_{i,j+1});$$
from the obtained gray value $A_{ij}$, gradient value $M_{ij}$ and the preset parameter $\lambda$ determine the pixel value
$$X_{ij} = (\lambda A_{ij},\; (1-\lambda) M_{ij}),$$
where $\lambda$ is a value in $[0, 1]$ determined from the color information of the image. From the above data a Gaussian mixture model is built:
$$P(X_t) = w_k \cdot \frac{1}{(2\pi)^{n/2}\,|\Sigma_k|^{1/2}}\, e^{-\frac{1}{2}(X_{ij}-\mu_k)^T \Sigma_k^{-1}(X_{ij}-\mu_k)},$$
where $\Sigma_k$ is obtained by multiplying the standard deviation $\sigma_k$ by the identity matrix, and $(X_{ij}-\mu_k)^T$ denotes the transpose of $(X_{ij}-\mu_k)$;
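As an illustration of step S3, a minimal Python/NumPy sketch is given below; the OpenCV grayscale conversion, the wrap-around handling of the forward differences at the image border, and the function name are assumptions of the example rather than part of the patent:

```python
import cv2
import numpy as np

def pixel_features(frame_bgr, lam=0.5):
    """Gray value A_ij, gradient magnitude M_ij, gradient direction R_ij and
    pixel value X_ij = (lam*A_ij, (1-lam)*M_ij) for every pixel; lam is the
    preset parameter lambda in [0, 1]."""
    A = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)

    # Forward differences A[i, j] - A[i+1, j] and A[i, j] - A[i, j+1]
    # (np.roll wraps around at the last row/column, which is good enough for a sketch).
    dx = A - np.roll(A, -1, axis=0)
    dy = A - np.roll(A, -1, axis=1)

    M = np.sqrt(dx ** 2 + dy ** 2)                      # gradient magnitude M_ij
    R = np.arctan2(dx, dy)                              # gradient direction R_ij
    X = np.stack([lam * A, (1.0 - lam) * M], axis=-1)   # pixel value X_ij
    return A, M, R, X
```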
Step S4: judge whether the current-frame pixel value $X_{t+1}$ is less than the predetermined multiple, generally chosen as 2.5, of the standard deviation $\sigma_t$ of color $k$ determined in the previous frame image, that is, whether the current-frame pixel value $X_{t+1}$ matches the Gaussian mixture component of color $k$ of the previous frame image; if so, execute step S5; if not, execute step S6;
Step S5: determine that the pixel is a background point, and update the weight $w_{t+1}$, mean $\mu_{t+1}$ and covariance $\sigma_{t+1}^2$ of the color in the current frame image from the weight $w_t$, mean $\mu_t$ and covariance $\sigma_t^2$ of color $k$ in the previous frame image and the gradient value $M_{t+1}$ and pixel value $X_{t+1}$ of the current frame. The concrete update may comprise:
$$w_{t+1} = (1-\alpha)\,w_t + \alpha\,M_{t+1},$$
$$\mu_{t+1} = (1-\rho)\,\mu_t + \rho\,X_{t+1},$$
$$\sigma_{t+1}^2 = (1-\rho)\,\sigma_t^2 + \rho\,(X_{t+1}-\mu_{t+1})^T (X_{t+1}-\mu_{t+1}),$$
where $\alpha$ is a predetermined value determined according to the degree of change between the previous frame image and the current frame image, and $\rho = \alpha / w_{t+1}$; the Gaussian mixture model $P(X_{t+1})$ is updated with the above data;
Step S6: determine that the pixel is a foreground point, and update the weight $w_{t+1}'$, mean $\mu_{t+1}'$ and standard deviation $\sigma_{t+1}'$ of the color in the current frame image with predetermined values. The concrete process may comprise: set the weight $w_{t+1}'$ to a predetermined value, generally 0.05; set the standard deviation $\sigma_{t+1}'$ to a predetermined value, generally 30; set the mean $\mu_{t+1}' = X_{t+1}$; and update the Gaussian mixture model $P(X_{t+1})$ with the above data;
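The decision and update of steps S4 to S6 for a single pixel and a single Gaussian component might be sketched as follows. The exact form of the match test is an assumption (here the pixel value is required to lie within the predetermined multiple of $\sigma$ around the component mean, the usual mixture-of-Gaussians criterion), and the learning rate $\alpha$ is a placeholder value; the multiple 2.5, the weight 0.05 and the standard deviation 30 follow the values named above:

```python
import numpy as np

def update_component(X_new, M_new, w, mu, sigma, alpha=0.005,
                     match_mult=2.5, init_w=0.05, init_sigma=30.0):
    """One background/foreground decision plus parameter update (steps S4-S6).
    X_new : current pixel value X_{t+1} as a 2-vector (lam*A, (1-lam)*M)
    M_new : current gradient value M_{t+1}
    w, mu, sigma : weight, mean and standard deviation from the previous frame."""
    matched = np.linalg.norm(X_new - mu) < match_mult * sigma  # step S4 (assumed test form)

    if matched:                                   # step S5: background point
        w_new = (1 - alpha) * w + alpha * M_new
        rho = alpha / w_new
        mu_new = (1 - rho) * mu + rho * X_new
        d = X_new - mu_new
        sigma_new = np.sqrt((1 - rho) * sigma ** 2 + rho * float(d @ d))
        return True, w_new, mu_new, sigma_new
    else:                                         # step S6: foreground point
        return False, init_w, X_new.copy(), init_sigma
```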
Step S7: according to the foreground points and background points determined by steps S5 and S6 for the previous frame image, adjust the foreground points determined in the current frame. The concrete adjustment comprises: extract the current-frame pixel value $X_{t+1}$ of a pixel that is a foreground point in the previous frame image, and determine, from $X_{t+1}$ and the neighborhood points $Y_{ij}$ of the pixel in the previous frame image, the probability that the current-frame pixel $X_{t+1}$ is a true foreground point (the probability expression is given by the formula shown as Figure A20091008400700091), where $a$ is a positive number, $k$ and $l$ are the increments of the horizontal and vertical coordinates and are taken as integers, $d$ is odd and denotes the width of the neighborhood, and $b$ is the distance between the pixel and its neighborhood point in the previous frame image; $b$ can be replaced with a $d \times d$ symmetric Gaussian template whose element values follow a Gaussian distribution and can be computed with a Gaussian kernel function, for example the Gaussian template for $d = 3$ is
$$\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix}.$$
If the colors of the pixels are represented by 256-level gray values, the probability values can be precomputed into an array of size 256. If the determined probability is less than the specified value, execute step S8; if the determined probability is greater than or equal to the specified value, execute step S9. The specified value is determined according to the scene of the video image: if the scene is complex, the specified value may be chosen in [0.5, 0.7]; if the scene is simple, it may be chosen in [0.3, 0.5];
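The exact probability expression of step S7 is only given as a formula image in the original document, so the sketch below is just one plausible, simplified stand-in: the previous frame's neighborhood values $Y$ around the pixel are compared with the current pixel value $X_{t+1}$ and weighted with the normalized 3 x 3 Gaussian template quoted above. The normalization, the role of the constant $a$, the helper names and the per-scene thresholds are assumptions:

```python
import numpy as np

# 3x3 Gaussian template from step S7, normalized so its weights sum to 1 (assumption).
GAUSS_3X3 = np.array([[1., 2., 1.],
                      [2., 4., 2.],
                      [1., 2., 1.]])
GAUSS_3X3 /= GAUSS_3X3.sum()

def foreground_probability(X_curr, Y_prev, i, j, a=1.0):
    """Simplified stand-in for the step S7 probability at an interior pixel (i, j).
    Y_prev is assumed to hold the previous frame's pixel values with shape (H, W, 2)."""
    patch = Y_prev[i - 1:i + 2, j - 1:j + 2]           # d = 3 neighborhood
    diff2 = np.sum((patch - X_curr) ** 2, axis=-1)     # squared distance to each Y_ij
    return float(np.sum(GAUSS_3X3 * np.exp(-a * diff2)))

def keep_foreground(prob, scene_complex=True):
    """Steps S8/S9: keep the foreground decision only if the probability reaches
    the specified value (chosen per scene as described above)."""
    return prob >= (0.6 if scene_complex else 0.4)
```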
Step S8: adjust the foreground point determined in the current frame to a background point, and execute step S10;
Step S9: do not adjust the determined foreground point, and execute step S10;
Step S10: replace the previous-frame image data with the current-frame image data, and return to step S1 to obtain the next frame image of the video.
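Putting the steps together, the frame loop closed by step S10 might be driven as below; the capture API and the helper names (pixel_features, update_component, foreground_probability) are the illustrative functions sketched earlier, not interfaces defined by the patent:

```python
import cv2

def detect_moving_objects(video_path, lam=0.5):
    cap = cv2.VideoCapture(video_path)
    prev = None                                   # data of the previous frame image
    while cap.isOpened():
        ok, frame = cap.read()                    # step S1: next frame image of the video
        if not ok:
            break
        A, M, R, X = pixel_features(frame, lam)   # step S3 (step S2 model handling omitted)
        if prev is not None:
            # Steps S4-S9 would run here per pixel and per Gaussian component,
            # producing the foreground mask, i.e. the set of foreground points.
            pass
        prev = (A, M, X)                          # step S10: current frame replaces previous
    cap.release()
```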
The embodiment of the invention also provides a real-time moving-object detection device against a dynamic, complex background which, as shown in Fig. 1, comprises:
a covariance acquisition module 1, configured to obtain a frame image from a video and determine the covariance of the colors in the current frame image;
a pixel acquisition module 2, configured to convert the frame image in the covariance acquisition module 1 into a gray image, obtain the gray value and the gradient value of a pixel in the gray image, and determine the pixel value from the obtained gray value, gradient value and a preset parameter;
a first dynamic-object determination module 3, configured to judge whether the pixel value determined by the pixel acquisition module 2 is less than a predetermined multiple of the covariance of the color determined by the covariance acquisition module 1 for the previous frame image; if so, the pixel is determined to be a background point; if not, the pixel is determined to be a foreground point, and the set of the foreground points is the real-time moving object.
In the above device, determining the pixel value in the pixel acquisition module 2 from the obtained gray value, gradient value and preset parameter may comprise:
the pixel value $X_{ij} = (\lambda A_{ij},\; (1-\lambda) M_{ij})$, where $A_{ij}$ and $M_{ij}$ are respectively the gray value and the gradient value of the pixel, $i$ is the horizontal coordinate, $j$ is the vertical coordinate, and $\lambda$ is the preset parameter.
In the above device, the first dynamic-object determination module 3 may further comprise:
a first covariance update module, configured to update the covariance of the color in the current frame image after the pixel is determined to be a background point, the concrete process comprising:
the covariance $\sigma_{t+1}^2 = (1-\rho)\,\sigma_t^2 + \rho\,(X_{t+1}-\mu_{t+1})^T (X_{t+1}-\mu_{t+1})$, where $\alpha$ is a predetermined value, $\rho = \alpha / w_{t+1}$, and $w_{t+1}$ and $\mu_{t+1}$ are respectively the weight and the mean of the color in the current frame image;
a second covariance update module, configured to set the standard deviation $\sigma_{t+1}'$ of the color in the current frame image to a predetermined value after the pixel is determined to be a foreground point.
To adjust the foreground points determined in the current frame according to the foreground points determined in the previous frame image, as shown in Fig. 2, the above device may further comprise:
an extraction module 4, configured to extract, according to the foreground points and background points determined by the first dynamic-object determination module 3 for the previous frame image, the current-frame pixel value $X_{t+1}$ of a point that is a foreground point in the previous frame image;
a probability generation module 5, configured to determine, from the current-frame pixel value $X_{t+1}$ obtained by the extraction module 4 and the neighborhood points $Y_{ij}$ of the pixel in the previous frame image, the probability that the pixel of the current frame is a true foreground point, where $a$ is a positive number, $k$ and $l$ are the increments of the horizontal and vertical coordinates and are taken as integers, $d$ is odd and denotes the width of the neighborhood, and $b$ is the distance between the pixel and its neighborhood point in the previous frame image; $b$ can be replaced with a $d \times d$ symmetric Gaussian template whose element values follow a Gaussian distribution and can be computed with a Gaussian kernel function, for example the Gaussian template for $d = 3$ is $\begin{pmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{pmatrix}$; if the colors of the pixels are represented by 256-level gray values, the probability values can be precomputed into an array of size 256;
a second dynamic-object determination module, configured to compare the probability determined by the probability generation module with a specified value; if the determined probability is less than the specified value, the foreground point determined in the current frame is adjusted to a background point; if the determined probability is greater than or equal to the specified value, the determined foreground point is not adjusted; if the scene is complex, the specified value may be chosen in [0.5, 0.7], and if the scene is simple, in [0.3, 0.5].
The method and device provided by the embodiment of the invention suppress the influence of sudden illumination changes on the detection result by converting the video image into a gray image, and further determine the foreground points and background points of the current frame from the foreground points determined in the previous frame image, which removes some irregular background motion, so that moving objects can be detected more accurately.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any change or replacement that can readily be conceived by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A method for detecting a real-time moving object against a dynamic, complex background, characterized by comprising:
obtaining a frame image from a video, determining the covariance of the colors in the current frame image, and obtaining the gray image of the frame image;
obtaining the gray value and the gradient value of a pixel in the gray image, determining the pixel value from the obtained gray value, gradient value and a preset parameter, and judging whether the pixel value is less than a predetermined multiple of the covariance of the color determined in the previous frame image; if so, determining that the pixel is a background point; if not, determining that the pixel is a foreground point, the set of the foreground points being the real-time moving object.
2. The method according to claim 1, characterized in that determining the pixel value from the obtained gray value, gradient value and preset parameter comprises:
the pixel value $X_{ij} = (\lambda A_{ij},\; (1-\lambda) M_{ij})$, where $A_{ij}$ and $M_{ij}$ are respectively the gray value and the gradient value of the pixel, $i$ is the horizontal coordinate, $j$ is the vertical coordinate, and $\lambda$ is the preset parameter.
3. The method according to claim 1, characterized in that, after the pixel is determined to be a background point, the covariance of the color in the current frame image is updated, the concrete process comprising:
the covariance $\sigma_{t+1}^2 = (1-\rho)\,\sigma_t^2 + \rho\,(X_{t+1}-\mu_{t+1})^T (X_{t+1}-\mu_{t+1})$, where $\alpha$ is a predetermined value, $\rho = \alpha / w_{t+1}$, and $w_{t+1}$ and $\mu_{t+1}$ are respectively the weight and the mean of the color in the current frame image.
4. The method according to claim 1, characterized in that, after the pixel is determined to be a foreground point, the standard deviation $\sigma_{t+1}'$ of the color in the current frame image is set to a predetermined value.
5. The method according to any one of claims 1 to 4, characterized by further comprising adjusting the foreground points determined in the current frame according to the foreground points determined in the previous frame image, the concrete adjustment comprising:
extracting the current-frame pixel value $X_{t+1}$ of a point that is a foreground point in the previous frame image, and determining, from the extracted current-frame pixel value $X_{t+1}$ and the neighborhood points $Y_{ij}$ of the pixel in the previous frame image, the probability that the pixel of the current frame is a true foreground point by the formula shown as Figure A2009100840070003C1, where $a$ is a positive number, $k$ and $l$ are the increments of the horizontal and vertical coordinates, $d$ is the width of the neighborhood, and $b$ is the distance between the pixel and its neighborhood point in the previous frame image; if the determined probability is less than a specified value, the foreground point determined in the current frame is adjusted to a background point; if the determined probability is greater than or equal to the specified value, the determined foreground point is not adjusted.
6. A real-time moving-object detection device against a dynamic, complex background, characterized by comprising:
a covariance acquisition module, configured to obtain a frame image from a video and determine the covariance of the colors in the current frame image;
a pixel acquisition module, configured to convert the frame image in the covariance acquisition module into a gray image, obtain the gray value and the gradient value of a pixel in the gray image, and determine the pixel value from the obtained gray value, gradient value and a preset parameter;
a first dynamic-object determination module, configured to judge whether the pixel value determined by the pixel acquisition module is less than a predetermined multiple of the covariance of the color determined by the covariance acquisition module for the previous frame image; if so, determine that the pixel is a background point; if not, determine that the pixel is a foreground point, the set of the foreground points being the real-time moving object.
7. The device according to claim 6, characterized in that determining the pixel value in the pixel acquisition module from the obtained gray value, gradient value and preset parameter comprises:
the pixel value $X_{ij} = (\lambda A_{ij},\; (1-\lambda) M_{ij})$, where $A_{ij}$ and $M_{ij}$ are respectively the gray value and the gradient value of the pixel, $i$ is the horizontal coordinate, $j$ is the vertical coordinate, and $\lambda$ is the preset parameter.
8. The device according to claim 6, characterized in that the first dynamic-object determination module further comprises: a first covariance update module, configured to update the covariance of the color in the current frame image after the pixel is determined to be a background point, the concrete process comprising:
the covariance $\sigma_{t+1}^2 = (1-\rho)\,\sigma_t^2 + \rho\,(X_{t+1}-\mu_{t+1})^T (X_{t+1}-\mu_{t+1})$, where $\alpha$ is a predetermined value, $\rho = \alpha / w_{t+1}$, and $w_{t+1}$ and $\mu_{t+1}$ are respectively the weight and the mean of the color in the current frame image.
9. The device according to claim 6, characterized in that the first dynamic-object determination module further comprises: a second covariance update module, configured to set the standard deviation $\sigma_{t+1}'$ of the color in the current frame image to a predetermined value after the pixel is determined to be a foreground point.
10. The device according to any one of claims 6 to 9, characterized by further comprising:
an extraction module, configured to extract, according to the foreground points determined by the first dynamic-object determination module for the previous frame image, the current-frame pixel value $X_{t+1}$ of a point that is a foreground point in the previous frame image;
a probability generation module, configured to determine, from the current-frame pixel value $X_{t+1}$ obtained by the extraction module and the neighborhood points $Y_{ij}$ of the pixel in the previous frame image, the probability that the pixel of the current frame is a true foreground point by the formula shown as Figure A2009100840070004C1, where $a$ is a positive number, $k$ and $l$ are the increments of the horizontal and vertical coordinates, $d$ is the width of the neighborhood, and $b$ is the distance between the pixel and its neighborhood point in the previous frame image;
a second dynamic-object determination module, configured to compare the probability determined by the probability generation module with a specified value; if the determined probability is less than the specified value, the foreground point determined in the current frame is adjusted to a background point; if the determined probability is greater than or equal to the specified value, the determined foreground point is not adjusted.
CN2009100840075A 2009-05-12 2009-05-12 Method and device for detecting real-time movement target under dynamic and complicated background Expired - Fee Related CN101561932B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2009100840075A CN101561932B (en) 2009-05-12 2009-05-12 Method and device for detecting real-time movement target under dynamic and complicated background

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2009100840075A CN101561932B (en) 2009-05-12 2009-05-12 Method and device for detecting real-time movement target under dynamic and complicated background

Publications (2)

Publication Number Publication Date
CN101561932A true CN101561932A (en) 2009-10-21
CN101561932B CN101561932B (en) 2012-01-11

Family

ID=41220719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2009100840075A Expired - Fee Related CN101561932B (en) 2009-05-12 2009-05-12 Method and device for detecting real-time movement target under dynamic and complicated background

Country Status (1)

Country Link
CN (1) CN101561932B (en)


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101751670B (en) * 2009-12-17 2014-09-10 北京中星微电子有限公司 Method and device for detecting foreground object
CN101799434B (en) * 2010-03-15 2011-06-29 深圳市中钞科信金融科技有限公司 Printing image defect detection method
CN101799434A (en) * 2010-03-15 2010-08-11 深圳市中钞科信金融科技有限公司 Printing image defect detection method
CN101908214A (en) * 2010-08-10 2010-12-08 长安大学 Moving object detection method with background reconstruction based on neighborhood correlation
CN101976439A (en) * 2010-11-02 2011-02-16 上海海事大学 Visual attention model with combination of motion information in visual system of maritime search and rescue machine
CN103150739A (en) * 2013-03-04 2013-06-12 上海大学 Video moving object partitioning algorithm based on multi-feature steady main component analysis
CN103824297B (en) * 2014-03-07 2016-08-24 电子科技大学 In complicated high dynamic environment, background and the method for prospect is quickly updated based on multithreading
CN103824297A (en) * 2014-03-07 2014-05-28 电子科技大学 Multithreading-based method for quickly updating background and foreground in complex high dynamic environment
CN103971386A (en) * 2014-05-30 2014-08-06 南京大学 Method for foreground detection in dynamic background scenario
CN103971386B (en) * 2014-05-30 2017-03-15 南京大学 A kind of foreground detection method under dynamic background scene
CN106572387A (en) * 2016-11-09 2017-04-19 广州视源电子科技股份有限公司 Video sequence alignment method and video sequence alignment system
CN106572387B (en) * 2016-11-09 2019-09-17 广州视源电子科技股份有限公司 Video sequence alignment schemes and system
CN110166851A (en) * 2018-08-21 2019-08-23 腾讯科技(深圳)有限公司 A kind of video abstraction generating method, device and storage medium
CN110166851B (en) * 2018-08-21 2022-01-04 腾讯科技(深圳)有限公司 Video abstract generation method and device and storage medium
US11347792B2 (en) 2018-08-21 2022-05-31 Tencent Technology (Shenzhen) Company Limited Video abstract generating method, apparatus, and storage medium
CN110276788A (en) * 2019-06-12 2019-09-24 北京轩宇空间科技有限公司 Method and apparatus for infrared imaging formula target seeker target following
CN112166435A (en) * 2019-12-23 2021-01-01 商汤国际私人有限公司 Target tracking method and device, electronic equipment and storage medium
CN111369591A (en) * 2020-03-05 2020-07-03 杭州晨鹰军泰科技有限公司 Method, device and equipment for tracking moving object

Also Published As

Publication number Publication date
CN101561932B (en) 2012-01-11

Similar Documents

Publication Publication Date Title
CN101561932B (en) Method and device for detecting real-time movement target under dynamic and complicated background
CN111028213B (en) Image defect detection method, device, electronic equipment and storage medium
CN101727662B (en) SAR image nonlocal mean value speckle filtering method
CN103994724B (en) Structure two-dimension displacement and strain monitoring method based on digital image processing techniques
CN105654091B (en) Sea-surface target detection method and device
CN111986099A (en) Tillage monitoring method and system based on convolutional neural network with residual error correction fused
CN102096824B (en) Multi-spectral image ship detection method based on selective visual attention mechanism
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN109242870A (en) A kind of sea horizon detection method divided based on image with textural characteristics
CN102855485B (en) The automatic testing method of one grow wheat heading
CN103093458B (en) The detection method of key frame and device
CN108921099A (en) Moving ship object detection method in a kind of navigation channel based on deep learning
CN102297822A (en) Method for predicting ash content of coal particles by utilizing image analysis
CN104197900A (en) Meter pointer scale recognizing method for automobile
CN104574381A (en) Full reference image quality evaluation method based on LBP (local binary pattern)
CN105139391A (en) Edge detecting method for traffic image in fog-and-haze weather
CN112287838A (en) Cloud and fog automatic identification method and system based on static meteorological satellite image sequence
CN109472790A (en) A kind of machine components defect inspection method and system
CN115375991A (en) Strong/weak illumination and fog environment self-adaptive target detection method
CN109272484B (en) Rainfall detection method based on video image
CN112884795A (en) Power transmission line inspection foreground and background segmentation method based on multi-feature significance fusion
CN115830514B (en) Whole river reach surface flow velocity calculation method and system suitable for curved river channel
CN117237736A (en) Daqu quality detection method based on machine vision and deep learning
CN109784317B (en) Traffic signal lamp identification method and device
CN116188316A (en) Water area defogging method based on fog concentration sensing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120111

Termination date: 20120512