CN103077533A - Method for positioning moving target based on frogeye visual characteristics - Google Patents

Method for positioning moving target based on frogeye visual characteristics

Info

Publication number
CN103077533A
Authority
CN
China
Prior art keywords
moving
image
moving target
tracking
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105744979A
Other languages
Chinese (zh)
Other versions
CN103077533B (en)
Inventor
陈宗海
郭明玮
赵宇宙
张陈斌
项俊平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201210574497.9A priority Critical patent/CN103077533B/en
Publication of CN103077533A publication Critical patent/CN103077533A/en
Application granted granted Critical
Publication of CN103077533B publication Critical patent/CN103077533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for positioning a moving target based on frogeye visual characteristics. The method comprises the following steps: simulating the sensitivity of the frogeye visual system to moving targets by means of a moving-region detection algorithm based on the inter-frame difference method, so as to extract the moving regions of an image; concretely, performing a difference operation on adjacent frames of the image sequence collected by a second camera used for relay tracking, thereby obtaining the images of several moving regions containing one or more moving targets; performing histogram matching between the images of these moving regions and the image of the moving-target tracking frame selected in a first camera; and finding the most similar region among the moving-region images, this region being the position of the selected moving target in the second camera. With this method, the moving target can be accurately extracted from a complex scene, so that it can be positioned rapidly and accurately.

Description

Method for positioning a moving target based on frogeye visual characteristics
Technical field
The present invention relates to the field of pattern recognition, and in particular to a method for positioning a moving target based on frogeye visual characteristics.
Background technology
At present, the construction of smart cities and safe cities creates an ever-growing demand for intelligent video surveillance systems, and the functional requirements placed on such systems also keep increasing; relay tracking has become one of their major functions. Because the monitoring range of a single camera is limited, continuously monitoring the same target over a larger scene area, in order to obtain clearer and more detailed images of that target, requires the cooperation of multiple cameras; relay tracking is the function created to meet this demand. Nearly all relay-tracking methods involve matching of the moving target, and the commonly used matching methods can be divided into three major classes according to the matching features chosen: 1. methods that match directly on the pixel values of the original images; 2. methods that use physical shape features of the image (points, lines), such as edges and corner points, which markedly reduce the number of pixels involved in the correlation computation and have stronger adaptability; 3. algorithms that use higher-level features, such as constrained tree search. However, for relay tracking in real complex scenes these common target-matching methods cannot be applied directly with good results. The fundamental reason is that real scenes are rather complex and contain much interference, so that directly applying a common matching algorithm runs into the problems of heavy computation, abundant interference and inaccurate localization.
Summary of the invention
The purpose of the present invention is to provide a method for positioning a moving target based on frogeye visual characteristics, which can accurately extract the moving target from a complex scene and thereby locate the target quickly and accurately.
A method for positioning a moving target based on frogeye visual characteristics comprises:
adopting a moving-region detection algorithm based on the inter-frame difference method to simulate the sensitivity of the frogeye visual system to moving targets and extract the moving regions of an image; concretely, using the inter-frame difference method to perform a difference operation on adjacent frames of the image sequence collected by a second camera used for relay tracking, thereby obtaining the images of several moving regions containing one or more moving targets;
performing histogram matching between the images of said several moving regions containing one or more moving targets and the image of the moving-target tracking frame selected in a first camera, and finding the most similar region among the moving-region images; this region is the position of the selected moving target in the second camera.
It can be seen from the above technical solution that, by filtering out the static regions of a complex-scene image on the basis of the frogeye visual characteristics, the moving regions of the moving target can be extracted fairly accurately, which reduces the amount of computation during target matching and increases the positioning accuracy.
Description of drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a method for positioning a moving target based on frogeye visual characteristics provided by embodiment one of the present invention;
Fig. 2 is another flow chart of a method for positioning a moving target based on frogeye visual characteristics provided by embodiment two of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some embodiments of the present invention rather than all of them. All other embodiments obtained by a person of ordinary skill in the art on the basis of these embodiments without creative effort fall within the protection scope of the present invention.
A monitored scene area contains several cameras, and the monitoring of the scene area is accomplished through their cooperation. A first camera (for example, a fixed box camera) monitors the scene area as a whole; when a moving target of interest to the user appears in the first camera, a second camera (for example, a dome camera) is called to carry out relay tracking of this moving target.
In a relatively complex environment (for example, when several moving targets may be present in the second camera), the second camera must first match the moving target if it is to accurately relay-track the target of interest selected in the first camera.
Embodiment one
Fig. 1 is a flow chart of a method for positioning a moving target based on frogeye visual characteristics provided by embodiment one of the present invention; it mainly comprises the following steps:
Step 101: filter out the static regions in the second camera on the basis of the frogeye visual characteristics, and extract the moving-region images.
The frogeye visual system is specifically sensitive to moving targets: a frog cannot see (or at least does not attend to) the static details of the world around it. This characteristic can be exploited to filter out the static regions of a complex scene in favour of the moving targets.
Therefore, a moving-region detection algorithm based on the inter-frame difference method is adopted to simulate the frogeye's sensitivity to moving targets, filter out the static regions of the second camera's scene and accurately extract the moving regions of the moving targets. Concretely, the inter-frame difference method is used to perform a difference operation on adjacent frames of the image sequence collected by the second camera, thereby obtaining the images of several moving regions containing one or more moving targets.
The inter-frame difference method mainly performs a difference operation on adjacent frames of the image sequence, compares the differences of the gray values of corresponding pixels in the adjacent frames, and then extracts, by means of a selected threshold, the images of the several moving regions containing one or more moving targets.
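As a minimal sketch of this two-frame differencing step (the three-frame variant used in embodiment two is sketched further below), assuming OpenCV and two consecutive grayscale frames; the function name frame_difference and the threshold value 25 are illustrative choices, not taken from the patent:

    import cv2

    def frame_difference(prev_frame, curr_frame, thresh=25):
        """Return a binary mask of the pixels whose gray value changed between two frames."""
        diff = cv2.absdiff(prev_frame, curr_frame)            # |f_{k-1}(x,y) - f_k(x,y)|
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        return mask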
Step 102: positioning of the moving target.
Because the monitoring range of the first camera is limited, relay tracking of a given moving target has to be carried out by the second camera. Step 101 yields the moving regions of the scene monitored by the second camera (several moving regions that may contain one or more moving targets). The moving regions of the second camera can then be matched against the tracking-frame image of a moving target selected in the first camera to obtain the position of this target in the second camera. Concretely, the user selects a moving-target tracking-frame image (a moving region containing one moving target) from the scene of the first camera; histogram matching is performed between this image and the images of the several moving regions containing one or more moving targets in the second camera, and the most similar region is sought among the moving-region images of the second camera; this region is the position of the moving target in the second camera.
Once the region occupied by the moving target in the second camera has been obtained, this region can be used as the initial tracking-frame region of the second camera; an active tracking algorithm with the mean-shift (Mean Shift) method at its core is then used to control the motion of the second camera, so that the moving target always stays in the central area of the second camera's scene and the size of the tracking frame remains within a predetermined range.
By filtering the static regions out of the complex-scene image with a moving-region detection algorithm based on the frogeye visual characteristics, the embodiment of the present invention can extract the moving regions of the moving target fairly accurately, which reduces the amount of computation for target matching and increases the matching accuracy.
Embodiment two
To facilitate understanding of the present invention, it is further described below in conjunction with Fig. 2; as shown in Fig. 2, the method mainly comprises the following steps:
Step 201: the first camera detects and tracks moving targets in the monitored scene area.
The background-subtraction method can be adopted for moving-target detection. When the first camera detects a moving target in the monitored scene area, it judges whether a predetermined tracking condition is met; if so, it proceeds to step 202. The predetermined tracking condition comprises: judging whether the tracking frame used by the first camera to monitor the moving target is located at the edge of the scene area monitored by the first camera. Concretely, if the distance between the tracking frame and the border of the first camera's image in the longitudinal or transverse direction is smaller than a predetermined value (for example, 3 pixels), the moving target is judged to be at the edge of the scene area monitored by the first camera.
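A minimal sketch of this edge test, assuming the tracking frame is given as (x, y, width, height) in pixel coordinates of the first camera's image; the function name at_scene_edge and the default margin of 3 pixels follow the example above but are otherwise illustrative:

    def at_scene_edge(track_box, frame_w, frame_h, margin=3):
        """True if the tracking frame lies within `margin` pixels of the image border."""
        x, y, w, h = track_box
        return (x < margin or y < margin or
                frame_w - (x + w) < margin or
                frame_h - (y + h) < margin)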
Step 202: call an idle second camera to relay-track the moving target.
To facilitate relay tracking, several preset positions need to be configured for each second camera deployed in the scene area. For example, P preset positions may be set at the corners of the scene area monitored by the first camera and at the centres of its top, bottom, left and right edges.
When the predetermined tracking condition is met, an idle second camera is called to move to the preset position closest to the moving target and carry out relay tracking.
Step 203: adopt a moving-region detection algorithm based on the frogeye visual characteristics to filter out the static regions in the second camera and extract the moving-region images.
After reaching the preset position, the second camera needs to stay steady at that position for a period of time (for example, 500 milliseconds) in order to capture the images used for matching the moving target with the first camera. This period can be set according to the actual conditions of the scene, but it must be long enough for the moving-target matching algorithm to finish its computation and short enough that the moving target does not leave the scene currently monitored by the second camera.
The frogeye visual system is sensitive to moving targets; therefore, this embodiment adopts a moving-region detection algorithm based on the inter-frame difference method to simulate this visual characteristic of the frogeye, filter out the static targets in the second camera's scene and accurately extract the moving regions of the moving targets.
Concretely, the three-frame difference variant of the inter-frame difference method is used: for three consecutive frames, a difference operation is performed on each pair of adjacent frames, yielding two gray-level difference images:
D_{k-1,k}(x,y) = |f_{k-1}(x,y) - f_k(x,y)|;
D_{k,k+1}(x,y) = |f_{k+1}(x,y) - f_k(x,y)|;
where f_{k-1}(x,y), f_k(x,y) and f_{k+1}(x,y) are the three consecutive frames, and D_{k-1,k}(x,y) and D_{k,k+1}(x,y) are the gray-level difference images obtained from the two pairs of adjacent frames.
A threshold is then applied to binarize D_{k-1,k}(x,y) and D_{k,k+1}(x,y), giving the corresponding binary images B_{k-1,k}(x,y) and B_{k,k+1}(x,y).
Finally, the binary images B_{k-1,k}(x,y) and B_{k,k+1}(x,y) are combined with a pixel-wise AND operation, yielding the three-frame difference binary image Ds_k(x,y) that contains the several moving regions with one or more moving targets:
Ds_k(x,y) = 1, if B_{k-1,k}(x,y) ∩ B_{k,k+1}(x,y) = 1;
Ds_k(x,y) = 0, otherwise.
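A sketch of the three-frame difference defined by the formulas above, assuming OpenCV and three consecutive grayscale frames; the threshold T is an illustrative value, since the patent does not fix it:

    import cv2

    def three_frame_difference(f_prev, f_curr, f_next, T=25):
        d1 = cv2.absdiff(f_prev, f_curr)                      # D_{k-1,k}(x,y)
        d2 = cv2.absdiff(f_curr, f_next)                      # D_{k,k+1}(x,y)
        _, b1 = cv2.threshold(d1, T, 255, cv2.THRESH_BINARY)  # B_{k-1,k}(x,y)
        _, b2 = cv2.threshold(d2, T, 255, cv2.THRESH_BINARY)  # B_{k,k+1}(x,y)
        return cv2.bitwise_and(b1, b2)                        # Ds_k(x,y)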
The above algorithm yields the images of the several moving regions containing one or more moving targets in the second camera. A moving region in these images may, however, split into several pieces; therefore, in order to improve the accuracy of the subsequent moving-target matching algorithm, the one or more moving targets in the several moving regions need to be marked.
The marking step mainly comprises the following. First, continuous moving regions are extracted from the images of the several moving regions by means of a morphological closing operation and resolution reduction: (1) the image containing the several moving regions with one or more moving targets is subjected to morphological dilation, giving the dilated image D_n; (2) the resolution of D_n is reduced to obtain the image R_n; concretely, D_n is divided into sub-blocks of Z × Z pixels, and if more than half of the pixels in a sub-block have the value 255, all pixels of that sub-block are set to 255, otherwise to 0; (3) the image R_n is subjected to morphological erosion, giving an image containing continuous moving regions.
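A sketch of this consolidation step (dilation, per-block majority vote, erosion), assuming OpenCV and NumPy; the block size Z, the kernel size and the function name consolidate_regions are illustrative, since the patent does not fix them:

    import cv2
    import numpy as np

    def consolidate_regions(binary_img, Z=8, ksize=5):
        kernel = np.ones((ksize, ksize), np.uint8)
        dilated = cv2.dilate(binary_img, kernel)              # D_n
        h, w = dilated.shape
        out = dilated.copy()
        for y in range(0, h - h % Z, Z):
            for x in range(0, w - w % Z, Z):
                block = dilated[y:y + Z, x:x + Z]
                # majority vote: the whole Z x Z block becomes 255 if more than
                # half of its pixels are foreground, otherwise 0
                out[y:y + Z, x:x + Z] = 255 if np.count_nonzero(block) > (Z * Z) // 2 else 0
        return cv2.erode(out, kernel)                         # image with continuous moving regions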
Second, the connected regions are labelled. After the continuous moving regions have been obtained, the connected regions still need to be labelled.
This embodiment labels the regions by scanning the binary image row by row. Each run of foreground pixels in a row is described by a run (straight-line) data structure whose fields include the starting column m_1ColumnHead and the ending column m_1ColumnTail of the run.
Two runs Line_i and Line_o in adjacent rows are connected in the 8-neighbourhood sense only if they simultaneously satisfy the following relations:
Line_i.m_1ColumnTail + 1 ≥ Line_o.m_1ColumnHead;
Line_o.m_1ColumnTail + 1 ≥ Line_i.m_1ColumnHead;
By scanning line by line, linking all connected runs into a chain and assigning them a unified label, the information of each connected region is obtained. From this information the centroid, area and perimeter of each moving target can easily be computed for target classification or feature representation.
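A hedged sketch of this row-scan labelling: each row is split into runs of foreground pixels, runs in adjacent rows are merged when the two conditions above hold, and a union-find assigns one label per connected region. The function name label_by_runs and the simple quadratic merge loop are illustrative simplifications:

    import numpy as np

    def label_by_runs(binary_img):
        h, w = binary_img.shape
        runs = []                                   # (row, column head, column tail)
        for y in range(h):
            x = 0
            row = binary_img[y]
            while x < w:
                if row[x]:
                    head = x
                    while x < w and row[x]:
                        x += 1
                    runs.append((y, head, x - 1))
                else:
                    x += 1
        parent = list(range(len(runs)))             # union-find over runs

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        for i, (yi, hi, ti) in enumerate(runs):
            for j, (yj, hj, tj) in enumerate(runs):
                # 8-neighbourhood connection between runs of adjacent rows
                if yj == yi + 1 and ti + 1 >= hj and tj + 1 >= hi:
                    union(i, j)
        labels, out = {}, np.zeros((h, w), dtype=np.int32)
        for i, (y, head, tail) in enumerate(runs):
            root = find(i)
            labels.setdefault(root, len(labels) + 1)
            out[y, head:tail + 1] = labels[root]
        return out, len(labels)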
Finally, frame positioning.
The minimum bounding rectangle of each independent connected region is extracted and annotated, so as to distinguish the moving targets and their corresponding moving regions.
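A sketch of this framing step, assuming a label image such as the one produced above; the function name bounding_boxes is illustrative. It returns the minimum bounding rectangle, area and centroid of each labelled moving region:

    import numpy as np

    def bounding_boxes(label_img):
        boxes = {}
        for lbl in range(1, int(label_img.max()) + 1):
            ys, xs = np.nonzero(label_img == lbl)
            if xs.size == 0:
                continue
            boxes[lbl] = {
                "bbox": (int(xs.min()), int(ys.min()),
                         int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1)),
                "area": int(xs.size),
                "centroid": (float(xs.mean()), float(ys.mean())),
            }
        return boxes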
Step 204: positioning of the moving target.
The images obtained in step 203 are matched against the tracking-frame image of the moving target selected in the first camera.
The present embodiment uses histogram matching. First, the histograms W_1 and W_2 of the images obtained in step 203 and of the tracking-frame image selected in the first camera are computed.
W_1 and W_2 are then converted into images with a specified probability density function, and the most similar region is found. Concretely:
For each pixel of the images corresponding to the histograms W_1 and W_2, if the pixel value is r_k, this value is first mapped to its corresponding gray level s_k, and the gray level s_k is then mapped to the final gray level z_k.
The gray levels s_k and z_k can be computed as follows.
Suppose that r and z are the gray levels of the image before and after processing, respectively, and that p_r(r) and p_z(z) are the corresponding continuous probability density functions; p_r(r) is estimated from the image before processing, while p_z(z) is the specified probability density function that the processed image is expected to have. In addition, let s be a random variable with:
s = T(r) = ∫_0^r p_r(w) dw;    (1)
where w is the integration variable. Its discrete form is:
s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, 2, ..., L-1    (2)
where n is the total number of pixels in the image, n_j is the number of pixels with gray level r_j, and L is the number of discrete gray levels.
Next, define a random variable z such that:
G(z) = ∫_0^z p_z(t) dt = s    (3)
where t is the integration variable. Its discrete form is:
v_k = G(z_k) = Σ_{i=0}^{k} p_z(z_i) = s_k,  k = 0, 1, 2, ..., L-1    (4)
From equations (1) and (3) it follows that G(z) = T(r), so z must satisfy:
z = G^{-1}(s) = G^{-1}[T(r)]    (5)
The transformation function T(r) is obtained from equation (1), with p_r(r) estimated from the image before processing; the discrete form of (5) is:
z_k = G^{-1}[T(r_k)],  k = 0, 1, 2, ..., L-1    (6)
That is: first, equation (2) is used to precompute the mapped gray level s_k for each gray level r_k; then equation (4) is used to obtain the transformation function G from the specified density function p_z(z); finally, equation (6) is used to precompute z_k for each value of s_k.
After the above steps have been applied to the histograms W_1 and W_2, the matching of the moving target is complete, that is, the moving target selected by the user in the first camera has been located in the second camera.
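A minimal sketch of the histogram-specification mapping described by equations (1)-(6), assuming 8-bit grayscale NumPy arrays; the function name match_histogram is illustrative, and the choice of similarity measure between the remapped regions is left to the caller:

    import numpy as np

    def match_histogram(source, reference, levels=256):
        """Remap `source` so that its gray-level distribution follows that of `reference`."""
        src_hist, _ = np.histogram(source.ravel(), bins=levels, range=(0, levels))
        ref_hist, _ = np.histogram(reference.ravel(), bins=levels, range=(0, levels))
        s = np.cumsum(src_hist) / source.size        # s_k = T(r_k), equation (2)
        g = np.cumsum(ref_hist) / reference.size     # G(z_k), equation (4)
        # z_k = G^{-1}(s_k): smallest z with G(z) >= s_k, equation (6)
        mapping = np.clip(np.searchsorted(g, s), 0, levels - 1).astype(np.uint8)
        return mapping[source]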
Step 205: relay tracking.
The region located by the second camera is used as the initial tracking-frame region, and the moving target is relay-tracked. For example, an active tracking algorithm with the mean-shift (Mean Shift) method at its core can be adopted to control the motion of the second camera, so that the moving target always stays in the central area of the second camera's scene and the size of the tracking frame remains within a predetermined range.
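A hedged sketch of such a Mean Shift tracking loop, assuming OpenCV and a colour video stream from the second camera; the function name relay_track is illustrative, and the command that actually re-aims the dome camera is application-specific, so it is only indicated by a comment:

    import cv2

    def relay_track(capture, init_window):
        ok, frame = capture.read()
        if not ok:
            return
        x, y, w, h = init_window                      # region located in step 204
        roi = frame[y:y + h, x:x + w]
        hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
        roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
        cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
        window = (x, y, w, h)
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
            _, window = cv2.meanShift(back_proj, window, term)
            # here the dome camera would be commanded so that `window` stays
            # near the centre of the scene and keeps a suitable size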
By filtering the static regions out of the complex-scene image, the embodiment of the present invention can extract the moving regions of the moving target fairly accurately, which reduces the amount of computation for target matching and increases the matching accuracy.
Through the above description of the embodiments, a person skilled in the art can clearly understand that the above embodiments can be implemented in software, or in software plus the necessary general hardware platform. On the basis of this understanding, the technical solution of the above embodiments can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (a CD-ROM, USB flash disk, portable hard drive, etc.) and includes a number of instructions enabling a computer device (a personal computer, a server, a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any variation or replacement that can easily be conceived by a person familiar with the technical field within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be determined by the protection scope of the claims.

Claims (10)

1. A method for positioning a moving target based on frogeye visual characteristics, characterized in that it comprises:
adopting a moving-region detection algorithm based on the inter-frame difference method to simulate the sensitivity of the frogeye visual system to moving targets and extract the moving regions of an image; concretely: using the inter-frame difference method to perform a difference operation on adjacent frames of the image sequence collected by a second camera used for relay tracking, thereby obtaining the images of several moving regions containing one or more moving targets;
performing histogram matching between the images of said several moving regions containing one or more moving targets and the image of the moving-target tracking frame selected in a first camera, and finding the most similar region among the moving-region images, this region being the position of the selected moving target in the second camera.
2. The method according to claim 1, characterized in that, after finding the most similar region among the moving regions, the method further comprises:
using this region as the initial tracking-frame region of the second camera;
using the mean-shift (Mean Shift) algorithm to control the motion of the second camera, so that the moving target always stays in the central area of the second camera's scene and the size of the tracking frame remains within a predetermined range.
3. The method according to claim 1, characterized in that the step of using the inter-frame difference method to perform a difference operation on adjacent frames of the image sequence collected by the second camera used for relay tracking, thereby obtaining the images of several moving regions containing one or more moving targets, comprises:
using the three-frame difference method to perform a difference operation on each pair of adjacent frames among three consecutive frames, yielding two gray-level difference images:
D_{k-1,k}(x,y) = |f_{k-1}(x,y) - f_k(x,y)|;
D_{k,k+1}(x,y) = |f_{k+1}(x,y) - f_k(x,y)|;
where f_{k-1}(x,y), f_k(x,y) and f_{k+1}(x,y) are the three consecutive frames, and D_{k-1,k}(x,y) and D_{k,k+1}(x,y) are the gray-level difference images obtained from the two pairs of adjacent frames;
binarizing D_{k-1,k}(x,y) and D_{k,k+1}(x,y) with a threshold to obtain the corresponding binary images B_{k-1,k}(x,y) and B_{k,k+1}(x,y);
combining the binary images B_{k-1,k}(x,y) and B_{k,k+1}(x,y) with a pixel-wise AND operation to obtain the three-frame difference binary image Ds_k(x,y) containing the several moving regions with one or more moving targets:
Ds_k(x,y) = 1, if B_{k-1,k}(x,y) ∩ B_{k,k+1}(x,y) = 1;
Ds_k(x,y) = 0, otherwise.
4. The method according to claim 3, characterized in that the method further comprises: marking the one or more moving targets in said several moving regions;
concretely: using a morphological closing operation and resolution reduction to extract continuous moving regions from the images of said several moving regions containing one or more moving targets;
scanning, row by row, the binary image containing S runs (straight lines) obtained from the extraction of the continuous moving regions, linking all connected runs into a chain and assigning them a unified label to obtain the information of the connected regions, wherein S is a positive integer;
extracting and annotating the minimum bounding rectangle of each independent connected region.
5. The method according to claim 4, characterized in that the step of using a morphological closing operation and resolution reduction to extract continuous moving regions from the images of said several moving regions containing one or more moving targets comprises:
performing morphological dilation on the image containing the several moving regions with one or more moving targets to obtain the dilated image D_n;
reducing the resolution of D_n to obtain the image R_n; concretely: dividing D_n into sub-blocks of Z × Z pixels and, if more than half of the pixels in a sub-block have the value 255, setting all pixels of that sub-block to 255, otherwise to 0;
performing morphological erosion on the image R_n to obtain an image containing continuous moving regions.
6. The method according to claim 1, characterized in that the step of performing histogram matching between the images of said several moving regions containing one or more moving targets and the moving-target tracking-frame image selected in the first camera, and finding the most similar region among the images of said several moving regions, comprises:
computing, respectively, the histograms W_1 and W_2 of the images of said several moving regions containing one or more moving targets and of the moving-target tracking-frame image selected in the first camera;
converting the above histograms W_1 and W_2 into images with a specified probability density function and finding the most similar region.
7. The method according to claim 6, characterized in that the step of converting the above histograms W_1 and W_2 into images with a specified probability density function comprises:
for each pixel of the images corresponding to the histograms W_1 and W_2, if the pixel value is r_k, mapping this value to its corresponding gray level s_k and then mapping the gray level s_k to the final gray level z_k;
concretely: precomputing the mapped gray level s_k for each gray level r_k:
s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, 2, ..., L-1;
where p_r(r_j) is the corresponding continuous probability density function, n is the total number of pixels in the image, n_j is the number of pixels with gray level r_j, and L is the number of discrete gray levels;
obtaining the transformation function G from the specified density function p_z(z):
G(z_k) = Σ_{t=0}^{k} p_z(z_t) = s_k,  k = 0, 1, 2, ..., L-1, where z_k is the final gray level;
precomputing z_k for each value of s_k: z_k = G^{-1}[T(r_k)],  k = 0, 1, 2, ..., L-1.
8. The method according to claim 1, characterized in that, before using the inter-frame difference method to perform a difference operation on adjacent frames of the image sequence collected by the second camera used for relay tracking, the method further comprises:
performing moving-target detection and tracking by means of the first camera; after the first camera has detected a moving target, judging whether a predetermined tracking condition is met and, if so, calling an idle second camera to relay-track the moving target.
9. The method according to claim 8, characterized in that said judging whether a predetermined tracking condition is met comprises:
judging whether the tracking frame used by the first camera to monitor the moving target is located at the edge of the scene area monitored by the first camera; concretely: if the distance between the tracking frame and the border of the first camera's image in the longitudinal or transverse direction is smaller than a predetermined value, judging that the moving target is at the edge of the scene area monitored by the first camera.
10. The method according to claim 8 or 9, characterized in that said calling an idle second camera to relay-track the moving target comprises:
providing, at the corners of the scene area monitored by the first camera and at the centres of its top, bottom, left and right edges, P preset positions at which a second camera can carry out relay tracking; when the predetermined tracking condition is met, calling an idle second camera to move to the preset position closest to the moving target and carry out relay tracking.
CN201210574497.9A 2012-12-26 2012-12-26 Method for positioning moving target based on frogeye visual characteristics Active CN103077533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210574497.9A CN103077533B (en) 2012-12-26 2012-12-26 Method for positioning moving target based on frogeye visual characteristics

Publications (2)

Publication Number Publication Date
CN103077533A true CN103077533A (en) 2013-05-01
CN103077533B CN103077533B (en) 2016-03-02

Family

ID=48154052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210574497.9A Active CN103077533B (en) 2012-12-26 2012-12-26 Method for positioning moving target based on frogeye visual characteristics

Country Status (1)

Country Link
CN (1) CN103077533B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060098873A1 (en) * 2000-10-03 2006-05-11 Gesturetek, Inc., A Delaware Corporation Multiple camera control system
CN101572803A (en) * 2009-06-18 2009-11-04 中国科学技术大学 Customizable automatic tracking system based on video monitoring
CN101883261A (en) * 2010-05-26 2010-11-10 中国科学院自动化研究所 Method and system for abnormal target detection and relay tracking under large-range monitoring scene
CN102289822A (en) * 2011-09-09 2011-12-21 南京大学 Method for tracking moving target collaboratively by multiple cameras
CN102509088A (en) * 2011-11-28 2012-06-20 Tcl集团股份有限公司 Hand motion detecting method, hand motion detecting device and human-computer interaction system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gonzalez: "Digital Image Processing", 31 August 2007, Publishing House of Electronics Industry *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105791687A (en) * 2016-03-04 2016-07-20 苏州卓视蓝电子科技有限公司 Frogeye bionic detection method and frogeye bionic camera
CN107844734A (en) * 2016-09-19 2018-03-27 杭州海康威视数字技术股份有限公司 Monitoring objective determines method and device, video frequency monitoring method and device
CN107844734B (en) * 2016-09-19 2020-07-07 杭州海康威视数字技术股份有限公司 Monitoring target determination method and device and video monitoring method and device
CN107133969A (en) * 2017-05-02 2017-09-05 中国人民解放军火箭军工程大学 A kind of mobile platform moving target detecting method based on background back projection
CN107133969B (en) * 2017-05-02 2018-03-06 中国人民解放军火箭军工程大学 A kind of mobile platform moving target detecting method based on background back projection
CN109767454A (en) * 2018-12-18 2019-05-17 西北工业大学 Based on Space Time-frequency conspicuousness unmanned plane video moving object detection method
CN109767454B (en) * 2018-12-18 2022-05-10 西北工业大学 Unmanned aerial vehicle aerial video moving target detection method based on time-space-frequency significance
CN111885301A (en) * 2020-06-29 2020-11-03 浙江大华技术股份有限公司 Gun and ball linkage tracking method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN103077533B (en) 2016-03-02

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant