CN107146239A - Satellite video moving target detecting method and system - Google Patents


Info

Publication number
CN107146239A
CN107146239A
Authority
CN
China
Prior art keywords
pixel
frame
target
motion
candidate target
Prior art date
Legal status
Granted
Application number
CN201710266714.0A
Other languages
Chinese (zh)
Other versions
CN107146239B (en)
Inventor
孙向东
吴佳奇
张过
汪韬阳
蒋永华
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN201710266714.0A
Publication of CN107146239A
Application granted
Publication of CN107146239B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/20 — Analysis of motion
    • G06T7/269 — Analysis of motion using gradient-based methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/20 — Analysis of motion
    • G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10032 — Satellite or aerial image; Remote sensing
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30232 — Surveillance

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a satellite video moving target detection method and system, including: selecting a reference frame from the video, with all other frames serving as compensation frames, and extracting the feature points of each image block of the reference frame; tracking and matching the feature points of each image block with a forward-backward LK optical flow point tracking method to obtain a set of matched corresponding points between adjacent frames; using the matched point set to motion-compensate the original compensation frames; building a background model for each pixel of the first motion-compensated frame; comparing each new compensation frame against the background model to extract moving targets as a binary image, and extracting connected components from the binary image as candidate targets; separating possible local parallax pseudo-motion targets and ghost targets from the candidate targets; and updating the background with the new compensation frame. The invention can reliably reject the false detections caused by parallax "pseudo-motion" and by ghosts, effectively improves detection precision, and is widely applicable to motion analysis applications of satellite video.

Description

Satellite video moving target detecting method and system
Technical field
The invention belongs to the field of moving-background modeling and moving target detection, and in particular relates to a satellite video moving target detection method and system.
Background technology
As remote sensing applications deepen, application demand is shifting from periodic static surveys toward real-time dynamic monitoring; continuously monitoring global hotspot regions and targets by satellite to acquire dynamic information has become a pressing need. Beyond the static information contained in classical optical remote sensing imagery, a video satellite can also capture real-time dynamic information within a certain spatio-temporal extent, giving remote sensing observation broader application prospects. In video applications, moving target detection is an essential step of dynamic information extraction, and also the basis and premise of later high-level applications and processing. Accurate and robust detection and segmentation of moving targets is therefore particularly important. Current satellite video moving target detection suffers from serious false detections caused by global scene motion and local "pseudo-motion".
Summary of the invention
To address the problems of the prior art, the present invention proposes a satellite video moving target detection method and system that can avoid "pseudo-motion" false detections.
The technical scheme of the invention is as follows:
First, a satellite video moving target detection method, including:
S1: Select the intermediate frame of the video as the reference frame; all other frames of the video are compensation frames. Partition the reference frame into image blocks and extract the Good Features feature points of each image block;
S2: Starting from the 1st frame of the video, track and match the Good Features feature points of each image block between the reference frame and each compensation frame using the forward-backward LK optical flow point tracking method, obtaining a set of matched corresponding points;
S3: Estimate the inter-frame 2D affine model parameters of each image block of the reference frame from the matched corresponding point set, and establish the inter-frame 2D affine transformation relation; create an empty image of the same size as the reference frame as the new compensation frame, and, according to the inter-frame 2D affine transformation relation, resample the region of the original compensation frame corresponding to each image block of the reference frame into the new compensation frame, completing the motion compensation of the original compensation frame;
S4: For each pixel v in the first motion-compensated frame, randomly select n pixels from v and its neighborhood as the sampled pixels of v, and build the background model of v, Model(x) = {v1, v2, … vn, a}, where vx denotes the x-th sampled pixel, x = 1, 2 … n, and a is the update factor of pixel v;
S5: Extract moving targets by comparing the new compensation frame against the background model to obtain a binary image, and extract connected components from the binary image; each connected component is a candidate target. The moving target extraction is specifically: for each pixel V(X) in the new compensation frame, build the Euclidean distance sphere SR(V(X)) centred on V(X) with radius R, and take the intersection of SR(V(X)) with the sampled pixels of the background model of V(X); if the number of pixels in the intersection is greater than or equal to a count threshold, V(X) is a background pixel and its gray value is set to 0; otherwise V(X) is a motion pixel and its gray value is set to 255. R is an empirical value, taken in the range 18–22; the count threshold is an empirical value;
S6: Separate possible local parallax pseudo-motion targets and possible ghost targets from the candidate targets;
The separation of possible local parallax pseudo-motion targets from the candidate targets is specifically:
(1) extract the minimum bounding rectangle of the candidate target and compute its aspect ratio; if ratio > 3.5, the candidate target is judged to be a possible local parallax pseudo-motion target;
(2) extract the edge lines in the neighborhood of the candidate target and judge whether any edge line intersects the contour of the candidate target; if so, the candidate target is judged to be a possible local parallax pseudo-motion target;
The separation of possible ghost targets from the candidate targets is specifically:
record the position of each candidate target in the video frames; if a candidate target keeps the same position for 10 consecutive frames, it is judged to be a possible ghost target;
S7: For each background pixel in the new compensation frame, randomly select a number s from the natural numbers in the range [1, a]; if s = 1, randomly select one sampled pixel from the background model of that background pixel and replace it with the background pixel, performing the background update, where a is the update factor of the background pixel; if s ≠ 1, no background update is performed;
The update factor a of each pixel in the new compensation frame is set as follows:
1. extend the minimum bounding rectangle of each possible local parallax pseudo-motion target in the new compensation frame by b pixels in the direction of the translation vector of the inter-frame 2D affine model, obtaining the update-factor rectangle A; b is an empirical value in the range 10–12;
2. count the number of pixels contained in each candidate target in the new compensation frame one by one: for a candidate target containing no more than 3 pixels, set the update factor a of its pixels to 1; for a candidate target containing more than 3 but no more than 20 pixels, set the update factor a of those pixels belonging both to the candidate target and to the update-factor rectangle A to 5; for a candidate target containing more than 20 pixels, set the update factor a of those pixels belonging both to the candidate target and to the update-factor rectangle A to 1;
3. for the pixels contained in possible ghost targets in the new compensation frame, set the update factor a to 1;
4. set the update factor of all other pixels in the new compensation frame to 10.
In S3, before estimating the inter-frame 2D affine model parameters of each image block of the reference frame, the mismatched corresponding points in the matched point set are rejected using the MSAC method.
In S5, the Euclidean distance is represented by the gray-level difference between pixels.
Second, a satellite video moving target detection system, including:
a feature point extraction module, for selecting the intermediate frame of the video as the reference frame, with all other frames of the video as compensation frames; partitioning the reference frame into image blocks and extracting the Good Features feature points of each image block;
a tracking and matching module, for tracking and matching, starting from the 1st frame of the video, the Good Features feature points of each image block between the reference frame and each compensation frame using the forward-backward LK optical flow point tracking method, obtaining a set of matched corresponding points;
a motion compensation module, for estimating the inter-frame 2D affine model parameters of each image block of the reference frame from the matched corresponding point set, and establishing the inter-frame 2D affine transformation relation; creating an empty image of the same size as the reference frame as the new compensation frame, and, according to the inter-frame 2D affine transformation relation, resampling the region of the original compensation frame corresponding to each image block of the reference frame into the new compensation frame, completing the motion compensation of the original compensation frame;
a background model building module, for each pixel v in the first motion-compensated frame, randomly selecting n pixels from v and its neighborhood as the sampled pixels of v, and building the background model of v, Model(x) = {v1, v2, … vn, a}, where vx denotes the x-th sampled pixel, x = 1, 2 … n, and a is the update factor of pixel v;
a moving target extraction module, for extracting moving targets by comparing the new compensation frame against the background model to obtain a binary image, and extracting connected components from the binary image, each connected component being a candidate target; the moving target extraction is specifically: for each pixel V(X) in the new compensation frame, building the Euclidean distance sphere SR(V(X)) centred on V(X) with radius R, and taking the intersection of SR(V(X)) with the sampled pixels of the background model of V(X); if the number of pixels in the intersection is greater than or equal to a count threshold, V(X) is a background pixel and its gray value is set to 0; otherwise V(X) is a motion pixel and its gray value is set to 255; R is an empirical value, taken in the range 18–22; the count threshold is an empirical value;
a pseudo-motion target extraction module, for separating possible local parallax pseudo-motion targets and possible ghost targets from the candidate targets;
the pseudo-motion target extraction module further comprises a local parallax pseudo-motion target separation module and a possible-ghost target separation module, wherein:
the local parallax pseudo-motion target separation module separates possible local parallax pseudo-motion targets from the candidate targets, specifically:
(1) extract the minimum bounding rectangle of the candidate target and compute its aspect ratio; if ratio > 3.5, the candidate target is judged to be a possible local parallax pseudo-motion target;
(2) extract the edge lines in the neighborhood of the candidate target and judge whether any edge line intersects the contour of the candidate target; if so, the candidate target is judged to be a possible local parallax pseudo-motion target;
the possible-ghost target separation module separates possible ghost targets from the candidate targets, specifically:
record the position of each candidate target in the video frames; if a candidate target keeps the same position for 10 consecutive frames, it is judged to be a possible ghost target;
a background update module, for each background pixel in the new compensation frame, randomly selecting a number s from the natural numbers in the range [1, a]; if s = 1, randomly selecting one sampled pixel from the background model of that background pixel and replacing it with the background pixel, performing the background update, where a is the update factor of the background pixel; if s ≠ 1, no background update is performed;
The update factor a of each pixel in the new compensation frame is set as follows:
1. extend the minimum bounding rectangle of each possible local parallax pseudo-motion target in the new compensation frame by b pixels in the direction of the translation vector of the inter-frame 2D affine model, obtaining the update-factor rectangle A; b is an empirical value in the range 10–12;
2. count the number of pixels contained in each candidate target in the new compensation frame one by one: for a candidate target containing no more than 3 pixels, set the update factor a of its pixels to 1; for a candidate target containing more than 3 but no more than 20 pixels, set the update factor a of those pixels belonging both to the candidate target and to the update-factor rectangle A to 5; for a candidate target containing more than 20 pixels, set the update factor a of those pixels belonging both to the candidate target and to the update-factor rectangle A to 1;
3. for the pixels contained in possible ghost targets in the new compensation frame, set the update factor a to 1;
4. set the update factor of all other pixels in the new compensation frame to 10.
Compared with the prior art, the present invention has the following features and beneficial effects:
(1) The background model can be built from a single frame image and used for target detection and segmentation.
(2) Forward-backward optical flow matching over uniform blocks builds the inter-frame motion model, enabling accurate global motion compensation.
(3) By judging potential "pseudo-motion" and dynamically correcting the corresponding update factors, the false detections caused by parallax and ghosts can be well rejected, effectively improving detection precision.
(4) The invention is widely applicable to motion analysis applications of satellite video.
Brief description of the drawings
Fig. 1 is the overall flowchart of the present invention;
Fig. 2 is the block partition and feature extraction schematic of the embodiment;
Fig. 3 is the forward-backward optical flow matching schematic of the embodiment;
Fig. 4 is the motion compensation result of the embodiment;
Fig. 5 is the building pseudo-motion schematic of the embodiment;
Fig. 6 is the moving aircraft ghost schematic of the embodiment.
Embodiment
The present invention first builds, from the first motion-compensated frame, a background model containing a dynamic update factor using an improved ViBe modeling method. From the second frame onward, moving targets are segmented by comparing the compensation frame against the background model and detecting connected components; the update factor is then dynamically corrected by the "pseudo-motion" judgement, completing a locally adaptive background model update. During processing, the inter-frame global scene motion model is estimated with the forward-backward LK optical flow method over uniform blocks, and inter-frame motion compensation is performed. The method of the invention can meet the analysis and application demands of satellite video motion.
The technical solution of the present invention is described in detail below with reference to the accompanying drawings; see Fig. 1 for the specific steps of the invention:
Step 1: select the reference frame from the video, partition the reference frame into blocks, and extract the features of each image block.
In this embodiment, the intermediate frame of the video is set as the reference frame, and all other frames of the video are compensation frames. The reference frame is divided into M × N image blocks of equal size, so that the reference frame can be regarded as a new image composed of M rows and N columns of image blocks. Each image block is denoted Bmn, where Bmn is the image block in row m, column n, m = 1, 2 … M, n = 1, 2 … N. The feature points of the image blocks are extracted with the Good Features method. In the embodiment, the filter window size is set to 3 × 3 and the minimum distance is taken in the range [3, 5], so that enough evenly distributed Good Features points can be extracted even in low-frequency, weakly textured areas. See Fig. 2, which shows one frame of a Jilin-1 satellite video divided into 10 × 10 image blocks; the box in the lower right corner shows the Good Features points extracted from image block B10,1.
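As a minimal sketch (not the patent's own implementation), the uniform block partition of Step 1 can be written as a function returning the pixel bounds of each of the M × N blocks; the function name and the handling of sizes not divisible by M or N are assumptions:

```python
def partition_blocks(height, width, M, N):
    """Divide an image of size height x width into M rows x N columns of
    blocks; returns {(m, n): (r0, r1, c0, c1)} keyed by the 1-based block
    index Bmn used in the text, with half-open pixel ranges."""
    blocks = {}
    for m in range(1, M + 1):
        r0 = (m - 1) * height // M
        r1 = m * height // M
        for n in range(1, N + 1):
            c0 = (n - 1) * width // N
            c1 = n * width // N
            blocks[(m, n)] = (r0, r1, c0, c1)
    return blocks

# A 1000 x 1000 frame split into 10 x 10 blocks, as in Fig. 2:
grid = partition_blocks(1000, 1000, 10, 10)
```

In practice the Good Features points would then be extracted per sub-image, for example with OpenCV's goodFeaturesToTrack, using the 3 × 3 window and minimum distance stated above.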
Step 2: starting from the 1st frame of the video, track and match the Good Features feature points of each image block between the reference frame and each compensation frame using the forward-backward LK optical flow point tracking method, obtaining a set of matched corresponding points.
The point tracking method should be forward-backward consistent: whether a point is tracked forward or backward in time, the resulting trajectory should be the same. First, forward tracking is performed on the frame image sequence: the point Xt of frame t is tracked with LK optical flow to the point Xt+1 of frame t+1, and so on until the point Xt+k of frame t+k. Then backward tracking is performed: LK optical flow traces the point Xt+k of frame t+k back to the point ^Xt of frame t, producing a forward and a backward trajectory. Finally, the Euclidean distance between Xt and ^Xt is compared; if it is below a specified threshold, Xt and ^Xt are matched corresponding points. In the present invention k is taken as 1, i.e. the forward-backward tracking is performed between adjacent frames.
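The forward-backward consistency check above can be sketched as a filter over tracked point pairs; the threshold of 1 pixel is an assumption, since the patent only speaks of a specified threshold:

```python
import math

def fb_consistent_matches(points_t, points_back, threshold=1.0):
    """Keep only point pairs whose forward-backward error is below the
    threshold: points_t are the original points Xt, points_back the points
    ^Xt obtained by tracking forward with LK flow and then backward.
    Returns the indices of the points passing the consistency check."""
    keep = []
    for i, ((x, y), (bx, by)) in enumerate(zip(points_t, points_back)):
        if math.hypot(x - bx, y - by) < threshold:
            keep.append(i)
    return keep

pts  = [(10.0, 10.0), (50.0, 40.0), (80.0, 20.0)]
back = [(10.2, 9.9), (55.0, 44.0), (80.1, 20.1)]  # second track drifted
good = fb_consistent_matches(pts, back)  # -> [0, 2]
```

With a real tracker, points_back would come from two calls to an LK optical flow routine (frame t → t+1, then t+1 → t).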
Taking image block B10,1 as an example, the Good Features points extracted in B10,1 are tracked and matched according to the above method; the two adjacent frame images are overlaid and the matched point set is marked, see Fig. 3. In this way the match points with large errors can be masked out, while the even distribution of the points is preserved.
Step 3: estimate the inter-frame 2D affine model parameters of each image block of the reference frame from the matched corresponding point set, and establish the inter-frame 2D affine transformation relation. Preferably, before estimating the inter-frame 2D affine model parameters, the mismatched corresponding points in the matched point set are rejected with the MSAC (M-estimator SAmple Consensus) method, further improving the inter-frame transformation precision.
The inter-frame 2D affine model is given by formula (1):
x′ = a1·x + a2·y + a3, y′ = a4·x + a5·y + a6 (1)
In formula (1), a1, a2, a3, a4, a5, a6 are the inter-frame 2D affine model parameters, (x, y) are the coordinates of a point in the reference frame, and (x′, y′) are the coordinates of its corresponding point in the compensation frame.
Using the "inverse solution" indirect rectification method, an empty image of the same size as the reference frame is created as the new compensation frame. According to the inter-frame 2D affine transformation relation, the region of the original compensation frame corresponding to each image block of the reference frame is resampled into the new compensation frame, completing the motion compensation. Fig. 4 shows the overlay of adjacent frames after motion compensation, including the high-precision Good Features matched point set after parameter optimization.
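A least-squares estimate of the six parameters of formula (1) from matched point pairs can be sketched as follows; the use of numpy.linalg.lstsq is an assumption (the patent does not name a solver), and the MSAC outlier rejection mentioned above is omitted here:

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Solve x' = a1*x + a2*y + a3, y' = a4*x + a5*y + a6 in the
    least-squares sense from matched corresponding points, returning
    [a1, a2, a3, a4, a5, a6]."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.column_stack([src, np.ones(len(src))])  # rows: [x, y, 1]
    ax, _, _, _ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # a1, a2, a3
    ay, _, _, _ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # a4, a5, a6
    return np.concatenate([ax, ay])

# Points displaced by a pure translation (2, -1):
src = [(0, 0), (10, 0), (0, 10), (7, 3)]
dst = [(2, -1), (12, -1), (2, 9), (9, 2)]
params = estimate_affine(src, dst)  # ~ [1, 0, 2, 0, 1, -1]
```

One such model would be fitted per image block of the reference frame, giving the block-wise transformation used for resampling.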
Step 4: build the background model using the first motion-compensated frame of the video.
The background model of the invention is based on the ViBe modeling idea and completes the modeling from the first frame alone. ViBe modeling uses a single-frame, pixel-level random neighborhood sampling scheme: for each pixel v in the frame image, n pixels are randomly selected from v and its 8-neighborhood as the sampled pixels of v, and the background model is built. The background model is denoted: Model(x) = {v1, v2, … vn}, x = 1, 2 … n, where vx denotes the x-th sampled pixel and n takes a value in the range 15–25.
This random sampling scheme guarantees that every pixel has an equal random chance of being selected, so that the constructed background model is objective and reliable, avoiding interference from subjective factors. Meanwhile, since the background model only keeps samples from the 8-neighborhood, normal detection of small targets is ensured while false detections caused by background jitter are resisted. In view of the local "pseudo-motion" problem in satellite video, the present invention improves the ViBe modeling by adding an update factor a to the background model; the update factor a denotes the update frequency of the background, and the background model becomes: Model(x) = {v1, v2, … vn, a}.
Step 5: extract moving targets by comparing the new compensation frame against the background model.
For each pixel in the new compensation frame, the similarity to the sampled pixels in the background model of the pixel is compared. If the number of sampled pixels meeting the specified similarity exceeds the count threshold, the pixel is judged to be a background pixel; otherwise it is a motion pixel.
The similarity judgement can be illustrated by the Euclidean distance in a mathematical 2D space. Let V(X) be the pixel to be judged, SR(V(X)) the Euclidean distance sphere centred on V(X) with radius R, and {v1, v2, … vn} the set of sampled pixels in the background model. If the number of pixels Num in the intersection of SR(V(X)) and {v1, v2, … vn} is greater than or equal to the count threshold T, V(X) is considered a background pixel and its gray value is set to 0; otherwise it is a foreground pixel, i.e. a motion pixel, and its gray value is set to 255. R is an empirical value, taken in the range 18–22; the count threshold T is an empirical value.
The case where V(X) is a background pixel is expressed by formula (2):
Num{ SR(V(X)) ∩ {v1, v2, … vn} } ≥ T (2)
During the judgement, the Euclidean distance is represented by the gray-level difference between pixels. After all pixels of the new compensation frame have been judged, the binary image of foreground and background is obtained; connected component analysis and extraction are then performed on it, each connected component is regarded as one candidate target, and the target segmentation is completed.
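The per-pixel test of formula (2) can be sketched for a single gray value as below; R = 20 sits in the stated 18–22 range, while T = 2 and the strict inequality for the distance are assumptions (the patent leaves T as an empirical value):

```python
def classify_pixel(value, samples, R=20, T=2):
    """Formula (2): V(X) is background when at least T of the model's
    sampled pixels lie within gray-level distance R of it.
    Returns 0 for a background pixel, 255 for a motion pixel."""
    num = sum(1 for s in samples if abs(value - s) < R)
    return 0 if num >= T else 255

samples = [100, 104, 98, 150, 101]
classify_pixel(102, samples)   # 4 samples within R=20 -> background (0)
classify_pixel(200, samples)   # 0 samples within R=20 -> motion (255)
```

Applying this to every pixel of the new compensation frame yields the binary foreground/background image on which the connected components are extracted.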
Step 6: pseudo-motion judgement.
The segmented candidate targets mainly consist of three parts: real moving targets, parallax "pseudo-motion" targets, and "ghost" targets; the "pseudo-motion" targets comprise the parallax "pseudo-motion" targets and the "ghost" targets. This step judges the possible local parallax "pseudo-motion" targets and the possible "ghost" targets.
6.1 According to the characteristic features of local parallax "pseudo-motion", all candidate targets are further processed to separate the possible local parallax "pseudo-motion" targets.
The judgement conditions of a possible local parallax "pseudo-motion" target are:
(1) extract the minimum bounding rectangle of the connected component from its node coordinates and compute the aspect ratio of the rectangle; if ratio > 3.5, the candidate target represented by the connected component is a possible local parallax "pseudo-motion" target;
(2) perform edge extraction in the neighborhood of the candidate target and judge whether any (non-closed) edge line intersects the contour of the candidate target; if so, the candidate target is a possible local parallax "pseudo-motion" target.
The minimum bounding rectangle of a possible local parallax "pseudo-motion" target is extended outward by 1 pixel; at the same time, the rectangle is extended by b pixels in the direction of the translation vector of the inter-frame 2D affine model, obtaining the update-factor rectangle A, with b taken in the range 10–12. See Fig. 5, which shows the detection result of a certain frame: the solid box is the minimum bounding rectangle of a candidate target judged to be a possible local parallax "pseudo-motion" target, and the dashed box is the update-factor rectangle A. Extending by about 10 pixels mainly accounts for changes in the candidate target's direction of motion. It is worth noting that video satellites generally fly in sun-synchronous orbits, so the translation vector direction generally points upward.
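Condition (1) can be sketched as below; as a simplification, the axis-aligned bounding box of the component's pixels stands in for the true minimum bounding rectangle, which in general may be rotated:

```python
def is_parallax_candidate(pixels, ratio_threshold=3.5):
    """Condition (1): a candidate whose bounding rectangle is very
    elongated (aspect ratio > 3.5) is a possible local parallax
    pseudo-motion target. pixels is a list of (row, col) coordinates."""
    rows = [p[0] for p in pixels]
    cols = [p[1] for p in pixels]
    h = max(rows) - min(rows) + 1
    w = max(cols) - min(cols) + 1
    ratio = max(h, w) / min(h, w)
    return ratio > ratio_threshold

thin = [(0, c) for c in range(8)]                    # 1 x 8 strip, ratio 8
blob = [(r, c) for r in range(4) for c in range(4)]  # 4 x 4 block, ratio 1
```

Elongated components typically arise at tall-building edges, where parallax between frames produces thin sliver-shaped false foreground.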
6.2 Separate the possible "ghost" targets from the candidate targets.
During background modeling, the initial position of a moving target is sampled with the moving target itself as sampled pixels; after the moving target leaves its initial position, the real background pixels at that position will be judged as target pixels. As shown in Fig. 6, the detection result of an aircraft target (a moving target): a ghost image of the aircraft appears at the target's initial position, i.e. a "ghost". According to the characteristics of a "ghost" target, the judgement method is: when a moving target appears at a certain position for the first time, the position is recorded; if the moving target remains motionless at that position for c consecutive frames, it is considered a possible "ghost" target. c takes a value in the range 10–15.
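The ghost judgement can be sketched as a streak counter over per-frame detections; c = 10 is taken from the stated 10–15 range, and the availability of a stable target identity across frames is an assumption (the patent records positions rather than identities):

```python
def find_ghosts(positions_per_frame, c=10):
    """Flag candidate targets that keep exactly the same position for c
    consecutive frames as possible ghosts. positions_per_frame is a list
    (one entry per frame) of {target_id: (row, col)} detections."""
    streak, ghosts, prev = {}, set(), {}
    for detections in positions_per_frame:
        for tid, pos in detections.items():
            streak[tid] = streak.get(tid, 1) + 1 if prev.get(tid) == pos else 1
            if streak[tid] >= c:
                ghosts.add(tid)
        prev = dict(detections)
    return ghosts

# Target 'a' never moves over 12 frames; target 'b' drifts every frame:
frames = [{'a': (5, 5), 'b': (i, i)} for i in range(12)]
```

In the full pipeline, pixels of flagged ghosts then receive update factor a = 1 so the background model absorbs them quickly.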
Step 7: background update.
The regions of real moving targets and "pseudo-motion" targets and their neighborhoods change over time. To adapt to the dynamic changes of the scene and reduce the false-alarm rate, the background model must be adaptively updated after the "pseudo-motion" target segmentation. The update strategy is: the background model of each pixel contains an update factor a; for each pixel detected as background in the new compensation frame, a number s is randomly selected from the natural numbers 1 to a. If s = 1, one sampled pixel is randomly selected from the background model of that background pixel and replaced by the background pixel; the neighborhood of the background pixel is likewise updated with probability 1/a. Thus the update frequency of each pixel's background model is about 1/a, and at each update every background model sample has a probability of about 1/n of being replaced. If s ≠ 1, no background update is performed.
The background update can be expressed as follows: with probability 1/a, a randomly selected sample vr of Model(x) is replaced by the current background pixel V(X); otherwise Model(x) is left unchanged.
After processing as above, the possible local parallax "pseudo-motion" regions are constantly updated, i.e. updated with a = 1, while the other regions of the new compensation frame are updated probabilistically. The resulting background model describes the current scene more accurately and adapts to local parallax "pseudo-motion" changes; without reducing the detection rate, the false detections of "pseudo-motion" are effectively reduced and the detection precision is effectively improved.
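The probabilistic update of a single pixel's model can be sketched as below, assuming the per-pixel model structure of samples plus update factor described in Step 4; the function name and return value are illustrative:

```python
import random

def update_background(model_entry, pixel_value):
    """Probabilistic ViBe-style update: when the draw s from [1, a]
    equals 1 (probability 1/a), replace one randomly chosen sample of the
    pixel's background model with the current background pixel value."""
    s = random.randint(1, model_entry['a'])
    if s == 1:
        idx = random.randrange(len(model_entry['samples']))
        model_entry['samples'][idx] = pixel_value
        return True
    return False

entry = {'samples': [10, 11, 12, 13], 'a': 1}  # a = 1: update every time
update_background(entry, 99)
```

With a = 1 (the factor given to pseudo-motion and ghost pixels), the model is refreshed on every frame; with the default a = 10, each background pixel refreshes about once in ten frames.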
The update factor a is set according to formula (4), in which:
a ∈ A denotes that the pixel corresponding to update factor a is contained in the update-factor rectangle A;
Num(Obj) denotes the number of pixels contained in a candidate target: if the pixel count is not greater than 3, the candidate target is directly classified as background and the update factor a of its pixels is set to 1; if the pixel count is greater than 3 but not greater than 20, the update factor a of the pixels belonging both to the candidate target and to the update-factor rectangle A is set to 5; if the pixel count is greater than 20, the update factor a of the pixels belonging both to the candidate target and to the update-factor rectangle A is set to 1;
Ghost denotes a "ghost" target: for the pixels contained in possible "ghost" targets in the new compensation frame, the update factor is set to a = 1;
for all other pixels, the update factor is set to 10.
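The decision rule of formula (4) can be sketched per pixel as below; defaulting the pixels of medium and large candidate targets that lie outside the rectangle A to a = 10 is an assumption, since the patent does not assign them a value explicitly:

```python
def update_factor(num_pixels, in_rect_a, is_ghost):
    """Formula (4) as a decision rule for one pixel of a candidate target:
    tiny targets and ghost pixels get a = 1 (fast absorption into the
    background), medium targets inside the update-factor rectangle A get
    a = 5, large targets inside A get a = 1, everything else keeps the
    default a = 10."""
    if is_ghost:
        return 1
    if num_pixels <= 3:
        return 1
    if in_rect_a:
        return 5 if num_pixels <= 20 else 1
    return 10
```

The smaller the factor, the more often the pixel's background model is refreshed, so suspect regions are absorbed quickly while genuine moving targets (a = 10) are preserved.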

Claims (4)

1. A satellite video moving target detection method, characterized by comprising:
S1: selecting the intermediate frame of the video as the reference frame, the other frames of the video being compensation frames; partitioning the reference frame into image blocks and extracting the Good Features feature points of each image block;
S2: starting from the first video frame, tracking and matching the Good Features feature points of each image block between the reference frame and the compensation frame using a forward-backward LK optical-flow point tracking method, to obtain a set of matched corresponding points;
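The forward-backward LK tracking in S2 implicitly relies on a consistency check: a feature point is kept only if tracking it forward into the compensation frame and then backward again returns it close to its starting position. A minimal sketch of that filter, assuming NumPy arrays of point coordinates and an illustrative 1-pixel tolerance (the patent does not state one):

```python
import numpy as np

def forward_backward_filter(pts0, pts1, pts0_back, tol=1.0):
    """Keep only tracks that survive the forward-backward check.

    pts0      : (N, 2) feature points in the reference frame
    pts1      : (N, 2) the same points tracked forward into the
                compensation frame (e.g. with pyramidal LK flow)
    pts0_back : (N, 2) pts1 tracked backward into the reference frame
    tol       : maximum forward-backward error in pixels (assumed value)
    Returns the matched point sets with unstable tracks removed.
    """
    err = np.linalg.norm(pts0 - pts0_back, axis=1)  # round-trip error
    keep = err < tol
    return pts0[keep], pts1[keep]
```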
S3: estimating the inter-frame 2D affine model parameters of each image block of the reference frame from the matched corresponding points, and establishing the inter-frame 2D affine transformation; creating an empty image of the same size as the reference frame as the new compensation frame and, according to the inter-frame 2D affine transformation, resampling the regions of the original compensation frame corresponding to each image block of the reference frame into the new compensation frame, thereby completing the motion compensation of the original compensation frame;
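The per-block 2D affine estimation in S3 amounts to a linear least-squares fit over the matched corresponding points. A minimal NumPy sketch, assuming outliers have already been rejected (claim 2 uses MSAC for this); the function names are illustrative:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine model dst ~ A @ [x, y, 1]^T fitted from
    the matched corresponding points of one image block."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                 # (N, 3) design matrix
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return sol.T                               # (2, 3) affine matrix

def apply_affine(A, pts):
    """Map points with the estimated inter-frame affine model."""
    ones = np.ones((len(pts), 1))
    return np.hstack([pts, ones]) @ A.T
```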
S4: for each pixel v in the first frame after motion compensation, randomly selecting n pixels from v and its neighborhood as sampled pixels of v, and building the background model of v, Model(x) = {v1, v2, ..., vn, a}, where vx denotes the x-th sampled pixel, x = 1, 2, ..., n, and a is the updating factor of pixel v;
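A hedged sketch of the S4 model initialization, assuming an 8-neighborhood (the claim says only "pixel v and its neighborhood") and a plain nested-list frame; n = 20 and a = 10 are placeholder defaults consistent with the values used elsewhere in the claims:

```python
import random

def init_background_model(frame, x, y, n=20, a=10):
    """Build the background model for pixel (y, x) by randomly sampling
    n pixels from the pixel itself and its 8-neighborhood.

    frame : 2-D nested list/array of gray values
    Returns a dict holding the n samples plus the updating factor a.
    """
    h, w = len(frame), len(frame[0])
    # candidate positions: the pixel and its valid 8-neighbors
    neigh = [(y + dy, x + dx)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if 0 <= y + dy < h and 0 <= x + dx < w]
    samples = [frame[j][i]
               for j, i in (random.choice(neigh) for _ in range(n))]
    return {"samples": samples, "a": a}
```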
S5: extracting moving targets by comparing the new compensation frame with the background model to obtain a binary image, and extracting connected components from the binary image, each connected component being a candidate target; the moving-target extraction is specifically: for each pixel V(X) in the new compensation frame, constructing a Euclidean-distance sphere SR(V(X)) centered at V(X) with radius R, and taking the intersection of SR(V(X)) with the set of sampled pixels in the background model of V(X); if the number of pixels in the intersection is greater than or equal to a count threshold, V(X) is a background pixel and its gray value is set to 0; otherwise V(X) is a motion pixel and its gray value is set to 255; R is an empirical value in the range 18 to 22, and the count threshold is likewise empirical;
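Per claim 3, the S5 classification reduces to counting model samples whose gray-level difference from the current pixel is below R. A minimal sketch; the count threshold value (2, as in the original ViBe algorithm) is chosen only for illustration, since the claim leaves it as an empirical value:

```python
def classify_pixel(value, samples, R=20, threshold=2):
    """Classify one pixel of the new compensation frame.

    value     : gray value of the pixel V(X)
    samples   : sampled gray values from the pixel's background model
    R         : distance radius, empirical (18-22 per the claim)
    threshold : minimum number of close samples, empirical (assumed 2)
    Returns 0 for background, 255 for a motion pixel.
    """
    close = sum(1 for s in samples if abs(value - s) < R)
    return 0 if close >= threshold else 255
```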
S6: separating possible local-parallax pseudo-motion targets and possible ghost targets from the candidate targets;
the separation of possible local-parallax pseudo-motion targets from the candidate targets being specifically:
(1) extracting the minimum bounding rectangle of a candidate target and computing its aspect ratio; if ratio > 3.5, the candidate target is judged to be a possible local-parallax pseudo-motion target;
(2) extracting edge lines in the neighborhood of a candidate target and judging whether an edge line intersects the contour of the candidate target; if it does, the candidate target is judged to be a possible local-parallax pseudo-motion target;
the separation of possible ghost targets from the candidate targets being specifically:
recording the position of each candidate target across the video frames; if the position of a candidate target is identical over 10 consecutive frames, it is judged to be a possible ghost target;
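The ghost test in S6 can be expressed as a check over the recorded position history of a candidate target. In this sketch, positions is a hypothetical list of per-frame positions (e.g. connected-component centroids); the 10-frame window follows the claim:

```python
def is_possible_ghost(positions, window=10):
    """Flag a candidate target as a possible ghost.

    positions : per-frame recorded positions of the candidate target,
                oldest first (representation is an assumption; the
                claim only says "position in the video frames")
    window    : number of consecutive frames required (10 per claim 1)
    """
    if len(positions) < window:
        return False                    # not enough history yet
    recent = positions[-window:]
    return all(p == recent[0] for p in recent)
```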
S7: for each background pixel in the new compensation frame, randomly drawing a number s from the natural numbers in the range [1, a]; if s = 1, randomly selecting one sampled pixel from the background model of that background pixel and replacing it with the background pixel to perform the background update, a being the updating factor of that background pixel; if s ≠ 1, performing no background update;
the updating factor a of each pixel in the new compensation frame being set as follows:
(1) extending the minimum bounding rectangle of each possible local-parallax pseudo-motion target in the new compensation frame by b pixels in the direction of the translation vector of the inter-frame 2D affine model, to obtain updating-factor rectangle A, b being an empirical value in the range 10 to 12;
(2) counting the number of pixels of each candidate target in the new compensation frame: for a candidate target containing no more than 3 pixels, setting the updating factor a of its pixels to 1; for a candidate target containing more than 3 but no more than 20 pixels, setting a to 5 for pixels that belong to both the candidate target and updating-factor rectangle A; for a candidate target containing more than 20 pixels, setting a to 1 for pixels that belong to both the candidate target and updating-factor rectangle A;
(3) for pixels contained in a possible ghost target in the new compensation frame, setting the updating factor a to 1;
(4) for all other pixels in the new compensation frame, setting the updating factor to 10.
2. The satellite video moving target detection method according to claim 1, characterized in that:
in S3, before the inter-frame 2D affine model parameters of each image block of the reference frame are estimated, mismatched corresponding points are removed from the matched point set using the MSAC method.
3. The satellite video moving target detection method according to claim 1, characterized in that:
in S5, the Euclidean distance is represented by the gray-level difference between pixels.
4. A satellite video moving target detection system, characterized by comprising:
a feature point extraction module, for selecting the intermediate frame of the video as the reference frame, the other frames of the video being compensation frames; partitioning the reference frame into image blocks and extracting the Good Features feature points of each image block;
a tracking and matching module, for tracking and matching, starting from the first video frame, the Good Features feature points of each image block between the reference frame and the compensation frame using a forward-backward LK optical-flow point tracking method, to obtain a set of matched corresponding points;
a motion compensation module, for estimating the inter-frame 2D affine model parameters of each image block of the reference frame from the matched corresponding points and establishing the inter-frame 2D affine transformation; creating an empty image of the same size as the reference frame as the new compensation frame and, according to the inter-frame 2D affine transformation, resampling the regions of the original compensation frame corresponding to each image block of the reference frame into the new compensation frame, thereby completing the motion compensation of the original compensation frame;
a background model building module, for randomly selecting, for each pixel v in the first frame after motion compensation, n pixels from v and its neighborhood as sampled pixels of v, and building the background model of v, Model(x) = {v1, v2, ..., vn, a}, where vx denotes the x-th sampled pixel, x = 1, 2, ..., n, and a is the updating factor of pixel v;
a moving target extraction module, for extracting moving targets by comparing the new compensation frame with the background model to obtain a binary image, and extracting connected components from the binary image, each connected component being a candidate target; the moving-target extraction is specifically: for each pixel V(X) in the new compensation frame, constructing a Euclidean-distance sphere SR(V(X)) centered at V(X) with radius R, and taking the intersection of SR(V(X)) with the set of sampled pixels in the background model of V(X); if the number of pixels in the intersection is greater than or equal to a count threshold, V(X) is a background pixel and its gray value is set to 0; otherwise V(X) is a motion pixel and its gray value is set to 255; R is an empirical value in the range 18 to 22, and the count threshold is likewise empirical;
a pseudo-motion target extraction module, for separating possible local-parallax pseudo-motion targets and possible ghost targets from the candidate targets;
the pseudo-motion target extraction module further comprising a local-parallax pseudo-motion target separation module and a possible ghost target separation module, wherein:
the local-parallax pseudo-motion target separation module separates possible local-parallax pseudo-motion targets from the candidate targets, specifically:
(1) extracting the minimum bounding rectangle of a candidate target and computing its aspect ratio; if ratio > 3.5, the candidate target is judged to be a possible local-parallax pseudo-motion target;
(2) extracting edge lines in the neighborhood of a candidate target and judging whether an edge line intersects the contour of the candidate target; if it does, the candidate target is judged to be a possible local-parallax pseudo-motion target;
the possible ghost target separation module separates possible ghost targets from the candidate targets, specifically: recording the position of each candidate target across the video frames; if the position of a candidate target is identical over 10 consecutive frames, it is judged to be a possible ghost target;
a background compensation module, for randomly drawing, for each background pixel in the new compensation frame, a number s from the natural numbers in the range [1, a]; if s = 1, randomly selecting one sampled pixel from the background model of that background pixel and replacing it with the background pixel to perform the background update, a being the updating factor of that background pixel; if s ≠ 1, performing no background update;
the updating factor a of each pixel in the new compensation frame being set as follows:
(1) extending the minimum bounding rectangle of each possible local-parallax pseudo-motion target in the new compensation frame by b pixels in the direction of the translation vector of the inter-frame 2D affine model, to obtain updating-factor rectangle A, b being an empirical value in the range 10 to 12;
(2) counting the number of pixels of each candidate target in the new compensation frame: for a candidate target containing no more than 3 pixels, setting the updating factor a of its pixels to 1; for a candidate target containing more than 3 but no more than 20 pixels, setting a to 5 for pixels that belong to both the candidate target and updating-factor rectangle A; for a candidate target containing more than 20 pixels, setting a to 1 for pixels that belong to both the candidate target and updating-factor rectangle A;
(3) for pixels contained in a possible ghost target in the new compensation frame, setting the updating factor a to 1;
(4) for all other pixels in the new compensation frame, setting the updating factor to 10.
CN201710266714.0A 2017-04-21 2017-04-21 Satellite video moving target detection method and system Active CN107146239B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710266714.0A CN107146239B (en) 2017-04-21 2017-04-21 Satellite video moving target detection method and system


Publications (2)

Publication Number Publication Date
CN107146239A true CN107146239A (en) 2017-09-08
CN107146239B CN107146239B (en) 2020-01-07

Family

ID=59774431


Country Status (1)

Country Link
CN (1) CN107146239B (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101087413A (en) * 2006-06-07 2007-12-12 中兴通讯股份有限公司 Division method of motive object in video sequence
CN106022263A (en) * 2016-05-19 2016-10-12 西安石油大学 Vehicle tracking method in fusion with feature matching and optical flow method


Non-Patent Citations (3)

Title
M.R.ANJUM ET AL.: "Target detection and tracking analysis of moving object based on PCLFM simulated model for airborne satellite communication", 《2016 INTERNATIONAL CONFERENCE ON ELECTRONICS INFORMATION AND EMERGENCY COMMUNICATION》 *
ZHEN LEI ET AL.: "Tracking moving weak objects in celestial image sequences", 《IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS》 *
MO SHAOWEN ET AL.: "Moving target detection algorithm based on improved visual background extraction", 《ACTA OPTICA SINICA》 *

Cited By (17)

Publication number Priority date Publication date Assignee Title
CN108076341A (en) * 2017-12-19 2018-05-25 武汉大学 A kind of video satellite is imaged in-orbit real-time digital image stabilization method and system
CN110175495A (en) * 2019-01-04 2019-08-27 北京理工大学 A kind of small and weak moving target detection method of remote sensing image
CN110378927A (en) * 2019-04-29 2019-10-25 北京佳讯飞鸿电气股份有限公司 A kind of object detecting and tracking method based on the colour of skin
CN110084765A (en) * 2019-05-05 2019-08-02 Oppo广东移动通信有限公司 A kind of image processing method, image processing apparatus and terminal device
CN110084765B (en) * 2019-05-05 2021-08-06 Oppo广东移动通信有限公司 Image processing method, image processing device and terminal equipment
CN110084837A (en) * 2019-05-15 2019-08-02 四川图珈无人机科技有限公司 Object detecting and tracking method based on UAV Video
CN110084837B (en) * 2019-05-15 2022-11-04 四川图珈无人机科技有限公司 Target detection and tracking method based on unmanned aerial vehicle video
WO2020238902A1 (en) * 2019-05-29 2020-12-03 腾讯科技(深圳)有限公司 Image segmentation method, model training method, apparatuses, device and storage medium
US11900613B2 (en) 2019-05-29 2024-02-13 Tencent Technology (Shenzhen) Company Limited Image segmentation method and apparatus, model training method and apparatus, device, and storage medium
WO2021057455A1 (en) * 2019-09-25 2021-04-01 深圳大学 Background motion estimation method and apparatus for infrared image sequence, and storage medium
CN111104870A (en) * 2019-11-27 2020-05-05 珠海欧比特宇航科技股份有限公司 Motion detection method, device and equipment based on satellite video and storage medium
CN111104870B (en) * 2019-11-27 2024-02-13 珠海欧比特卫星大数据有限公司 Motion detection method, device, equipment and storage medium based on satellite video
CN111524082A (en) * 2020-04-26 2020-08-11 上海航天电子通讯设备研究所 Target ghost eliminating method
CN111524082B (en) * 2020-04-26 2023-04-25 上海航天电子通讯设备研究所 Target ghost eliminating method
CN111815667A (en) * 2020-06-23 2020-10-23 成都信息工程大学 Method for detecting moving target with high precision under camera moving condition
CN111815667B (en) * 2020-06-23 2022-06-17 成都信息工程大学 Method for detecting moving target with high precision under camera moving condition
CN112991397A (en) * 2021-04-19 2021-06-18 深圳佑驾创新科技有限公司 Traffic sign tracking method, apparatus, device and storage medium


Similar Documents

Publication Publication Date Title
CN107146239A (en) Satellite video moving target detecting method and system
CN106780620B (en) Table tennis motion trail identification, positioning and tracking system and method
CN106846359B (en) Moving target rapid detection method based on video sequence
CN112734852B (en) Robot mapping method and device and computing equipment
CN104809689B (en) A kind of building point cloud model base map method for registering based on profile
CN111462200A (en) Cross-video pedestrian positioning and tracking method, system and equipment
CN106683121A (en) Robust object tracking method in fusion detection process
CN107564062A (en) Pose method for detecting abnormality and device
CN103700113B (en) A kind of lower regarding complex background weak moving target detection method
CN111209915A (en) Three-dimensional image synchronous identification and segmentation method based on deep learning
CN105279771B (en) A kind of moving target detecting method based on the modeling of online dynamic background in video
CN107169985A (en) A kind of moving target detecting method based on symmetrical inter-frame difference and context update
CN110992263B (en) Image stitching method and system
CN101383899A (en) Video image stabilizing method for space based platform hovering
CN103854283A (en) Mobile augmented reality tracking registration method based on online study
CN104197933B (en) Method for enhancing and extracting star sliding in telescope field of view
CN104077596A (en) Landmark-free tracking registering method
CN104182968B (en) The fuzzy moving-target dividing method of many array optical detection systems of wide baseline
Zhou et al. Exploring faster RCNN for fabric defect detection
CN104751466B (en) A kind of changing object tracking and its system based on conspicuousness
CN105279769A (en) Hierarchical particle filtering tracking method combined with multiple features
CN113744315B (en) Semi-direct vision odometer based on binocular vision
CN106683062B (en) A kind of moving target detecting method based on ViBe under Still Camera
CN112215925A (en) Self-adaptive follow-up tracking multi-camera video splicing method for coal mining machine
CN105913435A (en) Multidimensional remote sensing image matching method and multidirectional remote sensing image matching system suitable for large area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant