CN104717573B - Method for generating a video summary - Google Patents

Method for generating a video summary

Info

Publication number
CN104717573B
CN104717573B (granted from application CN201510098645.8A)
Authority
CN
China
Prior art keywords
target
image
track
moving target
video
Prior art date
Legal status
Active
Application number
CN201510098645.8A
Other languages
Chinese (zh)
Other versions
CN104717573A (en)
Inventor
王喆
叶泽雄
周励琨
林倞
张伟军
Current Assignee
Guangzhou Wei'an Polytron Technologies Inc
Original Assignee
Guangzhou Wei An Electron Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Wei An Electron Technology Co Ltd filed Critical Guangzhou Wei An Electron Technology Co Ltd
Priority to CN201510098645.8A priority Critical patent/CN104717573B/en
Publication of CN104717573A publication Critical patent/CN104717573A/en
Application granted granted Critical
Publication of CN104717573B publication Critical patent/CN104717573B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving models

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The present invention provides a method for generating a video summary, comprising: acquiring real-time or playback video data and performing background modeling to extract the moving targets of the current image; tracking and matching the current moving targets to obtain their motion trajectories, and saving the trajectory information, together with the image and position information of the updated background, to the hard disk; reading the position information of the moving targets, filtering them, and retaining those located in a user-specified region; grouping the moving targets in the user-specified region to obtain multiple target groups; arranging the positions of all target groups and computing the order in which each target group appears in the summary video; and, in the order generated for each target group, reading the corresponding image data and pasting it onto the corresponding background picture to generate the summary video. Implementing the method for generating a video summary of the present invention yields the following beneficial effects: online video can be analyzed in real time, and the moving targets in a specified region can be filtered on demand.

Description

Method for generating a video summary
Technical field
The present invention relates to the field of video surveillance, and more particularly to a method for generating a video summary.
Background art
Video summarization techniques fall broadly into two classes: static video summarization (key-frame extraction) and dynamic video summarization (video condensation). In dynamic video summarization, moving targets are detected and analyzed to extract their motion trajectories, which are then recombined by trajectory position analysis to form a condensed summary video. As the number of surveillance cameras in China grows rapidly, video summarization plays an increasingly important role in the security monitoring field.
In surveillance scenarios, video summarization improves the efficiency of video playback and reduces the time a user spends reviewing recordings. However, analyzing video is a time-consuming process: retrieving playback recordings and analyzing them takes a long time, and existing summarization algorithms cannot generate a summary by analyzing a camera's live video in real time. Moreover, video scene content is complex and varied, while the region of interest to a user is often limited; existing summary generation methods have no way to filter out the targets that meet the user's needs according to a user-defined region, and no effective solution has yet been proposed.
Summary of the invention
The technical problem to be solved by the present invention is that the prior art cannot analyze online video in real time and cannot filter the moving targets in a specified region on demand. The present invention provides a method for generating a video summary that can analyze online video in real time and can filter the moving targets in a specified region on demand.
The technical solution adopted by the present invention to solve this problem is to construct a method for generating a video summary, comprising the following steps:
A) acquiring real-time video data or playback video data, performing background modeling, and extracting the moving targets of the current image using the result of the background modeling;
B) tracking and matching the current moving targets to obtain their motion trajectories, saving the trajectory information of the current moving targets to the hard disk, and saving the image information and position information of the updated background to the hard disk; the trajectory information includes position information and image information;
C) reading the position information of the moving targets, filtering the moving targets, and retaining those located in a user-specified region;
D) grouping the moving targets in the user-specified region according to their position information to obtain multiple target groups;
E) arranging the positions of all target groups and computing the order in which each target group appears in the summary video;
F) in the order generated for each target group, reading the corresponding image data and pasting it onto the corresponding background picture to generate the summary video.
In the method for generating a video summary of the present invention, when grouping in step D), moving targets that appear in the same time interval or that adjoin one another in space-time are placed in the same target group.
In the method for generating a video summary of the present invention, step A) further comprises:
A1) computing the video frames with a Gaussian mixture background modeling algorithm to obtain the background model of the current video scene;
A2) comparing the current image with the background model to determine whether the current image contains a moving target; if so, performing step B); otherwise, updating the background model.
In the method for generating a video summary of the present invention, step B) further comprises:
B1) tracking the same moving target across consecutive images using a nearest-neighbor algorithm;
B2) determining whether the motion tracking of a moving target has ended; if so, performing step B3); otherwise, performing step B4);
B3) saving the motion trajectory information of the moving target;
B4) determining whether the moving target has stayed at its last position for more than a set number of frames; if so, updating it into the background and returning to step B3); otherwise, returning to step A).
In the method for generating a video summary of the present invention, when the motion trajectory information is saved in step B3), it is divided into two parts: image description information and image data. The image description information is saved in a database; the image data is saved by appending all images in order to the same file while recording the position and size of each image within that file. The image description information includes the trajectory ID to which the image belongs, the top-left coordinates of the image, the length and width of the image, the image's position in the image file, the image size, the frame number of the image in the original video, and the time at which the image appeared.
In the method for generating a video summary of the present invention, the set number of frames is 100.
In the method for generating a video summary of the present invention, step C) further comprises:
C1) reading the image description information of the motion trajectories of the moving targets;
C2) determining whether the motion trajectory of a moving target falls within the user-specified region; if so, performing step D); otherwise, deleting the motion trajectory of that moving target.
In the method for generating a video summary of the present invention, the user-specified region is a closed polygon.
In the method for generating a video summary of the present invention, step E) further comprises:
E1) initializing the starting position of the current target group to m, where m is a positive integer;
E2) computing the conflict between the current target group and all target groups before it;
E3) determining whether the per-trajectory conflict of the current target group exceeds a set value; if so, setting m = m + 10 and returning to step E1); otherwise, setting the position of the current target group to m.
In the method for generating a video summary of the present invention, the set value is 900.
Implementing the method for generating a video summary of the present invention yields the following beneficial effects. Beyond summarizing playback recordings as in conventional methods, the present invention can also acquire real-time video data, perform background modeling, and extract the moving targets of the current image using the modeling result, so online video can be analyzed in real time. Because it reads the position information of the moving targets, filters them, and retains those located in the user-specified region, the moving targets in a specified region can be filtered on demand. The method therefore can analyze online video in real time and can filter the moving targets in a specified region on demand.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an embodiment of the method for generating a video summary of the present invention;
Fig. 2 is a detailed flowchart, in this embodiment, of acquiring real-time or playback video data, performing background modeling, and extracting the moving targets of the current image using the modeling result;
Fig. 3 is a detailed flowchart, in this embodiment, of tracking and matching the current moving targets to obtain their motion trajectories, saving the trajectory information of the current moving targets to the hard disk, and saving the image and position information of the updated background to the hard disk;
Fig. 4 is a detailed flowchart, in this embodiment, of reading the position information of the moving targets, filtering them, and retaining those located in the user-specified region;
Fig. 5 is a detailed flowchart, in this embodiment, of arranging the positions of all target groups and computing the order in which each target group appears in the summary video.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
A flowchart of this embodiment of the method for generating a video summary is shown in Fig. 1. The method is divided into two parts: an online video analysis part and an offline summary generation part. In Fig. 1, the method comprises the following steps.
Step S01: acquire real-time video data or playback video data, perform background modeling, and extract the moving targets of the current image using the result of the background modeling. This step is described in detail later.
Step S02: track and match the current moving targets to obtain their motion trajectories, save the trajectory information of the current moving targets to the hard disk, and save the image and position information of the updated background to the hard disk. It is worth noting that the trajectory information includes position information and image information. This step persists the trajectory information and background information produced by the video analysis to the local hard disk, which serves as the data source of the offline summary generation part. Steps S01 and S02 belong to the online video analysis part, which solves the prior-art problem that real-time video cannot be analyzed for summary generation.
Step S03: read the position information of the moving targets, filter them, and retain only the moving targets that appear in the user-specified region. This solves the prior-art problem that targets in a specified region cannot be filtered.
Step S04: group the moving targets in the user-specified region according to their position information to obtain multiple target groups. When grouping, moving targets that appear in the same time interval or that adjoin one another in space-time are placed in the same target group. This step has two effects. First, by recombining trajectories, moving targets that originally appeared in the same period can still appear simultaneously in the summary video. Second, tracking interruptions occur frequently in the prior art, so the motion trajectory of a single moving target may be split into several trajectories; if these are not regrouped, their appearance times may be scattered when trajectory positions are recomputed in the summary, causing the same moving target to appear in different periods. In the prior art, current background modeling combined with target tracking still cannot perform multi-target tracking ideally: tracking interruptions always occur, and extracting the complete motion trajectory of a single target remains difficult, so complete motion trajectories cannot be extracted. The grouping rule in this step is as follows. Let Astart and Aend denote the start and end frame positions of trajectory A in the original video, and Bstart and Bend those of trajectory B, with trajectory A appearing before trajectory B. If Aend > Bstart, i.e. trajectory B appears before trajectory A disappears, then trajectory A and trajectory B are placed in the same group. Alternatively, if Aend < Bstart but Aend + 5 > Bstart and the last image of trajectory A has an intersection area with the first image of trajectory B, then trajectory A and trajectory B are likewise placed in the same group. After all trajectories are grouped, trajectories that fall in the same period or that are continuous in space-time belong to the same group, and the method proceeds to step S05. This solves the prior-art problem of trajectory interruption.
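The grouping rule above can be sketched in Python. This is an illustrative reconstruction, not the patent's implementation: track fields (`start`, `end`, `first`, `last`) and the box format are assumed names, and tracks are assumed to be sorted by start frame.

```python
def overlap(img_a, img_b):
    """True if two trajectory bounding boxes (x, y, w, h) intersect."""
    ax, ay, aw, ah = img_a
    bx, by, bw, bh = img_b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def group_tracks(tracks):
    """Group trajectories per step S04: a track joins a group if it overlaps
    a member in time (Aend > Bstart), or nearly does (Aend + 5 > Bstart) while
    its first image intersects that member's last image. `tracks` must be
    sorted by start frame; returns groups as lists of track indices."""
    groups = []
    for i, t in enumerate(tracks):
        placed = False
        for g in groups:
            for j in g:
                p = tracks[j]
                linked = (p["end"] > t["start"] or
                          (p["end"] + 5 > t["start"] and
                           overlap(p["last"], t["first"])))
                if linked:
                    g.append(i)
                    placed = True
                    break
            if placed:
                break
        if not placed:
            groups.append([i])
    return groups

tracks = [
    {"start": 0,   "end": 40, "first": (0, 0, 5, 5),   "last": (30, 0, 5, 5)},
    {"start": 30,  "end": 80, "first": (50, 50, 5, 5), "last": (60, 50, 5, 5)},  # overlaps 0 in time
    {"start": 43,  "end": 90, "first": (31, 1, 5, 5),  "last": (70, 1, 5, 5)},   # adjoins 0 in space-time
    {"start": 200, "end": 250, "first": (0, 0, 5, 5),  "last": (9, 9, 5, 5)},    # separate group
]
print(group_tracks(tracks))  # → [[0, 1, 2], [3]]
```

The transitive case (a fragment that adjoins only one member of a group) is handled because membership is tested against every track already in the group.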
Step S05: arrange the positions of all target groups and compute the order in which each target group appears in the summary video. After grouping is completed, the generation order and position of each trajectory group are computed. All trajectory groups are first sorted, with the groups that appear earlier in the original video placed first.
Step S06: in the order generated for each target group, read the corresponding image data and paste it onto the corresponding background picture to generate the summary video. Steps S03 to S06 belong to the offline summary generation part.
For this embodiment, step S01 can be further refined; its refined flowchart is shown in Fig. 2. In Fig. 2, step S01 further comprises:
Step S11: compute the video frames with a Gaussian mixture background modeling algorithm to obtain the background model of the current video scene. Of course, in some cases other similar algorithms can be used for background modeling, such as mean background modeling or codebook background modeling; these methods achieve a similar effect.
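The background-modeling loop of steps S11-S13 can be sketched as follows. For brevity this stand-in keeps a single running Gaussian (mean and variance) per pixel rather than a full mixture, and operates on tiny 2-D lists of grey values; all names and parameters are illustrative, not taken from the patent.

```python
class BackgroundModel:
    """Per-pixel running mean/variance model. A pixel deviating by more than
    k standard deviations from its mean is foreground (step S12); otherwise
    the model is updated to adapt to slow changes such as illumination
    (step S13)."""

    def __init__(self, alpha=0.05, k=2.5):
        self.alpha = alpha      # learning rate of the running statistics
        self.k2 = k * k         # squared-deviation threshold factor
        self.mean = None
        self.var = None

    def apply(self, frame):
        """Update the model and return a binary foreground mask."""
        if self.mean is None:   # the first frame initializes the background
            self.mean = [[float(v) for v in row] for row in frame]
            self.var = [[225.0] * len(frame[0]) for _ in frame]
            return [[0] * len(frame[0]) for _ in frame]
        mask = [[0] * len(row) for row in frame]
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                m, s2 = self.mean[y][x], self.var[y][x]
                if (v - m) ** 2 > self.k2 * s2:
                    mask[y][x] = 1   # moving-target (foreground) pixel
                else:                # background pixel: adapt the model
                    self.mean[y][x] = (1 - self.alpha) * m + self.alpha * v
                    d = v - self.mean[y][x]
                    self.var[y][x] = (1 - self.alpha) * s2 + self.alpha * d * d
        return mask

bg = BackgroundModel()
static = [[100] * 4 for _ in range(4)]
for _ in range(50):                  # let the model settle on the empty scene
    bg.apply(static)
moving = [row[:] for row in static]
moving[1][1] = moving[1][2] = 220    # a bright object enters the scene
mask = bg.apply(moving)
print(sum(sum(r) for r in mask))     # → 2 foreground pixels
```

A production system would use a true mixture (several Gaussians per pixel) so that multimodal backgrounds such as swaying foliage are handled; the control flow of S11-S13 is the same.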
Step S12: compare the current image with the background model to determine whether the current image contains a moving target. If the result is affirmative, the foreground region of the current image, i.e. the moving target of the current image, is obtained and step S02 is performed. If the result is negative, there is currently no moving target and step S13 is performed.
Step S13: update the background model. If the result of step S12 is negative, this step is performed: the background model is updated so that the background adapts to changes such as illumination.
For this embodiment, step S02 can be further refined; its refined flowchart is shown in Fig. 3. In Fig. 3, step S02 further comprises:
Step S21: track the same moving target across consecutive images using a nearest-neighbor algorithm. Specifically, the main task is to determine which moving target of the previous frame each moving target in the current frame corresponds to, so that the positions of each moving target in the continuous frame sequence are strung together into a motion trajectory. Trajectory tracking includes three phases: trajectory start, trajectory association, and trajectory disappearance. For example, suppose the previous frame contains two moving targets a and b with areas Sa_pre and Sb_pre, and the current frame contains three moving targets c, d and e with areas Sc_cur, Sd_cur and Se_cur. The intersection areas of c, d, e with a and b are computed: Sac, Sad, Sae, Sbc, Sbd, Sbe. A larger intersection area means the target positions are closer and the association is stronger. The maximum of Sac, Sad, Sae and the maximum of Sbc, Sbd, Sbe are taken, and several cases are distinguished. If Sac and Sbd are the respective maxima, a is associated with c and b with d, while e becomes the start of a new trajectory. If Sac and Sbc are the respective maxima, a and b are both candidates for c; in that case the color histograms of a, b and c are computed and matched, and the better-matching target is associated with c. If Sac, Sad and Sae are all 0, a cannot be associated with any target, and a becomes a trajectory end point. This step uses nearest-neighbor matching plus histogram matching for multi-target tracking; of course, in some cases other similar algorithms, such as Kalman filtering or particle filtering, can equally track multiple targets.
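The intersection-area association of step S21 can be sketched as follows. This is a simplified reconstruction: boxes are assumed axis-aligned `(x, y, w, h)` tuples, and the histogram tie-break described above is omitted in favor of a greedy first-come assignment.

```python
def inter_area(b1, b2):
    """Intersection area of two axis-aligned boxes (x, y, w, h)."""
    x1 = max(b1[0], b2[0])
    y1 = max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def associate(prev, cur):
    """Greedy nearest-neighbor association by intersection area.
    `prev` and `cur` map target IDs to boxes. Returns (matches from
    previous ID to current ID, IDs starting new trajectories, IDs whose
    trajectories end here)."""
    matches, used = {}, set()
    for pid, pbox in prev.items():
        best, best_a = None, 0
        for cid, cbox in cur.items():
            a = inter_area(pbox, cbox)
            if a > best_a and cid not in used:
                best, best_a = cid, a
        if best is None:
            continue              # zero overlap everywhere: trajectory ends
        matches[pid] = best
        used.add(best)
    new_tracks = [cid for cid in cur if cid not in used]
    ended = [pid for pid in prev if pid not in matches]
    return matches, new_tracks, ended

prev = {"a": (0, 0, 10, 10), "b": (20, 0, 10, 10)}
cur = {"c": (2, 1, 10, 10), "d": (21, 2, 10, 10), "e": (50, 50, 8, 8)}
m, new, ended = associate(prev, cur)
print(m, new, ended)   # a→c and b→d; e starts a new trajectory
```

When two previous targets both claim the same current box, the patent resolves the tie with color-histogram matching; this sketch would need that extra step for crossing targets.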
Step S22: determine whether the motion trajectory tracking of a moving target has ended. If the result is affirmative, perform step S23; otherwise, perform step S24.
Step S23: save the motion trajectory information of the moving target. If the result of step S22 is affirmative, i.e. a trajectory has disappeared and a complete trajectory has been obtained, this step is performed. When saving the motion trajectory information, it is divided into two parts, image description information and image data, because a trajectory consists of a connected sequence of images and the description and image data of each image must be saved accordingly. Trajectory images are generally small and numerous; saving each image as a separate picture file would cause heavy hard-disk fragmentation and waste disk space. Therefore, in this step the image data is saved by appending all images in order to the same file, while the position and size of each image within the file are recorded. The image description information, which is saved in a database, includes the trajectory ID to which the image belongs, the top-left coordinates of the image, the length and width of the image, the image's position in the image file, the image size, the frame number of the image in the original video, and the time at which the image appeared.
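The storage scheme of step S23, with one packed image file plus database rows of description information, can be sketched as below. Table and field names are illustrative assumptions; only the offset/size bookkeeping mirrors the patent's description.

```python
import os
import sqlite3
import tempfile

def save_track(db, pack_path, track_id, images):
    """Append each image's raw bytes to the pack file in order (step S23),
    recording per-image description rows: box, offset and size in the pack,
    original frame number, and timestamp."""
    with open(pack_path, "ab") as pack:
        for (x, y, w, h, frame_no, ts, data) in images:
            offset = pack.tell()          # position of this image in the pack
            pack.write(data)
            db.execute(
                "INSERT INTO track_images VALUES (?,?,?,?,?,?,?,?,?)",
                (track_id, x, y, w, h, offset, len(data), frame_no, ts))
    db.commit()

def load_image(db, pack_path, track_id, frame_no):
    """Fetch one image's bytes via its recorded offset and size."""
    off, size = db.execute(
        "SELECT offset, size FROM track_images "
        "WHERE track_id=? AND frame_no=?", (track_id, frame_no)).fetchone()
    with open(pack_path, "rb") as pack:
        pack.seek(off)
        return pack.read(size)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE track_images "
           "(track_id, x, y, w, h, offset, size, frame_no, ts)")
pack = os.path.join(tempfile.mkdtemp(), "track.pack")
save_track(db, pack, 1, [(10, 20, 4, 4, 100, 0.00, b"AAAA"),
                         (12, 20, 4, 4, 101, 0.04, b"BBBBBB")])
print(load_image(db, pack, 1, 101))  # → b'BBBBBB'
```

Packing many small images into one file and indexing them by offset avoids the per-file overhead and fragmentation the patent warns about, at the cost of needing the database to locate each image.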
Step S24: determine whether the moving target has stayed at its last position for more than the set number of frames. If the result of step S22 is negative, this step is performed. If the result of this step is affirmative, perform step S25; otherwise, return to step S01. It is worth noting that in this embodiment the set number of frames is 100; of course, in some cases the set number can be adjusted as appropriate. In this step, for a trajectory associated with a target of the previous frame, the number of times the trajectory has appeared at the current position is determined.
Step S25: update the moving target into the background. If the result of step S24 is affirmative, i.e. the moving target has appeared at its last position for more than 100 frames, the moving target is considered to remain permanently in the current scene and should be updated into the background. Since Gaussian mixture background modeling only models where there is no moving target and can only absorb subtle effects such as illumination, an object newly added to the scene would be regarded as a moving target indefinitely if the background were not updated. Therefore the residence time of each moving target at its current position is checked; if it exceeds the specified time, the target is updated into the background, i.e. the image of the moving target is pasted onto the background image, and the background image is saved to the local hard disk.
For this embodiment, step S03 can be further refined; its refined flowchart is shown in Fig. 4. In Fig. 4, step S03 further comprises:
Step S31: read the image description information of the motion trajectories of the moving targets. To distribute trajectories more densely in the summary video and shorten its length, the trajectory positions must be recomputed according to their spatio-temporal distribution. In this step, the trajectory information within the user-specified time range is read entirely into memory. Since only the spatio-temporal information of the trajectories is needed, only the image description information is read; the image data of the trajectories is not read into memory for the time being, which avoids consuming large amounts of memory.
Step S32: determine whether the motion trajectory of a moving target falls within the user-specified region. Specifically, when the user has set a specified region, only the motion trajectories that pass through that region need to be kept for summary generation. In this embodiment, the user-specified region is a closed polygon composed of n line segments. Each time a trajectory is read in, its position is examined to decide whether the moving target has entered the user-specified region. The test is as follows: if the trajectory consists of m images, it can be regarded as consisting of m-1 line segments; these m-1 segments are tested in turn for intersection with the n segments specified by the user. If none of the m-1 segments intersects any of the n segments of the user-specified region, the moving target has never entered the region, and step S33 is performed. After all motion trajectories appearing in the specified region have been found, step S04 is performed. That is, if the result of this test is affirmative, step S04 is performed; otherwise, step S33 is performed. This step decides whether a moving target passes through the specified region by dividing the trajectory and the region boundary into line segments and testing whether the segments intersect; of course, in some cases many other algorithms can make the same decision, for example computing the intersection area of the trajectory images with the specified region.
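The segment-intersection test of step S32 can be sketched with a standard orientation (cross-product) test. This is an illustrative version that ignores degenerate collinear touching, and, like the patent's rule, it only detects trajectories that cross the polygon boundary.

```python
def seg_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly intersects segment p3-p4."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def track_enters_region(track_points, polygon):
    """Step S32: treat the trajectory of m points as m-1 segments and the
    closed polygon of n vertices as n segments; the target enters the region
    iff some trajectory segment intersects some boundary segment."""
    n = len(polygon)
    for i in range(len(track_points) - 1):
        for j in range(n):
            if seg_intersect(track_points[i], track_points[i + 1],
                             polygon[j], polygon[(j + 1) % n]):
                return True
    return False

square = [(10, 10), (20, 10), (20, 20), (10, 20)]   # user-specified region
crossing = [(0, 15), (15, 15), (30, 15)]            # passes through the region
outside = [(0, 0), (5, 0), (5, 5)]                  # never enters
print(track_enters_region(crossing, square),
      track_enters_region(outside, square))         # → True False
```

A trajectory that starts and ends entirely inside the polygon has no boundary crossing; a point-in-polygon check on the first point would cover that case if needed.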
Step S33: delete the motion trajectory of the moving target. If the result of step S32 is negative, this step is performed. If a motion trajectory is not within the user's region of interest, it need not be generated in the summary video, so the trajectory is deleted from memory.
For this embodiment, step S05 can be further refined; its refined flowchart is shown in Fig. 5. In Fig. 5, step S05 further comprises:
Step S51: initialize the starting position of the current target group to m, where m is a positive integer. The position of the first target group is 1, i.e. the first target group starts to appear at the first frame of the summary video. Then, assuming the positions of the first n target groups have been computed and the position of the n-th target group is m (i.e. the n-th target group starts to appear at frame m of the summary video), the position of the (n+1)-th trajectory group is computed as follows: the starting position of the (n+1)-th trajectory group is initialized to m.
Step S52: compute the conflict between the current target group and all target groups before it. The conflict between the (n+1)-th target group and the first n target groups is the intersection area, in the corresponding frames, of all images of the (n+1)-th target group with all images of the first n target groups.
Step S53: determine whether the per-trajectory conflict of the current target group exceeds the set value. If so, perform step S54; otherwise, perform step S55. It is worth noting that in this embodiment the set value is 900; of course, in some cases the set value can be adjusted as appropriate.
Step S54, let m = m + 10: this step is performed when the judgment of step S53 above is affirmative. After this step is performed, return to step S51.
Step S55, set the position of the current target group to m: if the average conflict of each track of the (n+1)-th target group is below 900, placing the group at position m will not cause tracks to overlap seriously, so the position of target group n+1 is set to m. Otherwise the track overlap is too high and the picture would be too dense; in that case the position of target group n+1 is set to m+10 and the conflict is recomputed, and if it still does not meet the requirement, the group's position is shifted back by 10 frames repeatedly until the conflict is acceptable. After the positions of all trajectory groups have been determined, step S06 is performed.
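The placement procedure of steps S51 to S55 can be sketched as a greedy search. The axis-aligned bounding-box representation and the per-frame overlap function below are illustrative assumptions, not part of the patent text:

```python
def bbox_overlap(a, b):
    """Intersection area of two axis-aligned boxes (x, y, w, h)."""
    x1 = max(a[0], b[0]); y1 = max(a[1], b[1])
    x2 = min(a[0] + a[2], b[0] + b[2]); y2 = min(a[1] + a[3], b[1] + b[3])
    return max(0, x2 - x1) * max(0, y2 - y1)

def conflict(group, placed, start):
    """Total intersection area, per summary frame, between `group`
    (placed at `start`) and all already-placed groups. A group is a
    list of per-frame boxes; `placed` holds (start_frame, group) pairs."""
    total = 0
    for s, g in placed:
        for i, box in enumerate(group):
            j = start + i - s              # index into g for the same frame
            if 0 <= j < len(g):
                total += bbox_overlap(box, g[j])
    return total

def place_groups(groups, threshold=900, step=10):
    """Greedy placement (steps S51-S55): each group starts where the
    previous one started, then shifts back `step` frames until its
    conflict with all earlier groups drops below `threshold`."""
    placed, m, positions = [], 1, []       # first group starts at frame 1
    for g in groups:
        while conflict(g, placed, m) > threshold:
            m += step                      # step S54: m = m + 10
        placed.append((m, g))              # step S55: fix position at m
        positions.append(m)
    return positions
```

With the set value 900 and step 10 from the embodiment, two identical 20-frame tracks of 100-pixel overlap per frame end up 20 frames apart, since at a 10-frame shift the residual conflict (1000) still exceeds 900.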
It should be noted that in the present embodiment, during the summary-video generation stage, the background image is read in first and the summary frames are generated one by one according to the order and position of each target group. Each frame is generated by pasting the image at the corresponding position of each track onto the background image at its original location. A track image is only read into memory from its hard-disk location, according to its image description information, when it is actually needed, and is released immediately after use; each generated frame is written into the video. Whenever the last frame of a track is reached, it is checked whether the background image needs replacing, i.e. whether the scene has changed; if it has changed, the next background image is read in as the background. After all tracks have been pasted, generation of the summary video is complete.
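Pasting a target patch back onto the background at its original coordinates can be sketched with NumPy arrays; the array shapes and the `entries` structure below are assumptions for illustration, not the patent's data layout:

```python
import numpy as np

def paste_patch(background, patch, x, y):
    """Copy `patch` onto `background` with its top-left corner at (x, y),
    clipping at the frame border. Returns the composited frame."""
    frame = background.copy()
    h, w = patch.shape[:2]
    H, W = frame.shape[:2]
    h = min(h, H - y); w = min(w, W - x)   # clip to frame bounds
    frame[y:y + h, x:x + w] = patch[:h, :w]
    return frame

def render_summary_frame(background, entries):
    """One summary frame: paste every track image scheduled for this
    frame. Each entry carries a patch and its original top-left corner."""
    frame = background.copy()
    for patch, (x, y) in entries:
        frame = paste_patch(frame, patch, x, y)
    return frame
```

Because the patch is pasted at its original position, targets keep their real spatial context while appearing earlier in time than in the source video.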
In short, in the present embodiment the invention partitions the system into two parts, online video analysis and offline summary generation, with intermediate analysis results persisted to hard disk, so that the whole system can run in real time on multiple real-time high-definition cameras, greatly reducing the time the user waits for summary analysis. The invention can generate a summary of a user-specified region, reducing unnecessary summary information for the user. The invention adds a target grouping step after the moving-target tracking step; this step makes the moving-target trajectories more complete, which in turn makes the summary video more complete and closer to the information of the original video.
From a social perspective, as the construction of safe cities accelerates, the number of cameras in China grows ever larger, and reviewing so much playback video is a difficult task. By performing online analysis of real-time surveillance video, the invention enables users to generate an offline summary video in a short time: an hour of original surveillance video, once summarized, can be watched in just a few minutes while retaining essentially complete information. The invention therefore greatly facilitates surveillance-video playback, accelerates construction in the security field, and indirectly contributes to maintaining public order. Economically, the invention greatly reduces surveillance-video playback time, reducing the human resources, time, and energy invested in monitoring playback, and thus the overall economic cost of reviewing surveillance video. In terms of technical effect, the invention divides the video summarization algorithm into online and offline parts, which reduces the time the user waits for summary generation, that is, it technically shortens the summary generation time. Furthermore, the invention adds region-intrusion detection to summarization, so the system can generate a summary of a user-designated region, making it more convenient for users to review surveillance video. Finally, the invention adds a track reconstruction step, which overcomes the defect of track breaks caused by background modeling, so that the summary video is more complete and closer to the original video information.
The foregoing is merely illustrative of the preferred embodiments of the present invention and is not intended to limit the invention; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (8)

1. A method for generating a video summary, characterized in that the method is divided into two parts, an online video analysis part and an offline summary generation part, and comprises the following steps:
A) obtaining real-time video data or playback video data, performing background modeling, and extracting the moving targets of the current image using the result of the background modeling;
B) performing tracking and matching on the current moving target to obtain its trajectory, saving the trajectory information of the current moving target to hard disk, and at the same time saving the image information and position information of the background updated by the background modeling to hard disk; the trajectory information comprises position information and image information; the trajectory information of the current moving target and the background information generated by the online video analysis part are persisted to the local hard disk to serve as the data source of the offline summary generation part;
C) reading the position information of the moving targets, filtering the moving targets, and retaining the moving targets in the user-designated region;
D) grouping the moving targets in the user-designated region into multiple target groups according to the position information of the moving targets; with Astart and Aend denoting the start and end frame positions of track A in the original video, and Bstart and Bend denoting the start and end frame positions of track B in the original video, and assuming track A appears before track B, the track grouping rule is as follows: if Aend > Bstart, i.e. track B appears before track A disappears, then track A and track B are placed in one group; alternatively, if Aend < Bstart but Aend + 5 > Bstart and the last image of track A intersects the first image of track B, then track A and track B are likewise placed in one group;
E) arranging the positions of all target groups, and computing the order in which each target group appears in the summary video;
F) according to the generated order of each target group, successively reading the corresponding image data and pasting it onto the corresponding background picture to generate the summary video.
The step B) further comprises:
B1) tracking the same moving target across consecutive images using a nearest-neighbor algorithm; tracking comprises three phases: track start, track association, and track end. Suppose the previous frame contains two moving targets a and b with areas Sa_pre and Sb_pre respectively, and the current frame contains three moving targets c, d, and e with areas Sc_cur, Sd_cur, and Se_cur respectively. The intersection areas Sac, Sad, Sae, Sbc, Sbd, and Sbe of c, d, e with a and b are then computed; the larger the intersection area, the closer the target positions and the stronger the target association. The maximum of Sac, Sad, Sae and the maximum of Sbc, Sbd, Sbe are taken: when Sac and Sbd are the respective maxima, a is associated with c and b with d, and e starts a new track. When Sac and Sbc are the respective maxima, a and b are both associated with c; in this case the color histograms of a, b, and c are computed and matched, and whichever of a and b matches c better is associated with c. When Sac, Sad, and Sae are all 0, a could not be associated with any target, and a becomes a track end point;
B2) judging whether the tracking of the moving target has ended; if so, performing step B3); otherwise, performing step B4);
B3) saving the trajectory information of the moving target;
B4) judging whether the moving target has remained in its last position for more than a set number of frames; if so, updating it into the background and returning to step B3); otherwise, returning to step A).
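The grouping rule of step D) above can be sketched as follows; the dictionary track representation and the bounding-box intersection test are illustrative assumptions, not part of the claim language:

```python
def boxes_intersect(a, b):
    """True if two axis-aligned boxes (x, y, w, h) overlap."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def same_group(track_a, track_b):
    """Grouping rule of step D); track_a is assumed to appear first.
    Each track is a dict with 'start' and 'end' frame numbers and a
    list of per-frame 'boxes'."""
    if track_a['end'] > track_b['start']:
        return True                    # B appears before A disappears
    if (track_a['end'] < track_b['start']
            and track_a['end'] + 5 > track_b['start']
            and boxes_intersect(track_a['boxes'][-1], track_b['boxes'][0])):
        return True                    # close in time and spatially adjacent
    return False
```

The second branch is what stitches a track broken by a brief detection gap back into one group: the two fragments are within 5 frames of each other and the last patch of the first fragment overlaps the first patch of the second.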
2. The method for generating a video summary according to claim 1, characterized in that the step A) further comprises:
A1) computing the video frame with a mixture-of-Gaussians background modeling algorithm to obtain the background model of the current video scene;
A2) comparing the current image with the background model to judge whether the current image contains a moving target; if so, performing step B); otherwise, updating the background model.
3. The method for generating a video summary according to claim 1, characterized in that when the trajectory information is saved in step B3), it is saved in two parts, image description information and image data; the image description information is stored in a database, and the image data is saved by writing all images into the same file in order while recording each image's position and size in the file; the image description information comprises the track ID to which the image belongs, the coordinates of the image's upper-left corner, the length of the image, the width of the image, the image's corresponding position in the image file, the size of the image, the image's frame number in the original video, and the time at which the image appears.
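The storage scheme of claim 3 (all patch images appended to one shared file, with each image's file position and size recorded alongside its description) can be sketched as below; the record fields are paraphrased from the claim, and the in-memory `index` list standing in for the database is an assumption:

```python
def append_image(data_path, img_bytes, meta, index):
    """Append one image's raw bytes to the shared data file and record,
    in `index`, where it landed (offset and size) plus its description."""
    with open(data_path, 'ab') as f:
        offset = f.tell()              # this image's position in the file
        f.write(img_bytes)
    record = dict(meta, offset=offset, size=len(img_bytes))
    index.append(record)               # stands in for the database row
    return record

def read_image(data_path, record):
    """Read one image back using only its description record, so a patch
    is only loaded into memory when it is actually needed."""
    with open(data_path, 'rb') as f:
        f.seek(record['offset'])
        return f.read(record['size'])
```

This matches the access pattern described in the embodiment: during summary generation a track image is fetched from its recorded hard-disk position on demand and released right after use.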
4. The method for generating a video summary according to claim 1, characterized in that the set number of frames is 100.
5. The method for generating a video summary according to claim 3, characterized in that the step C) further comprises:
C1) reading the image description information of the trajectory of the moving target;
C2) judging whether the trajectory of the moving target falls in the user-designated region; if so, performing step D); otherwise, deleting the trajectory of the moving target.
6. The method for generating a video summary according to claim 5, characterized in that the user-designated region is a closed polygon.
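Deciding whether a trajectory falls inside the user-designated closed polygon of claims 5 and 6 requires a point-in-polygon test. The patent does not specify one; the standard ray-casting algorithm below is a sketch of one common way to do it, and treating a trajectory as inside if any of its points is inside is likewise an assumption:

```python
def point_in_polygon(x, y, polygon):
    """Ray casting: count crossings of a rightward horizontal ray from
    (x, y) with the polygon's edges; an odd count means inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):       # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def track_in_region(track_points, polygon):
    """A trajectory counts as inside if any of its points is inside."""
    return any(point_in_polygon(x, y, polygon) for x, y in track_points)
```

Since the polygon is closed, the edge list wraps around from the last vertex back to the first, which the modulo index handles.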
7. The method for generating a video summary according to claim 5, characterized in that the step E) further comprises:
E1) initializing the starting position of the current target group to m, where m is a positive integer;
E2) computing the conflict between the current target group and all target groups before it;
E3) judging whether the conflict of each track of the current target group exceeds a set value; if so, letting m = m + 10 and returning to step E1); otherwise, setting the position of the current target group to m.
8. The method for generating a video summary according to claim 7, characterized in that the set value is 900.
CN201510098645.8A 2015-03-05 2015-03-05 A kind of generation method of video frequency abstract Active CN104717573B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510098645.8A CN104717573B (en) 2015-03-05 2015-03-05 A kind of generation method of video frequency abstract

Publications (2)

Publication Number Publication Date
CN104717573A CN104717573A (en) 2015-06-17
CN104717573B true CN104717573B (en) 2018-04-13

Family

ID=53416450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510098645.8A Active CN104717573B (en) 2015-03-05 2015-03-05 A kind of generation method of video frequency abstract

Country Status (1)

Country Link
CN (1) CN104717573B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105262932B (en) * 2015-10-20 2018-06-29 深圳市华尊科技股份有限公司 A kind of method and terminal of video processing
CN106887013A (en) * 2015-12-10 2017-06-23 北京航天长峰科技工业集团有限公司 Multi-object tracking method based on connected region combination arest neighbors and particle filter
CN106446002A (en) * 2016-08-01 2017-02-22 三峡大学 Moving target-based video retrieval method for track in map
CN106504270B (en) 2016-11-08 2019-12-20 浙江大华技术股份有限公司 Method and device for displaying target object in video
CN108460032A (en) * 2017-02-17 2018-08-28 杭州海康威视数字技术股份有限公司 A kind of generation method and device of video frequency abstract
CN107193905A (en) * 2017-05-11 2017-09-22 江苏东大金智信息***有限公司 A kind of method that moving target to be presented is rationally assembled in frame of video
CN109511019A (en) * 2017-09-14 2019-03-22 中兴通讯股份有限公司 A kind of video summarization method, terminal and computer readable storage medium
CN107967298A (en) * 2017-11-03 2018-04-27 深圳辉锐天眼科技有限公司 Method for managing and monitoring based on video analysis
CN110166851B (en) * 2018-08-21 2022-01-04 腾讯科技(深圳)有限公司 Video abstract generation method and device and storage medium
CN110781844B (en) * 2019-10-29 2023-05-16 贵州省烟草公司六盘水市公司 Security patrol monitoring method and device
CN111739128B (en) * 2020-07-29 2021-08-31 广州筷子信息科技有限公司 Target video generation method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156707A (en) * 2011-02-01 2011-08-17 刘中华 Video abstract forming and searching method and system
CN102930061A (en) * 2012-11-28 2013-02-13 安徽水天信息科技有限公司 Video abstraction method and system based on moving target detection
CN103929685A (en) * 2014-04-15 2014-07-16 中国华戎控股有限公司 Video abstract generating and indexing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102740106B (en) * 2011-03-31 2014-12-03 富士通株式会社 Method and device for detecting movement type of camera in video

Also Published As

Publication number Publication date
CN104717573A (en) 2015-06-17


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 510610 501, room 7, 7 building, No. 133, Dongguan Zhuang Yi Heng Road, Dongguan, Guangzhou, Guangdong (for office use only)

Patentee after: Guangzhou Wei'an Polytron Technologies Inc

Address before: 510610 Guangdong Guangzhou Tianhe District Dongguan Zhuang Yi Heng Road, No. 133, 7 501 building.

Patentee before: Guangzhou Wei An Electron Technology Co., Ltd
