CN102708182A - Rapid video concentration abstracting method - Google Patents

Rapid video concentration abstracting method

Info

Publication number
CN102708182A
CN102708182A
Authority
CN
China
Prior art keywords
video
target
length
abstracting
concentration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012101420260A
Other languages
Chinese (zh)
Other versions
CN102708182B (en
Inventor
尚凌辉
刘嘉
陈石平
张兆生
高勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Icare Vision Technology Co ltd
Original Assignee
ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd filed Critical ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd
Priority to CN201210142026.0A priority Critical patent/CN102708182B/en
Publication of CN102708182A publication Critical patent/CN102708182A/en
Application granted granted Critical
Publication of CN102708182B publication Critical patent/CN102708182B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention relates to a rapid video concentration abstracting method. Conventional video concentration techniques have poor detection and tracking rates for moving targets and cannot effectively shorten the video. In the present method, a server side detects and tracks the moving targets in a preprocessed video, decides according to the length of the video or the number of targets detected in it, cuts the video into multiple concentration segments, performs collision detection and rearrangement on the target trajectories within each segment, and then records the segment information in an index file. A client side parses the index file stored on the server side, obtains the processed concentration segments, renders them frame by frame to form a video sequence, and dynamically adjusts the target density of the concentrated video during playback. The method yields good target-tracking continuity, complete contour regions, a high detection rate and a low false-detection rate, and keeps the target density of the concentrated video substantially uniform at every time point.

Description

Rapid video concentration abstracting method
Technical field
The invention belongs to the fields of video retrieval and video summarization, and in particular relates to a rapid video concentration abstracting method.
Background technology
[1] Method and system for video indexing and video synopsis (application 200780050610.0)
[2] Method and system for producing a video synopsis (application 200680048754.8)
[3] Intelligent extraction video summarization method based on spatio-temporal fusion (application 201110170308.7)
[4] Video summarization system (application 201020660533.X)
[5] Automatic video concentration method based on a video surveillance network (application 201110208090.X)
[6] Pritch Y., Rav-Acha A., Peleg S. Nonchronological video synopsis and indexing [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(11): 1971-1984.
[7] Y. Pritch, S. Ratovitch, A. Hendel, and S. Peleg. Clustered synopsis of surveillance video. 6th IEEE Int. Conf. on Advanced Video and Signal Based Surveillance (AVSS'09), Genoa, Italy, Sept. 2-4, 2009.
With the spread of video surveillance and the development of surveillance technology, massive volumes of surveillance video are produced and recorded on equipment every day. How to browse and analyze these massive data effectively has become a problem of wide concern in the field. Users are usually interested only in certain targets in the video (mainly moving targets) and their content, and hope to quickly browse the content of interest within a long stretch of video. Video concentration techniques segment the moving targets by analyzing the video content and rearrange their occurrence times, so that all targets can be presented to the user effectively within the shortest time.
References [1], [2], [6] and [7] propose video concentration schemes based on moving-object segmentation, background modeling and collision detection. These schemes achieve fairly good concentration results, but their collision detection has to evaluate several different collision cost terms, and the resulting computation is too heavy for real-time processing of high-definition video. Reference [3] proposes an intelligent extraction video summarization method based on spatio-temporal fusion. It relies on frame differencing to obtain target contours and tracks targets by their rectangular outlines. For each tracked target sequence, its position on the time axis is rearranged to form a new concentrated video; where targets overlap, they are rendered with transparency. Its main defects are that collisions between targets are not detected, so the visual effect suffers, and that long trajectories are not segmented, so a target that lingers in view for a long time limits how much the video can be shortened.
Reference [4] proposes a video concentration system comprising an input module, an analysis module, a database module and an output module. The input module feeds the video to the analysis module for target detection and tracking; the tracked target contours are cropped out and stored in the database, and the output module presents targets from different frames in the same video frame. The system does not mention how to output video while processing is still in progress, how to avoid target collisions, or how to keep the cropped target lengths moderate so as to obtain a good visual effect.
Reference [5] proposes an automatic video concentration method based on a video surveillance network. It processes two video sources whose cameras have overlapping fields of view, matches the trajectories projected from the different cameras using graph matching and random-walk ideas, and thus achieves cross-camera target tracking. Concentration is performed on the cross-camera panorama, yielding a concentrated video of a large scene. For the concentration itself, five energy terms are defined for the rearrangement, a compression ratio is defined, and the trajectory rearrangement is optimized by simulated annealing. The method does not mention segmenting trajectories, so a target that lingers in view for a long time limits how much the video can be shortened; moreover, the computation of the energy terms and the simulated-annealing optimization is too heavy for real-time processing of high-definition video.
Summary of the invention
Aiming at the defects of the prior art, the present invention provides a rapid video concentration abstracting method that effectively improves the detection and tracking rates for moving targets, effectively shortens the video, and achieves effective density control.
To this end, the invention adopts the following technical scheme. A rapid video concentration abstracting method comprises a server side, characterized in that the server side detects and tracks the moving targets in a preprocessed video, decides according to the length of the video or the number of targets detected in it, cuts the video into a plurality of concentration segments, performs collision detection and rearrangement on the target trajectories within each concentration segment, and then records the concentration-segment information in an index file. The method also comprises a client side: the client parses the index file stored on the server side, obtains the processed concentration segments, renders them frame by frame to form a video sequence, and dynamically adjusts the target density of the concentrated video during playback.
The moving-object detection performs background modeling of the scene with a mixture-of-Gaussians method using adaptive thresholds, extracts the foreground in combination with inter-frame changes, refines the region contours with multi-scale information when extracting foreground regions, locates dynamic background regions with a density-estimation method, and finally updates the background model by randomized region sampling, so that low-contrast targets are detected effectively. The target tracking associates the motion-detection regions of multiple frames with a multi-hypothesis method, predicts the target contours, locates the contour positions in the current frame from edge information, generates hypotheses at those positions when targets split, collide or are lost, finally selects the optimal hypotheses with the Hungarian algorithm, and prunes the hypothesis history to obtain the tracking trajectory of each target.
A concentration segment is generated as follows: when the accumulated duration of a video section exceeds Tmax, or the number of targets exceeds Nmax (positively correlated with the maximum permitted trajectory length Lmax and the preset density d), a new concentration segment is started.
Moving-object detection and tracking yield, for each target in the video, its trajectory information, comprising the frames, regions and bounding boxes in which it appears. According to the bounding-box position in each frame, long trajectories in the video are cut so that every trajectory length lies between Lmin and Lmax.
Collisions between targets are judged, an energy term is defined to penalize collision, and a variable-step iterative greedy method is adopted that guarantees the energy decreases at every iteration, converges quickly, and avoids local optima through randomization, thereby accomplishing target collision detection and rearrangement.
The optimization steps of the variable-step iterative greedy method are as follows:
A. Initialization: set the initial step size S1 and the final step size S2, where S2 < S1; set the step decrement ds and the number of iterations N per step size; set the current step size S = S1.
B. Iterate N times with the current step size S:
a) compute the current collision cost E1;
b) select a trajectory at random;
c) at intervals of S, place the trajectory's occurrence time at every possible position within the concentration segment;
d) compute the minimum collision cost E2 over all positions;
e) if E2 < E1, place the trajectory at the position of minimum collision cost.
C. Set S = S - ds. If S >= S2, repeat step B; otherwise stop.
When rendering the video frame by frame, the client first looks up the background image corresponding to the current moment in the index file according to the current frame ID, looks up the region pixel values of all targets appearing at that moment, and overlays the target regions on the background image; if several targets appear at the same position, the pixel value at that position is the average of the targets' pixel values.
The background image is obtained by accumulating and averaging multiple frames. An accumulation interval is set first; if the background image changes by more than a threshold T1 between adjacent accumulation intervals, a new background image is recorded; if the change exceeds a threshold T2 (T2 > T1), a new concentration segment is marked.
When a concentration segment is generated, a default concentration density d is used. During client playback the video can be dynamically adjusted to the desired playback density. When a new playback density d_n is set, the occurrence time of each target is rearranged: let T_o be the target's original occurrence time; the new time is T_n = T_o * d / d_n.
The present invention has the following advantages:
1. good target-tracking continuity, complete contour regions, a high detection rate and a low false-detection rate;
2. substantially uniform target density at every time point of the concentrated video;
3. long targets can be cut into short pieces for playback, giving a high video compression ratio and a good visual effect;
4. fast collision detection and rearrangement;
5. videos that require lengthy processing can be played while still being processed;
6. the density can be adjusted on demand during playback.
Description of drawings
Fig. 1 is a process flow diagram of the present invention.
Embodiment
The technical scheme of the present invention is further described below through an embodiment.
As shown in Fig. 1, the rapid video concentration abstracting method comprises a server side and a client side, and the concrete processing steps are as follows. The server first detects and segments the moving objects appearing in the video: the scene is background-modeled with a mixture-of-Gaussians method using adaptive thresholds, the foreground is extracted in combination with inter-frame changes, and multi-scale information is used to refine the region contours when extracting foreground regions. Texture features combined with inter-frame consistency changes effectively suppress interference from illumination variation, and low-contrast targets can be detected effectively. A density-estimation method locates dynamic background regions such as swaying leaves and rippling water, and finally the background model is updated by randomized region sampling, which strengthens its robustness.
A multi-hypothesis method associates the motion-detection regions of multiple frames, predicts the target contours, and locates the contour positions in the current frame from edge information; when targets split, collide or are lost, hypotheses are generated at those positions. Finally the Hungarian algorithm selects the optimal hypotheses, and the hypothesis history is pruned to obtain the tracking trajectory of each target.
Input video generally comes in two forms: video files and live video streams. A video file has a known duration and frame rate, whereas the duration of a live stream is indeterminate. The target density may also differ across time periods within a video; for example, the pedestrian flow in a street surveillance video is dense by day and sparse at night. And over time, changes in illumination or in the objects within the view cause the scene to change. To ensure that the concentrated video has similar density in different periods, and because the background drifts over time, concentration is performed in segments. A new concentration segment is started when any of the following conditions is met: the accumulated video duration exceeds Tmax; the number of targets exceeds Nmax (positively correlated with the maximum permitted trajectory length Lmax and the preset density d); or the scene background changes markedly. Together with trajectory rearrangement, this keeps the target density substantially uniform at every time point of the concentrated video.
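The three segmentation conditions above can be sketched as a single predicate. The function name, the default thresholds, and the background-change score are illustrative assumptions; the patent gives no concrete values:

```python
def should_start_new_segment(elapsed_s, target_count, bg_change,
                             t_max=1800.0, n_max=200, bg_change_max=0.5):
    """Decide whether to start a new concentration segment.

    elapsed_s    -- accumulated duration of the current segment (seconds)
    target_count -- number of targets collected so far (compared with Nmax)
    bg_change    -- hypothetical scene-change score in [0, 1]
    """
    return (elapsed_s > t_max              # accumulated duration exceeds Tmax
            or target_count > n_max        # target count exceeds Nmax
            or bg_change > bg_change_max)  # marked background change
```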
Moving-object detection and tracking yield, for each target, its trajectory information in the video, comprising the frames (or absolute times), regions and bounding boxes in which it appears. Long trajectories appearing in the video are cut so that no trajectory exceeds the maximum permitted length Lmax. Because very short target trajectories flicker when browsed, a trajectory after cutting must not be shorter than the predefined minimum visual length Lmin, so as to preserve the visual effect for the human eye.
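As a sketch of this cutting rule, the following hypothetical helper splits a trajectory into pieces of at most Lmax frames and merges a too-short tail into its predecessor so that no piece falls below Lmin (the merged piece may then slightly exceed Lmax; a production implementation could rebalance the pieces instead):

```python
def cut_trajectory(track_len, l_min=25, l_max=250):
    """Split a trajectory of track_len frames into (start, end) pieces."""
    pieces = []
    start = 0
    while start < track_len:
        end = min(start + l_max, track_len)
        pieces.append((start, end))
        start = end
    # a tail shorter than l_min would flicker; merge it into the previous piece
    if len(pieces) > 1 and pieces[-1][1] - pieces[-1][0] < l_min:
        tail = pieces.pop()
        prev = pieces.pop()
        pieces.append((prev[0], tail[1]))
    return pieces
```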
From the bounding-box position in each frame of the cut target trajectories, collisions between targets can be judged. Let Ti denote the i-th target trajectory and Tj the j-th. The total collision cost of a whole concentration segment is the sum of the total overlap area between trajectories and the cost of temporal displacement: E = E_o + E_t.
The overlap cost of two trajectories is defined as the overlap area of their bounding boxes, normalized by the video image size: E_o = Σ_{i,j} area(Ti ∩ Tj) / Area.
Because the video is sliced into concentration segments, the temporal displacement cost can be ignored, so approximately E ≈ E_o. This approximation simplifies the computation of the collision cost. To minimize the total collision energy, a variable-step iterative greedy method is adopted, with the following optimization steps:
1. Initialization: set the initial step size S1 and the final step size S2, where S2 < S1; set the step decrement ds and the number of iterations N per step size; set the current step size S = S1.
2. Iterate N times with the current step size S:
A. compute the current collision cost E1;
B. select a trajectory at random;
C. at intervals of S, place the trajectory's occurrence time at every possible position within the concentration segment;
D. compute the minimum collision cost E2 over all positions;
E. if E2 < E1, place the trajectory at the position of minimum collision cost.
3. Set S = S - ds. If S >= S2, repeat step 2; otherwise stop.
These optimization steps guarantee that the energy decreases monotonically during the iteration. To speed up the computation, the minimum-collision-cost calculation of step 2 can be replaced by directly checking whether the collision cost decreases; after each iteration, the chosen trajectory is placed at the position, among all possible ones, where its collision cost with the other trajectories is lowest. Because the trajectory selection is randomized, the energy optimization avoids getting trapped in local optima. The coarse-to-fine variable-step search is more efficient than a direct search at the finest step size. This definition of the energy term and this optimization scheme guarantee fast target collision detection and rearrangement.
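The steps above can be sketched as follows. For brevity the collision cost here is the pairwise temporal overlap of one-dimensional track intervals, standing in for the normalized bounding-box overlap E_o of the patent; the function names and all parameter values (S1, S2, ds, N) are illustrative assumptions:

```python
import random

def total_overlap(starts, lengths):
    """Pairwise temporal overlap of the tracks; a simplified stand-in
    for the normalized bounding-box overlap cost E_o."""
    cost = 0.0
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            lo = max(starts[i], starts[j])
            hi = min(starts[i] + lengths[i], starts[j] + lengths[j])
            cost += max(0.0, hi - lo)
    return cost

def rearrange(lengths, segment_len, s1=32.0, s2=1.0, ds=8.0,
              n_iter=50, seed=0):
    """Variable-step greedy placement: coarse-to-fine search of each
    randomly chosen track's start time, accepting only cost decreases."""
    rng = random.Random(seed)
    starts = [0.0] * len(lengths)                # all tracks initially at t = 0
    s = s1
    while s >= s2:
        for _ in range(n_iter):
            k = rng.randrange(len(lengths))      # pick a random track (step B)
            best_start = starts[k]
            best_cost = total_overlap(starts, lengths)  # current cost E1
            pos = 0.0
            while pos + lengths[k] <= segment_len:      # grid at interval s
                starts[k] = pos
                cost = total_overlap(starts, lengths)
                if cost < best_cost:             # accept only strict decreases
                    best_cost, best_start = cost, pos
                pos += s
            starts[k] = best_start               # keep the best position found
        s -= ds                                  # refine the step size
    return starts
```

Because a move is kept only when the cost strictly decreases, the energy is monotonically non-increasing, matching the guarantee stated above.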
The background image is obtained by accumulating and averaging multiple frames. An accumulation interval is set; if the background image changes by more than a threshold T1 between adjacent accumulation intervals, a new background image is recorded; if the change exceeds a threshold T2 (T2 > T1), a new concentration segment is marked.
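This background bookkeeping can be sketched as below. For brevity each background is reduced to a single mean intensity rather than an image array, and the window size and thresholds T1, T2 are illustrative assumptions:

```python
def accumulate_backgrounds(frames, window=5, t1=8.0, t2=20.0):
    """Average each accumulation window of frame intensities; record a new
    background when the change exceeds t1, and additionally mark a new
    concentration segment when it exceeds t2 (t2 > t1)."""
    assert t2 > t1
    backgrounds, segment_marks = [], []
    for i in range(0, len(frames) - window + 1, window):
        bg = sum(frames[i:i + window]) / window
        if not backgrounds:
            backgrounds.append(bg)               # first background image
        elif abs(bg - backgrounds[-1]) > t1:     # change above T1: record it
            if abs(bg - backgrounds[-1]) > t2:   # change above T2: new segment
                segment_marks.append(len(backgrounds))
            backgrounds.append(bg)
    return backgrounds, segment_marks
```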
The invention also proposes a client/server video concentration system architecture that supports playing the concentrated video while it is still being processed. On the server side, the video is segmented dynamically for parallel processing. Each parallel unit processes its video section by the method above and adaptively cuts it into concentration segments. The information stored for each concentration segment comprises: the target trajectories appearing in the section; the appearance and disappearance time of each trajectory; the background images accumulated for the segment; and the start and end time of each background image.
The information of all concentration segments in the video is stored in one index file. The file header records the number of concentration segments, the start and end times of the original video corresponding to each segment, and each segment's position within the index file.
After obtaining the index file, the client parses its header and can fetch and play the completed concentration segments, achieving play-while-processing and a better user experience. When rendering frame by frame, the client first looks up the background image for the current moment in the index file according to the current frame ID, looks up the region pixel values of all targets appearing at that moment, and overlays the target regions on the background image. If several targets appear at the same position, the pixel value there is the average of the targets' pixel values (transparent overlay).
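The per-frame compositing with transparent overlay can be sketched as follows; images are plain nested lists of grey values here, and the (y, x, patch) target layout is a hypothetical simplification of the stored target regions:

```python
def render_frame(background, targets):
    """Composite target patches onto a copy of the background; where several
    targets cover the same pixel, average their values (transparent overlay)."""
    h, w = len(background), len(background[0])
    acc = [[0.0] * w for _ in range(h)]   # sum of target pixel values
    cnt = [[0] * w for _ in range(h)]     # number of targets per pixel
    for (y0, x0, patch) in targets:       # patch: 2-D list of pixel values
        for dy, row in enumerate(patch):
            for dx, v in enumerate(row):
                acc[y0 + dy][x0 + dx] += v
                cnt[y0 + dy][x0 + dx] += 1
    return [[background[y][x] if cnt[y][x] == 0 else acc[y][x] / cnt[y][x]
             for x in range(w)] for y in range(h)]
```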
When a concentration segment is generated, the default concentration density d is used. During client playback the desired playback density may be adjusted dynamically. When the client sets a new playback density d_n, the occurrence time of each target is rearranged: let T_o be the target's original occurrence time; the new occurrence time is T_n = T_o * d / d_n.
This rearrangement guarantees that when the density is reduced, the collision energy also falls, improving the visual effect. Since only the occurrence time of each target trajectory needs to be computed directly, with no collision-energy evaluation, the density can be adjusted in real time.
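The re-timing rule T_n = T_o * d / d_n amounts to one multiplication per trajectory, which is why the adjustment is real-time. A minimal sketch (the function name is an assumption):

```python
def retime_tracks(original_times, d_default, d_new):
    """Rescale each trajectory's occurrence time: T_n = T_o * d / d_n.
    A lower playback density stretches the timeline, spreading targets out."""
    scale = d_default / d_new
    return [t * scale for t in original_times]
```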
It must be particularly pointed out that the above embodiment serves only to describe the invention; the invention is not limited to the manner described, and those skilled in the art can readily modify it without departing from its scope. The scope of the invention therefore covers the disclosed principles and novel features to their maximum extent.

Claims (9)

1. A rapid video concentration abstracting method, comprising a server side, characterized in that the server side detects and tracks the moving targets in a preprocessed video, decides according to the length of the video or the number of targets detected in it, cuts the video into a plurality of concentration segments, performs collision detection and rearrangement on the target trajectories within each concentration segment, and then records the concentration-segment information in an index file; and further comprising a client side, wherein the client parses the index file stored on the server side, obtains the processed concentration segments, renders them frame by frame to form a video sequence, and dynamically adjusts the target density of the concentrated video during playback.
2. The rapid video concentration abstracting method according to claim 1, characterized in that the moving-object detection performs background modeling of the scene with a mixture-of-Gaussians method using adaptive thresholds, extracts the foreground in combination with inter-frame changes, refines the region contours with multi-scale information when extracting foreground regions, locates dynamic background regions with a density-estimation method, and finally updates the background model by randomized region sampling, so that low-contrast targets are detected effectively; and in that the target tracking associates the motion-detection regions of multiple frames with a multi-hypothesis method, predicts the target contours, locates the contour positions in the current frame from edge information, generates hypotheses at those positions when targets split, collide or are lost, finally selects the optimal hypotheses with the Hungarian algorithm, and prunes the hypothesis history to obtain the tracking trajectory of each target.
3. The rapid video concentration abstracting method according to claim 2, characterized in that a concentration segment is generated as follows: when the accumulated duration of a video section exceeds Tmax, or the number of targets exceeds Nmax, a new concentration segment is produced.
4. The rapid video concentration abstracting method according to claim 3, characterized in that moving-object detection and tracking yield the trajectory information of each target in the video, and in that long trajectories in the video are cut so that every trajectory length is greater than Lmin and less than Lmax.
5. The rapid video concentration abstracting method according to claim 4, characterized in that moving-object detection and tracking yield, for each target, its trajectory information comprising frames, regions and bounding boxes; in that collisions between targets are judged from the bounding-box position in each frame of the cut trajectories; in that an energy term is defined to penalize collision; and in that a variable-step iterative greedy method is adopted which guarantees that the energy decreases at every iteration, converges quickly, and avoids local optima through randomization, thereby accomplishing target collision detection and rearrangement.
6. The rapid video concentration abstracting method according to claim 5, characterized in that the optimization steps of the variable-step iterative greedy method are as follows:
A. Initialization: set the initial step size S1 and the final step size S2, where S2 < S1; set the step decrement ds and the number of iterations N per step size; set the current step size S = S1.
B. Iterate N times with the current step size S:
a) compute the current collision cost E1;
b) select a trajectory at random;
c) at intervals of S, place the trajectory's occurrence time at every possible position within the concentration segment;
d) compute the minimum collision cost E2 over all positions;
e) if E2 < E1, place the trajectory at the position of minimum collision cost.
C. Set S = S - ds; if S >= S2, repeat step B, otherwise stop.
7. The rapid video concentration abstracting method according to claim 1 or 6, characterized in that when rendering the video frame by frame, the client first looks up the background image corresponding to the current moment in the index file according to the current frame ID, looks up the region pixel values of all targets appearing at that moment, and overlays the target regions on the background image; and in that if several targets appear at the same position, the pixel value at that position is the average of the targets' pixel values.
8. The rapid video concentration abstracting method according to claim 7, characterized in that the background image is obtained by accumulating and averaging multiple frames; an accumulation interval is set first; if the background image changes by more than a threshold T1 between adjacent accumulation intervals, a new background image is recorded; and if the change exceeds a threshold T2, where T2 > T1, a new concentration segment is marked.
9. The rapid video concentration abstracting method according to claim 8, characterized in that when a concentration segment is generated a default concentration density d is used; during client playback the video can be dynamically adjusted to the desired playback density; and when a new playback density d_n is set, the occurrence time of each target is rearranged, where, with T_o being the target's original occurrence time, the new time is T_n = T_o * d / d_n.
CN201210142026.0A 2012-05-08 2012-05-08 Rapid video concentration abstracting method Expired - Fee Related CN102708182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210142026.0A CN102708182B (en) 2012-05-08 2012-05-08 Rapid video concentration abstracting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210142026.0A CN102708182B (en) 2012-05-08 2012-05-08 Rapid video concentration abstracting method

Publications (2)

Publication Number Publication Date
CN102708182A true CN102708182A (en) 2012-10-03
CN102708182B CN102708182B (en) 2014-07-02

Family

ID=46900948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210142026.0A Expired - Fee Related CN102708182B (en) 2012-05-08 2012-05-08 Rapid video concentration abstracting method

Country Status (1)

Country Link
CN (1) CN102708182B (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930061A (en) * 2012-11-28 2013-02-13 安徽水天信息科技有限公司 Video abstraction method and system based on moving target detection
CN103079117A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Video abstract generation method and video abstract generation device
CN103096185A (en) * 2012-12-30 2013-05-08 信帧电子技术(北京)有限公司 Method and device of video abstraction generation
CN103226586A (en) * 2013-04-10 2013-07-31 中国科学院自动化研究所 Video abstracting method based on optimal strategy of energy distribution
CN103345764A (en) * 2013-07-12 2013-10-09 西安电子科技大学 Dual-layer surveillance video abstraction generating method based on object content
CN103455625A (en) * 2013-09-18 2013-12-18 武汉烽火众智数字技术有限责任公司 Quick target rearrangement method for video abstraction
CN103617234A (en) * 2013-11-26 2014-03-05 公安部第三研究所 Device and method for active video concentration
CN103686095A (en) * 2014-01-02 2014-03-26 中安消技术有限公司 Video concentration method and system
Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262568A (en) * 2008-04-21 2008-09-10 中国科学院计算技术研究所 A method and system for generating video outline
CN101689394A (en) * 2007-02-01 2010-03-31 Yissum Research Development Company of the Hebrew University of Jerusalem Method and system for video indexing and video synopsis
US20110170749A1 (en) * 2006-09-29 2011-07-14 Pittsburgh Pattern Recognition, Inc. Video retrieval system for human face content
CN102375816A (en) * 2010-08-10 2012-03-14 中国科学院自动化研究所 Online video concentration device, system and method

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930061A (en) * 2012-11-28 2013-02-13 安徽水天信息科技有限公司 Video abstraction method and system based on moving target detection
CN103079117B (en) * 2012-12-30 2016-05-25 信帧电子技术(北京)有限公司 Video abstract generation method and video abstract generation device
CN103079117A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Video abstract generation method and video abstract generation device
CN103096185A (en) * 2012-12-30 2013-05-08 信帧电子技术(北京)有限公司 Video abstract generation method and device
CN103096185B (en) * 2012-12-30 2016-04-20 信帧电子技术(北京)有限公司 Video abstract generation method and device
CN103226586A (en) * 2013-04-10 2013-07-31 中国科学院自动化研究所 Video abstracting method based on optimal strategy of energy distribution
CN103226586B (en) * 2013-04-10 2016-06-22 中国科学院自动化研究所 Video abstracting method based on optimal strategy of energy distribution
CN104284057A (en) * 2013-07-05 2015-01-14 浙江大华技术股份有限公司 Video processing method and device
CN103345764A (en) * 2013-07-12 2013-10-09 西安电子科技大学 Dual-layer surveillance video abstraction generating method based on object content
CN103345764B (en) * 2013-07-12 2016-02-10 西安电子科技大学 Dual-layer surveillance video abstraction generating method based on object content
CN104301699A (en) * 2013-07-16 2015-01-21 浙江大华技术股份有限公司 Image processing method and device
CN103455625B (en) * 2013-09-18 2016-07-06 武汉烽火众智数字技术有限责任公司 Quick target rearrangement method for video abstraction
CN103455625A (en) * 2013-09-18 2013-12-18 武汉烽火众智数字技术有限责任公司 Quick target rearrangement method for video abstraction
CN104618681B (en) * 2013-11-01 2019-03-26 南京中兴力维软件有限公司 Multi-channel video concentration method and device thereof
CN104618681A (en) * 2013-11-01 2015-05-13 南京中兴力维软件有限公司 Method and device for multi-channel video condensation
CN103617234A (en) * 2013-11-26 2014-03-05 公安部第三研究所 Device and method for active video concentration
CN103617234B (en) * 2013-11-26 2017-10-24 公安部第三研究所 Device and method for active video concentration
CN103686095B (en) * 2014-01-02 2017-05-17 中安消技术有限公司 Video concentration method and system
CN103686095A (en) * 2014-01-02 2014-03-26 中安消技术有限公司 Video concentration method and system
CN103793477A (en) * 2014-01-10 2014-05-14 同观科技(深圳)有限公司 System and method for video abstract generation
CN103793477B (en) * 2014-01-10 2017-02-08 同观科技(深圳)有限公司 System and method for video abstract generation
CN103778237A (en) * 2014-01-27 2014-05-07 北京邮电大学 Video abstraction generation method based on space-time recombination of active events
CN103778237B (en) * 2014-01-27 2017-02-15 北京邮电大学 Video abstraction generation method based on space-time recombination of active events
CN103957472A (en) * 2014-04-10 2014-07-30 华中科技大学 Timing-sequence-keeping video summary generation method and system based on optimal reconstruction of events
CN103957472B (en) * 2014-04-10 2017-01-18 华中科技大学 Timing-sequence-keeping video summary generation method and system based on optimal reconstruction of events
CN104284158A (en) * 2014-10-23 2015-01-14 南京信必达智能技术有限公司 Event-oriented intelligent camera monitoring method
CN105007433A (en) * 2015-06-03 2015-10-28 南京邮电大学 Target-based moving object arrangement method enabling energy constraint minimization
CN105007433B (en) * 2015-06-03 2020-05-15 南京邮电大学 Moving object arrangement method based on energy constraint minimization of object
CN105262932A (en) * 2015-10-20 2016-01-20 深圳市华尊科技股份有限公司 Video processing method and terminal
CN105262932B (en) * 2015-10-20 2018-06-29 深圳市华尊科技股份有限公司 Video processing method and terminal
CN105357594B (en) * 2015-11-19 2018-08-31 南京云创大数据科技股份有限公司 Massive video abstraction generation method based on cluster and H264 video concentration algorithm
CN105357594A (en) * 2015-11-19 2016-02-24 南京云创大数据科技股份有限公司 Massive video abstraction generation method based on cluster and H264 video concentration algorithm
CN107680117A (en) * 2017-09-28 2018-02-09 江苏东大金智信息***有限公司 Concentrated video construction method based on irregular target boundary objects
CN107680117B (en) * 2017-09-28 2020-03-24 江苏东大金智信息***有限公司 Method for constructing concentrated video based on irregular target boundary object
US10785531B2 (en) 2017-12-21 2020-09-22 Vivotek Inc. Video synopsis method and related video synopsis device
TWI638337B (en) * 2017-12-21 2018-10-11 晶睿通訊股份有限公司 Image overlapping method and related image overlapping device
CN110166851A (en) * 2018-08-21 2019-08-23 腾讯科技(深圳)有限公司 Video abstract generation method, device and storage medium
CN110322471A (en) * 2019-07-18 2019-10-11 华中科技大学 Panoramic video concentration method, apparatus, device and storage medium
CN111107376A (en) * 2019-12-09 2020-05-05 国网辽宁省电力有限公司营口供电公司 Video enhancement concentration method suitable for security protection of power system
CN112446358A (en) * 2020-12-15 2021-03-05 北京京航计算通讯研究所 Target detection method based on video image recognition technology
CN112507913A (en) * 2020-12-15 2021-03-16 北京京航计算通讯研究所 Target detection system based on video image recognition technology
CN112580548A (en) * 2020-12-24 2021-03-30 北京睿芯高通量科技有限公司 Video concentration system and method in novel intelligent security system
CN115941997A (en) * 2022-12-01 2023-04-07 石家庄铁道大学 Fragment-adaptive surveillance video concentration method
CN116156206A (en) * 2023-04-04 2023-05-23 石家庄铁道大学 Monitoring video concentration method taking target group as processing unit
CN117376638A (en) * 2023-09-02 2024-01-09 石家庄铁道大学 Video concentration method for segment segmentation
CN117376638B (en) * 2023-09-02 2024-05-21 石家庄铁道大学 Video concentration method for segment segmentation

Also Published As

Publication number Publication date
CN102708182B (en) 2014-07-02

Similar Documents

Publication Publication Date Title
CN102708182A (en) Rapid video concentration abstracting method
CN106856577B (en) Video abstract generation method capable of solving multi-target collision and shielding problems
CN102222104B (en) Method for intelligently extracting video abstract based on time-space fusion
CN104217417B (en) Method and device for video multi-target tracking
CN103065325B (en) Target tracking method based on color distance and image segmentation aggregation
CN104835147A (en) Method for detecting crowded people flow in real time based on three-dimensional depth map data
CN103929685A (en) Video abstract generating and indexing method
CN103617410A (en) Highway tunnel parking detection method based on video detection technology
CN110633678B (en) Quick and efficient vehicle flow calculation method based on video image
KR101472674B1 (en) Method and apparatus for video surveillance based on detecting abnormal behavior using extraction of trajectories from crowd in images
CN102568003A (en) Multi-camera target tracking method based on video structural description
CN112884808B (en) Video concentration set partitioning method preserving targets' real interaction behavior
Ling et al. A background modeling and foreground segmentation approach based on the feedback of moving objects in traffic surveillance systems
CN102903121A (en) Fusion algorithm based on moving target tracking
CN104270608A (en) Intelligent video player and playing method thereof
CN104063692A (en) Method and system for pedestrian positioning detection
CN104301699B (en) Image processing method and device
CN110674886A (en) Video target detection method fusing multi-level features
CN103793921B (en) Moving object extraction method and moving object extraction device
CN104168444A (en) Target tracking method for a tracking dome camera, and tracking dome camera
CN104683765A (en) Video concentration method based on mobile object detection
Zhang et al. A robust and efficient shot boundary detection approach based on fisher criterion
CN106339690A (en) Video object flow detecting method and system based on noise elimination and auxiliary determination line
CN111079527B (en) Shot boundary detection method based on 3D residual error network
CN112307895A (en) Crowd gathering abnormal behavior detection method under community monitoring scene

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C53 Correction of patent of invention or patent application
CB02 Change of applicant information

Address after: East, Building 7, No. 998, Wuchang West Street, Yuhang District, Hangzhou, Zhejiang 310013

Applicant after: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Address before: South Block, 4th Floor, Kun Building, No. 398, Tian Shan Road, Xihu District, Hangzhou, Zhejiang 310013

Applicant before: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee

Owner name: ZHEJIANG ICARE VISION TECHNOLOGY CO., LTD.

Free format text: FORMER NAME: HANGZHOU ICARE VISION TECHNOLOGY CO., LTD.

CP01 Change in the name or title of a patent holder

Address after: East, Building 7, No. 998, Wuchang West Street, Yuhang District, Hangzhou, Zhejiang 310013

Patentee after: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Address before: East, Building 7, No. 998, Wuchang West Street, Yuhang District, Hangzhou, Zhejiang 310013

Patentee before: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Rapid video concentration abstracting method

Effective date of registration: 20190820

Granted publication date: 20140702

Pledgee: Hangzhou Yuhang Financial Holding Co.,Ltd.

Pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Registration number: Y2019330000016

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20200917

Granted publication date: 20140702

Pledgee: Hangzhou Yuhang Financial Holding Co.,Ltd.

Pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Registration number: Y2019330000016

PC01 Cancellation of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A fast video summarization method

Effective date of registration: 20200921

Granted publication date: 20140702

Pledgee: Hangzhou Yuhang Financial Holding Co.,Ltd.

Pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co.,Ltd.

Registration number: Y2020330000737

PE01 Entry into force of the registration of the contract for pledge of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140702

CF01 Termination of patent right due to non-payment of annual fee