CN105357594A - Massive video abstraction generation method based on cluster and H264 video concentration algorithm - Google Patents


Info

Publication number
CN105357594A
CN105357594A
Authority
CN
China
Prior art keywords
frame
video
background
target
fragment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510802199.4A
Other languages
Chinese (zh)
Other versions
CN105357594B (en)
Inventor
张真
刘鹏
杨雪松
曹骝
秦恩泉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Innovative Data Technologies Inc
Original Assignee
Nanjing Innovative Data Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Innovative Data Technologies Inc filed Critical Nanjing Innovative Data Technologies Inc
Priority to CN201510802199.4A priority Critical patent/CN105357594B/en
Publication of CN105357594A publication Critical patent/CN105357594A/en
Application granted
Publication of CN105357594B publication Critical patent/CN105357594B/en
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8549Creating video summaries, e.g. movie trailer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a massive video summary generation method based on clustering and an H264 video concentration algorithm. The method comprises the following steps: (1) an original video is selected and cut to obtain n segments of approximately equal length, where the coding format is H264 and n is a natural number; (2) each cut segment is decoded, foreground targets are obtained from motion estimation and a background image, the detection rate of each segment is refined with a sparse-optical-flow-based false-positive deletion and missed-detection repair algorithm, and the background image is updated; (3) each segment containing motion information is compressed as a concentration unit, and the compressed segments are spliced to generate a complete video summary.

Description

Massive video summary generation method based on clustering and an H264 video concentration algorithm
Technical field
The invention belongs to the field of massive video data concentration, and in particular to a massive video summary generation method based on clustering and an H264 video concentration algorithm.
Background technology
As is well known, video surveillance systems now permeate every corner of society and play an increasingly important role in industries such as security, traffic, and industrial production. As the number of surveillance cameras grows rapidly, massive volumes of video data are produced every day, yet most of this video is still browsed manually to extract the meaningful information it contains.
On the one hand, more video requires more personnel; on the other hand, manual processing becomes less efficient as the data grows, and omissions and mistakes are unavoidable, while the processing cost remains considerable. Video summarization technology arose to address this: it automatically retains the meaningful video data and discards the useless content, so that only the significant data needs manual review and cost is effectively reduced.
Video summarization, also known as video concentration, normally proceeds as follows: first, background modeling is used to obtain foreground objects; a tracking algorithm then records their motion trajectories; finally, the trajectories are combined in some fashion and copied onto the background image to form a condensed video. However, although the condensed video produced by existing concentration techniques is generally much shorter than the original, the processing itself is slow: for a 10-hour HD video, a common background modeling algorithm such as GMM runs at roughly real-time playback speed, so concentrating the whole video still consumes about 10 hours and efficiency is not significantly improved.
Summary of the invention
The technical problem to be solved by the invention is to overcome the shortcomings of the prior art and provide a massive video summary generation method based on clustering and an H264 video concentration algorithm.
To solve the above technical problem, the invention provides a massive video summary generation method based on clustering and an H264 video concentration algorithm, comprising the following steps:
1. Select an original video and cut it to obtain n segments of approximately equal length, the coding format being H264, where n is a natural number;
2. Decode each cut segment, obtain foreground targets from motion estimation and a background image, refine the detection rate of each segment with a sparse-optical-flow-based false-positive deletion and missed-detection repair algorithm, and update the background image;
3. Compress each segment that contains motion information as a concentration unit, then splice the compressed segments to generate a complete video summary.
The invention is further defined by the following technical schemes:
Further, in the aforesaid method, the concentration operations of steps 2 and 3 are carried out on the n segments of approximately equal length in parallel, each segment being processed independently of the others.
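Because the n segments are mutually independent, the concentration work of steps 2 and 3 parallelises naturally. A minimal sketch of this fan-out/splice pattern, where `condense_segment` is a hypothetical stand-in for the real per-segment decoding and concentration described below:

```python
# Sketch: condense each segment in its own worker, then splice in order.
# `condense_segment` is a placeholder; here it simply keeps motion frames.
from concurrent.futures import ThreadPoolExecutor

def condense_segment(segment):
    # Stand-in for steps 2-3: keep only frames flagged as containing motion.
    return [frame for frame in segment if frame["has_motion"]]

def condense_parallel(segments, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        condensed = list(pool.map(condense_segment, segments))
    # Splice the condensed segments back in their original order.
    return [frame for seg in condensed for frame in seg]

segments = [
    [{"id": 0, "has_motion": False}, {"id": 1, "has_motion": True}],
    [{"id": 2, "has_motion": True}, {"id": 3, "has_motion": False}],
]
summary = condense_parallel(segments)
```

Because each worker touches only its own segment, no synchronisation is needed beyond the final ordered splice.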
In the aforesaid method, step 1 specifically comprises the following process:
Suppose the i-th frame of the original video is the cut point set by the user. Define the frame range F ∈ [i − k×f, i + k×f], where k is the iteration count and f is a constant. Search within this range for a frame j that contains no foreground target and minimises |i − j|.
Within [i − k×f, i + k×f], if m consecutive frames all have motion-estimation values below the threshold Tmv, they are considered to contain no foreground target, and these m frames are all background frames, yielding a background image. If F contains no background frame, set k = k + 1 and repeat step 1. If frame j (j ∈ [i − k×f, i + k×f]) is a background frame, compute |i − j|; the j that minimises |i − j| becomes the cut point of the video, and the loop exits.
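The cut-point search above can be sketched as follows, assuming a precomputed per-frame motion-estimation magnitude `motion[j]`; the parameter names (`m`, `t_mv`, `f`, `k_max`) mirror the constants in the text, and `k_max` is an assumed iteration cap not stated in the patent:

```python
def find_cut_point(i, motion, m=3, t_mv=1.0, f=10, k_max=5):
    """Search the expanding range [i - k*f, i + k*f] for the background
    frame j nearest to i.  A frame j is treated as background when it
    starts a run of m consecutive frames whose motion values are all
    below t_mv (a sketch of the patent's step 1)."""
    n = len(motion)
    for k in range(1, k_max + 1):
        lo, hi = max(0, i - k * f), min(n - 1, i + k * f)
        candidates = [
            j for j in range(lo, hi + 1)
            if j + m <= n and all(motion[j + t] < t_mv for t in range(m))
        ]
        if candidates:
            # Choose the background frame minimising |i - j|.
            return min(candidates, key=lambda j: abs(i - j))
    return None  # no background frame found within k_max expansions
```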
Step 2 specifically comprises the following process:
If the motion-estimation value is below the threshold Tmv, the frame is considered to contain no foreground target; if several consecutive frames contain no foreground target, a background image is obtained. For a P frame or B frame, first check whether the motion estimation of the current frame exceeds Tmv. If it does, convert both the current image and the background image to grayscale and subtract them pixel by pixel; wherever the absolute difference exceeds a threshold Tdiff, set the pixel to 255, otherwise set it to 0, yielding a binary image M. If the motion estimation is below Tmv, no processing is done and computation continues with the next frame. For an I frame, the difference image M between the current image and the background image is computed in the same way.
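The frame-differencing step that produces the binary image M can be written directly with NumPy (grayscale images assumed already decoded; `t_diff` corresponds to the threshold Tdiff):

```python
import numpy as np

def foreground_mask(cur_gray, bg_gray, t_diff=30):
    """Binary image M: 255 where |current - background| > t_diff, else 0."""
    # Widen to a signed type so the subtraction cannot wrap around.
    diff = np.abs(cur_gray.astype(np.int16) - bg_gray.astype(np.int16))
    return np.where(diff > t_diff, 255, 0).astype(np.uint8)

bg = np.zeros((4, 4), dtype=np.uint8)
cur = bg.copy()
cur[1:3, 1:3] = 200          # a bright moving object on a dark background
M = foreground_mask(cur, bg)
```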
Noise removal and target extraction: first apply a morphological closing to the binary image M, then extract the outermost contour of each object and represent its size and position with a bounding rectangle. If both the width and the height of the rectangle exceed the threshold Tlen, it is considered a foreground target; otherwise it is treated as noise.
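A rough, pure-Python stand-in for the contour and bounding-rectangle step, using 4-connected components on the binary mask in place of the closing + outer-contour operations (which an implementation would more likely do with an image-processing library):

```python
from collections import deque

def extract_targets(mask, t_len=2):
    """Bounding rectangles (x, y, w, h) of connected foreground regions in
    a 0/255 mask; rectangles whose width or height is not greater than
    t_len are discarded as noise, as in the patent's Tlen test."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    rects = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill of one connected component.
                q = deque([(y, x)])
                seen[y][x] = True
                ys, xs = [y], [x]
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            ys.append(ny); xs.append(nx)
                            q.append((ny, nx))
                rect = (min(xs), min(ys), max(xs) - min(xs) + 1, max(ys) - min(ys) + 1)
                if rect[2] > t_len and rect[3] > t_len:  # both sides must exceed t_len
                    rects.append(rect)
    return rects
```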
Target tracking: compute the pairwise rectangle overlap between every target of the current frame and every target of the next frame. If the greatest overlap between a target and the next frame exceeds the threshold Toverlap, the two rectangles are considered the same target and tracking succeeds. If tracking fails and the object has moved near the image border, the object is taken to have left the field of view (or region of interest) in the next frame, and no further tracking is needed.
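The overlap-based matching can be sketched as follows. The patent does not define its overlap measure precisely, so intersection over the smaller rectangle's area is assumed here:

```python
def overlap_ratio(a, b):
    """Overlap of two rectangles (x, y, w, h): intersection area divided
    by the smaller rectangle's area (one plausible reading of the
    patent's 'coincidence degree')."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    return inter / min(a[2] * a[3], b[2] * b[3]) if inter else 0.0

def match_targets(cur_rects, next_rects, t_overlap=0.5):
    """For each current target, pick the next-frame rectangle with the
    greatest overlap; accept the match only if it exceeds t_overlap."""
    matches = {}
    for i, a in enumerate(cur_rects):
        best = max(range(len(next_rects)),
                   key=lambda j: overlap_ratio(a, next_rects[j]),
                   default=None)
        if best is not None and overlap_ratio(a, next_rects[best]) > t_overlap:
            matches[i] = best
    return matches
```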
Removing false positives and repairing missed detections: if a target is not tracked successfully in the next frame, a missed detection is assumed. To repair it, first compute the Harris corners inside the rectangle; any corner whose corresponding pixel in the binary image M is 0 is rejected. Next, track the remaining corners with optical flow and compute their average horizontal and vertical displacements dx and dy; translating the current target by dx and dy pixels in the horizontal and vertical directions gives its position in the next frame. If a target suffers missed detections over several consecutive frames, and the number of consecutive missed frames exceeds Tm, the target is considered a false positive and is deleted.
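Once the corner displacements are known, the repair itself reduces to translating the target rectangle by their mean. A sketch, assuming the Harris corner detection, mask-based rejection, and sparse optical-flow tracking have already produced the per-corner (dx, dy) displacements:

```python
def repair_missed(rect, corner_flows):
    """Predict the next-frame position of a missed target by translating
    its rectangle (x, y, w, h) by the mean displacement of its tracked
    corners.  `corner_flows` is a list of (dx, dy) pairs; corners whose
    mask pixel was 0 are assumed to have been rejected already."""
    if not corner_flows:
        return rect  # nothing to go on; keep the previous position
    dx = sum(f[0] for f in corner_flows) / len(corner_flows)
    dy = sum(f[1] for f in corner_flows) / len(corner_flows)
    x, y, w, h = rect
    return (round(x + dx), round(y + dy), w, h)
```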
Updating the background image Bg: after the steps above have determined which regions of the image belong to the foreground and which to the background, the background is updated only at points outside the target regions. Concretely, if a pixel pxl of the current frame Fcur lies in no target rectangle, the background pixel is replaced by the average of Fcur and Bg at that coordinate. The detected foreground-object coordinates, sizes, sub-images, and motion-segment information are saved.
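The selective background update can be written directly with NumPy, with target rectangles given as (x, y, w, h):

```python
import numpy as np

def update_background(bg, cur, rects):
    """Replace each background pixel lying outside every target rectangle
    with the mean of the current frame and the existing background."""
    mask = np.zeros(bg.shape, dtype=bool)       # True inside any target rect
    for x, y, w, h in rects:
        mask[y:y+h, x:x+w] = True
    # Widen before averaging so uint8 addition cannot overflow.
    avg = ((bg.astype(np.uint16) + cur.astype(np.uint16)) // 2).astype(np.uint8)
    return np.where(mask, bg, avg)              # keep bg inside targets

bg = np.full((4, 4), 100, dtype=np.uint8)
cur = np.full((4, 4), 160, dtype=np.uint8)
new_bg = update_background(bg, cur, [(0, 0, 2, 2)])
```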
Step 3 specifically comprises the following process:
(1) H264-based background modeling: save the coordinates, sizes, and sub-images of the detected foreground objects together with the start-frame and end-frame information of the motion segments. If the number of segments accumulated in memory reaches Tsec, or the last frame of the video has been reached and the segment count is greater than 1, go to step (2); if the last frame has been reached and the segment count is 0, the program exits.
(2) Add the first segment to set A and copy all target images of its first frame onto the background image.
(3) For each remaining segment in turn, check whether it and the segments in set A satisfy the user-set values for both the average object count and the maximum overlap; if so, add the segment to set A.
(4) Copy the target images of the corresponding frame of every segment in set A onto the background image. If a segment has copied its last frame, delete it from set A. If all segments have been copied and no segment remains in set A, go to step (1); otherwise advance to the next frame and go to step (3).
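The grouping loop of steps (1)-(4) can be sketched greedily. The user-set compatibility test on average object count and maximum overlap is simplified here to a cap on the group's total object count, an assumption made only for illustration since the patent leaves the exact test to the user:

```python
def group_segments(segments, max_objects=4):
    """Greedy sketch of steps (2)-(4): seed set A with the first remaining
    segment, then admit each later segment whose object count keeps the
    group's total within max_objects (a simplified stand-in for the
    user-set limits).  Each segment is a (name, object_count) pair."""
    remaining = list(segments)
    groups = []
    while remaining:
        group = [remaining.pop(0)]          # step (2): seed set A
        total = group[0][1]
        i = 0
        while i < len(remaining):
            if total + remaining[i][1] <= max_objects:
                seg = remaining.pop(i)      # step (3): admit compatible segment
                group.append(seg)
                total += seg[1]
            else:
                i += 1
        groups.append([name for name, _ in group])  # step (4): emit, then repeat
    return groups
```

Segments grouped together are the ones whose targets are copied onto the same background frames, which is what shortens the final summary.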
The beneficial effects of the invention are:
1. The massive video summary generation method based on clustering and the H264 video concentration algorithm uses parallel processing and significantly improves the efficiency of video concentration;
2. The method solves the two classes of screen-flicker problems caused by missed detections and by object adhesion.
Brief description of the drawings
Fig. 1 is the overall flowchart of the cluster-based video summary generation method of the invention;
Fig. 2 is the detailed flowchart of the cluster-based video summary generation method of the invention;
Fig. 3 is a schematic diagram of the hardware arrangement for the cluster-based video summary generation method of the invention;
Fig. 4 is a schematic diagram of motion-segment concentration in the cluster-based video summary generation method of the invention;
Fig. 5 shows the relation between the total concentration time and the number of parallel processes in the invention.
Embodiment
As shown in Fig. 1 to Fig. 4, this embodiment provides a massive video summary generation method based on clustering and an H264 video concentration algorithm, comprising the following steps:
1. Select an original video and cut it to obtain n segments of approximately equal length, the coding format being H264, where n is a natural number;
2. Decode each cut segment, obtain foreground targets from motion estimation and a background image, refine the detection rate of each segment with a sparse-optical-flow-based false-positive deletion and missed-detection repair algorithm, and update the background image;
3. Compress each segment that contains motion information as a concentration unit, then splice the compressed segments to generate a complete video summary. The concentration operations of steps 2 and 3 are carried out on the n segments of approximately equal length in parallel, each segment being processed independently of the others.
Step 1 specifically comprises the following process:
Suppose the i-th frame of the original video is the cut point set by the user. Define the frame range F ∈ [i − k×f, i + k×f], where k is the iteration count and f is a constant. Search within this range for a frame j that contains no foreground target and minimises |i − j|.
Within [i − k×f, i + k×f], if m consecutive frames all have motion-estimation values below the threshold Tmv, they are considered to contain no foreground target, and these m frames are all background frames, yielding a background image. If F contains no background frame, set k = k + 1 and repeat step 1. If frame j (j ∈ [i − k×f, i + k×f]) is a background frame, compute |i − j|; the j that minimises |i − j| becomes the cut point of the video, and the loop exits.
Step 2 specifically comprises the following process:
If the motion-estimation value is below the threshold Tmv, the frame is considered to contain no foreground target; if several consecutive frames contain no foreground target, a background image is obtained. For a P frame or B frame, first check whether the motion estimation of the current frame exceeds Tmv. If it does, convert both the current image and the background image to grayscale and subtract them pixel by pixel; wherever the absolute difference exceeds a threshold Tdiff, set the pixel to 255, otherwise set it to 0, yielding a binary image M. If the motion estimation is below Tmv, no processing is done and computation continues with the next frame. For an I frame, the difference image M between the current image and the background image is computed in the same way.
Noise removal and target extraction: first apply a morphological closing to the binary image M, then extract the outermost contour of each object and represent its size and position with a bounding rectangle. If both the width and the height of the rectangle exceed the threshold Tlen, it is considered a foreground target; otherwise it is treated as noise.
Target tracking: compute the pairwise rectangle overlap between every target of the current frame and every target of the next frame. If the greatest overlap between a target and the next frame exceeds the threshold Toverlap, the two rectangles are considered the same target and tracking succeeds. If tracking fails and the object has moved near the image border, the object is taken to have left the field of view (or region of interest) in the next frame, and no further tracking is needed.
Removing false positives and repairing missed detections: if a target is not tracked successfully in the next frame, a missed detection is assumed. To repair it, first compute the Harris corners inside the rectangle; any corner whose corresponding pixel in the binary image M is 0 is rejected. Next, track the remaining corners with optical flow and compute their average horizontal and vertical displacements dx and dy; translating the current target by dx and dy pixels in the horizontal and vertical directions gives its position in the next frame. If a target suffers missed detections over several consecutive frames, and the number of consecutive missed frames exceeds Tm, the target is considered a false positive and is deleted.
Updating the background image Bg: after the steps above have determined which regions of the image belong to the foreground and which to the background, the background is updated only at points outside the target regions. Concretely, if a pixel pxl of the current frame Fcur lies in no target rectangle, the background pixel is replaced by the average of Fcur and Bg at that coordinate. The detected foreground-object coordinates, sizes, sub-images, and motion-segment information are saved.
Step 3 specifically comprises the following process:
(1) H264-based background modeling: save the coordinates, sizes, and sub-images of the detected foreground objects together with the start-frame and end-frame information of the motion segments. If the number of segments accumulated in memory reaches Tsec, or the last frame of the video has been reached and the segment count is greater than 1, go to step (2); if the last frame has been reached and the segment count is 0, the program exits.
(2) Add the first segment to set A and copy all target images of its first frame onto the background image.
(3) For each remaining segment in turn, check whether it and the segments in set A satisfy the user-set values for both the average object count and the maximum overlap; if so, add the segment to set A.
(4) Copy the target images of the corresponding frame of every segment in set A onto the background image. If a segment has copied its last frame, delete it from set A. If all segments have been copied and no segment remains in set A, go to step (1); otherwise advance to the next frame and go to step (3).
In an actual test, an 8-minute surveillance video with resolution 1280 × 720 was processed with Gaussian-mixture background modeling, ViBe background modeling, and the modeling algorithm of the invention (the video was cut into 5 segments, each concentrated by an independent process). The total time of the invention comprises: 1. the video cutting time; 2. the concentration time of each segment; 3. the time spent splicing the segments after concentration. The performance comparison is given in Table 1.
Table 1

                         GMM modeling   ViBe modeling   Proposed algorithm
Average time per frame   41.46 ms       22.45 ms        12.24 ms
Total time               550.66 s       316.44 s        47.82 s

As Table 1 shows, compared with the prior art, the massive video summary generation method based on clustering and the H264 video concentration algorithm greatly improves efficiency, by more than 100%. Fig. 5 plots, for a 100-minute video, the relation between the total concentration time and the number of parallel processes.
The embodiment above only illustrates the technical idea of the invention and does not limit its scope of protection; any change made to the technical scheme on the basis of the technical idea proposed by the invention falls within the scope of protection.

Claims (3)

1. A massive video summary generation method based on clustering and an H264 video concentration algorithm, characterized by comprising the following steps:
(1) selecting an original video and cutting it to obtain n segments of approximately equal length, the coding format being H264, where n is a natural number;
(2) decoding each cut segment, obtaining foreground targets from motion estimation and a background image, refining the detection rate of each segment with a sparse-optical-flow-based false-positive deletion and missed-detection repair algorithm, and updating the background image;
(3) compressing each segment that contains motion information as a concentration unit, and splicing the compressed segments to generate a complete video summary.
2. The massive video summary generation method based on clustering and an H264 video concentration algorithm according to claim 1, characterized in that the concentration operations of steps (2) and (3) are carried out on the n segments of approximately equal length in parallel, each segment being processed independently of the others.
3. The massive video summary generation method based on clustering and an H264 video concentration algorithm according to claim 1, characterized in that step (1) specifically comprises the following process:
Suppose the i-th frame of the original video is the cut point set by the user. Define the frame range F ∈ [i − k×f, i + k×f], where k is the iteration count and f is a constant. Search within this range for a frame j that contains no foreground target and minimises |i − j|.
Within [i − k×f, i + k×f], if m consecutive frames all have motion-estimation values below the threshold Tmv, they are considered to contain no foreground target, and these m frames are all background frames, yielding a background image. If F contains no background frame, set k = k + 1 and repeat step (1). If frame j (j ∈ [i − k×f, i + k×f]) is a background frame, compute |i − j|; the j that minimises |i − j| becomes the cut point of the video, and the loop exits.
Step (2) specifically comprises the following process:
If the motion-estimation value is below the threshold Tmv, the frame is considered to contain no foreground target; if several consecutive frames contain no foreground target, a background image is obtained. For a P frame or B frame, first check whether the motion estimation of the current frame exceeds Tmv. If it does, convert both the current image and the background image to grayscale and subtract them pixel by pixel; wherever the absolute difference exceeds a threshold Tdiff, set the pixel to 255, otherwise set it to 0, yielding a binary image M. If the motion estimation is below Tmv, no processing is done and computation continues with the next frame. For an I frame, the difference image M between the current image and the background image is computed in the same way.
Noise removal and target extraction: first apply a morphological closing to the binary image M, then extract the outermost contour of each object and represent its size and position with a bounding rectangle. If both the width and the height of the rectangle exceed the threshold Tlen, it is considered a foreground target; otherwise it is treated as noise.
Target tracking: compute the pairwise rectangle overlap between every target of the current frame and every target of the next frame. If the greatest overlap between a target and the next frame exceeds the threshold Toverlap, the two rectangles are considered the same target and tracking succeeds. If tracking fails and the object has moved near the image border, the object is taken to have left the field of view (or region of interest) in the next frame, and no further tracking is needed.
Removing false positives and repairing missed detections: if a target is not tracked successfully in the next frame, a missed detection is assumed. To repair it, first compute the Harris corners inside the rectangle; any corner whose corresponding pixel in the binary image M is 0 is rejected. Next, track the remaining corners with optical flow and compute their average horizontal and vertical displacements dx and dy; translating the current target by dx and dy pixels in the horizontal and vertical directions gives its position in the next frame. If a target suffers missed detections over several consecutive frames, and the number of consecutive missed frames exceeds Tm, the target is considered a false positive and is deleted.
Updating the background image Bg: after the steps above have determined which regions of the image belong to the foreground and which to the background, the background is updated only at points outside the target regions. Concretely, if a pixel pxl of the current frame Fcur lies in no target rectangle, the background pixel is replaced by the average of Fcur and Bg at that coordinate. The detected foreground-object coordinates, sizes, sub-images, and motion-segment information are saved.
Step (3) specifically comprises the following process:
(1) H264-based background modeling: save the coordinates, sizes, and sub-images of the detected foreground objects together with the start-frame and end-frame information of the motion segments. If the number of segments accumulated in memory reaches Tsec, or the last frame of the video has been reached and the segment count is greater than 1, go to step (2); if the last frame has been reached and the segment count is 0, the program exits.
(2) Add the first segment to set A and copy all target images of its first frame onto the background image.
(3) For each remaining segment in turn, check whether it and the segments in set A satisfy the user-set values for both the average object count and the maximum overlap; if so, add the segment to set A.
(4) Copy the target images of the corresponding frame of every segment in set A onto the background image. If a segment has copied its last frame, delete it from set A. If all segments have been copied and no segment remains in set A, go to step (1); otherwise advance to the next frame and go to step (3).
CN201510802199.4A 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video concentration algorithm Active CN105357594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510802199.4A CN105357594B (en) 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video concentration algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510802199.4A CN105357594B (en) 2015-11-19 2015-11-19 Massive video summary generation method based on clustering and an H264 video concentration algorithm

Publications (2)

Publication Number Publication Date
CN105357594A true CN105357594A (en) 2016-02-24
CN105357594B CN105357594B (en) 2018-08-31

Family

ID=55333431

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510802199.4A Active CN105357594B (en) Massive video summary generation method based on clustering and an H264 video concentration algorithm

Country Status (1)

Country Link
CN (1) CN105357594B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107018367A (en) * 2017-04-11 2017-08-04 深圳市粮食集团有限公司 A kind of method and system for implementing grain monitoring
CN107657626A (en) * 2016-07-25 2018-02-02 浙江宇视科技有限公司 The detection method and device of a kind of moving target
CN107943837A (en) * 2017-10-27 2018-04-20 江苏理工学院 A kind of video abstraction generating method of foreground target key frame
WO2019041661A1 (en) * 2017-08-31 2019-03-07 苏州科达科技股份有限公司 Video abstract generating method and device
CN110996169A (en) * 2019-07-12 2020-04-10 北京达佳互联信息技术有限公司 Method, device, electronic equipment and computer-readable storage medium for clipping video
CN111526434A (en) * 2020-04-24 2020-08-11 西北工业大学 Converter-based video abstraction method
CN113051415A (en) * 2019-12-27 2021-06-29 浙江宇视科技有限公司 Image storage method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090219300A1 (en) * 2005-11-15 2009-09-03 Yissum Research Deveopment Company Of The Hebrew University Of Jerusalem Method and system for producing a video synopsis
CN102375816A (en) * 2010-08-10 2012-03-14 中国科学院自动化研究所 Online video concentration device, system and method
CN102708182A (en) * 2012-05-08 2012-10-03 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN104284057A (en) * 2013-07-05 2015-01-14 浙江大华技术股份有限公司 Video processing method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090219300A1 (en) * 2005-11-15 2009-09-03 Yissum Research Deveopment Company Of The Hebrew University Of Jerusalem Method and system for producing a video synopsis
CN102375816A (en) * 2010-08-10 2012-03-14 中国科学院自动化研究所 Online video concentration device, system and method
CN102708182A (en) * 2012-05-08 2012-10-03 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN104284057A (en) * 2013-07-05 2015-01-14 浙江大华技术股份有限公司 Video processing method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Gang, Xie Shanyi: "Application of intelligent video synopsis and retrieval technology in substation monitoring", Telecom World (通讯世界) *
Ma Tingting: "Application and research of object-separation-based video synopsis technology in the security industry", Computer Knowledge and Technology (电脑知识与技术) *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107657626A (en) * 2016-07-25 2018-02-02 浙江宇视科技有限公司 The detection method and device of a kind of moving target
CN107657626B (en) * 2016-07-25 2021-06-01 浙江宇视科技有限公司 Method and device for detecting moving target
CN107018367A (en) * 2017-04-11 2017-08-04 深圳市粮食集团有限公司 A kind of method and system for implementing grain monitoring
WO2019041661A1 (en) * 2017-08-31 2019-03-07 苏州科达科技股份有限公司 Video abstract generating method and device
CN107943837A (en) * 2017-10-27 2018-04-20 江苏理工学院 A kind of video abstraction generating method of foreground target key frame
CN107943837B (en) * 2017-10-27 2022-09-30 江苏理工学院 Key-framed video abstract generation method for foreground target
CN110996169A (en) * 2019-07-12 2020-04-10 北京达佳互联信息技术有限公司 Method, device, electronic equipment and computer-readable storage medium for clipping video
CN113051415A (en) * 2019-12-27 2021-06-29 浙江宇视科技有限公司 Image storage method, device, equipment and storage medium
CN111526434A (en) * 2020-04-24 2020-08-11 西北工业大学 Converter-based video abstraction method

Also Published As

Publication number Publication date
CN105357594B (en) 2018-08-31

Similar Documents

Publication Publication Date Title
CN105357594A (en) Massive video abstraction generation method based on cluster and H264 video concentration algorithm
CN102006475B (en) Video coding and decoding device and method
CN102708182A (en) Rapid video concentration abstracting method
CN105704434A (en) Stadium population monitoring method and system based on intelligent video identification
CN102833492A (en) Color similarity-based video scene segmenting method
CN109377515A (en) A kind of moving target detecting method and system based on improvement ViBe algorithm
CN103618911A (en) Video streaming providing method and device based on video attribute information
SG11201903285VA (en) Image encoding device, image encoding method, and image encoding program, and image decoding device, image decoding method, and image decoding program
CN109785356A (en) A kind of background modeling method of video image
CN105405153A (en) Intelligent mobile terminal anti-noise interference motion target extraction method
CN103914822A (en) Interactive video foreground object extraction method based on super pixel segmentation
CN103824074A (en) Crowd density estimation method based on background subtraction and texture features and system
CN104253994A (en) Night monitored video real-time enhancement method based on sparse code fusion
CN103974068B (en) A kind of method that video size based on content reduces
CN110751668B (en) Image processing method, device, terminal, electronic equipment and readable storage medium
CN104867110A (en) Lattice Boltzmann model-based video image defect repairing method
CN113139507B (en) Automatic capturing method and system for drainage pipeline defect photos
CN115661280A (en) Method and device for implanting multimedia into video, electronic equipment and storage medium
CN114298992A (en) Video frame duplication removing method and device, electronic equipment and storage medium
CN108961300B (en) Image segmentation method and device
CN107483936B (en) A kind of light field video inter-prediction method based on macro pixel
CN115908427B (en) Pavement disease maintenance cost prediction method and system based on semantic segmentation and SVM
CN104935830A (en) Splicing display apparatus video information rendering and displaying methods and systems
Ratnarajah et al. Moving object based collision-free video synopsis
CN104486524A (en) Method for detecting whether images are subjected to two times of JPEG compression with same compression quality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Massive video summarization generation method based on clustering and H264 video concentration algorithm

Effective date of registration: 20221121

Granted publication date: 20180831

Pledgee: Nanjing Branch of Jiangsu Bank Co.,Ltd.

Pledgor: NANJING YUNCHUANG BIG DATA TECHNOLOGY Co.,Ltd.

Registration number: Y2022980022505