CN103686095A - Video concentration method and system - Google Patents


Publication number
CN103686095A
Authority
CN
China
Prior art keywords
target
video
frame
sequence
frame number
Prior art date
Legal status: Granted
Application number
CN201410001188.1A
Other languages
Chinese (zh)
Other versions
CN103686095B (en)
Inventor
秦兴德
唐伟
吴金勇
王军
刁德峰
Current Assignee
China Security and Fire Technology Co Ltd
Original Assignee
China Security and Fire Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Security and Fire Technology Co Ltd filed Critical China Security and Fire Technology Co Ltd
Priority to CN201410001188.1A priority Critical patent/CN103686095B/en
Publication of CN103686095A publication Critical patent/CN103686095A/en
Application granted granted Critical
Publication of CN103686095B publication Critical patent/CN103686095B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video concentration method and a video concentration system, belonging to the technical field of video processing. The method comprises the following steps: video frames are subjected to background modeling analysis, the foreground targets and background image of each frame are segmented, the motion trajectory of each target is extracted, and the target sequences and background image are stored; the target sequences of the motion trajectories are subjected to optimized ordering, and a new frame-number sequence is generated and stored; the foreground targets and the background image are seamlessly fused according to a pixel fusion algorithm; and the stored new frame-number sequence and background image are read to generate the compressed video. When the method and system provided by the embodiments of the invention are adopted, the length of the compressed video is shortened while the moving-target information in the video is retained as far as possible, collisions among multiple targets are effectively prevented, and a better visual effect is achieved.

Description

Video concentration method and system
Technical field
The present invention relates to the technical field of video processing, and in particular to a video concentration method and system.
Background technology
In recent years, with the rapid development of smart cities and safe cities, tens of thousands of surveillance cameras have been installed in parks, stadiums, large squares, schools, hospitals, commercial streets, residential quarters, and other places of public activity and assembly. Multimedia, traffic, and security video data are growing explosively, and managing and analyzing these videos is considerably difficult. The traditional browsing mode often requires a great deal of manpower and time and falls far short of people's demands for accessing and querying video information. A fast, convenient video browsing and retrieval method and system with good visual effect is urgently needed; video concentration technology has therefore emerged.
The optimization of multiple targets' motion trajectories and the fusion are the two key algorithms of video concentration. The trajectory optimization must preserve the change process of the moving targets in the concentrated video while preventing targets from colliding with and occluding one another. Such optimization problems are conventionally solved with simulated annealing, particle swarm, graph-cut, and similar algorithms, which suffer from high complexity, difficult implementation, and low efficiency, and are therefore hard to use in production systems. The fusion method ensures that, in each frame image of the final concentrated video, there is no visually perceptible edge between targets and background or between targets; however, the Poisson equation of the conventional Poisson image fusion is difficult to solve and has low performance, so real-time processing is impossible.
Summary of the invention
In view of this, the technical problem to be solved by the present invention is to provide a video concentration method and system, so as to overcome the defects of the prior art: high computational resource consumption, low efficiency, and inability to achieve real-time processing.
The technical solution adopted by the present invention to solve the above technical problem is as follows:
According to one aspect of the present invention, a video concentration method is provided, comprising:
Moving target detection and extraction: performing background modeling analysis on the video frames, segmenting the foreground targets and background image of each frame, extracting the motion trajectory of each target, and storing the target sequences and the background image;
Motion trajectory combinatorial optimization: optimizing the ordering of the target sequences of the motion trajectories, and generating and storing a new frame-number sequence;
Target-background fusion: seamlessly fusing the foreground targets with the background image according to a pixel fusion algorithm;
Concentrated video generation: reading the stored new frame-number sequence and background image, and generating the compressed video.
Preferably, the method further comprises, beforehand, video acquisition: obtaining the video to be processed, decoding videos of different encodings, and decoding each frame into RGB color data.
Preferably, the motion trajectory combinatorial optimization further comprises:
Sorting the targets according to the time order in which they appear in the video;
Generating the trajectory frame-number sequence of the first moving target;
Cyclically generating the trajectory frame-number sequence of the next moving target until frame-number sequences have been generated for the trajectories of all targets;
Storing the motion trajectory sequences of all targets in a database.
Preferably, generating the trajectory frame-number sequence of the next moving target further comprises: if the number of targets contained in the current frame exceeds a given target-number threshold, shifting the current target's start frame later; otherwise, calculating the sum of the crossing areas between the target sequence of the current target and the target sequences of the other targets appearing in the current frame, and, if the crossing-area sum exceeds a given crossing-area threshold, shifting the current target's start frame later.
Preferably, calculating the sum of the crossing areas between the target sequence of the current target and the target sequences of the other targets appearing at the start frame further comprises:
Calculating the crossing area in each frame: starting from the start frame position, calculating the area crossed with the other target sequences, where the crossing area within each frame is the sum of the intersection areas between the rectangle enclosing this target and the rectangles of the other targets;
Calculating the total crossing-area sum: the sum of the crossing areas over all frames that contain the current target.
Preferably, the target-background fusion further comprises:
Preprocessing the target image to obtain the boundary-point coordinates and weights;
Calculating the interpolation weights from the mean-value coordinates;
Calculating the pixel-value differences between the background image and the target image at the corresponding boundary points;
Calculating the mean difference at the sampled points;
Fusing the images at the sampled and non-sampled points.
Preferably, a target sequence comprises: the target ID, the frame number, and the target's left boundary, right boundary, bottom boundary, and top boundary in the original video frame.
According to another aspect of the present invention, a video concentration system is provided, comprising:
A moving target detection and extraction module: performing background modeling analysis on the video frames, segmenting the foreground targets and background image of each frame, extracting the motion trajectory of each target, and storing the target sequences and the background image;
A motion trajectory combinatorial optimization module: optimizing the ordering of the target sequences of the motion trajectories, and generating and storing a new frame-number sequence;
A target-background fusion module: seamlessly fusing the foreground targets with the background image according to a pixel fusion algorithm;
A concentrated video generation module: reading the stored new frame-number sequence and background image, and generating the compressed video.
Preferably, the motion trajectory combinatorial optimization module is specifically configured to: sort the targets according to the time order in which they appear in the video; generate the trajectory frame-number sequence of the first moving target; cyclically generate the trajectory frame-number sequence of the next moving target until frame-number sequences have been generated for the trajectories of all targets; and store the motion trajectory sequences of all targets in a database.
Preferably, the target-background fusion module is specifically configured to: preprocess the target image to obtain the boundary-point coordinates and weights; calculate the interpolation weights from the mean-value coordinates; calculate the pixel-value differences between the background image and the target image at the corresponding boundary points; calculate the mean difference at the sampled points; and fuse the images at the sampled and non-sampled points.
With the method and system of the embodiments of the present invention, collisions between targets are avoided by means of the crossing area between target trajectories, and the time consistency between targets is preserved as far as possible according to the targets' end-frame positions; sampling reduces the algorithm complexity while retaining as much target information as possible. The length of the concentrated video is thereby shortened while the moving-target information in the video is retained as far as possible, collisions among multiple targets are effectively prevented, and a better visual effect is achieved.
Brief description of the drawings
Fig. 1 is a flow chart of a video concentration method provided by an embodiment of the present invention.
Fig. 2 is a flow chart of a motion trajectory combinatorial optimization method provided by a preferred embodiment of the present invention.
Fig. 3 is an example of the crossing-area calculation between target trajectories according to the present invention.
Fig. 4 is a flow chart of a target-background fusion method provided by a preferred embodiment of the present invention.
Fig. 5 is an example of dividing a target image into an interior region and an exterior region according to the present invention.
Fig. 6 is a structural diagram of a video concentration system provided by an embodiment of the present invention.
Detailed description
In order to make the technical problem to be solved by the present invention, the technical solution, and the beneficial effects clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here only serve to explain the present invention and are not intended to limit it.
Embodiment 1
Fig. 1 is a flow chart of a video concentration method provided by an embodiment of the present invention; the method comprises:
S102, moving target detection and extraction: performing background modeling analysis on the video frames, segmenting the foreground targets and background image of each frame, extracting the motion trajectory of each target, and storing the target sequences and the background image.
Specifically, step S102 further comprises:
(1) Input the video to be concentrated.
Specifically, in this step the video to be processed is obtained, videos of different encodings are decoded, and each frame is decoded into RGB color data.
(2) Perform background update, target detection, and target tracking on the video.
Specifically, the background update adopts a Gaussian mixture model. In this model, K Gaussians (K is generally 3 to 5) characterize the feature of each pixel in the image; after a new frame image is obtained, the Gaussian mixture model is updated, and each pixel of the current image is matched against it: if the match succeeds, the point is judged to be a background point, otherwise a foreground point. A background image can be generated after learning over multiple frames, and the foreground constitutes the moving targets, so background modeling and moving-target segmentation are completed simultaneously.
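The per-pixel match-and-update cycle described above can be sketched as follows. This is a deliberately simplified single-Gaussian variant (the patent's model mixes K Gaussians per pixel); the function and parameter names are illustrative, not from the original.

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, match_sigma=2.5):
    """One running-Gaussian update per pixel: a simplified stand-in for the
    K-Gaussian mixture described in the text (K is typically 3 to 5)."""
    diff = frame.astype(np.float64) - mean
    # A pixel matches the background model if it lies within match_sigma std devs.
    matched = diff ** 2 <= (match_sigma ** 2) * var
    # Matched pixels update the Gaussian; unmatched pixels are foreground.
    mean = np.where(matched, mean + lr * diff, mean)
    var = np.where(matched, (1 - lr) * var + lr * diff ** 2, var)
    foreground = ~matched
    return mean, var, foreground

# Static scene with one bright moving-object pixel.
mean = np.full((4, 4), 10.0)
var = np.full((4, 4), 4.0)
frame = np.full((4, 4), 10.0)
frame[1, 1] = 200.0  # moving-object pixel
mean, var, fg = update_background(mean, var, frame)
print(fg[1, 1], fg[0, 0])  # True False
```

In the full mixture model, each pixel keeps K such Gaussians and the unmatched case replaces the least probable component rather than simply flagging foreground.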
Target detection uses frame differencing to obtain an accurate target image and segmentation. After background modeling is completed, the background image is subtracted from the current frame to obtain a difference mask; this difference mask is binarized, and median filtering is used to remove noise points, yielding a clear target contour, from which the target's position in this frame, namely its left, right, bottom, and top boundaries, is extracted. The background image is continuously updated to complete moving-target detection and to separate the moving targets of each frame; if the distance between two targets is less than a predetermined threshold, the targets are merged into one target. The predetermined threshold is set empirically.
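A minimal sketch of this frame-differencing step, assuming a grayscale frame and an already learned background; the median-filtering stage is omitted for brevity, and all names and threshold values are illustrative.

```python
import numpy as np

def detect_target(frame, background, thresh=30):
    """Frame differencing as described in the text: subtract the background,
    binarize the difference mask, then read off the target's bounding box
    (left, right, bottom, top) in image coordinates."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # no foreground in this frame
    # left, right, bottom, top boundaries of the detected foreground region
    return int(xs.min()), int(xs.max()), int(ys.max()), int(ys.min())

background = np.zeros((10, 10), dtype=np.uint8)
frame = background.copy()
frame[3:6, 4:8] = 255  # a moving target occupying rows 3..5, cols 4..7
print(detect_target(frame, background))  # (4, 7, 5, 3)
```

A production version would first median-filter the binary mask and extract one box per connected component, so that nearby targets can be merged by the distance rule described above.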
Target tracking uses a moving-average method. Initially, if the distance between the centers of targets in consecutive frames is less than a given threshold, they are judged to be the same target. Once a target sequence is longer than 2, the mean center of the last K targets (K is generally 2 to 10) of each sequence is calculated; if the distance between the current frame target's center and a mean position is less than the set threshold, the sequence at the smallest distance is judged to be the same target, otherwise a new target is created.
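The moving-average association rule can be sketched as below; the track store, distance threshold, and K value are illustrative choices, not values fixed by the patent.

```python
def associate(tracks, detection, dist_thresh=20.0, k=3):
    """Moving-average data association sketch: match a new detection to the
    track whose mean center over its last k positions is closest, if within
    dist_thresh; otherwise start a new track. Names and parameters are
    illustrative assumptions."""
    cx, cy = detection
    best, best_d = None, dist_thresh
    for tid, centers in tracks.items():
        recent = centers[-k:]  # moving average over the last k centers
        mx = sum(c[0] for c in recent) / len(recent)
        my = sum(c[1] for c in recent) / len(recent)
        d = ((cx - mx) ** 2 + (cy - my) ** 2) ** 0.5
        if d < best_d:
            best, best_d = tid, d
    if best is None:
        best = len(tracks)  # no match within threshold: a fresh target
        tracks[best] = []
    tracks[best].append(detection)
    return best

tracks = {}
print(associate(tracks, (10.0, 10.0)))  # 0  -> new track
print(associate(tracks, (12.0, 11.0)))  # 0  -> matched to the same target
print(associate(tracks, (90.0, 90.0)))  # 1  -> far away: a fresh target
```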
(3) The target sequences and background image obtained above are stored separately in a database. A target sequence comprises each target's frame number and left, right, bottom, and top boundaries in the original video frame; each target and the background image each have a unique ID, under which they are stored in the database.
S104, motion trajectory combinatorial optimization: according to the number of targets in the current frame and the crossing-area sum between the target sequence of the current target and the target sequences of the other targets appearing in the current frame, the target sequences of the motion trajectories are subjected to optimized ordering, and a new frame-number sequence is generated and stored.
S106, target-background fusion: the foreground targets and the background image are seamlessly fused according to a pixel fusion algorithm.
S108, concentrated video generation: the stored new frame-number sequence and background image are read, and the compressed video is generated.
Specifically, the background image, the target motion sequences, and the newly generated frame numbers are read from the database; the background and targets are fused into one frame image, and finally the concentrated video is generated.
With the method of the embodiment of the present invention, collisions between targets are avoided by means of the crossing area between target trajectories, and the time consistency between targets is preserved as far as possible according to the targets' end-frame positions; sampling reduces the algorithm complexity while retaining as much target information as possible. The length of the concentrated video is thereby shortened while the moving-target information in the video is retained as far as possible, collisions among multiple targets are effectively prevented, and a better visual effect is achieved. By automatically extracting useful targets and events from massive video, video concentration is realized, the manpower needed to watch the video is saved, and the storage space is reduced, thereby greatly facilitating the browsing and storage of surveillance video; the method can be used for surveillance video data from parks, stadiums, large squares, schools, hospitals, commercial streets, residential quarters, and other places of public activity and assembly.
Fig. 2 is a flow chart of a motion trajectory combinatorial optimization method provided by a preferred embodiment of the present invention; the method comprises:
S202, sort the targets according to the time order in which they appear in the video.
Specifically, the target that appears earliest comes first; the motion trajectory of each target corresponds to a moving-target sequence, each moving-target sequence has a corresponding ID, and the total number of targets is ObjNum.
S204, generate the trajectory frame-number sequence of the first moving target.
Specifically, with the first frame number of this sequence denoted StartFrame = 0, the frame-number sequence is:
{StartFrame, StartFrame+1, …, StartFrame+ObjLength}
where ObjLength is the length of the target trajectory.
S206, generate the trajectory frame-number sequence of the next moving target.
Specifically, the trajectory frame-number sequence of this target is denoted:
CurTrace = {obj_1, obj_2, …, obj_ObjLength}
where each obj_i = [LeftEdge, RightEdge, Bottom, Top] gives the target's left, right, bottom, and top boundaries in the original video, i = 1, …, ObjLength. The start frame number StartFrame is calculated as follows:
(a) Obtain the IDs of all targets at the start frame and each target's corresponding end frame number, and update the start frame to the minimum of these end frame numbers; this guarantees the time consistency of the concentrated video.
(b) Obtain the IDs and the number of all targets at the start frame.
(c) If the number of targets contained at the start frame is greater than the preset target-number threshold, update the start frame number StartFrame:
StartFrame = StartFrame + Interval
where the target-number threshold is the maximum number of targets allowed per frame, with a value range of [5, 25]; as a preferred scheme, the value is 15. Interval is the frame-skip count, commonly Interval = 5, and may take a value between 1 and 30; then return to step (b).
If the number of targets in the current frame is less than the preset target-number threshold, go to step (d).
(d) Obtain the motion sequences of all targets contained at the start frame, and calculate the sum of the crossing areas between CurTrace and all Trace_i; if the crossing-area sum is greater than the crossing-area threshold, continue with step (d), otherwise go to step (e).
Specifically, with n the number of targets contained in this frame, the motion sequences start at frame StartFrame and are denoted:
Trace_i = {obj_StartFrame, obj_StartFrame+1, …, obj_end}
Calculate the crossing-area sum CrossArea between CurTrace and all Trace_i, where i = 1, …, n.
As a preferred scheme, the crossing-area threshold between targets is CrossAreaThreshold = 30000 pixels; the smaller this value, the smaller the mutually colliding portion between targets. The range of CrossAreaThreshold is [0, 30000].
Fig. 3 shows an example of the crossing-area calculation, in which CurTrace is the current target sequence and Trace1, Trace2, Trace3 are three target sequences for which new frame numbers have already been generated. The crossing area in each frame is calculated starting from the StartFrame position as the area CurTrace crosses with the other target sequences: within each frame, the crossing area is the sum of the intersection areas between the rectangle enclosing this target and the rectangles of the other targets.
The total crossing-area sum is the sum of the crossing areas over all frames that contain this target.
The crossing area cross1 of CurTrace and Trace1 is the sum of the intersection areas of the 5 rectangle pairs of CurTrace{obj1, obj2, obj3, obj4, obj5} and the corresponding Trace1{obj4, obj5, obj6, obj7, obj8};
the crossing area cross3 of CurTrace and Trace3 is the sum of the intersection areas of the 6 rectangle pairs of CurTrace{obj1, obj2, obj3, obj4, obj5, obj6} and the corresponding Trace3{obj7, obj8, obj9, obj10, obj11, obj12};
CurTrace lies after the last frame of Trace2, so the two do not cross. The crossing-area sum of CurTrace with the other target sequences at start frame position StartFrame is therefore:
CrossArea = cross1 + cross3
If CrossArea > CrossAreaThreshold:
StartFrame = StartFrame + Interval
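The start-frame search of steps (b) to (e) can be sketched as follows, under simplifying assumptions: boxes are given as (left, right, bottom, top) tuples as in the target sequences, each already placed trajectory is a map from frame number to box, and step (a)'s end-frame initialization is omitted. All names are illustrative.

```python
def rect_cross(a, b):
    """Intersection area of two boxes given as (left, right, bottom, top),
    with bottom >= top in image coordinates."""
    w = min(a[1], b[1]) - max(a[0], b[0])
    h = min(a[2], b[2]) - max(a[3], b[3])
    return max(w, 0) * max(h, 0)

def find_start_frame(cur, placed, max_targets=15, area_thresh=30000, interval=5):
    """Shift the current trajectory later by `interval` frames until both the
    per-frame target-count threshold and the total crossing-area threshold
    are satisfied, as in steps (b) to (e)."""
    start = 0
    while True:
        # Frames the current trajectory would occupy if it started at `start`.
        frames = range(start, start + len(cur))
        count = max((sum(1 for t in placed if f in t) for f in frames), default=0)
        if count + 1 > max_targets:  # step (c): too many concurrent targets
            start += interval
            continue
        cross = sum(rect_cross(cur[i], t[f])
                    for i, f in enumerate(frames)
                    for t in placed if f in t)
        if cross > area_thresh:      # step (d): trajectories collide too much
            start += interval
            continue
        return start                 # step (e): accept this start frame

# One placed trajectory occupies frames 0..1 with a large box; the current
# trajectory's boxes overlap it heavily, so its start frame is pushed later.
placed = [{0: (0, 200, 200, 0), 1: (0, 200, 200, 0)}]
cur = [(0, 200, 200, 0), (0, 200, 200, 0)]
print(find_start_frame(cur, placed))  # 5
```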
(e) Generate the frame-number sequence of CurTrace:
CurTrace = {StartFrame, StartFrame+1, …, StartFrame+ObjLength}
where ObjLength is the length of CurTrace.
S208, judge whether frame-number sequences have been generated for the trajectories of all targets; if so, execute step S210, otherwise return to step S206.
S210, store the motion trajectory sequences of all targets in a database.
Through this embodiment of the present invention, an efficient anti-collision method for optimizing multiple targets' motion trajectories is realized: collisions between targets are avoided by means of the crossing area between target trajectories, the time consistency between targets is preserved as far as possible according to the targets' end-frame positions, and the concentration is controlled by two parameters, the target number and the crossing area, yielding a good visual experience.
Fig. 4 is a flow chart of a target-background fusion method provided by a preferred embodiment of the present invention; the method comprises the following steps.
Specifically, the fusion algorithm fuses a target image into a background image of equal size and comprises the following steps:
S302, preprocess the target image to obtain the boundary-point coordinates and weights.
Specifically, this step further comprises:
(a) Sample the boundary-point coordinates ∂P of the target image at equal intervals.
Specifically, sampling proceeds clockwise, taking one point every 12 points; the larger the sampling interval, the worse the effect, and values between 2 and 20 are common.
(b) Extract the pixel coordinates P of the target image.
Specifically, the image is divided into two parts, an interior region and an exterior region (Fig. 5 shows an example of this division for a target image). For the exterior region, the coordinates P_exter of all its points are retained; for the interior region, the coordinates P_inter are extracted by sampling. The sampling method is: starting from the upper-left corner of the interior region, with an initial value P_0 = -50, if |P(i, j) - P_0| > thresh, retain the coordinate of point P(i, j) and update P_0 = P(i, j), where thresh is a given threshold, generally between 5 and 25, and P(i, j) is the first-channel pixel value of an interior point. The set of all point coordinates participating in the calculation is P = [P_exter, P_inter]; the coordinates of the remaining pixels are denoted P̄.
(c) Mean-value coordinates.
Specifically, for any coordinate x ∈ P and the boundary points ∂P, the mean-value coordinates of x are calculated as:
λ_i = w_i / Σ_{j=0}^{m−1} w_j,  i = 0, …, m−1
where
w_i = (tan(α_{i−1}/2) + tan(α_i/2)) / ‖p_i − x‖
α_{i−1} is the angle ∠(p_{i−1}, x, p_i), α_i is the angle ∠(p_i, x, p_{i+1}), and the boundary points satisfy p_{i−1}, p_i, p_{i+1} ∈ ∂P. This process is denoted MVC(x, y, ∂P), where (x, y) are the concrete coordinates of x ∈ P.
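A small numeric sketch of the mean-value-coordinate weights defined above, using the tan(α/2) formula; polygon vertices stand in for the sampled boundary points ∂P, and the function names are illustrative.

```python
import math

def angle(x, p, q):
    """Angle at vertex x between rays x->p and x->q."""
    v1 = (p[0] - x[0], p[1] - x[1])
    v2 = (q[0] - x[0], q[1] - x[1])
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(max(-1.0, min(1.0, cos)))

def mvc_weights(x, boundary):
    """Mean-value coordinates of point x with respect to the closed polygon
    `boundary` (list of (x, y) vertices), following the tan(alpha/2) formula
    in the text. Returns weights lambda_i that sum to 1."""
    m = len(boundary)
    w = []
    for i in range(m):
        p_prev, p_i, p_next = boundary[i - 1], boundary[i], boundary[(i + 1) % m]
        d = math.dist(x, p_i)
        a_prev = angle(x, p_prev, p_i)   # alpha_{i-1} = angle(p_{i-1}, x, p_i)
        a_next = angle(x, p_i, p_next)   # alpha_i     = angle(p_i, x, p_{i+1})
        w.append((math.tan(a_prev / 2) + math.tan(a_next / 2)) / d)
    s = sum(w)
    return [wi / s for wi in w]

# Center of a unit square: all four boundary vertices get equal weight 0.25.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
lam = mvc_weights((0.5, 0.5), square)
print([round(v, 3) for v in lam])  # [0.25, 0.25, 0.25, 0.25]
```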
S304, calculate the interpolation weights from the mean-value coordinates.
Specifically, for any point x ∈ P = [P_exter, P_inter], its interpolation weights are calculated from the mean-value coordinates:
λ_0(x), λ_1(x), …, λ_{m−1}(x) = MVC(x, y, ∂P)
where (x, y) are the coordinates of x ∈ P, m is the number of boundary points ∂P, λ_i(x) are the interpolation weights calculated from the mean-value coordinates, and MVC(·) is the weight-calculating function introduced in preprocessing step (c).
S306, calculate the pixel-value differences between the background image and the target image at the corresponding boundary points:
diff_i = f*(p_i) − g(p_i)
where p_i ∈ ∂P is the coordinate of a boundary point, f*(p_i) is the pixel value of the background image at point p_i, and g(p_i) is the pixel value of the target image at point p_i.
S308, calculate the mean difference at the sampled points.
Specifically, the mean interpolation r(x) of a point x ∈ P is calculated as:
r(x) = Σ_{i=0}^{m−1} λ_i(x) · diff_i
where m is the number of boundary points, λ_i(x) are the interpolation weights calculated from the mean-value coordinates, and diff_i is the pixel-value difference between the background image and the target image at the corresponding boundary point.
S310, fuse the image at the sampled points.
Specifically, the fused image at a point x ∈ P is:
f(x) = g(x) + r(x)
where f(x) is the fused pixel value at point x ∈ P, g(x) is the pixel value of the target image at point x ∈ P, and r(x) is the mean interpolation at point x ∈ P.
S312, fuse the image at the non-sampled points.
Specifically, a point x̄ that was not sampled is fused as:
f(x̄) = g(x̄) + r(x_0)
where g(x̄) is the pixel value of the target image at point x̄, and r(x_0) is the mean interpolation, already calculated in the previous step, of the sampled point x_0 to which x̄ corresponds.
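Putting S306 to S312 together, a toy sketch of the fusion over a handful of pixels, assuming the mean-value weights have already been computed; all names and the 1-D pixel layout are illustrative simplifications.

```python
def fuse(target, background, boundary_idx, interior_idx, mvc):
    """Membrane-style fusion sketch over a 1-D list of pixels: compute the
    background/target difference at the boundary points (S306), interpolate
    it inward with precomputed mean-value weights mvc[x][i] (S308), and add
    it to the target image (S310)."""
    diff = [background[i] - target[i] for i in boundary_idx]  # diff_i
    fused = list(target)
    for x in interior_idx:
        r = sum(l * d for l, d in zip(mvc[x], diff))  # smooth offset r(x)
        fused[x] = target[x] + r                      # f(x) = g(x) + r(x)
    for i in boundary_idx:
        fused[i] = background[i]  # boundary pixels take the background value
    return fused

# Toy 1-D example: 2 boundary pixels around 1 interior pixel.
target = [10.0, 10.0, 10.0]
background = [20.0, 0.0, 30.0]
boundary = [0, 2]
interior = [1]
mvc = {1: [0.5, 0.5]}  # the interior point weights both boundary points equally
print(fuse(target, background, boundary, interior, mvc))  # [20.0, 25.0, 30.0]
```

Because the offset r(x) varies smoothly, the pasted target blends into the background without a visible seam, which is the stated goal of the fusion step.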
The method provided by the embodiment of the present invention achieves image fusion by sampling, which reduces the complexity of the fusion while retaining as much moving-target information as possible.
Fig. 6 is a structural diagram of a video concentration system provided by an embodiment of the present invention; the system comprises:
Moving target detection and extraction module 10: performs background modeling analysis on the video frames, segments the foreground targets and background image of each frame, extracts the motion trajectory of each target, and stores the target sequences and the background image.
Motion trajectory combinatorial optimization module 20: optimizes the ordering of the target sequences of the motion trajectories, and generates and stores a new frame-number sequence.
The motion trajectory combinatorial optimization module 20 is specifically configured to: sort the targets according to the time order in which they appear in the video; generate the trajectory frame-number sequence of the first moving target; cyclically generate the trajectory frame-number sequence of the next moving target until frame-number sequences have been generated for the trajectories of all targets; and store the motion trajectory sequences of all targets in a database.
Target-background fusion module 30: seamlessly fuses the foreground targets with the background image according to a pixel fusion algorithm.
The target-background fusion module 30 is specifically configured to: preprocess the target image to obtain the boundary-point coordinates and weights; calculate the interpolation weights from the mean-value coordinates; calculate the pixel-value differences between the background image and the target image at the corresponding boundary points; calculate the mean difference at the sampled points; and fuse the images at the sampled and non-sampled points.
Concentrated video generation module 40: reads the stored new frame-number sequence and background image, and generates the compressed video.
As a preferred embodiment of the present invention, the system also comprises a video acquisition module 00, configured to obtain the video to be processed, decode videos of different encodings, and decode each frame into RGB color data.
It should be noted that the system of the embodiment of the present invention corresponds to the above method embodiment; all technical features of the above method embodiment are equally applicable to this system and are not repeated here.
The system of the embodiment of the present invention avoids collisions between targets by means of the crossing area between target trajectories, and preserves the time consistency between targets as far as possible according to the targets' end-frame positions; sampling reduces the algorithm complexity while retaining as much target information as possible. The length of the concentrated video is thereby shortened while the moving-target information in the video is retained as far as possible, collisions among multiple targets are effectively prevented, and a better visual effect is achieved. By automatically extracting useful targets and events from massive video, video concentration is realized, the manpower needed to watch the video is saved, and the storage space is reduced, thereby greatly facilitating the browsing and storage of surveillance video; the system can be used for surveillance video data from parks, stadiums, large squares, schools, hospitals, commercial streets, residential quarters, and other places of public activity and assembly.
Those of ordinary skill in the art will appreciate that all or some of the steps of the above embodiment methods can be completed by a program controlling the relevant hardware; the program can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc.
The preferred embodiments of the present invention have been described above with reference to the accompanying drawings, but they do not thereby limit the scope of the claims of the present invention. Without departing from the scope and spirit of the present invention, those skilled in the art can realize the present invention with multiple variant schemes; for example, a feature of one embodiment can be used in another embodiment to obtain yet another embodiment. Any modification, equivalent replacement, and improvement made within the technical conception of the present invention shall fall within the scope of the claims of the present invention.

Claims (10)

1. A video concentration method, characterized in that the method comprises:
moving target detection and extraction: performing background modeling analysis on video frames, segmenting the foreground targets and background image of each frame, extracting the motion trajectory of each target, and storing the target sequences and the background image;
motion trajectory combinatorial optimization: optimizing the ordering of the target sequences of said motion trajectories, and generating and storing a new frame-number sequence;
target-background fusion: seamlessly fusing the foreground targets with the background image according to a pixel fusion algorithm;
concentrated video generation: reading the stored new frame-number sequence and background image, and generating the compressed video.
2. The video concentration method according to claim 1, characterized in that the method further comprises, beforehand, video acquisition: obtaining the video to be processed, decoding videos of different encodings, and decoding each frame into RGB color data.
3. The video concentration method according to claim 1, characterized in that said motion trajectory combinatorial optimization further comprises:
sorting the targets according to the time order in which they appear in the video;
generating the trajectory frame-number sequence of the first moving target;
cyclically generating the trajectory frame-number sequence of the next moving target until frame-number sequences have been generated for the trajectories of all targets;
storing the motion trajectory sequences of all targets in a database.
4. The video concentration method according to claim 3, characterized in that said generating the trajectory frame-number sequence of the next moving target further comprises: if the number of targets contained in the current frame exceeds a given target-number threshold, shifting the start frame of the current target backward; otherwise, calculating the sum of the intersection areas between the target sequence of the current target and the target sequences of the other targets appearing in the current frame, and if the sum of the intersection areas exceeds a given intersection-area threshold, shifting the start frame of the current target backward.
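The placement rule of claim 4 amounts to a greedy search for the earliest admissible start frame. A minimal sketch, assuming the per-start-frame target count and overlap sum are supplied as callables (the function names and thresholds are illustrative, not from the patent):

```python
def choose_start_frame(targets_per_frame, overlap_sum,
                       max_targets, max_overlap, max_shift=10000):
    """Greedy placement per claim 4 (illustrative sketch).

    targets_per_frame(start): max number of targets in any frame the track
        would cover if it began at `start`.
    overlap_sum(start): total intersection area with already-placed tracks
        for that candidate start.
    Shifts the start frame backward (later) until both thresholds hold.
    """
    for start in range(max_shift):
        if targets_per_frame(start) > max_targets:
            continue  # too many targets in a covered frame: shift later
        if overlap_sum(start) > max_overlap:
            continue  # too much collision area: shift later
        return start
    raise ValueError("no feasible start frame found within max_shift")
```

This reflects the claim's two-stage test: the (cheap) target-count check first, the intersection-area computation only when the count check passes.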
5. The video concentration method according to claim 4, characterized in that said calculating the sum of the intersection areas between the target sequence of the current target and the target sequences of the other targets appearing at the start frame further comprises:
calculating the intersection area in each frame: starting from the start-frame position, calculating the area intersecting with the other target sequences, where the intersection area in a frame is the sum of the areas of intersection between the rectangular box enclosing this target and the rectangular boxes of the other targets;
calculating the total intersection area: the sum of the intersection areas over all frames containing the current target.
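The per-frame computation in claim 5 reduces to axis-aligned rectangle overlap. A short illustrative sketch (not code from the patent), with boxes given as (left, right, bottom, top) to match the boundary fields listed in claim 7:

```python
def rect_intersection_area(a, b):
    """Overlap area of two axis-aligned boxes given as (left, right, bottom, top)."""
    w = min(a[1], b[1]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[2], b[2])
    return max(0, w) * max(0, h)  # zero when the boxes are disjoint

def frame_intersection_sum(target_box, other_boxes):
    """Claim 5's per-frame quantity: the sum of overlaps between the current
    target's rectangle and the rectangles of all other targets in the frame."""
    return sum(rect_intersection_area(target_box, b) for b in other_boxes)
```

The total of claim 5's second step is then just `frame_intersection_sum` accumulated over every frame in which the current target appears.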
6. The video concentration method according to claim 4, characterized in that said target and background fusion further comprises:
preprocessing the target image to obtain the boundary-point coordinates and weights;
calculating the interpolation weights according to the harmonic coordinates;
calculating the difference in pixel values between the background image and the corresponding boundary points of the target image;
calculating the mean difference at the sampling points;
fusing the images at the sampled and non-sampled points.
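A strongly reduced reading of claim 6's fusion steps can be sketched as follows. This is an assumption-laden illustration, not the patent's algorithm: it computes the background-minus-target difference on the region boundary and applies the *mean* difference uniformly inside the region (a constant "membrane"), whereas the claim interpolates per-pixel weights from harmonic coordinates:

```python
def fuse_region(target, background, mask):
    """Paste `target` into `background` where `mask` is truthy, shifting the
    pasted pixels by the mean boundary difference (simplified sketch of the
    boundary-difference idea in claim 6; single-channel images as 2-D lists)."""
    h, w = len(mask), len(mask[0])
    diffs = []
    # Boundary points: masked cells with at least one unmasked 4-neighbour.
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny][nx]:
                    diffs.append(background[y][x] - target[y][x])
                    break
    mean_diff = sum(diffs) / len(diffs) if diffs else 0.0
    # Fused image: background outside the region, offset target inside it.
    return [[background[y][x] if not mask[y][x] else target[y][x] + mean_diff
             for x in range(w)] for y in range(h)]
```

A full implementation in the spirit of the claim would replace the constant `mean_diff` with a weighted interpolation of the boundary differences (as in mean-value/Poisson cloning), which removes visible seams even when the boundary differences vary.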
7. The video concentration method according to any one of claims 1-6, characterized in that said target sequence comprises: the target ID, the frame number, and the left, right, lower, and upper boundaries of the target in the original video frame.
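The record of claim 7 maps naturally onto a small data structure. A sketch with illustrative field names (the patent names the fields only in prose):

```python
from dataclasses import dataclass

@dataclass
class TargetRecord:
    """One target-sequence entry per claim 7 (field names are illustrative)."""
    target_id: int  # target ID
    frame_no: int   # frame number in the original video
    left: int       # left boundary of the target's rectangle
    right: int      # right boundary
    bottom: int     # lower boundary
    top: int        # upper boundary

    def box(self):
        """The bounding rectangle, in the (left, right, bottom, top) order
        used by the intersection-area computation of claim 5."""
        return (self.left, self.right, self.bottom, self.top)
```

Storing one such record per target per frame is what makes both the intersection-area test (claim 5) and the database storage of trajectories (claim 3) straightforward.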
8. A video concentration system, characterized in that the system comprises:
a moving object detection and extraction module, configured to perform background modeling analysis on the video frames, segment the foreground targets and the background image of each frame, extract the motion trajectory of each target, and store the target sequences and the background image;
a moving-target trajectory combination optimization module, configured to perform optimized ordering on the target sequences of said motion trajectories, and generate and store a new frame-number sequence;
a target and background fusion module, configured to seamlessly fuse the foreground targets with the background image according to a pixel fusion algorithm;
a concentrated video generation module, configured to read the stored new frame-number sequence and background image, and generate the concentrated video.
9. The video concentration system according to claim 8, characterized in that said moving-target trajectory combination optimization module is specifically configured to: sort the targets according to the temporal order in which they appear in the video; generate the trajectory frame-number sequence of the first moving target; cyclically generate the trajectory frame-number sequence of the next moving target until the trajectory frame-number sequences of all targets are generated; and store the motion trajectory sequences of all targets in a database.
10. The video concentration system according to claim 8, characterized in that said target and background fusion module is specifically configured to: preprocess the target image to obtain the boundary-point coordinates and weights; calculate the interpolation weights according to the harmonic coordinates; calculate the difference in pixel values between the background image and the corresponding boundary points of the target image; calculate the mean difference at the sampling points; and fuse the images at the sampled and non-sampled points.
CN201410001188.1A 2014-01-02 2014-01-02 Video concentration method and system Expired - Fee Related CN103686095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410001188.1A CN103686095B (en) 2014-01-02 2014-01-02 Video concentration method and system


Publications (2)

Publication Number Publication Date
CN103686095A true CN103686095A (en) 2014-03-26
CN103686095B CN103686095B (en) 2017-05-17

Family

ID=50322212

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410001188.1A Expired - Fee Related CN103686095B (en) 2014-01-02 2014-01-02 Video concentration method and system

Country Status (1)

Country Link
CN (1) CN103686095B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080095435A1 (en) * 2001-03-23 2008-04-24 Objectvideo, Inc. Video segmentation using statistical pixel modeling
CN102156707A (en) * 2011-02-01 2011-08-17 刘中华 Video abstract forming and searching method and system
CN102708182A (en) * 2012-05-08 2012-10-03 浙江捷尚视觉科技有限公司 Rapid video concentration abstracting method
CN103079117A (en) * 2012-12-30 2013-05-01 信帧电子技术(北京)有限公司 Video abstract generation method and video abstract generation device
CN103473333A (en) * 2013-09-18 2013-12-25 北京声迅电子股份有限公司 Method and device for extracting video abstract from ATM (Automatic Teller Machine) scene


Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394353B (en) * 2014-10-14 2018-03-09 浙江宇视科技有限公司 Video concentration method and device
CN104394353A (en) * 2014-10-14 2015-03-04 浙江宇视科技有限公司 Video compression method and device
CN104284198A (en) * 2014-10-27 2015-01-14 李向伟 Video concentration method
CN104376580A (en) * 2014-11-21 2015-02-25 西安理工大学 Processing method for non-interest area events in video summary
CN104504668A (en) * 2014-12-30 2015-04-08 宇龙计算机通信科技(深圳)有限公司 Face-contained image sharpening method and device
CN104735518A (en) * 2015-03-31 2015-06-24 北京奇艺世纪科技有限公司 Information display method and device
CN105007464A (en) * 2015-07-20 2015-10-28 江西洪都航空工业集团有限责任公司 Method for concentrating video
CN106550283A (en) * 2015-09-17 2017-03-29 杭州海康威视数字技术股份有限公司 Play the method and device of video frequency abstract
CN106550283B (en) * 2015-09-17 2019-05-21 杭州海康威视数字技术股份有限公司 Play the method and device of video frequency abstract
CN105262932A (en) * 2015-10-20 2016-01-20 深圳市华尊科技股份有限公司 Video processing method, and terminal
CN106937120B (en) * 2015-12-29 2019-11-12 北京大唐高鸿数据网络技术有限公司 Object-based monitor video method for concentration
CN106937120A (en) * 2015-12-29 2017-07-07 北京大唐高鸿数据网络技术有限公司 Object-based monitor video method for concentration
WO2017121020A1 (en) * 2016-01-12 2017-07-20 中兴通讯股份有限公司 Moving image generating method and device
CN105872859A (en) * 2016-06-01 2016-08-17 深圳市唯特视科技有限公司 Video compression method based on moving target trajectory extraction of object
CN108366303A (en) * 2018-01-25 2018-08-03 努比亚技术有限公司 A kind of video broadcasting method, mobile terminal and computer readable storage medium
CN108769598A (en) * 2018-06-08 2018-11-06 复旦大学 Across the camera video method for concentration identified again based on pedestrian
CN111464882B (en) * 2019-01-18 2022-03-25 杭州海康威视数字技术股份有限公司 Video abstract generation method, device, equipment and medium
CN111464882A (en) * 2019-01-18 2020-07-28 杭州海康威视数字技术股份有限公司 Video abstract generation method, device, equipment and medium
CN110322471A (en) * 2019-07-18 2019-10-11 华中科技大学 Method, apparatus, equipment and the storage medium of panoramic video concentration
CN110519532A (en) * 2019-09-02 2019-11-29 中移物联网有限公司 A kind of information acquisition method and electronic equipment
CN110602504A (en) * 2019-10-09 2019-12-20 山东浪潮人工智能研究院有限公司 Video decompression method and system based on YOLOv2 target detection algorithm
CN110708511A (en) * 2019-10-17 2020-01-17 山东浪潮人工智能研究院有限公司 Monitoring video compression method based on image target detection
CN110753228A (en) * 2019-10-24 2020-02-04 山东浪潮人工智能研究院有限公司 Garage monitoring video compression method and system based on Yolov1 target detection algorithm
CN111079663A (en) * 2019-12-19 2020-04-28 深圳云天励飞技术有限公司 High-altitude parabolic monitoring method and device, electronic equipment and storage medium
CN111079663B (en) * 2019-12-19 2022-01-11 深圳云天励飞技术股份有限公司 High-altitude parabolic monitoring method and device, electronic equipment and storage medium
CN111369469A (en) * 2020-03-10 2020-07-03 北京爱笔科技有限公司 Image processing method and device and electronic equipment
CN111369469B (en) * 2020-03-10 2024-01-12 北京爱笔科技有限公司 Image processing method and device and electronic equipment
CN112333537B (en) * 2020-07-27 2023-12-05 深圳Tcl新技术有限公司 Video integration method, device and computer readable storage medium
CN112333537A (en) * 2020-07-27 2021-02-05 深圳Tcl新技术有限公司 Video integration method and device and computer readable storage medium
CN112422898A (en) * 2020-10-27 2021-02-26 中电鸿信信息科技有限公司 Video concentration method introducing deep behavior understanding
CN112422898B (en) * 2020-10-27 2022-06-17 中电鸿信信息科技有限公司 Video concentration method introducing deep behavior understanding
CN114422720A (en) * 2022-01-13 2022-04-29 广州光信科技有限公司 Video concentration method, system, device and storage medium
CN114422720B (en) * 2022-01-13 2024-03-19 广州光信科技有限公司 Video concentration method, system, device and storage medium
CN114650397A (en) * 2022-03-14 2022-06-21 西安邮电大学 Multi-channel video concentration method based on cross-camera target pipe association
CN115190267A (en) * 2022-06-06 2022-10-14 东风柳州汽车有限公司 Automatic driving video data processing method, device, equipment and storage medium
CN115190267B (en) * 2022-06-06 2024-05-14 东风柳州汽车有限公司 Automatic driving video data processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN103686095B (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN103686095A (en) Video concentration method and system
CN106203277B (en) Fixed lens based on SIFT feature cluster monitor video feature extraction method in real time
CN104715471B (en) Target locating method and its device
Wang et al. 3D-CenterNet: 3D object detection network for point clouds with center estimation priority
CN106997459B (en) People counting method and system based on neural network and image superposition segmentation
CN104050481B (en) Multi-template infrared image real-time pedestrian detection method combining contour feature and gray level
CN111027505B (en) Hierarchical multi-target tracking method based on significance detection
Hohmann et al. CityFit-High-quality urban reconstructions by fitting shape grammars to images and derived textured point clouds
CN101470809A (en) Moving object detection method based on expansion mixed gauss model
CN112990086A (en) Remote sensing image building detection method and device and computer readable storage medium
CN107122792A (en) Indoor arrangement method of estimation and system based on study prediction
CN107808524A (en) A kind of intersection vehicle checking method based on unmanned plane
Wang et al. Improving facade parsing with vision transformers and line integration
CN115937461A (en) Multi-source fusion model construction and texture generation method, device, medium and equipment
Liu et al. Multi-lane detection by combining line anchor and feature shift for urban traffic management
CN106127813A (en) The monitor video motion segments dividing method of view-based access control model energy sensing
Pang et al. Multi-Scale Feature Fusion Model for Bridge Appearance Defect Detection
CN107832732A (en) Method for detecting lane lines based on ternary tree traversal
JP2013045152A (en) Dynamic body tracker
CN114357958A (en) Table extraction method, device, equipment and storage medium
CN114820931B (en) Virtual reality-based CIM (common information model) visual real-time imaging method for smart city
CN109492579A (en) A kind of video object detection method and system based on ST-SIN
CN102054278A (en) Object tracking method based on grid contraction
CN107292239A (en) A kind of parking offense detection method based on triple context updates
CN114170678A (en) Pedestrian trajectory prediction method and device based on multiple space maps and time fusion

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170517

Termination date: 20200102