CN102810208B - Criminal investigation video pre-filtering method based on travel direction detection - Google Patents

Criminal investigation video pre-filtering method based on travel direction detection

Info

Publication number
CN102810208B
CN102810208B
Authority
CN
China
Prior art keywords
video
frame
travel direction
pixel
Prior art date
Legal status
Active
Application number
CN201210257970.0A
Other languages
Chinese (zh)
Other versions
CN102810208A (en)
Inventor
严国建
Current Assignee
WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd
Original Assignee
WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by WUHAN DAQIAN INFORMATION TECHNOLOGY Co Ltd
Priority to CN201210257970.0A
Publication of CN102810208A
Application granted
Publication of CN102810208B

Classifications

  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a criminal investigation video pre-filtering method based on travel direction detection. First, an attention line is set in a region of interest on the video picture; then, in combination with the set attention line, the travel direction of moving targets in the video content is detected and the key frame numbers are extracted; finally, the extracted video frames are merged into a new video. By combining travel direction detection with detection of targets crossing the attention line, this preprocessing method obtains the numbers of the video frames that contain targets within the attention range and filters out the video frames of no interest, and the extracted key frames are reassembled into a new, shorter video for browsing. The number of video frames to be watched is reduced without omitting the information of interest, so the browsing efficiency of criminal investigation video is improved.

Description

Criminal investigation video pre-filtering method based on travel direction detection
Technical field
The present invention relates to video processing methods in multimedia, and in particular to a criminal investigation video pre-filtering method based on travel direction detection, for use in criminal investigation video processing.
Background art
In order to improve the integrated prevention and control capability for public security, a large number of video surveillance systems have been built and are widely used in police criminal investigation work. Video investigation technology, which finds and tracks suspected targets in surveillance recordings, has become the fourth major technical support for solving criminal cases, after forensic technology, operational technology and network investigation. The essence of video investigation is to find investigation clues and suspected targets from surveillance video.
However, the large number of video surveillance systems also produces a massive amount of recorded video. In current video investigation work, criminal investigators watch the player with their eyes fixed on the screen while taking notes as they go. Even for night-time or remote-location recordings in which moving targets rarely appear, they can only browse the footage in its entirety, without allowing even a second to be missed. Browsing recorded video for long periods easily causes visual fatigue, lowers the quality of the browsing work, and can even impair the investigators' eyesight. Browsing and searching for suspected targets entirely by hand is time-consuming, laborious and inefficient.
When criminal investigators use surveillance recordings to find and analyse criminal acts, they usually pay attention to the pictures that contain moving targets. In particular, an investigator may only care about the moving targets in a certain local area of the picture (for example a parking lot), or about the moving targets that cross a certain warning line (for example a warning line drawn at the entrance of a residential compound); narrowing the region of interest often makes it easier to find and analyse criminal acts. In such cases the investigator mainly wants to know whether there is a moving target in the region of interest (a local area or a warning line), while a large part of the recording consists of "static" pictures (including pictures whose moving targets lie outside the region of interest). Conventional video players provide fast playback, but they do not distinguish the static parts of a video from the moving parts. If there were a video pre-filtering method that, before the investigator watches the surveillance video, automatically extracted the numbers of the frames that contain moving targets (or only the moving targets inside the region of interest; hereinafter "motion frame numbers"), filtered out the numbers of the frames without such targets (hereinafter "non-motion frame numbers"), and then organised the extracted motion frames into video segments, the investigator would only need to watch these segments instead of the whole recording, and the efficiency of browsing large amounts of video would improve accordingly. However, no such video pre-filtering method has been available.
Summary of the invention
The object of the present invention is to overcome the above deficiencies of the prior art and to provide a criminal investigation video pre-filtering method based on travel direction detection, which can extract the segments that contain moving targets in the user's region of interest and filter out the "static" segments, reducing the number of video pictures to be watched without omitting important pictures and thereby improving the efficiency of video browsing.
The technical solution adopted to achieve the object of the invention is a criminal investigation video pre-filtering method based on travel direction detection, comprising the following steps:
obtaining a video picture, drawing a straight line with the mouse in the region of interest on the video picture and recording the trajectory coordinates of the mouse; plotting the mouse trajectory into another all-black picture of the same size, setting the mouse trajectory line to white, widening the trajectory line by pixels to obtain a rectangular closed region, and setting the closed region to white, thereby obtaining a mask image;
setting an attention direction according to the drawn line, detecting moving targets, judging the travel direction of each detected moving target, and recording the key frame numbers;
merging the video frames corresponding to the extracted key frame numbers into a new video.
In the above technical solution, setting the attention direction means setting a reference direction in the video to be processed, the reference direction corresponding to the dial of a clock face.
In the above technical solution, the moving targets are detected with a background difference method, the concrete steps being:
reading in the video to be processed and a background image;
taking the frame difference between the current frame and the background image;
comparing the regions in the frame-difference result with an area threshold; a region whose area is greater than the area threshold is a moving target.
Further, binarisation, dilation and median filtering are applied to the frame difference before the comparison with the area threshold.
Further, the frame difference is binarised according to the following formula:
D(x, y) = 255, if D ≥ T; D(x, y) = 0, if D < T
where D(x, y) is the pixel grey value at position (x, y) in the frame-difference result image and T is a given threshold.
In the above technical solution, the concrete steps of judging the travel direction of each detected moving target are:
using timestamps, recording with a motion history image the gradient information of the historical track of each moving component over a period of time;
calculating the direction of each component from the recorded gradient information in a weighted manner;
retaining the components whose direction is consistent with the attention direction, this being the travel direction of the moving target.
Further, the overall gradient direction of a moving component is calculated by the following formula:
Φ = φ_base + Σ[w(stamp)·Δφ(x, y)] / Σ[w(stamp)]
where Φ is the calculated overall gradient direction of the component, φ(x, y) is the direction of the moving component, φ_base is the basic reference angle, w(stamp) is a weight set according to the timestamp, and Δφ(x, y) is the minimum angle difference between the obtained motion direction and the reference angle.
In the above technical solution, recording the key frame numbers comprises: judging whether the centre point of any retained target falls within the attention-line region that has been set; if so, the current frame is regarded as a key frame and its frame number is recorded; otherwise it is a non-key frame.
In the above technical solution, merging into a new video comprises the following steps:
according to the chosen key frame numbers, reading the first key frame and taking it as the start frame of the output video;
reading the key frames after the first key frame in order of their frame numbers and appending them to the output video, until the last key frame is read and taken as the end frame of the output video.
Compared with directly watching the original video as in the prior art, the beneficial effects of the present invention are as follows. The criminal investigation video pre-filtering algorithm based on motion detection obtains the numbers of the video frames that contain moving targets in three ways: over the whole picture, in an arbitrarily specified region, or for targets crossing a warning line. The frame numbers of "static" pictures are filtered out, and the motion frame numbers are then divided and organised into video segments. This pre-processing reduces the number of video frames the investigator needs to watch without omitting important pictures, thereby improving the browsing efficiency of the video.
Brief description of the drawings
Fig. 1 is a flow chart of the criminal investigation video pre-filtering method based on travel direction detection according to the present invention;
Fig. 2 is a flow chart of the attention-line setting method in Fig. 1;
Fig. 3 is a flow chart of the key frame number extraction method in Fig. 1;
Fig. 4 is a flow chart of the key frame merging method in Fig. 1;
Fig. 5-1 is an example of a motion history image at a certain moment;
Fig. 5-2 shows the corresponding direction determination results.
Detailed description of the embodiments
The present invention is further described below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the criminal investigation video pre-filtering method based on travel direction detection comprises the following steps:
Step S101: importing the original video, specifying a region of interest in the video picture of the original video, and setting an attention line in the region of interest. As shown in Fig. 2, the concrete steps are:
Step S201: obtaining a frame of the video picture.
Step S202: drawing a straight line with the mouse in the region of interest on this video picture and recording the trajectory coordinates of the mouse.
Step S203: creating a black-and-white image of the same size as the video picture in step S202, setting all its pixel values to 0 so that it becomes entirely black, and drawing the mouse trajectory recorded in step S202 into this all-black picture.
Step S204: setting the mouse trajectory line in the black-and-white image obtained in step S203 to white and increasing its width to 5 pixels. An ideal straight line is only 1 pixel wide, so the line is widened to 5 pixels here to form a narrow rectangular region, which makes it easier to judge whether a moving target crosses the line within this region. The rectangular closed region obtained in this way is set to white (pixel value 255), giving a binary image: the widened attention line corresponds to the white region with pixel value 255, and the remaining parts are black with pixel value 0. The image obtained in this way is the mask image.
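The mask of steps S201 to S204 can be produced with two drawing calls. The sketch below is a minimal illustration in Python with OpenCV (an implementation choice of this note, not something prescribed by the patent); the endpoints p1 and p2 stand in for the recorded mouse trajectory.

import cv2
import numpy as np

def build_attention_mask(frame_height, frame_width, p1, p2, line_width=5):
    """Draw the mouse-drawn attention line into an all-black image of the same
    size as the video picture and widen it to a 5-pixel-wide rectangular region."""
    mask = np.zeros((frame_height, frame_width), dtype=np.uint8)  # all-black picture
    cv2.line(mask, p1, p2, color=255, thickness=line_width)       # white, widened line
    return mask  # 255 inside the attention-line region, 0 elsewhere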
Step S102: detecting the travel direction of moving targets in the video content according to the attention line that has been set, and extracting the key frame numbers. As shown in Fig. 3, the concrete steps are:
Step S301: first, for the video to be processed, setting the attention direction. This embodiment uses 12-hour clock notation to specify the direction; for example, for a horizontal road, if only the targets that cross the attention line and move towards its right-hand side are of interest, the direction can be specified as the 3-o'clock direction on the dial.
Step S302: initialising the motion history image; in this embodiment it is initialised as an all-black picture (pixel value 0). The motion history image keeps the historical track of each moving target over a period of time, which is later used to calculate the target's direction of motion.
Step S303: reading in the video to be processed.
Step S304: building the background image from the pixel-wise average of the first 100 frames of the video and converting it to a grey-scale image.
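A possible way to build the initial background of step S304, continuing the Python/OpenCV sketches; the video path and the number of frames are the only inputs, and averaging is done in float to avoid overflow.

import cv2
import numpy as np

def initial_background(video_path, num_frames=100):
    """Pixel-wise average of the first frames of the video, converted to grey scale."""
    cap = cv2.VideoCapture(video_path)
    acc, count = None, 0
    while count < num_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        acc = gray if acc is None else acc + gray
        count += 1
    cap.release()
    return (acc / max(count, 1)).astype(np.uint8)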
Step S305: reading the first frame of the video again, converting it to a grey-scale image, and starting the processing procedure.
Step S306: taking the frame difference between this frame and the background image, taking the absolute value of the difference and storing it in a grey-scale image. The threshold used for background subtraction here is adjustable: for videos with low contrast between background and moving foreground, such as night scenes, the threshold can be lowered, and for videos with obvious contrast it can be raised. The threshold can be adjusted on the user interface, so that the user can set it for a specific criminal investigation video to achieve the best detection result.
Step S307: binarising each pixel of the frame-difference result image obtained in step S306 according to the following formula:
D(x, y) = 255, if D ≥ T; D(x, y) = 0, if D < T
where D(x, y) is the pixel grey value at position (x, y) in the frame-difference result image and T is a given threshold. The grey value D of each pixel in the frame-difference result image is compared with the threshold T: if D is greater than or equal to T, the pixel is assigned the value 255, otherwise it is assigned 0.
Step S308: applying a dilation operation of the prior art to the image obtained in step S307.
Step S309: applying a median filtering operation of the prior art to the image obtained in step S308.
Step S310: judging whether the area of each moving target region in the image obtained in step S309 is greater than S, and removing the targets whose area is smaller than S, where S is a given area threshold. Setting S removes small disturbances such as swaying leaves. The value of S cannot be fixed across different criminal investigation videos: when the targets are close to the camera S can be set higher, and when the targets are relatively far from the camera S can be set lower. The area threshold S can be adjusted on the user interface, and while the user adjusts it the detection result is fed back visually (every detected moving target is enclosed by a bounding rectangle, and the user sets the value of S intuitively by dragging a rectangle of the desired size), which makes the adjustment convenient. This user-adjustable setting achieves the best detection result for a specific criminal investigation video. The area threshold is set according to the moving targets to be detected; if the targets are pedestrians, the area threshold is the minimum of the size range of a person, which can be obtained by statistics.
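Steps S306 to S310 form a standard background-difference pipeline. The minimal sketch below keeps to Python/OpenCV; the default values of the difference threshold T and the area threshold S are illustrative placeholders for the user-adjustable settings described above.

import cv2
import numpy as np

def moving_target_silhouette(gray_frame, background, t_diff=30, s_area=500):
    """Background difference (S306), binarisation (S307), dilation (S308),
    median filtering (S309) and area filtering (S310)."""
    diff = cv2.absdiff(gray_frame, background)                       # |frame - background|
    _, binary = cv2.threshold(diff, t_diff, 255, cv2.THRESH_BINARY)  # D(x, y) -> 0 or 255
    binary = cv2.dilate(binary, np.ones((3, 3), np.uint8))           # dilation
    binary = cv2.medianBlur(binary, 5)                               # median filter
    contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]         # works on OpenCV 3 and 4
    silhouette = np.zeros_like(binary)
    for c in contours:
        if cv2.contourArea(c) >= s_area:                             # keep regions no smaller than S
            cv2.drawContours(silhouette, [c], -1, 255, thickness=cv2.FILLED)
    return silhouette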
Step S311: assigning the current timestamp to the moving-target pixels that remain in the moving image (denoted silh) obtained in step S310, and updating the motion history image (mhi):
mhi(x, y) = timestamp, if silh(x, y) ≠ 0
mhi(x, y) = 0, if silh(x, y) = 0 and mhi(x, y) < timestamp − duration
mhi(x, y) = mhi(x, y), otherwise
The timestamp is designed so that the most recently detected moving target has a larger grey value than its earlier track. In this way the motion history image keeps the historical track of each moving target within a period of time (the duration in the formula above, set to 0.5 second). More importantly, the timestamps give the tracks of different moments different grey values, which makes it convenient to calculate the target's direction of motion in the following steps.
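The update rule of step S311 closely matches the motion-template helpers shipped with opencv-contrib-python; the sketch below assumes that module (cv2.motempl) is available and uses the 0.5-second duration mentioned above.

import cv2
import numpy as np

MHI_DURATION = 0.5  # seconds, the "duration" in the update rule above

def update_motion_history(mhi, silhouette, timestamp):
    """Write the current timestamp into mhi wherever silh(x, y) != 0 and clear
    pixels whose recorded time is older than timestamp - duration.
    mhi is initialised once as np.zeros((h, w), np.float32), the all-black
    picture of step S302; silhouette is the 8-bit result of step S310."""
    return cv2.motempl.updateMotionHistory(silhouette, mhi, timestamp, MHI_DURATION)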
Fig. 5-1 shows an example of a motion history image at a certain moment, and Fig. 5-2 shows the corresponding direction determination results. It can be seen that the silhouette of the moving target in the current frame has the brightest grey value, that the grey values of older historical silhouettes are lower, and that the pixels of historical tracks older than the set time interval are reset to 0 (black). The gradient information formed in this way is used to calculate the direction.
Step S312: the motion history image in step S311 may record several moving targets with gradient information (hereinafter "moving components"); they need to be segmented one by one and their directions calculated separately.
Step S313: for each moving component segmented in step S312, first calculating the gradient direction of each pixel, φ(x, y) = arctan(F_y(x, y) / F_x(x, y)), where φ(x, y) is the direction of motion of the moving component at point (x, y), expressed as an angle from 0 to 360 degrees, and F_y(x, y) and F_x(x, y) are the gradient images computed for pixel (x, y) with a gradient mask.
Then the overall gradient direction of the moving component is calculated by weighting, the rule being that the more recent a history pixel is, the larger its weight:
Φ = φ_base + Σ[w(stamp)·Δφ(x, y)] / Σ[w(stamp)]
where Φ is the calculated overall gradient direction of the component, φ_base is the basic reference angle (the peak of the direction histogram), w(stamp) is a weight set according to the timestamp, and Δφ(x, y) is the minimum angle difference between the obtained motion direction and the reference angle. Φ is the direction of the moving component, an angle between 0 and 360 degrees; to compare it with the attention direction that has been set, it is converted into clock notation.
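Steps S312 and S313, segmenting the motion history image into components and computing each component's weighted overall direction, correspond closely to the motion-template functions in opencv-contrib-python. The sketch below assumes those functions; the gradient and segmentation thresholds and the minimum area are illustrative values of this note.

import cv2

MHI_DURATION = 0.5
MAX_TIME_DELTA = 0.25   # gradient / segmentation thresholds (illustrative)
MIN_TIME_DELTA = 0.05

def component_directions(mhi, timestamp, min_area=500):
    """Return (bounding_rect, angle_in_degrees) for each moving component."""
    mask, orientation = cv2.motempl.calcMotionGradient(
        mhi, MAX_TIME_DELTA, MIN_TIME_DELTA, apertureSize=3)
    seg_mask, seg_bounds = cv2.motempl.segmentMotion(mhi, timestamp, MAX_TIME_DELTA)
    results = []
    for rect in seg_bounds:
        x, y, w, h = (int(v) for v in rect)
        if w * h < min_area:                    # skip tiny fragments
            continue
        roi = (slice(y, y + h), slice(x, x + w))
        # weighted overall direction of this component, 0-360 degrees
        angle = cv2.motempl.calcGlobalOrientation(
            orientation[roi], mask[roi], mhi[roi], timestamp, MHI_DURATION)
        results.append(((x, y, w, h), angle))
    return results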
The pointers drawn in Fig. 5-1 and Fig. 5-2 show the direction determination results for all components of the motion history image at a certain moment; these indications can also be displayed in the video at the same time.
Step S314: if the calculated direction of a component is close to the set attention direction, for example if the angular deviation is within 30 degrees, its direction of motion is regarded as consistent with the attention direction, and the components consistent with the attention direction are retained. Because different scene videos may need different attention-line and attention-direction settings to reach the best effect, the rule for deciding whether directions are "consistent" can also be customised on the user interface: for a particular video the user defines how large a deviation is still regarded as consistent.
Step S315: judging whether the centre point of any component retained in step S314 falls within the mask region that has been set (which can be implemented with a logical AND against the mask image); if so, this frame is judged to be a key frame and its frame number is recorded.
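Below is a sketch of the key-frame decision of steps S314 and S315, combining the direction-consistency test with the centre-point-in-mask check. The clock-to-angle conversion and the default 30-degree tolerance are assumptions of this illustration about how the comparison is wired up, not values fixed by the patent.

def clock_to_degrees(hour):
    """Map a 12-hour clock direction to an angle in degrees (assumed convention:
    3 o'clock = 0 degrees, increasing counter-clockwise, matching the component angles)."""
    return (90 - hour * 30) % 360

def is_key_frame(components, attention_hour, mask, tolerance=30):
    """components: list of ((x, y, w, h), angle) produced by the direction step."""
    target = clock_to_degrees(attention_hour)
    for (x, y, w, h), angle in components:
        diff = abs(angle - target) % 360
        if min(diff, 360 - diff) > tolerance:   # direction not consistent (S314)
            continue
        cx, cy = x + w // 2, y + h // 2         # centre point of the retained component
        if mask[cy, cx] > 0:                    # falls inside the attention-line mask (S315)
            return True
    return False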
Step S316: judging whether the current frame is the last frame of the video; if so, the selection of key frame numbers ends, otherwise going to step S317.
Step S317: reading the next frame of the video, converting it to a grey-scale image, and then updating the background image. The idea of the background update is that the former background and the newly read frame each contribute a certain proportion to the new background:
background(x,y)=(1-α)·background(x,y)+α·newframe(x,y)
where background is the background image, newframe is the newly read frame, and α controls the update rate; it was set to 0.003 in our experiments. Considering that in practice a moving target may stop within the field of view, which would affect the accuracy of the background segmentation, an additional mask image can be used to solve this problem: the moving objects in the video are masked out during the update, so that the "newframe" fed into the background update only contains the parts without moving objects. This method gave good results in our experiments.
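A minimal sketch of the running-average background update of step S317, with the optional motion mask that keeps stopped targets out of the background; α = 0.003 is the rate quoted above.

import numpy as np

ALPHA = 0.003  # update rate used in the experiments described above

def update_background(background, new_gray, motion_mask=None):
    """background(x, y) = (1 - a)*background(x, y) + a*newframe(x, y); pixels covered
    by detected moving objects are left unchanged when a motion mask is supplied."""
    blended = (1.0 - ALPHA) * background.astype(np.float32) \
              + ALPHA * new_gray.astype(np.float32)
    if motion_mask is not None:
        blended[motion_mask > 0] = background[motion_mask > 0]
    return blended.astype(np.uint8)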
After the background update is completed, the procedure returns to step S306 to continue the selection of key frames.
Step S103: merging the video frames corresponding to the extracted key frame numbers into a new video. As shown in Fig. 4, the concrete steps are:
Step S401: according to the chosen key frame numbers, reading the first key frame and taking it as the start frame of the output video;
Step S402: judging whether this frame is the last key frame; if so, going to step S404 and taking this frame as the end frame of the output video, otherwise going to step S403 to continue reading the next motion frame;
Step S403: reading the next motion frame and continuing the judgement of step S402;
Step S404: taking this frame as the end frame of the output video.
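The merging stage of steps S401 to S404 amounts to copying the recorded frame numbers, in order, into a new file. The sketch below assumes OpenCV's VideoWriter with an XVID codec; both are illustrative choices, not requirements of the method.

import cv2

def merge_key_frames(video_path, key_frame_numbers, out_path="filtered.avi"):
    """Write the frames whose numbers were recorded as key frames, in order,
    from the first key frame (start frame) to the last key frame (end frame)."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"XVID"), fps, size)
    wanted = set(key_frame_numbers)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index in wanted:
            writer.write(frame)
        index += 1
    cap.release()
    writer.release()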
The specific embodiments described above further explain the objects, technical solutions and beneficial effects of the present invention. It should be understood that the foregoing is only a preferred embodiment of the present invention and is not intended to limit the scope of the invention. For those of ordinary skill in the technical field to which the present invention belongs, various equivalent changes or replacements can be made without departing from the inventive concept, and all such changes or replacements fall within the protection scope of the present invention.

Claims (5)

1. A criminal investigation video pre-filtering method based on travel direction detection, characterised in that it comprises the following steps:
obtaining a video picture, drawing a straight line with the mouse in the region of interest on the video picture and recording the trajectory coordinates of the mouse; plotting the mouse trajectory into another all-black picture of the same size, setting the mouse trajectory line to white, widening the trajectory line by pixels to obtain a rectangular closed region, and setting the closed region to white, thereby obtaining a mask image;
setting an attention direction according to the drawn line, detecting moving targets, judging the travel direction of each detected moving target, and recording the key frame numbers; wherein setting the attention direction means setting a reference direction in the video to be processed, the reference direction corresponding to the dial of a clock face; and wherein the moving targets are detected with a background difference method, the concrete steps being:
step S102: detecting the travel direction of moving targets in the video content according to the attention line that has been set, and extracting the key frame numbers, the concrete steps being:
step S301: first, for the video to be processed, setting the attention direction, using 12-hour clock notation to specify the direction;
step S302: initialising the motion history image as an all-black picture; the motion history image keeps the historical track of each moving target over a period of time, which is used to calculate the target's direction of motion;
step S303: reading in the video to be processed;
step S304: building the background image from the pixel-wise average of the first 100 frames of the video and converting it to a grey-scale image;
step S305: reading the first frame of the video again, converting it to a grey-scale image, and starting the processing procedure;
step S306: taking the frame difference between this frame and the background image, taking the absolute value of the difference and storing it in a grey-scale image;
step S307: binarising each pixel of the frame-difference result image obtained in step S306 according to the following formula:
D(x, y) = 255, if D ≥ T; D(x, y) = 0, if D < T
where D(x, y) is the pixel grey value at position (x, y) in the frame-difference result image and T is a given threshold; the grey value D of each pixel in the frame-difference result image is compared with the threshold T, and if D is greater than or equal to T the pixel is assigned the value 255, otherwise it is assigned 0;
step S308: applying a dilation operation of the prior art to the image obtained in step S307;
step S309: applying a median filtering operation of the prior art to the image obtained in step S308;
step S310: judging whether the area of each moving target region in the image obtained in step S309 is greater than S, and removing the targets whose area is smaller than S, where S is a given area threshold;
step S311: assigning the current timestamp to the moving-target pixels that remain in the moving image silh obtained in step S310, and updating the motion history image mhi:
mhi(x, y) = timestamp, if silh(x, y) ≠ 0; mhi(x, y) = 0, if silh(x, y) = 0 and mhi(x, y) < timestamp − duration; mhi(x, y) = mhi(x, y), otherwise
where duration is the period of time over which each moving target persists in the motion history image;
step S312: the motion history image in step S311 may record several moving targets with gradient information; they are segmented one by one and their directions calculated separately;
step S313: for each moving component segmented in step S312, first calculating the gradient direction of each pixel, φ(x, y) = arctan(F_y(x, y) / F_x(x, y)), where φ(x, y) is the direction of motion of the moving component at point (x, y), expressed as an angle from 0 to 360 degrees, and F_y(x, y) and F_x(x, y) are the gradient images computed for pixel (x, y) with a gradient mask; then calculating the overall gradient direction of the moving component by weighting, the rule being that the more recent a history pixel is, the larger its weight:
Φ = φ_base + Σ[w(stamp)·Δφ(x, y)] / Σ[w(stamp)]
where Φ is the calculated overall gradient direction of the component, φ_base is the basic reference angle, w(stamp) is a weight set according to the timestamp, and Δφ(x, y) is the minimum angle difference between the obtained motion direction and the reference angle; Φ is the direction of the moving component, an angle between 0 and 360 degrees, and to compare it with the attention direction that has been set it is converted into clock notation;
step S314: if the calculated direction of a component is close to the set attention direction, its direction of motion is regarded as consistent with the attention direction, and the components consistent with the attention direction are retained;
step S315: judging whether the centre point of any component retained in step S314 falls within the mask region that has been set; if so, this frame is judged to be a key frame and its frame number is recorded;
step S316: judging whether the current frame is the last frame of the video; if so, the selection of key frame numbers ends, otherwise going to step S317;
step S317: reading the next frame of the video, converting it to a grey-scale image, and then updating the background image, the idea of the background update being that the former background and the newly read frame each contribute a certain proportion to the new background:
background(x,y)=(1-α)·background(x,y)+α·newframe(x,y)
where background is the background image, newframe is the newly read frame, and α controls the update rate; after the background update is completed, returning to step S306 to continue the selection of key frames;
merging the video frames corresponding to the extracted key frame numbers into a new video.
2. The criminal investigation video pre-filtering method based on travel direction detection according to claim 1, characterised in that the concrete steps of judging the travel direction of each detected moving target are:
using timestamps, recording with a motion history image the gradient information of the historical track of each moving component over a period of time;
calculating the direction of each component from the recorded gradient information in a weighted manner;
retaining the components whose direction is consistent with the attention direction, this being the travel direction of the moving target.
3. The criminal investigation video pre-filtering method based on travel direction detection according to claim 2, characterised in that the overall gradient direction of a moving component is calculated by the following formula:
Φ = φ_base + Σ[w(stamp)·Δφ(x, y)] / Σ[w(stamp)]
where Φ is the calculated overall gradient direction of the component, φ(x, y) is the direction of the moving component, φ_base is the basic reference angle, w(stamp) is a weight set according to the timestamp, and Δφ(x, y) is the minimum angle difference between the obtained motion direction and the reference angle.
4. The criminal investigation video pre-filtering method based on travel direction detection according to claim 1, characterised in that recording the key frame numbers comprises: judging whether the centre point of any retained target falls within the attention-line region that has been set; if so, the current frame is regarded as a key frame and its frame number is recorded; otherwise it is a non-key frame.
5. The criminal investigation video pre-filtering method based on travel direction detection according to claim 1, characterised in that merging into a new video comprises the following steps:
according to the chosen key frame numbers, reading the first key frame and taking it as the start frame of the output video;
reading the key frames after the first key frame in order of their frame numbers and appending them to the output video, until the last key frame is read and taken as the end frame of the output video.
CN201210257970.0A 2012-07-24 2012-07-24 Criminal investigation video pre-filtering method based on travel direction detection Active CN102810208B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210257970.0A CN102810208B (en) 2012-07-24 2012-07-24 Criminal investigation video pre-filtering method based on travel direction detection

Publications (2)

Publication Number Publication Date
CN102810208A CN102810208A (en) 2012-12-05
CN102810208B 2015-12-16

Family

ID=47233910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210257970.0A Active CN102810208B (en) 2012-07-24 2012-07-24 Criminal investigation video pre-filtering method based on travel direction detection

Country Status (1)

Country Link
CN (1) CN102810208B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103092929B (en) * 2012-12-30 2016-12-28 信帧电子技术(北京)有限公司 A kind of generation method and device of video frequency abstract
CN105744345B (en) * 2014-12-12 2019-05-31 深圳Tcl新技术有限公司 Video-frequency compression method and device
CN107770528B (en) * 2016-08-19 2023-08-25 中兴通讯股份有限公司 Video playing method and device
CN107133580B (en) * 2017-04-24 2020-04-10 杭州空灵智能科技有限公司 Synthetic method of 3D printing monitoring video
CN111866428B (en) * 2019-04-29 2023-03-14 杭州海康威视数字技术股份有限公司 Historical video data processing method and device
CN110933455B (en) * 2019-12-16 2023-03-14 云粒智慧科技有限公司 Video screening method and device, electronic equipment and storage medium
CN111738769B (en) * 2020-06-24 2024-02-20 湖南快乐阳光互动娱乐传媒有限公司 Video processing method and device
CN112312087B (en) * 2020-10-22 2022-07-29 中科曙光南京研究院有限公司 Method and system for quickly positioning event occurrence time in long-term monitoring video

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101119481A (en) * 2007-08-27 2008-02-06 刘文萍 Remote alarm video monitoring system and method
CN101123721A (en) * 2007-09-30 2008-02-13 湖北东润科技有限公司 An intelligent video monitoring system and its monitoring method
CN102054510A (en) * 2010-11-08 2011-05-11 武汉大学 Video preprocessing and playing method and system
CN102547244A (en) * 2012-01-17 2012-07-04 深圳辉锐天眼科技有限公司 Video monitoring method and system

Also Published As

Publication number Publication date
CN102810208A (en) 2012-12-05

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant