WO2018095082A1 - Rapid detection method for moving target in video monitoring - Google Patents

Rapid detection method for moving target in video monitoring

Info

Publication number
WO2018095082A1
WO2018095082A1 (PCT/CN2017/097647)
Authority
WO
WIPO (PCT)
Prior art keywords: target, frame, point, value, tracking
Prior art date
Application number
PCT/CN2017/097647
Other languages
French (fr)
Chinese (zh)
Inventor
顾晓东
马小骏
Original Assignee
江苏东大金智信息***有限公司
Priority date
Filing date
Publication date
Application filed by 江苏东大金智信息***有限公司
Publication of WO2018095082A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Definitions

  • the invention belongs to the field of video detection, and particularly relates to a method for quickly detecting a moving target in video monitoring.
  • The demand for intelligent video surveillance systems comes mainly from settings that are sensitive to security requirements, such as the military, public security, banking, roads and parking lots.
  • When theft or an abnormality occurs, such a system can actively, promptly and accurately issue an alarm to security personnel, so that staff can fully utilize the video surveillance network to implement alarm linkage and emergency command and disposal, thereby preventing crime. It also reduces the investment in hiring a large number of surveillance personnel.
  • Moving target monitoring is the basis of target tracking, which directly affects the tracking effect.
  • External factors such as camera shake, illumination changes, target shadow interference, target occlusion and same-color background interference bring considerable difficulty to target monitoring.
  • As the illumination angle changes, the target shadow is detected to varying degrees as target foreground during the monitoring process, which greatly affects the overall shape of the target.
  • Target occlusion makes it difficult to obtain the complete shape of the target and causes loss of relevant target information.
  • The occlusion problem is the key issue in current target tracking; most target tracking systems cannot effectively handle mutual occlusion between the target and other targets. Target occlusion is a random, unpredictable problem during tracking, and simply relying on background modeling for target detection or tracking is unreliable for such problems; better target models or feature templates must be established, and the problem solved by accurately matching the visible target parts with the models or features. Therefore, improving the efficiency and accuracy of target detection is a crucial part of intelligent video surveillance systems.
  • Target detection in surveillance video is mainly divided into two categories: (1) moving target detection; (2) target detection based on image recognition. Each has its own advantages and disadvantages.
  • Moving target detection can quickly and effectively detect moving targets in surveillance video, but it cannot detect targets that are stationary in the scene and is also weak at separating adhered targets. Target detection based on image recognition detects all targets in the whole picture, whether moving or stationary; because it relies on feature determination, it achieves higher accuracy and detection rate and is less affected by adhesion, but these advantages generally require more runtime.
  • a method for quickly detecting a moving target in video monitoring includes the following steps:
  • The foreground is extracted by subtracting the background model D_{k-1} point by point from the image I_k currently being analysed; a point whose difference exceeds the constant 10 is taken as a foreground point, otherwise as a background point. After the point-by-point subtraction, morphological opening and closing operations are applied to filter noise and make the connected areas more regular; the region growing method is then used to obtain all connected areas of the foreground, and adjacent areas are merged to obtain the set S_k of potential motion areas;
  • Step 3: for each potential motion area s obtained in Step 2, the targets possibly present in s are detected; all detected targets are added to the set O_k;
  • Step 4: all targets in O_{k-1} are tracked in the k-th frame, and the resulting targets are likewise added to O_k; in addition, the last occurrence of every target lost within a short time is tracked forward into the current frame, and the resulting targets are also added to the current target set O_k;
  • Step 5: targets in the set O_k that have not appeared for a certain time are treated as disappeared and deleted from O_k;
  • Step 6: the background model is updated;
  • Step 7: Steps 2 to 6 above are repeated for the next frame image until the last frame of the video has been detected, giving the detection result.
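As a rough illustration of the point-by-point foreground extraction in Steps 1 and 2, a minimal Python sketch (frames as 2D lists of brightness values, the helper name, and the use of an absolute difference are our assumptions; the threshold 10 is from the text):

```python
THRESH = 10  # Step 2: a difference exceeding the constant 10 marks a foreground point

def extract_foreground(frame, background):
    """Point-by-point subtraction F_k = I_k - D_{k-1}, thresholded at 10."""
    h, w = len(frame), len(frame[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # absolute difference used here; the text only says "exceeds the constant 10"
            if abs(frame[y][x] - background[y][x]) > THRESH:
                mask[y][x] = 1  # foreground point
    return mask

background = [[50, 50], [50, 50]]   # D_0 = I_0: first frame as initial background
frame      = [[50, 200], [50, 50]]  # a bright pixel appears in the next frame
print(extract_foreground(frame, background))  # -> [[0, 1], [0, 0]]
```

In the full method this mask would then be cleaned by morphological opening/closing and grouped into connected areas.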
  • In the method for quickly detecting a moving target in video monitoring, the method for detecting the targets possibly present in s in Step 3 is a multi-scale, floating-window target recognition method based on HoG feature computation.
  • In the method for quickly detecting a moving target in video monitoring, target recognition based on HoG features also requires a decision mechanism. The mechanism first manually calibrates a target data sample library composed of a large number of pictures of the same size; each picture is labeled as to whether a specific detection target is present, and the HoG features of each pixel in each picture are acquired. The decision mechanism forms recognition model data for the detection target from these per-pixel HoG features. During detection, the window under test is first resized to the same size as the pictures in the decision mechanism, and the recognition model data is then used to determine whether the specific detection target is present in the window under test.
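A minimal sketch of this decision-mechanism flow in Python. The nearest-neighbour resize and the linear scoring function are simplified stand-ins for the HoG features and SVM model named in the text; all sizes, weights and names here are illustrative assumptions:

```python
def resize(img, out_w, out_h):
    """Nearest-neighbour resize of a 2D list to the library picture size."""
    in_h, in_w = len(img), len(img[0])
    return [[img[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
            for y in range(out_h)]

def score(window, weights, bias, lib_w, lib_h):
    """Resize the test window to the library size, flatten, apply a linear model."""
    v = [p for row in resize(window, lib_w, lib_h) for p in row]
    return sum(wt * x for wt, x in zip(weights, v)) + bias

# toy model over a 2x2 "library size" that fires on bright windows
weights, bias = [1, 1, 1, 1], -400
win = [[200, 200, 200], [200, 200, 200], [200, 200, 200]]  # 3x3 window under test
print(score(win, weights, bias, 2, 2) > 0)  # -> True (target judged present)
```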
  • In the method for quickly detecting a moving target in video monitoring, the targets possibly present in s in Step 3 may also be detected by the following alternative method, in which C(x, y, k) is the current background image and the threshold F(k) takes values in the range 20-40 (the detailed steps are given in the description below);
  • In the method for quickly detecting a moving target in video monitoring, the tracking method in Step 4 is as follows:
  • If the mean square error obtained in the previous step does not satisfy the condition, the eight diamond positions around p (the four diagonally adjacent points, plus the positions one point apart up, down, left and right) are each taken as the centre of the rectangle, and the mean square error against the target in the tracking source frame is computed for each; if all eight values exceed the mean square error obtained in the previous step, no tracking result is considered to exist and the algorithm ends; otherwise, p is replaced by the point producing the minimum mean square error;
  • The frames within a certain range before the current frame in step 4) refer to the 25 frames preceding the current frame.
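The diamond-search tracking described above can be sketched as follows (a hedged Python illustration; the template/frame representation, bounds handling and the concrete threshold value are our assumptions):

```python
DIAMOND = [(-1, -1), (-1, 1), (1, -1), (1, 1),   # four diagonally adjacent points
           (-2, 0), (2, 0), (0, -2), (0, 2)]     # positions one point apart l/r/u/d

def mse(frame, tpl, px, py):
    """Mean square error between the template and the frame patch at (px, py)."""
    th, tw = len(tpl), len(tpl[0])
    total = 0
    for y in range(th):
        for x in range(tw):
            d = frame[py + y][px + x] - tpl[y][x]
            total += d * d
    return total / (th * tw)

def track(frame, tpl, start, thresh):
    px, py = start                      # step 1): start at the source position
    best = mse(frame, tpl, px, py)
    while best >= thresh:               # step 2): below threshold => found
        cands = []
        for dx, dy in DIAMOND:          # step 3): probe the diamond around p
            nx, ny = px + dx, py + dy
            if (0 <= ny and 0 <= nx and ny + len(tpl) <= len(frame)
                    and nx + len(tpl[0]) <= len(frame[0])):
                cands.append((mse(frame, tpl, nx, ny), nx, ny))
        if not cands or min(cands)[0] >= best:
            return None                 # every probe is worse: no tracking result
        best, px, py = min(cands)
    return (px, py)

frame = [[0, 0, 9, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]
print(track(frame, [[9]], (0, 0), 1))  # -> (2, 0)
```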
  • The method provided by the invention for quickly detecting a moving target in video monitoring can quickly detect moving targets in a video and performs calculation and detection of characteristic targets according to a pre-stored target data sample library, which well avoids the problems of missed and repeated target detection.
  • FIG. 1 is a flowchart of the method for quickly detecting a moving target according to Embodiment 1 of the present invention.
  • FIG. 2 is a picture before processing according to Embodiment 1 of the present invention.
  • FIG. 3 is a picture after background differencing according to Embodiment 1 of the present invention.
  • FIG. 4 is a picture after morphological operations according to Embodiment 1 of the present invention.
  • FIG. 5 is a picture of the final detection result described in Embodiment 1 of the present invention.
  • FIG. 1 is a flowchart of the method for quickly detecting a moving target according to this embodiment, which includes the following steps:
  • Step M1: for each frame image in the video sequence, all potential motion regions therein are detected, and the locations of these regions are obtained;
  • Step M1 further includes the following steps:
  • The picture being analysed is subtracted point by point from the background model; a point whose difference exceeds a certain constant (for example, 10) is considered a foreground point, otherwise a background point.
  • Step M12: according to the preliminary foreground/background judgment obtained in the previous step, the potential motion areas S_k in the current image are extracted.
  • The initial value of the set of targets in S_k is set to the empty set, O_k = {}; morphological opening and closing operations are then applied in sequence to filter noise and make the connected areas more regular.
  • The region growing method is used to obtain all connected areas of the foreground; each connected area is a potential motion area, and a merging judgment is made for each pair of connected areas: if the distance between the smallest containing rectangles of two connected areas does not exceed a certain threshold (for example, 3), the two connected areas are merged.
  • FIG. 4 shows the result after the morphological operations.
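The merging rule for connected areas described above can be sketched as follows (a hedged Python illustration; rectangles are (x0, y0, x1, y1) tuples, and the gap measure is one plausible reading of "distance between the smallest containing rectangles"):

```python
def rect_gap(a, b):
    """Gap between two (x0, y0, x1, y1) rectangles; 0 if they touch or overlap."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    dx = max(bx0 - ax1, ax0 - bx1, 0)
    dy = max(by0 - ay1, ay0 - by1, 0)
    return max(dx, dy)

def merge_regions(rects, thresh=3):
    """Repeatedly merge rectangles whose gap does not exceed the threshold."""
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if rect_gap(rects[i], rects[j]) <= thresh:
                    a, b = rects[i], rects[j]
                    rects[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects

print(merge_regions([(0, 0, 4, 4), (6, 0, 9, 4), (20, 20, 25, 25)]))
# -> [(0, 0, 9, 4), (20, 20, 25, 25)]
```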
  • Step M13: the background model is updated with the current image.
  • Step M2: target recognition is performed on each potential motion area output by step M12 to find targets in the true sense; step M2 further includes:
  • Step M21: the division of the area by multi-scale floating windows is determined, and the size of the floating window is set according to the actual possible size of the target. The window floats from the upper-left corner to the lower-right corner of the area, and each float displaces the window by half of its size (i.e. the horizontal span is s/2 and the vertical span is t/2). When the float of the first scale ends, the float of the second scale begins: the width and height of the floating window are both adjusted to a constant multiple (for example, 1.05) of the width and height of the floating window at the previous scale, the window then floats as at the first scale, and the next scale is entered after it ends, until the window size exceeds the maximum size of the target (for example, twice the width and height of the smallest size).
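The multi-scale floating-window division of step M21 might be sketched like this (hedged; the stopping rule via max_mult, the integer rounding and the minimum-stride guard are our assumptions):

```python
def windows(region_w, region_h, w0, h0, factor=1.05, max_mult=2.0):
    """Enumerate (x, y, w, h) floating windows over a region, scale by scale."""
    out = []
    w, h = float(w0), float(h0)
    # stop once the window exceeds the assumed maximum target size (max_mult * w0)
    while w <= max_mult * w0 and h <= max_mult * h0 and w <= region_w and h <= region_h:
        iw, ih = int(w), int(h)
        y = 0
        while y + ih <= region_h:
            x = 0
            while x + iw <= region_w:
                out.append((x, y, iw, ih))
                x += max(1, iw // 2)    # horizontal stride s/2
            y += max(1, ih // 2)        # vertical stride t/2
        w *= factor                     # next scale: grow by a constant multiple
        h *= factor
    return out

print(len(windows(8, 8, 4, 4, factor=10)))  # -> 9 (a 3x3 grid of 4x4 windows)
```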
  • Step M22: the HoG features are calculated for all points in the region; the HoG feature calculation can be performed directly with the corresponding OpenCV function.
  • Step M23: for each window position produced by the region division of M21, target recognition based on HoG features is performed in the window (person and vehicle recognition are performed separately); the result of the recognition is a yes/no judgment, for each window of the region division, of whether a target exists in that window.
  • Machine learning is used to obtain the decision mechanism (i.e. the recognition model, that is, the model data D3).
  • A target data sample library D2 is manually calibrated (there is one such data sample set each for pedestrians and for vehicles; only one of them is described here). The data sample library D2 consists of a certain quantity (for example, 1000) of pictures of the same size (for example, 32×64); each picture is labeled as to whether a specific target (i.e. pedestrian/vehicle) is present, and the HoG features of each pixel in each picture are acquired;
  • Step M4: machine learning is performed on the basis of D2 to obtain the recognition model data of the target. Since the size of the pictures in D2 is fixed, the number of bytes of the HoG features of each pixel in a picture is also fixed, so each picture can actually be regarded as a high-dimensional vector formed by arranging the HoG features of all its pixels in order. Whether the picture contains the specific target is then a (0, 1) classification, a very typical classification problem; the SVM algorithm (using the OpenCV function directly) is used to complete this learning process, thereby obtaining the recognition model data D3 of the target;
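The data preparation described in step M4, where each fixed-size picture becomes one high-dimensional feature vector with a 0/1 label, can be illustrated as follows (toy feature values; a linear classifier such as OpenCV's SVM would consume X and y):

```python
def to_vector(picture_features):
    """Flatten per-pixel feature lists into one vector per picture."""
    return [f for row in picture_features for px in row for f in px]

library = [
    # (per-pixel features of a 2x2 picture, label: 1 = target present, 0 = absent)
    ([[[0.9, 0.1], [0.8, 0.2]], [[0.7, 0.3], [0.9, 0.1]]], 1),
    ([[[0.1, 0.9], [0.2, 0.8]], [[0.1, 0.9], [0.2, 0.8]]], 0),
]
X = [to_vector(feats) for feats, label in library]
y = [label for feats, label in library]
print(len(X[0]), y)  # -> 8 [1, 0]  (fixed picture size => fixed vector length)
```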
  • Step M24: the recognition results are tidied up. Since the windows produced by the region division of M21 overlap considerably, a single target may be recognized repeatedly; it is therefore necessary to determine simply which of the targets recognized in M23 are duplicates. After tidying, the final result is all the targets detected in the area, and these targets are added to the set O_k (FIG. 5 is the final recognition result for FIG. 4; it can be seen that the two adhered targets in the motion area are separated);
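A hedged sketch of the duplicate removal in step M24 (the text does not specify the overlap test; an intersection-over-union threshold is assumed here):

```python
def iou(a, b):
    """Intersection-over-union of two (x0, y0, x1, y1) boxes."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    ix = max(0, min(ax1, bx1) - max(ax0, bx0))
    iy = max(0, min(ay1, by1) - max(ay0, by0))
    inter = ix * iy
    union = (ax1 - ax0) * (ay1 - ay0) + (bx1 - bx0) * (by1 - by0) - inter
    return inter / union if union else 0.0

def dedupe(boxes, thresh=0.5):
    """Keep a detection only if it does not heavily overlap an already-kept one."""
    kept = []
    for b in boxes:
        if all(iou(b, k) < thresh for k in kept):
            kept.append(b)
    return kept

print(dedupe([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]))
# -> [(0, 0, 10, 10), (20, 20, 30, 30)]  (the two overlapping windows collapse to one)
```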
  • Step M3: target tracking is performed.
  • Step M2 detects the targets in all potential motion regions of the current image and combines them to obtain all targets in the current image. These targets are not isolated: they persist for a period of time in the video sequence. Step M3 strings the targets of different images together to obtain the moving trajectory of each target;
  • Step M31: some obviously unqualified targets are screened out. This is a reserved step; preliminary screening may be needed for various reasons, for example targets at the boundary, or false positives due to insufficient recognition accuracy;
  • Step M32: forward tracking is performed on the targets in the set O_{k-1} preceding the current image I_k; the tracking method is as follows:
  • If the mean square error obtained in the previous step does not satisfy the condition, the eight diamond positions around p (the four diagonally adjacent points, plus the positions one point apart up, down, left and right) are each taken as the centre of the rectangle, and the mean square error against the target in the tracking source frame is computed for each; if all eight values exceed the mean square error obtained in the previous step, no tracking result is considered to exist and the algorithm ends; otherwise, p is replaced by the point producing the minimum mean square error;
  • Step M33: disappearance processing is applied to targets that have not appeared for a long time. Once disappearance processing is performed, the system forgets the target, and no future target will be associated with it again. In this implementation, targets that have not appeared in the most recent frames (for example, 600 frames) are simply eliminated.
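The disappearance handling of step M33 reduces to pruning targets whose last appearance is too old (a sketch; the dictionary bookkeeping is our assumption, while the 600-frame window is from the text):

```python
DISAPPEAR_AFTER = 600  # frames without appearing before a target is forgotten

def prune(targets, current_frame):
    """targets: hypothetical dict of target_id -> frame index of last appearance."""
    return {t: last for t, last in targets.items()
            if current_frame - last <= DISAPPEAR_AFTER}

print(prune({"car1": 100, "ped7": 950}, 1000))  # -> {'ped7': 950}
```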

Landscapes

  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Disclosed by the present invention is a rapid detection method for a moving target in video monitoring: a moving target is rapidly detected by establishing a background model for the video to be detected, detecting motion regions, detecting the targets to be detected and specifically tracking the targets. The method can rapidly detect a moving target in a video and can calculate and detect characteristic targets according to a pre-stored target data sample library, thereby avoiding missed detection and repeated detection of targets.

Description

Method for quickly detecting a moving target in video monitoring

Technical field

The invention belongs to the field of video detection, and particularly relates to a method for quickly detecting a moving target in video monitoring.

Background art
With the development of fully digital, networked video surveillance systems, the role of video surveillance has become increasingly evident; its high openness, integration and flexibility provide broader room for development for the entire security industry, and intelligent video surveillance, endowed with greater intelligence, initiative and effectiveness, has become the trend of the new generation of video surveillance.

The demand for intelligent video surveillance systems comes mainly from settings that are sensitive to security requirements, such as the military, public security, banking, roads and parking lots. When theft or an abnormality occurs, such a system can actively, promptly and accurately issue an alarm to security personnel, so that staff can fully utilize the video surveillance network to implement alarm linkage and emergency command and disposal, thereby preventing crime. It also reduces the investment in hiring a large number of surveillance personnel.
As an important component and foundation of intelligent monitoring technology, continuous optimization of the detection and tracking of targets (especially vehicle and pedestrian targets) is a necessary process for its continuous advancement. Target tracking and detection currently face the following difficulties. 1) Moving target monitoring is the basis of target tracking and directly affects the tracking effect, but the video acquisition process is easily disturbed by external factors such as camera shake, illumination changes, target shadow interference, target occlusion and same-color background interference, all of which bring considerable difficulty to target monitoring. For example, as the illumination angle changes, the target shadow is detected to varying degrees as target foreground during monitoring, which greatly affects the overall shape of the target; target occlusion makes it difficult to obtain the complete shape of the target and causes loss of relevant target information, which is unfavorable for subsequent tracking; illumination changes, camera shake and background interference can all be regarded as noise, which likewise affects the accuracy of target detection. 2) The occlusion problem is the key issue in current target tracking; most target tracking systems cannot effectively handle occlusion of the target by the background or mutual occlusion between targets. Target occlusion is a random, unpredictable problem during tracking; simply relying on background modeling for target detection or tracking is unreliable for such problems, and better target models or feature templates must be established and the problem solved by accurately matching the visible target parts with the models or features. Therefore, improving the efficiency and accuracy of target detection is a crucial part of intelligent video surveillance systems.
Target detection in surveillance video is mainly divided into two categories: (1) moving target detection; (2) target detection based on image recognition. Each has its own advantages and disadvantages. Moving target detection can quickly and effectively detect moving targets in surveillance video, but it cannot detect targets that are stationary in the scene and is also weak at separating adhered targets. Target detection based on image recognition detects all targets in the whole picture, whether moving or stationary; because it relies on feature determination, it achieves higher accuracy and detection rate and is less affected by adhesion, but these advantages generally require more runtime.
Summary of the invention

It is an object of the present invention to provide a method for the rapid detection of moving targets in video monitoring that overcomes the above deficiencies of the prior art.

The technical solution of the present invention is as follows:
A method for quickly detecting a moving target in video monitoring includes the following steps:

Step 1: let the video to be tested be V = {I_0, I_1, I_2, ..., I_k}, where I_k is the k-th frame image of video V; the zeroth frame of the video is set as the initial background model D_0, i.e. D_0 = I_0;

Step 2: the foreground is extracted by computing F_k = I_k - D_{k-1}, and the set S_k of potential motion areas is extracted on F_k; the initial value of the set of targets in S_k is set to the empty set, O_k = {}. Foreground extraction subtracts the background model D_{k-1} point by point from the image I_k currently being analysed; a point whose difference exceeds the constant 10 is taken as a foreground point, otherwise as a background point. After the point-by-point subtraction, morphological opening and closing operations are applied to filter noise and make the connected areas more regular; the region growing method is then used to obtain all connected areas of the foreground, and adjacent areas are merged to obtain the set S_k of potential motion areas;

Step 3: for each potential motion area s obtained in Step 2, the targets possibly present in s are detected; all detected targets are added to the set O_k;

Step 4: all targets in O_{k-1} are tracked in the k-th frame, and the resulting targets are likewise added to O_k; in addition, the last occurrence of every target lost within a short time is tracked forward into the current frame, and the resulting targets are also added to the current target set O_k;

Step 5: targets in the set O_k that have not appeared for a certain time are treated as disappeared and deleted from O_k;

Step 6: the background model is updated;

Step 7: Steps 2 to 6 above are repeated for the next frame image until the last frame of the video has been detected, giving the detection result.
Further, in the method for quickly detecting a moving target in video monitoring, the method for detecting the targets possibly present in s in Step 3 is a multi-scale, floating-window target recognition method based on HoG feature computation.

Furthermore, in the method for quickly detecting a moving target in video monitoring, target recognition based on HoG features also requires a decision mechanism. The mechanism first manually calibrates a target data sample library composed of a large number of pictures of the same size; each picture is labeled as to whether a specific detection target is present, and the HoG features of each pixel in each picture are acquired. The decision mechanism forms recognition model data for the detection target from these per-pixel HoG features. During detection, the window under test is first resized to the same size as the pictures in the decision mechanism, and the recognition model data is then used to determine whether the specific detection target is present in the window under test.
Further, in the method for quickly detecting a moving target in video monitoring, the method for detecting the targets possibly present in s in Step 3 may also be performed as follows:
1) The first frame image is taken as the initial background, i.e. C(x, y, 1) = T(x, y, k), where x, y are the coordinates of a pixel and k is the frame number;

2) For the current frame T(x, y, k), the target identification matrix D(x, y, k) is calculated by the formula:
[Formula image PCTCN2017097647-appb-000001]
where C(x, y, k) is the current background image, and F(k) takes values in the range 20-40;

3) The probability densities of the pixels A_{i,j,k} and A_{i+n,j+m,k+f} in the temporal and spatial neighbourhood of the pixel under test are counted, and the information difference and the value of the detection-point pixel are calculated, where the information difference M(x, y, k) is:
[Formula image PCTCN2017097647-appb-000002]
where g(A_{i,j,k}) and g(A_{i+n,j+m,k+f}) are the probability density functions of the pixels A_{i,j,k} and A_{i+n,j+m,k+f} in the time domain and the spatial domain respectively; the value of the pixel, Z(A_{i,j,k}), is: Z(A_{i,j,k}) = g(A_{i,j,k}) + M(x, y, k)/26, (k ≥ 2);

4) Regions where the pixel value Z(A_{i,j,k}) ≥ 0.02 are taken as foreground markers, denoted Q_q; regions where Z(A_{i,j,k}) ≤ 0.02 are taken as background markers, denoted Q_b;

5) The multi-scale morphological gradient image g of the current frame is calculated using the multi-scale morphology method;

6) The markers Q_q and Q_b above are used for optimization: g' = imimposemin(g, Q_q | Q_b), and the watershed algorithm is then used to obtain the target image: contour = watershed(g');

7) The next frame image is updated and a new background is established; steps 2)-6) are repeated until the last frame image, giving the detection result.
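The marker step 4) above can be sketched minimally in Python (the array representation is ours; note that the text assigns the boundary value 0.02 to both sets, and the tie is broken toward foreground here):

```python
def label_markers(Z, thresh=0.02):
    """1 marks a foreground point (Q_q, Z >= 0.02), 0 a background point (Q_b)."""
    return [[1 if z >= thresh else 0 for z in row] for row in Z]

print(label_markers([[0.05, 0.01], [0.02, 0.0]]))  # -> [[1, 0], [1, 0]]
```

In the full method these markers seed the watershed on the multi-scale morphological gradient image of step 5).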
Further, in the method for quickly detecting a moving target in video monitoring, the tracking method in Step 4 is as follows:

1) First, the position in the frame to be tracked that is identical to the target position in the tracking source frame is initialized as the initial search position p;

2) The rectangle in the frame to be tracked centred at the search position p, with the same size as the target in the tracking source frame, is taken as a potential tracking result, and the mean square error between it and the target in the tracking source is computed; if this value is less than the threshold 2, tracking is considered complete, the tracked target has been found, and the algorithm ends;

3) If the mean square error obtained in the previous step does not satisfy the condition, the eight diamond positions around p (the four diagonally adjacent points, plus the positions one point apart up, down, left and right) are each taken as the centre of the rectangle, and the mean square error against the target in the tracking source frame is computed for each; if all eight values exceed the mean square error obtained in the previous step, no tracking result is considered to exist and the algorithm ends; otherwise, p is replaced by the point producing the minimum mean square error;

4) The last occurrence of each target in the frames within a certain range before the current frame is tracked using the method of the steps above.

Further, in the method for quickly detecting a moving target in video monitoring, updating the background model in Step 6 is specifically as follows: for a point at position (x, y), if its brightness value in the current background model is d and its brightness value in the current image is p, then after the current image the value of the background model is updated to D_k = (1 - α) × d + α × p, where α takes the value 0.1 if the point at (x, y) was judged a background point in Step 2, and 0.01 if it was judged a foreground point.
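The update rule of Step 6 is straightforward to express in code (a sketch; the function name and the scalar per-pixel form are ours, while the rule and both α values are from the text):

```python
def update_background(d, p, is_foreground):
    """D_k = (1 - a) * d + a * p per pixel; a depends on the Step 2 judgment."""
    a = 0.01 if is_foreground else 0.1  # foreground points leak in only slowly
    return (1 - a) * d + a * p

print(update_background(100.0, 200.0, False))  # background point -> 110.0
print(update_background(100.0, 200.0, True))   # foreground point -> 101.0
```

The small α for foreground points keeps a passing target from being absorbed into the background too quickly.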
Furthermore, in the method for quickly detecting a moving target in video monitoring, the frames within a certain range before the current frame in step 4) refer to the 25 frames preceding the current frame.

The method provided by the invention for quickly detecting a moving target in video monitoring can quickly detect moving targets in a video and performs calculation and detection of characteristic targets according to a pre-stored target data sample library, which well avoids the problems of missed and repeated target detection.
Brief description of the drawings

FIG. 1 is a flowchart of the method for quickly detecting a moving target described in Embodiment 1 of the present invention;

FIG. 2 is a picture before processing described in Embodiment 1 of the present invention;

FIG. 3 is a picture after background differencing described in Embodiment 1 of the present invention;

FIG. 4 is a picture after morphological operations described in Embodiment 1 of the present invention;

FIG. 5 is a picture of the final detection result described in Embodiment 1 of the present invention.
DETAILED DESCRIPTION
Embodiment 1
To make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to specific embodiments and the accompanying drawings.
FIG. 1 is a flowchart of the method for rapid detection of moving targets provided by this embodiment, which includes the following steps:
Step M1: for each frame image in the video sequence, detect all the potential motion regions therein and obtain the locations of these regions;
In this embodiment, we select a high-definition video clip captured by a surveillance camera at a resolution of 1920×1080; the scene is road traffic with green belts on both sides. The brightness values of the pixels of the pictures in the video sequence lie between 0 and 255. The video is denoted V = {I0, I1, I2, …, Ik}, where Ik is the k-th frame image in video V; all our subsequent analysis is based on brightness values. FIG. 2 is a picture taken from this video.
The background model Dk is data used in step M1; it represents a prediction of the background of the current scene. Initially, Dk is set equal to the first picture in the video sequence, i.e., D0 = I0; during subsequent execution the background model is updated in real time;
Step M1 further includes the following steps:
Step M11: subtract the background model from the picture currently being analyzed point by point to extract the foreground, i.e., Fk = Ik - Dk-1, here with k = 1 (FIG. 3 shows the result of subtracting the background model from the picture in FIG. 2). Points whose difference exceeds a constant (e.g., 10) are considered foreground points; otherwise they are considered background points. Through this step we obtain a preliminary foreground/background judgment for every point in the current image;
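The point-by-point differencing of step M11 can be sketched as follows (a minimal pure-Python illustration operating on 2-D lists of brightness values; the function name and the use of the absolute difference are our assumptions, not the patent's implementation):

```python
def extract_foreground(frame, background, thresh=10):
    """Point-by-point difference F_k = I_k - D_{k-1} (step M11): a pixel is
    marked as a foreground point when its difference from the background
    model exceeds the constant threshold (e.g., 10)."""
    h, w = len(frame), len(frame[0])
    return [[abs(frame[y][x] - background[y][x]) > thresh for x in range(w)]
            for y in range(h)]
```

A production implementation would operate on whole images at once, for example with OpenCV's `absdiff` and `threshold`.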
Step M12: based on the preliminary foreground/background judgment obtained in the previous step, extract the potential motion regions Sk in the current image. The initial value Ok = {} of the set of targets in Sk is set to the empty set; then morphological opening and closing operations are applied in turn to filter noise and make the connected regions more regular. On this basis, a region growing method is used to obtain all connected regions belonging to the foreground; each connected region is a potential motion region. These connected regions then undergo one pass of neighbouring-region merging: if the distance between the minimum bounding rectangles of two connected regions does not exceed a threshold (e.g., 3), the two regions are merged. FIG. 4 shows the result of applying the morphological operations to FIG. 3; after neighbouring-region merging, all the highlighted regions form a single potential motion region. (The need for merging neighbouring regions is also visible in the figure: the target car is split into several adjacent connected regions, and without merging the car might be divided into several small parts);
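The neighbouring-region merging at the end of step M12 can be sketched as follows (a pure-Python sketch on minimum bounding rectangles (x0, y0, x1, y1); the patent does not specify how the distance between rectangles is measured, so the axis-aligned gap used here is an assumption):

```python
def rect_gap(a, b):
    """Axis-aligned gap between two rectangles (x0, y0, x1, y1);
    0 if they touch or overlap (an assumed distance measure)."""
    dx = max(a[0] - b[2], b[0] - a[2], 0)
    dy = max(a[1] - b[3], b[1] - a[3], 0)
    return max(dx, dy)

def merge_regions(rects, max_gap=3):
    """Repeatedly merge rectangles whose gap is within max_gap (step M12)."""
    rects = list(rects)
    merged = True
    while merged:
        merged = False
        for i in range(len(rects)):
            for j in range(i + 1, len(rects)):
                if rect_gap(rects[i], rects[j]) <= max_gap:
                    a, b = rects[i], rects[j]
                    # Replace the pair with their common bounding rectangle.
                    rects[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del rects[j]
                    merged = True
                    break
            if merged:
                break
    return rects
```

For example, `merge_regions([(0, 0, 10, 10), (12, 0, 20, 10)])` fuses the two boxes, because their horizontal gap of 2 is within the threshold 3.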
Step M13: update the background model with the current image. For a point at position (x, y), assume its value in the current background model is d and its value in the current image is p; then after the current image the value of the background model is updated to D1 = (1-α)×d+α×p. If the point at this position was judged to be background in M11, a larger α value (e.g., 0.1) is used; if it was judged to be foreground, a smaller α value (e.g., 0.01) is used;
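The update rule of step M13 can be sketched per pixel as follows (a minimal pure-Python illustration; the function name and scalar-per-pixel form are ours, not the patent's):

```python
def update_background(d, p, is_background, alpha_bg=0.1, alpha_fg=0.01):
    """Blend the current pixel value p into the background model value d:
    D_new = (1 - alpha) * d + alpha * p (step M13). Points judged as
    background adapt quickly (alpha_bg); points judged as foreground adapt
    slowly (alpha_fg), so a briefly stopped object does not bleed into the
    background immediately."""
    alpha = alpha_bg if is_background else alpha_fg
    return (1 - alpha) * d + alpha * p
```

For example, a background pixel with model value 100 observed at 200 moves to 110, while the same observation at a foreground pixel only moves the model to 101.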
Step M2: perform target recognition on every potential motion region output by step M12, in order to find the targets in the true sense within it. Step M2 further includes:
Step M21: determine the division of the region by multi-scale floating windows, where the size of the floating window is set according to the actual possible size of the target. For a particular kind of target, assume the minimum size of that target in the video image is s × t (for pedestrian targets this is, e.g., s = 8, t = 16; for vehicle targets, e.g., s = 24, t = 24). On the first scale the window size is defined as s × t. The window first floats at the upper-left corner of the region and then moves from left to right and from top to bottom until the lower-right corner of the region, each displacement being half the window size (i.e., a stride of s/2 from left to right and t/2 from top to bottom). When this process finishes, the floating on the first scale ends and the floating on the second scale begins: the width and height of the floating window are both multiplied by a constant greater than 1 (e.g., 1.05) relative to the previous scale, the window floats as on the first scale, and so on for further scales until the window size exceeds the maximum size of the target (e.g., twice the minimum width and height). All the stopping positions of the floating windows on all scales form the division of the region; in the following steps we determine whether the specific target exists at each of these positions. M21 therefore produces this division of the region (the region is divided into many window positions, which may overlap each other);
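The enumeration of window positions in step M21 can be sketched as follows (a pure-Python illustration; the function name and the integer rounding of scaled window sizes are our assumptions):

```python
def window_positions(region_w, region_h, s, t, step_frac=0.5,
                     scale=1.05, max_factor=2.0):
    """Enumerate the multi-scale floating-window positions of step M21.

    Starts at window size s x t, slides left-to-right / top-to-bottom with
    a half-window stride, then grows width and height by `scale` per level
    until the window exceeds max_factor times the minimum size."""
    positions = []
    w, h = float(s), float(t)
    while w <= s * max_factor and h <= t * max_factor:
        wi, hi = int(w), int(h)
        y = 0
        while y + hi <= region_h:
            x = 0
            while x + wi <= region_w:
                positions.append((x, y, wi, hi))
                x += max(1, int(wi * step_frac))
            y += max(1, int(hi * step_frac))
        w *= scale
        h *= scale
    return positions
```

Note that consecutive scales may round to the same integer window size; a production version would typically de-duplicate such levels.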
Step M22: compute the HoG features for all points in the region; this computation can be done directly with the corresponding OpenCV function;
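To illustrate the idea behind the HoG feature, the following is a much-simplified sketch of a single orientation-histogram cell (assumptions: one cell only, unsigned orientations in 0-180 degrees; real HoG, e.g. OpenCV's `HOGDescriptor`, adds cell/block structure, vote interpolation and block normalization):

```python
import math

def cell_orientation_histogram(patch, bins=9):
    """Accumulate gradient magnitude into `bins` unsigned-orientation bins
    (0..180 degrees) over the interior pixels of a small patch, using
    central differences for the gradient."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / (180.0 / bins)), bins - 1)] += mag
    return hist
```

A purely vertical edge, for instance, puts all of its gradient energy into the first (near-0-degree) bin.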
Step M23: for each window position produced by the division in M21, perform target recognition based on HoG features within the window (pedestrian and vehicle recognition are performed separately). As the result of recognition, each window of the division receives a yes/no judgment on whether a target exists in that window.
Target recognition based on HoG features requires the decision mechanism (i.e., the recognition model, namely the model data D3) obtained through machine learning in step M4.
For the machine learning in step M4, a target data sample library D2 is first manually annotated (there is one such sample set for pedestrians and one for vehicles; only one of them is described here). The sample library D2 consists of a certain number (e.g., 1000) of pictures of the same size (e.g., 32 × 64); each picture is labelled as containing or not containing the specific target (i.e., pedestrian/vehicle), and the HoG features of every pixel in each picture are extracted (feature acquisition);
Step M4: perform machine learning based on D2 to obtain the recognition model data for the target. Since the pictures in D2 have a fixed size, the number of bytes of the HoG feature of each pixel is also fixed; each picture can therefore be regarded as a high-dimensional vector formed by arranging the HoG features of all its pixels in order, and whether the picture contains the specific target (0/1) is the class label. This is a very typical classification problem, and we use the SVM algorithm to complete the learning process (the SVM algorithm is invoked directly through the functions in OpenCV), thereby obtaining the recognition model data D3 for the target;
In M23, we first resize the window to the size of the pictures in D2, and then use the SVM algorithm with the model data D3 to complete the recognition and judge whether the specific target exists in the window;
Step M24: consolidate the recognition results. Since the windows produced by the division in M21 overlap to a large extent, a single target may be recognized repeatedly; it is therefore necessary to determine which of the targets recognized in M23 are duplicates. After consolidation we obtain all the targets finally detected in the region and add them to the set Ok (FIG. 5 shows the final recognition result for FIG. 4; it can be seen that the two targets stuck together in the motion region have been separated);
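Step M24 only states that duplicates are identified "simply"; one common way to do this, shown here as an assumed sketch rather than the patent's actual rule, is to suppress boxes that overlap an already-kept box too strongly:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x0, y0, x1, y1)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def dedup_detections(boxes, overlap=0.5):
    """Keep one box per group of heavily overlapping detections (step M24):
    a box is discarded when it overlaps an already-kept box by >= overlap."""
    kept = []
    for b in boxes:
        if all(iou(b, k) < overlap for k in kept):
            kept.append(b)
    return kept
```

A production detector would usually sort the boxes by classifier score before suppression; scores are omitted here for brevity.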
For the detection of targets in the true sense in the potential motion region Sk, we also developed the following detection procedure in the course of our research:
1) Take the first frame image as the initial background, i.e., C(x, y, 1) = T(x, y, k) with k = 1, where x and y are the coordinates of a pixel and k is the frame number;
2) For the current frame T(x, y, k), compute the target identification matrix D(x, y, k):

D(x, y, k) = 1 if |T(x, y, k) - C(x, y, k)| ≥ F(k), and D(x, y, k) = 0 otherwise,

where C(x, y, k) is the current background image and the threshold F(k) takes values in the range 20-40;
3) For each pixel Ai,j,k and its spatiotemporal neighbours Ai+n,j+m,k+f, estimate the probability densities in the temporal and spatial neighbourhoods of the pixel under test, and compute the information difference and the value of the detection pixel. The information difference M(x, y, k) is

M(x, y, k) = Σ |g(Ai,j,k) - g(Ai+n,j+m,k+f)|, summed over the 26 spatiotemporal neighbours (n, m, f) ≠ (0, 0, 0),

where g(Ai,j,k) and g(Ai+n,j+m,k+f) are the probability density functions of pixels Ai,j,k and Ai+n,j+m,k+f in the temporal and spatial domains. The value of the pixel Z(Ai,j,k) is: Z(Ai,j,k) = g(Ai,j,k) + M(x, y, k)/26, (k ≥ 2);
4) Take the regions where the pixel value Z(Ai,j,k) ≥ 0.02 as foreground marks, labelled Qq; take the regions where Z(Ai,j,k) < 0.02 as background marks, labelled Qb;
5) Compute the multi-scale morphological gradient image g of the current frame using a multi-scale morphological method;
6) Optimize using the marks Qq and Qb above: g' = imimposemin(g, Qq | Qb), then obtain the target image using the watershed algorithm: contour = watershed(g');
7) Update to the next frame image, establish a new background according to step M13 above, and repeat steps 2)-6) until the last frame image.
Through the above detection steps, the same specific detection targets, vehicles and pedestrians, are finally obtained as in step M24.
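In the published text, the formula for the target identification matrix of step 2) appears only as an image placeholder; a thresholded background difference consistent with the surrounding description (C is the current background image, F(k) a threshold in the range 20-40) can be sketched as a hedged reconstruction:

```python
def target_matrix(frame, background, f_k=30):
    """Binary target identification matrix D(x, y, k): 1 where the current
    frame differs from the background C by at least the threshold F(k).
    (A reconstruction; the patent's exact formula is only published as an
    image, so the absolute-difference form here is an assumption.)"""
    h, w = len(frame), len(frame[0])
    return [[1 if abs(frame[y][x] - background[y][x]) >= f_k else 0
             for x in range(w)] for y in range(h)]
```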
Step M3: perform target tracking. Step M2 detects targets in all potential motion regions of the current image; combining them yields all the targets in the current image. These targets do not exist in isolation: each persists for some time in the video sequence. In step M3 we link the targets across different images to obtain the moving trajectory of each target;
Step M31: filter out targets that clearly do not meet the requirements. This is a reserved step; for various reasons a preliminary screening may be needed, for example for boundary conditions or for false positives caused by insufficient recognition accuracy;
Step M32: for the targets in the set Ok-1 preceding the current image Ik, perform forward tracking in the current frame k. The tracking method is as follows:
1) First initialize the position in the frame to be tracked that is identical to the target position in the tracking source frame as the initial search position p;
2) Take the rectangular box in the frame to be tracked that is centred at the search position p and equal in size to the target in the tracking source frame as a potential tracking result, and compute its mean squared error against the target in the tracking source. If this value is smaller than the threshold, tracking is considered complete, the tracked target has been found, and the algorithm ends;
3) If the mean squared error obtained in the previous step does not satisfy the condition, take the eight diamond positions around p (the four diagonal neighbours, plus the positions up, down, left and right separated from p by one point) as rectangle centres and compute for each the mean squared error against the target in the tracking source frame. If all eight values exceed the mean squared error obtained in the previous step, no tracking result is considered to exist and the algorithm ends; otherwise, replace p with the point producing the minimum mean squared error;
4) For the last occurrence of each target in the frames within a certain range (e.g., 25 frames) before the current frame, perform tracking by the method in the above steps to see whether it appears. If it does, judge whether it is among the detection results of M24: if so, link the two by assigning them the same ID; if not, add it to the current target set Ok as a newly discovered target;
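Steps 1)-3) above describe a diamond search; a pure-Python sketch under our reading of the text (mean squared error as the match cost, and the eight large-diamond candidate offsets) is:

```python
def mse(img, x, y, template):
    """Mean squared error between template and the same-size patch whose
    top-left corner is at (x, y) in img."""
    th, tw = len(template), len(template[0])
    total = 0
    for j in range(th):
        for i in range(tw):
            total += (img[y + j][x + i] - template[j][i]) ** 2
    return total / (th * tw)

# Large-diamond offsets: the four diagonal neighbours plus the points two
# pixels away up/down/left/right, as described in step 3).
DIAMOND = [(-1, -1), (1, -1), (-1, 1), (1, 1), (0, -2), (0, 2), (-2, 0), (2, 0)]

def track(img, template, x0, y0, accept_thresh):
    """Diamond search from the previous target position (steps 1-3).

    Returns the matched top-left corner, or None when no diamond neighbour
    improves on the current error and the acceptance threshold is unmet."""
    th, tw = len(template), len(template[0])
    x, y = x0, y0
    while True:
        best = mse(img, x, y, template)
        if best < accept_thresh:
            return (x, y)
        cx, cy, moved = x, y, False
        for dx, dy in DIAMOND:
            nx, ny = cx + dx, cy + dy
            if 0 <= nx and 0 <= ny and nx + tw <= len(img[0]) and ny + th <= len(img):
                e = mse(img, nx, ny, template)
                if e < best:
                    best, x, y, moved = e, nx, ny, True
        if not moved:
            return None
```

The error strictly decreases on every move, so the search always terminates.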
Step M33: we need to mark as vanished targets that have not appeared for a long time. Once a target is marked as vanished, the system forgets it, and no target appearing later will be associated with it. In our implementation, we simply mark as vanished all targets that have not appeared in the most recent frames (e.g., 600 frames).
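The vanishing rule of step M33 can be sketched as follows (with targets represented as a hypothetical mapping from target ID to the frame of last appearance; the patent does not prescribe a data structure):

```python
def prune_lost_targets(targets, current_frame, max_absent=600):
    """Forget targets whose last appearance is more than max_absent frames
    old (step M33). `targets` maps target id -> last frame seen."""
    return {tid: last for tid, last in targets.items()
            if current_frame - last <= max_absent}
```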
Thus, the rapid detection of moving targets in surveillance video is completed.
The specific embodiments described above further explain the objectives, technical solutions and beneficial effects of the present invention in detail. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (7)

1. A method for rapid detection of moving targets in video monitoring, characterized by comprising the following steps:
Step 1: for the video to be tested V = {I0, I1, I2, …, Ik}, where Ik is the k-th frame image in video V, set the zeroth frame image of the video as the initial background model D0, i.e., D0 = I0;
Step 2: extract the foreground by computing Fk = Ik - Dk-1, extract the set Sk of potential motion regions on Fk, and set the initial value Ok = {} of the set of targets in Sk to the empty set. Extracting the foreground means subtracting the background model Dk-1 from the image Ik currently being analyzed point by point; points whose difference exceeds the constant 10 are taken as foreground points, and otherwise as background points. After the point-by-point subtraction, morphological opening and closing operations are applied to filter noise and make the connected regions more regular; a region growing method is then used to obtain all connected regions belonging to the foreground, and neighbouring regions are merged, yielding the set Sk of potential motion regions;
Step 3: for each potential motion region s in Sk of step 2, detect the targets to be detected that may exist in s, and add all detected targets to the set Ok;
Step 4: track all targets in Ok-1 in the k-th frame, the resulting targets being likewise added to Ok; further, continue tracking forward in the current frame the last occurrence of every target lost within a short time, the resulting targets being likewise added to the current target set Ok;
Step 5: mark as vanished any target that does not appear in the set Ok for a certain time, and delete that target from the set Ok;
Step 6: update the background model;
Step 7: for the next frame image, repeat steps 2 to 6 above until the last frame image of the video has been detected, obtaining the detection result.
2. The method for rapid detection of moving targets in video monitoring according to claim 1, characterized in that the method of detecting the targets to be detected that may exist in s in step 3 is a target recognition method based on multi-scale floating windows and HoG feature computation.
3. The method for rapid detection of moving targets in video monitoring according to claim 2, characterized in that target recognition based on HoG feature computation further requires a decision mechanism, set up as follows: a target data sample library is first manually annotated; the sample library consists of a large number of pictures of the same size, each labelled as to whether it contains the specific detection target, and the HoG features of every pixel in each picture are extracted; the decision mechanism forms recognition model data for the detection target from the HoG features of every pixel in each picture. In the specific detection process, the window under test is first resized to the same size as the pictures in the decision mechanism, and the recognition model data is then used to judge whether the specific detection target exists in the window under test.
4. The method for rapid detection of moving targets in video monitoring according to claim 1, characterized in that the method of detecting the targets to be detected that may exist in s in step 3 can also be performed as follows:
1) Take the first frame image as the initial background, i.e., C(x, y, 1) = T(x, y, k) with k = 1, where x and y are the coordinates of a pixel and k is the frame number;
2) For the current frame T(x, y, k), compute the target identification matrix D(x, y, k):

D(x, y, k) = 1 if |T(x, y, k) - C(x, y, k)| ≥ F(k), and D(x, y, k) = 0 otherwise,

where C(x, y, k) is the current background image and the threshold F(k) takes values in the range 20-40;
3) For each pixel Ai,j,k and its spatiotemporal neighbours Ai+n,j+m,k+f, estimate the probability densities in the temporal and spatial neighbourhoods of the pixel under test, and compute the information difference and the value of the detection pixel. The information difference M(x, y, k) is

M(x, y, k) = Σ |g(Ai,j,k) - g(Ai+n,j+m,k+f)|, summed over the 26 spatiotemporal neighbours (n, m, f) ≠ (0, 0, 0),

where g(Ai,j,k) and g(Ai+n,j+m,k+f) are the probability density functions of pixels Ai,j,k and Ai+n,j+m,k+f in the temporal and spatial domains. The value of the pixel Z(Ai,j,k) is: Z(Ai,j,k) = g(Ai,j,k) + M(x, y, k)/26, (k ≥ 2);
4) Take the regions where the pixel value Z(Ai,j,k) ≥ 0.02 as foreground marks, labelled Qq; take the regions where Z(Ai,j,k) < 0.02 as background marks, labelled Qb;
5) Compute the multi-scale morphological gradient image g of the current frame using a multi-scale morphological method;
6) Optimize using the marks Qq and Qb above: g' = imimposemin(g, Qq | Qb), then obtain the target image using the watershed algorithm: contour = watershed(g');
7) Update to the next frame image and establish a new background; repeat steps 2)-6) until the last frame image, obtaining the detection result.
5. The method for rapid detection of moving targets in video monitoring according to claim 1, characterized in that the tracking method in step 4 is as follows:
1) First initialize the position in the frame to be tracked that is identical to the target position in the tracking source frame as the initial search position p;
2) Take the rectangular box in the frame to be tracked that is centred at the search position p and equal in size to the target in the tracking source frame as a potential tracking result, and compute its mean squared error against the target in the tracking source. If this value is smaller than the threshold, tracking is considered complete, the tracked target has been found, and the algorithm ends;
3) If the mean squared error obtained in the previous step does not satisfy the condition, take the eight diamond positions around p (the four diagonal neighbours, plus the positions up, down, left and right separated from p by one point) as rectangle centres and compute for each the mean squared error against the target in the tracking source frame. If all eight values exceed the mean squared error obtained in the previous step, no tracking result is considered to exist and the algorithm ends; otherwise, replace p with the point producing the minimum mean squared error;
4) For the last occurrence of each target in the frames within a certain range before the current frame, tracking is performed by the method in the above steps.
6. The method for rapid detection of moving targets in video monitoring according to claim 1, characterized in that updating the background model in step 6 is specifically: for a point at position (x, y), if its brightness value in the current background model is d and its brightness value in the current image is p, then after the current image the value of the background model is updated to Dk = (1-α)×d+α×p, where α takes the value 0.1 if the point at (x, y) was judged a background point in step 2, and 0.01 if it was judged a foreground point.
7. The method for rapid detection of moving targets in video monitoring according to claim 5, characterized in that the frames within a certain range before the current frame in step 4) refer to the 25 frames preceding the current frame.
PCT/CN2017/097647 2016-11-28 2017-08-16 Rapid detection method for moving target in video monitoring WO2018095082A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611069001.7 2016-11-28
CN201611069001.7A CN106910203B (en) 2016-11-28 2016-11-28 The quick determination method of moving target in a kind of video surveillance

Publications (1)

Publication Number Publication Date
WO2018095082A1 true WO2018095082A1 (en) 2018-05-31

Family

ID=59206767

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/097647 WO2018095082A1 (en) 2016-11-28 2017-08-16 Rapid detection method for moving target in video monitoring

Country Status (2)

Country Link
CN (1) CN106910203B (en)
WO (1) WO2018095082A1 (en)

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658440A (en) * 2018-11-30 2019-04-19 华南理工大学 A kind of method for tracking target based on target significant characteristics
CN109816650A (en) * 2019-01-24 2019-05-28 强联智创(北京)科技有限公司 A kind of target area recognition methods and its system based on two-dimentional DSA image
CN110288051A (en) * 2019-07-03 2019-09-27 电子科技大学 A kind of polyphaser multiple target matching process based on distance
CN110427844A (en) * 2019-07-19 2019-11-08 宁波工程学院 A kind of abnormal behavior video detecting method based on convolutional neural networks
CN110738686A (en) * 2019-10-12 2020-01-31 四川航天神坤科技有限公司 Static and dynamic combined video man-vehicle detection method and system
CN110879951A (en) * 2018-09-06 2020-03-13 华为技术有限公司 Motion foreground detection method and device
CN111047625A (en) * 2020-02-18 2020-04-21 神思电子技术股份有限公司 Semi-automatic dish video sample marking method
CN111161304A (en) * 2019-12-16 2020-05-15 北京空间机电研究所 Remote sensing video target track tracking method for rapid background estimation
CN111259907A (en) * 2020-03-12 2020-06-09 Oppo广东移动通信有限公司 Content identification method and device and electronic equipment
CN111310689A (en) * 2020-02-25 2020-06-19 陕西科技大学 Method for recognizing human body behaviors in potential information fusion home security system
CN111369591A (en) * 2020-03-05 2020-07-03 杭州晨鹰军泰科技有限公司 Method, device and equipment for tracking moving object
CN111553255A (en) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 High-altitude parabolic wall monitoring area positioning method based on gradient algorithm
CN111639578A (en) * 2020-05-25 2020-09-08 上海中通吉网络技术有限公司 Method, device, equipment and storage medium for intelligently identifying illegal parabola
CN111723654A (en) * 2020-05-12 2020-09-29 中国电子***技术有限公司 High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN111738108A (en) * 2020-06-08 2020-10-02 中国电信集团工会上海市委员会 Method and system for detecting head of video stream
CN111753609A (en) * 2019-08-02 2020-10-09 杭州海康威视数字技术股份有限公司 Target identification method and device and camera

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910203B (en) * 2016-11-28 2018-02-13 江苏东大金智信息***有限公司 Rapid detection method for moving target in video surveillance
CN107578424B (en) * 2017-08-04 2020-09-29 中山大学 Dynamic background difference detection method, system and device based on space-time classification
EP3796189A4 (en) 2018-05-18 2022-03-02 Cambricon Technologies Corporation Limited Video retrieval method, and method and apparatus for generating video retrieval mapping relationship
CN112597341A (en) * 2018-05-25 2021-04-02 中科寒武纪科技股份有限公司 Video retrieval method and video retrieval mapping relation generation method and device
CN109188390B (en) * 2018-08-14 2023-05-23 苏州大学张家港工业技术研究院 High-precision detection and tracking method for moving target
CN109446901B (en) * 2018-09-21 2020-10-27 北京晶品特装科技有限责任公司 Embedded portable real-time automatic humanoid target recognition algorithm
CN109862263B (en) * 2019-01-25 2021-07-30 桂林长海发展有限责任公司 Moving target automatic tracking method based on image multi-dimensional feature recognition
CN113255737B (en) * 2021-04-30 2023-08-08 超节点创新科技(深圳)有限公司 Method for sorting baggage in folded package on civil aviation sorting line, electronic equipment and storage medium
CN113554683A (en) * 2021-09-22 2021-10-26 成都考拉悠然科技有限公司 Feature tracking method based on video analysis and object detection
CN115273138B (en) * 2022-06-29 2023-04-11 珠海视熙科技有限公司 Human body detection system and passenger flow camera
CN115170535A (en) * 2022-07-20 2022-10-11 水电水利规划设计总院有限公司 Hydroelectric engineering fishway fish passing counting method and system based on image recognition
CN117292321A (en) * 2023-09-27 2023-12-26 深圳市正通荣耀通信科技有限公司 Motion detection method and device based on video monitoring and computer equipment
CN117537929A (en) * 2023-10-27 2024-02-09 大湾区大学(筹) Unmanned aerial vehicle detection method, system, equipment and medium based on infrared thermal imaging

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090309966A1 (en) * 2008-06-16 2009-12-17 Chao-Ho Chen Method of detecting moving objects
CN101739550A (en) * 2009-02-11 2010-06-16 北京智安邦科技有限公司 Method and system for detecting moving objects
CN101739686A (en) * 2009-02-11 2010-06-16 北京智安邦科技有限公司 Moving object tracking method and system thereof
CN101854467A (en) * 2010-05-24 2010-10-06 北京航空航天大学 Method for adaptively detecting and eliminating shadow in video segmentation
CN106910203A (en) * 2016-11-28 2017-06-30 江苏东大金智信息***有限公司 Rapid detection method for moving target in video surveillance

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100495438C (en) * 2007-02-09 2009-06-03 南京大学 Method for detecting and identifying moving target based on video monitoring
CN101739551B (en) * 2009-02-11 2012-04-18 北京智安邦科技有限公司 Method and system for identifying moving objects
CN101916447B (en) * 2010-07-29 2012-08-15 江苏大学 Robust moving target detection and tracking image processing system

Cited By (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110879951A (en) * 2018-09-06 2020-03-13 华为技术有限公司 Motion foreground detection method and device
CN110879951B (en) * 2018-09-06 2022-10-25 华为技术有限公司 Motion foreground detection method and device
CN109658440A (en) * 2018-11-30 2019-04-19 华南理工大学 Target tracking method based on salient target features
CN109816650B (en) * 2019-01-24 2022-11-25 强联智创(北京)科技有限公司 Target area identification method and system based on two-dimensional DSA image
CN109816650A (en) * 2019-01-24 2019-05-28 强联智创(北京)科技有限公司 Target area recognition method and system based on two-dimensional DSA image
CN111833375A (en) * 2019-04-23 2020-10-27 舟山诚创电子科技有限责任公司 Method and system for tracking animal group track
CN111833375B (en) * 2019-04-23 2024-04-05 舟山诚创电子科技有限责任公司 Method and system for tracking animal group track
US20220415054A1 (en) * 2019-06-24 2022-12-29 Nec Corporation Learning device, traffic event prediction system, and learning method
CN110288051A (en) * 2019-07-03 2019-09-27 电子科技大学 Multi-camera multi-target matching method based on distance
CN110288051B (en) * 2019-07-03 2022-04-22 电子科技大学 Multi-camera multi-target matching method based on distance
CN110427844A (en) * 2019-07-19 2019-11-08 宁波工程学院 Abnormal behavior video detection method based on convolutional neural networks
CN110427844B (en) * 2019-07-19 2022-11-22 宁波工程学院 Behavior anomaly video detection method based on convolutional neural network
CN111753609A (en) * 2019-08-02 2020-10-09 杭州海康威视数字技术股份有限公司 Target identification method and device and camera
CN111753609B (en) * 2019-08-02 2023-12-26 杭州海康威视数字技术股份有限公司 Target identification method and device and camera
CN112419362A (en) * 2019-08-21 2021-02-26 中国人民解放***箭军工程大学 Moving target tracking method based on prior information feature learning
CN112419362B (en) * 2019-08-21 2023-07-07 中国人民解放***箭军工程大学 Moving target tracking method based on priori information feature learning
CN110738686B (en) * 2019-10-12 2022-12-02 四川航天神坤科技有限公司 Static and dynamic combined video man-vehicle detection method and system
CN110738686A (en) * 2019-10-12 2020-01-31 四川航天神坤科技有限公司 Static and dynamic combined video man-vehicle detection method and system
CN112784651A (en) * 2019-11-11 2021-05-11 北京君正集成电路股份有限公司 System for realizing efficient target detection
CN111161304A (en) * 2019-12-16 2020-05-15 北京空间机电研究所 Remote sensing video target track tracking method for rapid background estimation
CN111191576B (en) * 2019-12-27 2023-04-25 长安大学 Personnel behavior target detection model construction method, intelligent analysis method and system
CN113111681B (en) * 2020-01-09 2024-05-03 北京君正集成电路股份有限公司 Method for reducing false alarm of detection of upper body of humanoid form
CN113111681A (en) * 2020-01-09 2021-07-13 北京君正集成电路股份有限公司 Method for reducing detection false alarm of human-shaped upper body
CN111047625B (en) * 2020-02-18 2023-04-07 神思电子技术股份有限公司 Semi-automatic dish video sample marking method
CN111047625A (en) * 2020-02-18 2020-04-21 神思电子技术股份有限公司 Semi-automatic dish video sample marking method
CN111310689A (en) * 2020-02-25 2020-06-19 陕西科技大学 Method for recognizing human body behaviors in potential information fusion home security system
CN111310689B (en) * 2020-02-25 2023-04-07 陕西科技大学 Method for recognizing human body behaviors in potential information fusion home security system
CN111369591A (en) * 2020-03-05 2020-07-03 杭州晨鹰军泰科技有限公司 Method, device and equipment for tracking moving object
CN111259907B (en) * 2020-03-12 2024-03-12 Oppo广东移动通信有限公司 Content identification method and device and electronic equipment
CN111259907A (en) * 2020-03-12 2020-06-09 Oppo广东移动通信有限公司 Content identification method and device and electronic equipment
CN111553255B (en) * 2020-04-26 2023-04-07 上海天诚比集科技有限公司 High-altitude parabolic wall monitoring area positioning method based on gradient algorithm
CN111553255A (en) * 2020-04-26 2020-08-18 上海天诚比集科技有限公司 High-altitude parabolic wall monitoring area positioning method based on gradient algorithm
CN111723654B (en) * 2020-05-12 2023-04-07 中国电子***技术有限公司 High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN111723654A (en) * 2020-05-12 2020-09-29 中国电子***技术有限公司 High-altitude parabolic detection method and device based on background modeling, YOLOv3 and self-optimization
CN111639578B (en) * 2020-05-25 2023-09-19 上海中通吉网络技术有限公司 Method, device, equipment and storage medium for intelligently identifying illegal parabolic objects
CN111639578A (en) * 2020-05-25 2020-09-08 上海中通吉网络技术有限公司 Method, device, equipment and storage medium for intelligently identifying illegal parabolic objects
CN111738108B (en) * 2020-06-08 2024-01-16 中国电信集团工会上海市委员会 Method and system for detecting head of video stream
CN111738108A (en) * 2020-06-08 2020-10-02 中国电信集团工会上海市委员会 Method and system for detecting head of video stream
CN111931575A (en) * 2020-07-01 2020-11-13 南京工业大学 Method for detecting and tracking steering of mixing drum of concrete mixer truck based on classifier integration
CN111898511A (en) * 2020-07-23 2020-11-06 北京以萨技术股份有限公司 High-altitude parabolic detection method, device and medium based on deep learning
CN112016440A (en) * 2020-08-26 2020-12-01 杭州云栖智慧视通科技有限公司 Target pushing method based on multi-target tracking
CN112016440B (en) * 2020-08-26 2024-02-20 杭州云栖智慧视通科技有限公司 Target pushing method based on multi-target tracking
CN112183252A (en) * 2020-09-15 2021-01-05 珠海格力电器股份有限公司 Video motion recognition method and device, computer equipment and storage medium
CN112215870B (en) * 2020-09-17 2022-07-12 武汉联影医疗科技有限公司 Liquid flow track overrun detection method, device and system
CN112215870A (en) * 2020-09-17 2021-01-12 武汉联影医疗科技有限公司 Liquid flow track overrun detection method, device and system
CN112288767A (en) * 2020-11-04 2021-01-29 成都寰蓉光电科技有限公司 Automatic detection and tracking method based on target adaptive projection
CN112508998A (en) * 2020-11-11 2021-03-16 北京工业大学 Visual target alignment method based on global motion
CN112435280A (en) * 2020-11-13 2021-03-02 桂林电子科技大学 Moving target detection and tracking method for unmanned aerial vehicle video
CN112528843A (en) * 2020-12-07 2021-03-19 湖南警察学院 Motor vehicle driver fatigue detection method fusing facial features
CN112489086A (en) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking device, electronic device, and storage medium
CN112686921B (en) * 2021-01-08 2023-12-01 西安羚控电子科技有限公司 Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics
CN112686921A (en) * 2021-01-08 2021-04-20 西安羚控电子科技有限公司 Multi-interference unmanned aerial vehicle detection tracking method based on track characteristics
CN112784738A (en) * 2021-01-21 2021-05-11 上海云从汇临人工智能科技有限公司 Moving object detection alarm method, device and computer readable storage medium
CN112784738B (en) * 2021-01-21 2023-09-19 上海云从汇临人工智能科技有限公司 Moving object detection alarm method, moving object detection alarm device and computer readable storage medium
CN112835010A (en) * 2021-03-17 2021-05-25 中国人民解放军海军潜艇学院 Weak and small target detection and combination method based on interframe accumulation
CN113052878B (en) * 2021-03-30 2024-01-02 北京中科通量科技有限公司 Multipath high-altitude parabolic detection method and system for edge equipment in security system
CN113052878A (en) * 2021-03-30 2021-06-29 北京睿芯高通量科技有限公司 Multi-path high-altitude parabolic detection method and system for edge equipment in security system
CN113223059A (en) * 2021-05-17 2021-08-06 浙江大学 Weak and small airspace target detection method based on super-resolution feature enhancement
CN113297950A (en) * 2021-05-20 2021-08-24 首都师范大学 Dynamic target detection method
CN113569770A (en) * 2021-07-30 2021-10-29 北京市商汤科技开发有限公司 Video detection method and device, electronic equipment and storage medium
CN113569770B (en) * 2021-07-30 2024-06-11 北京市商汤科技开发有限公司 Video detection method and device, electronic equipment and storage medium
CN113807250B (en) * 2021-09-17 2024-02-02 沈阳航空航天大学 Anti-shielding and scale-adaptive low-altitude airspace flight target tracking method
CN113807250A (en) * 2021-09-17 2021-12-17 沈阳航空航天大学 Anti-shielding and scale-adaptive low-altitude airspace flying target tracking method
CN113723364A (en) * 2021-09-28 2021-11-30 中国农业银行股份有限公司 Moving object identification method and device
CN113989357A (en) * 2021-11-10 2022-01-28 广东粤海珠三角供水有限公司 Rapid estimation method for shield muck gradation based on surveillance video
CN114743154B (en) * 2022-06-14 2022-09-20 广州英码信息科技有限公司 Work clothes identification method based on registration form and computer readable medium
CN114743154A (en) * 2022-06-14 2022-07-12 广州英码信息科技有限公司 Work clothes identification method based on registration form and computer readable medium
CN116205914A (en) * 2023-04-28 2023-06-02 山东中胜涂料有限公司 Waterproof coating production intelligent monitoring system

Also Published As

Publication number Publication date
CN106910203B (en) 2018-02-13
CN106910203A (en) 2017-06-30

Similar Documents

Publication Publication Date Title
WO2018095082A1 (en) Rapid detection method for moving target in video monitoring
CN106203274B (en) Real-time pedestrian detection system and method in video monitoring
CN110688987B (en) Pedestrian position detection and tracking method and system
WO2019101220A1 (en) Deep learning network and mean-shift-based automatic vessel tracking method and system
Wang et al. Robust video-based surveillance by integrating target detection with tracking
CN111881853B (en) Method and device for identifying abnormal behaviors in oversized bridge and tunnel
CN100525395C (en) Pedestrian tracking method based on principal axis matching under multiple video cameras
CN103729858B (en) Detection method for abandoned articles in a video monitoring system
CN104978567B (en) Vehicle detection method based on scene classification
CN101315701B (en) Moving target image segmentation method
WO2008070206A2 (en) A seamless tracking framework using hierarchical tracklet association
CN109657581A (en) Urban rail transit gate passage control method based on binocular camera behavior detection
CN106127812B (en) Passenger flow statistics method for non-gate areas of passenger stations based on video monitoring
WO2021139049A1 (en) Detection method, detection apparatus, monitoring device, and computer readable storage medium
CN106778540B (en) Parking event detection method based on a double-layer background for accurate parking detection
CN113095263B (en) Training method and device for pedestrian re-recognition model under shielding and pedestrian re-recognition method and device under shielding
CN110956158A (en) Occluded pedestrian re-identification method based on a teacher-student learning framework
CN113743260B (en) Pedestrian tracking method under condition of dense pedestrian flow of subway platform
CN109359563A (en) Real-time detection method for road occupation based on digital image processing
Lian et al. A novel method on moving-objects detection based on background subtraction and three frames differencing
Liang et al. Research on concrete cracks recognition based on dual convolutional neural network
Surkutlawar et al. Shadow suppression using RGB and HSV color space in moving object detection
CN107871315B (en) Video image motion detection method and device
Sezen et al. Deep learning-based door and window detection from building façade
Fang et al. Real-time multiple vehicles tracking with occlusion handling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17872908

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17872908

Country of ref document: EP

Kind code of ref document: A1