WO2014082480A1 - 一种人数及人群运动方向的计算方法及装置 - Google Patents

一种人数及人群运动方向的计算方法及装置 Download PDF

Info

Publication number
WO2014082480A1
WO2014082480A1 (PCT/CN2013/083687, CN2013083687W)
Authority
WO
WIPO (PCT)
Prior art keywords
motion
current frame
image
frame image
point
Prior art date
Application number
PCT/CN2013/083687
Other languages
English (en)
French (fr)
Inventor
董振江
罗圣美
刘锋
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司 filed Critical 中兴通讯股份有限公司
Priority to EP13857870.3A priority Critical patent/EP2927871A4/en
Priority to US14/648,030 priority patent/US9576199B2/en
Publication of WO2014082480A1 publication Critical patent/WO2014082480A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image

Definitions

  • The invention relates to intelligent video surveillance technology, and in particular to a method and a device for calculating the number of people and the direction of crowd movement.
  • Background: at present, crowd density is estimated by using an intelligent video surveillance device to collect images over a period of time, analysing the collected images to obtain multiple image features, and using the obtained image features to build a regression model; when the intelligent video surveillance device monitors in real time, the current frame image is analysed to obtain its features as input, the regression model is used to calculate the number of people, and the crowd density is then computed from that number.
  • The object of the present invention is to provide a method and a device for calculating the number of people and the direction of crowd movement that can improve both the calculation speed and the accuracy of the calculation results.
  • Embodiments of the present invention provide a method for calculating the number of people and the direction of crowd movement, the method including:
  • extracting feature points of a current frame image, and comparing the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image;
  • weighting and counting the motion feature points of the current frame image by direction to obtain the direction of crowd movement; extracting edge points of the pedestrian image from the foreground image of the current frame image, and jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians.
  • In an embodiment, comparing the feature points of the current frame image with the selected historical frame image to obtain the motion feature points of the current frame image includes: extracting the feature points of the current frame image one by one and selecting a template image around each feature point; selecting, from the selected historical frame image, a search image around the corresponding position of the feature point of the current frame image; performing a matching search within the search image using the template image, and making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than a set threshold, the feature point of the current frame image is determined to be a motion feature point, until all motion feature points of the current frame image and their motion directions are obtained.
  • In an embodiment, weighting and counting the motion feature points of the current frame image by direction according to the correction coefficients of their positions to obtain the direction of crowd movement includes:
  • looking up, for each motion feature point, the corresponding correction coefficient according to its position and using it as the weight of that point; accumulating the weights of all motion feature points of the current frame by direction to obtain the current frame motion histogram, and updating it into the motion histogram history;
  • counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total to obtain the historical motion histogram; binarizing the historical motion histogram to obtain the current frame motion direction record, and updating it into the motion direction history; and obtaining, from the motion direction history, the final motion feature point count for each direction, with the directions whose count exceeds a preset motion threshold taken as the directions of crowd movement.
  • In an embodiment, jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians includes: looking up, point by point, the preset correction coefficient at the position of each edge point of the pedestrian image and each motion feature point of the current frame image, and taking the weighted sum of the correction coefficients corresponding to all of these points as the number of pedestrians.
  • the embodiment of the present invention further provides a computing device for the number of people and the direction of motion, the device comprising: an image acquisition module, a motion direction calculation module, and a number of people calculation module;
  • An image acquisition module configured to provide a current frame image for the motion direction calculation module and the number of people calculation module
  • the motion direction calculation module is configured to extract a feature point of the current frame image, compare the feature point of the current frame image with the selected history frame image, obtain a motion feature point of the current frame image, and weight the motion feature point of the current frame image by direction. Count, get the direction of movement of the crowd;
  • the number calculation module is configured to extract an edge point of the pedestrian image from the foreground image of the current frame image, and perform weighted counting on the edge point of the pedestrian image and the motion feature point of the current frame image according to the correction coefficient of the location, to obtain the number of pedestrians.
  • In an embodiment, the motion direction calculation module is configured to extract the feature points of the current frame image one by one and select a template image around each feature point; to select a search image around the corresponding position of the feature point from the selected historical frame image; and to perform a matching search within the search image using the template image, making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than the set threshold, the feature point of the current frame image is determined to be a motion feature point, and so on until all motion feature points of the current frame image and their motion directions are obtained.
  • In an embodiment, the motion direction calculation module is configured to look up, for each motion feature point, the corresponding correction coefficient according to its position and use it as the weight of that point; to accumulate the weights of all motion feature points of the current frame by direction to obtain the current frame motion histogram and update it into the motion histogram history; to count, from the motion histogram history, the total of motion feature points in each direction and calculate the ratio of each direction's total to the overall total, obtaining the historical motion histogram; to binarize the historical motion histogram to obtain the current frame motion direction record and update it into the motion direction history; and to obtain, from the motion direction history, the final motion feature point count for each direction, taking the directions whose count exceeds a preset motion threshold as the directions of crowd movement.
  • In an embodiment, the number calculation module is configured to look up, according to the positions of the edge points of the pedestrian image and the motion feature points of the current frame image, the preset correction coefficient for each such point, and to take the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
  • The method and device for calculating the number of people and the direction of crowd movement provided by the embodiments of the present invention obtain the direction of crowd movement by weighting and counting the motion feature points of the current frame image, extract the edge points of the pedestrian image from the foreground image of the current frame image, and obtain the number of pedestrians by jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions.
  • Compared with the prior art, the direction of crowd movement and the number of pedestrians can thus be obtained with fewer image features, which increases the calculation speed; moreover, because correction coefficients are used for weighted counting, the different shooting angles and distances of the monitoring device are compensated for during the calculation, making the final result more accurate.
  • FIG. 1 is a schematic flow chart of a method for calculating a number of people and a moving direction of a crowd according to an embodiment of the present invention
  • FIG. 2 is a schematic structural diagram of a computing device for a number of people and a moving direction of a crowd according to an embodiment of the present invention.
  • The basic idea of the embodiments of the present invention is: extracting the feature points of the current frame image and comparing them with a selected historical frame image to obtain the motion feature points of the current frame image; weighting and counting the motion feature points of the current frame image by direction to obtain the direction of crowd movement; extracting the edge points of the pedestrian image from the foreground image of the current frame image, and jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians.
  • the embodiment of the present invention provides a method for calculating the number of people and the direction of movement of the crowd, as shown in FIG. 1, including the following steps:
  • Step 101 Extract a feature point of the current frame image, compare a feature point of the current frame image with the selected historical frame image, and obtain a motion feature point of the current frame image.
  • Here, extracting the current frame image means setting a detection area according to the prior art and extracting the detection area image as the current frame image.
  • The method for extracting the feature points of the current frame image is prior art; for example, the SURF algorithm may be used with default parameter settings, and the number of image feature points is set according to actual conditions. The specific calculation is not described here.
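As a rough illustration only (not the patented implementation), the feature-point extraction step could look like the sketch below, using SURF from OpenCV's contrib module; the module availability, the hessianThreshold value, and the cap on the number of keypoints are assumptions rather than values given in the document.

```python
# Hypothetical sketch: SURF keypoints on the current frame's detection-area image.
# Requires an opencv-contrib build in which SURF (non-free) is enabled.
import cv2

def extract_feature_points(frame_gray, max_points=200):
    """Return up to max_points SURF keypoint coordinates (x, y)."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # assumed default-like threshold
    keypoints = surf.detect(frame_gray, None)
    # Keep the strongest responses; the cap is an assumed practical limit.
    keypoints = sorted(keypoints, key=lambda kp: kp.response, reverse=True)[:max_points]
    return [kp.pt for kp in keypoints]
```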
  • Comparing the feature points of the current frame image with the selected historical frame image to obtain the motion feature points of the current frame image includes: extracting the feature points of the current frame image one by one, and selecting an image of size N x M around each extracted feature point as the template image T; selecting, from the selected historical frame image, an image of size W x H around the corresponding position of the feature point as the search image S; performing a matching search within the search image S using the template image T, and making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than the set threshold, the feature point of the current frame image is determined to be a motion feature point. This is repeated until all motion feature points of the current frame image and their motion directions are obtained.
  • The set threshold is chosen according to actual needs and is not detailed here.
  • The selected historical frame image is a frame before the current frame chosen according to the actual configuration; for example, it may be set to select the image five frames before the current frame, or the image ten frames before the current frame. The search image S is larger than the template image T, that is, W is greater than N and H is greater than M.
  • Performing the matching search within the search image S using the template image T and making a decision according to the positional relationship between the feature point and the matching point may be: selecting, within the search image S and in a specified order, partial images of the same size as the template image T; calculating the sum of absolute differences between each partial image and the template image T; when this sum is less than a preset matching threshold, taking the center point of the partial image as the matching point in the search image S; the relative displacement between the matching point and the feature point is then calculated and compared with the preset motion threshold to decide whether the corresponding feature point is a motion feature point.
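A minimal sketch of this matching step, assuming grayscale frames stored as NumPy arrays; the window sizes, both thresholds, the boundary handling, and the helper names are illustrative assumptions, not values from the document.

```python
import numpy as np

def find_match(template, search):
    """Slide the template over the search window; return (SAD, center) of the best match."""
    th, tw = template.shape
    sh, sw = search.shape
    best_sad, best_center = None, None
    for i in range(sh - th + 1):            # (i, j): relative position of T inside S
        for j in range(sw - tw + 1):
            sad = np.abs(search[i:i+th, j:j+tw].astype(int) - template.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_center = sad, (j + tw // 2, i + th // 2)
    return best_sad, best_center

def is_motion_point(cur, hist, pt, n=8, m=8, w=24, h=24, match_thr=2000, move_thr=2.0):
    """Decide whether feature point pt of the current frame is a motion feature point."""
    x, y = int(pt[0]), int(pt[1])
    template = cur[y-m//2:y+m//2, x-n//2:x+n//2]   # N x M patch around the feature point
    search = hist[y-h//2:y+h//2, x-w//2:x+w//2]    # W x H patch at the same position in the history frame
    if template.size == 0 or search.size == 0:
        return False, None
    sad, center = find_match(template, search)
    if center is None or sad >= match_thr:
        return False, None                          # no reliable match found
    dx, dy = center[0] - w // 2, center[1] - h // 2
    displacement = (dx**2 + dy**2) ** 0.5
    return displacement > move_thr, (dx, dy)        # the displacement also gives the motion direction
```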
  • Step 102 Weighting the motion feature points of the current frame image in a direction to obtain a moving direction of the crowd.
  • Here, weighting and counting the motion feature points of the current frame image by direction means: for each motion feature point, looking up the corresponding correction coefficient according to its position and using it as the weight of that point; the weights of all motion feature points of the current frame are then accumulated by direction to obtain the current frame motion histogram.
  • the correction coefficient is a correction value preset for each feature point.
  • Obtaining the direction of crowd movement may include: adding the current frame motion histogram to the motion histogram history; counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total, to obtain the historical motion histogram; binarizing the historical motion histogram to obtain the current frame motion direction record and updating it into the motion direction history; and obtaining, from the motion direction history, the final motion feature point count for each direction, with the directions whose count exceeds the preset motion threshold taken as the directions of crowd movement.
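To illustrate the per-frame weighted counting, here is a sketch assuming eight quantized directions and a precomputed per-pixel correction-coefficient map; the map itself (which compensates for perspective) is assumed to be supplied by calibration and is not specified in the document.

```python
import math
import numpy as np

def frame_motion_histogram(motion_points, correction_map, num_dirs=8):
    """Accumulate correction-coefficient weights of motion feature points into 8 direction bins."""
    hist = np.zeros(num_dirs)
    for (x, y), (dx, dy) in motion_points:                     # point position and displacement
        angle = math.atan2(dy, dx) % (2 * math.pi)
        d = int(angle / (2 * math.pi / num_dirs)) % num_dirs   # quantize to one of 8 directions
        hist[d] += correction_map[int(y), int(x)]              # weight = preset correction coefficient
    return hist
```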
  • The method for computing a motion histogram is prior art and is not described here. The motion histogram history is a set containing a specified number of saved motion histograms; the number is specified according to actual conditions, for example 50.
  • For example, suppose the current frame motion histogram is H[8]; the motion histogram history then contains 50 motion histograms, denoted HH[8][50].
  • Counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total gives the historical motion histogram, which may use the formula S[n] = (Σ_{j=1}^{50} HH[n][j]) / (Σ_{i=1}^{8} Σ_{j=1}^{50} HH[i][j]), where S denotes the historical motion histogram and n denotes one of the eight directions.
  • Binarizing the historical motion histogram to obtain the current frame motion direction record may use the formula HS[n] = 1 if S[n] > (1/8) Σ_{i=1}^{8} S[i], and HS[n] = 0 otherwise, where HS denotes the current frame motion direction record and n denotes one of the eight directions.
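A sketch of the history-based smoothing implied by the formulas above; the history depth (50), the 1/8 comparison, and the final motion threshold are treated as configurable assumptions.

```python
from collections import deque
import numpy as np

class DirectionHistory:
    def __init__(self, num_dirs=8, depth=50, motion_threshold=25):
        self.hh = deque(maxlen=depth)      # motion histogram history HH[8][50]
        self.hs = deque(maxlen=depth)      # motion direction history (binarized records)
        self.num_dirs = num_dirs
        self.motion_threshold = motion_threshold

    def update(self, frame_hist):
        """Add one frame histogram; return the directions currently judged as crowd movement."""
        self.hh.append(frame_hist)
        hh = np.array(self.hh)
        s = hh.sum(axis=0) / max(hh.sum(), 1e-9)          # S[n]: share of each direction
        hs = (s > s.sum() / self.num_dirs).astype(int)    # binarize against the 1/8 average
        self.hs.append(hs)
        c = np.array(self.hs).sum(axis=0)                 # C[n]: final per-direction count
        return [n for n in range(self.num_dirs) if c[n] > self.motion_threshold]
```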
  • Step 103 Extract an edge point of the pedestrian image from the foreground image of the current frame image, and perform weighted counting on the edge point of the pedestrian image and the motion feature point of the current frame image according to the correction coefficient of the location, to obtain the number of pedestrians.
  • Here, the foreground image of the current frame image may be obtained by using a mixture-of-Gaussians background modeling algorithm to separate the background image and the foreground image, and then correcting the foreground image.
  • The mixture-of-Gaussians background modeling algorithm is prior art and is not described here; the foreground image may be corrected by morphological filtering combined with computing a foreground confidence from the integral image, which is also prior art and is not repeated here.
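One possible realization of the mixture-of-Gaussians foreground step with OpenCV is sketched below; MOG2, its history length, and the morphological kernel size stand in for the unspecified implementation and are assumptions.

```python
import cv2

bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

def foreground_mask(frame_bgr):
    """Mixture-of-Gaussians foreground mask, corrected by morphological filtering."""
    mask = bg_subtractor.apply(frame_bgr)                      # 0 = background, 255 = foreground
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)     # fill small holes
    return mask
```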
  • Extracting the edge points of the pedestrian image includes: performing Canny edge detection on the current frame image to obtain an initial edge image, and ANDing it with the foreground image of the current frame image to obtain a corrected pedestrian edge image; the edge points in the corrected pedestrian edge image are counted, finally giving the edge points of the pedestrian image.
  • Jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians means: looking up, point by point, the preset correction coefficient at the position of each edge point of the pedestrian image and each motion feature point of the current frame image, and taking the weighted sum of these correction coefficients as the number of pedestrians.
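A sketch combining the edge and counting steps, reusing the assumed correction-coefficient map from above; the Canny thresholds are illustrative, and the sketch assumes the calibration normalizes the coefficients so that the weighted sum directly approximates a person count.

```python
import cv2
import numpy as np

def count_pedestrians(frame_gray, fg_mask, motion_points, correction_map):
    """Weighted count over pedestrian edge points and motion feature points."""
    edges = cv2.Canny(frame_gray, 50, 150)                  # initial edge image
    pedestrian_edges = cv2.bitwise_and(edges, fg_mask)      # keep edges inside the foreground
    ys, xs = np.nonzero(pedestrian_edges)
    total = correction_map[ys, xs].sum()                    # edge points weighted by correction coefficient
    for (x, y), _ in motion_points:                         # add weights of the motion feature points
        total += correction_map[int(y), int(x)]
    return total
```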
  • the embodiment of the present invention provides a computing device for the number of people and the direction of movement of the crowd.
  • the device includes: an image acquiring module 21, a motion direction calculating module 22, and a number of people calculating module 23;
  • the image acquisition module 21 is configured to provide a current frame image for the crowd movement direction calculation module 22 and the number calculation module 23;
  • The motion direction calculation module 22 is configured to extract the feature points of the current frame image from the image acquisition module 21, compare them with the selected historical frame image to obtain the motion feature points of the current frame image, and weight and count the motion feature points of the current frame image by direction to obtain the direction of crowd movement;
  • The number calculation module 23 is configured to extract the edge points of the pedestrian image from the foreground image of the current frame image held by the image acquisition module 21, and to jointly weight and count the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions, obtaining the number of pedestrians.
  • the image acquisition module 21 is configured to set a detection area and extract the detection area image as the current frame image.
  • the motion direction calculation module 22 is configured to use the SURF algorithm, and the parameter setting adopts a default setting, and the number of image feature points is set according to actual conditions, and the specific calculation is not described herein.
  • The motion direction calculation module 22 is configured to extract the feature points of the current frame image one by one and select an image of size N x M around each extracted feature point as the template image T; to select, from the selected historical frame image, an image of size W x H around the corresponding position of the feature point as the search image S; and to perform a matching search within the search image S using the template image T, making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than the set threshold, the feature point of the current frame image is determined to be a motion feature point, and so on until all motion feature points of the current frame image and their motion directions are obtained.
  • The motion direction calculation module 22 is configured to select, according to the actual configuration, a frame before the current frame as the historical frame image; for example, it may be set to select the image five frames before the current frame, or the image ten frames before the current frame.
  • The motion direction calculation module 22 is configured to select, within the search image S and in a specified order, partial images of the same size as the template image T, and to calculate the sum of absolute differences between each partial image and the template image T; when this sum is less than the preset matching threshold, the center point of the partial image is taken as the matching point in the search image S; the relative displacement between the matching point and the feature point is then calculated, and it is determined whether the relative displacement is smaller than the preset motion threshold.
  • The motion direction calculation module 22 is configured to look up, for each motion feature point, the corresponding correction coefficient according to its position and use it as the weight of that point, and to accumulate the weights of all motion feature points of the current frame by direction to obtain the current frame motion histogram.
  • The motion direction calculation module 22 is configured to update the current frame motion histogram into the motion histogram history; to count, from the motion histogram history, the total of motion feature points in each direction and calculate the ratio of each direction's total to the overall total, obtaining the historical motion histogram; to binarize the historical motion histogram to obtain the current frame motion direction record and update it into the motion direction history; and to obtain, from the motion direction history, the final motion feature point count for each direction, taking the directions whose count exceeds the preset motion threshold as the directions of crowd movement.
  • The method for computing the motion histogram is prior art and is not described here.
  • The motion histogram history is a set containing a specified number of saved motion histograms; the number is specified according to actual conditions, for example 50. For example, if the current frame motion histogram is H[8], the motion histogram history containing 50 motion histograms is denoted HH[8][50].
  • The historical motion histogram is obtained as S[n] = (Σ_{j=1}^{50} HH[n][j]) / (Σ_{i=1}^{8} Σ_{j=1}^{50} HH[i][j]), where S denotes the historical motion histogram and n denotes one of the eight directions.
  • The current frame motion direction record is obtained by binarization as HS[n] = 1 if S[n] > (1/8) Σ_{i=1}^{8} S[i], and HS[n] = 0 otherwise, where n denotes one of the eight directions.
  • The final count for each direction is obtained from the motion direction history as C[n] = Σ_{j=1}^{50} HS[n][j], where C[n] denotes the final count for direction n, HS here denotes the motion direction history, i.e. the set of a specified number of saved motion direction records, and j indexes a saved motion direction record.
  • The number calculation module 23 is configured to use a mixture-of-Gaussians background modeling algorithm to obtain a background image and a foreground image, and to correct the foreground image.
  • The mixture-of-Gaussians background modeling algorithm is prior art and is not described here; the foreground image may be corrected by morphological filtering combined with computing a foreground confidence from the integral image, which is also prior art and is not repeated here.
  • The number calculation module 23 is configured to perform Canny edge detection on the current frame image to obtain an initial edge image, to AND it with the foreground image of the current frame image to obtain a corrected pedestrian edge image, and to count the edge points in the corrected pedestrian edge image, finally giving the edge points of the pedestrian image.
  • The number calculation module 23 is configured to look up, according to the positions of the edge points of the pedestrian image and the motion feature points of the current frame image, the preset correction coefficient for each such point, and to take the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
  • the image acquisition module described above may be implemented using a camera, and the motion direction calculation module and the number of people calculation module may be implemented using a processor, such as a central processing unit (CPU).
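Purely to visualize the module split (camera-backed acquisition, motion direction calculation, people counting), a structural sketch follows; the class and method names are invented for illustration and are not part of the document.

```python
import cv2

class ImageAcquisitionModule:
    """Provides current-frame images, e.g. from a camera."""
    def __init__(self, source=0):
        self.capture = cv2.VideoCapture(source)

    def current_frame(self):
        ok, frame = self.capture.read()
        return frame if ok else None

class CrowdAnalysisDevice:
    """Wires the three modules together, mirroring the composition of FIG. 2."""
    def __init__(self, acquisition, direction_module, counting_module):
        self.acquisition = acquisition
        self.direction_module = direction_module    # computes crowd movement directions
        self.counting_module = counting_module      # computes the number of pedestrians

    def process(self):
        frame = self.acquisition.current_frame()
        if frame is None:
            return None
        directions = self.direction_module.update(frame)
        count = self.counting_module.count(frame)
        return directions, count
```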

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for calculating the number of people and the direction of crowd movement, including: extracting feature points of a current frame image, and comparing the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image; weighting and counting the motion feature points of the current frame image by direction according to correction coefficients of their positions to obtain the direction of crowd movement; extracting edge points of a pedestrian image from a foreground image of the current frame image; and jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians. The invention also discloses a device for calculating the number of people and the direction of crowd movement. Using the invention can improve the accuracy of the calculation results.

Description

A method and device for calculating the number of people and the direction of crowd movement
Technical Field
The present invention relates to intelligent video surveillance technology, and in particular to a method and a device for calculating the number of people and the direction of crowd movement.
Background
With economic and social development and the continuous increase in people's social activities, and especially with the advance of urbanization in China, urban population density keeps rising. Crowd density estimation therefore has broad application prospects and research value.
At present, crowd density is estimated by using an intelligent video surveillance device to collect images over a period of time, analysing the collected images to obtain multiple image features, and using the obtained image features to build a regression model; when the intelligent video surveillance device monitors in real time, the current frame image is analysed to obtain its features as input, the regression model is used to calculate the number of people, and the crowd density is then computed from that number.
However, in the above crowd density estimation method, many image features are extracted both when building the regression model and when calculating the number of people, which increases computational complexity and affects calculation speed; moreover, because the angle at which the monitoring device is mounted is not taken into account during intelligent video monitoring, the regression model built by directly analysing the extracted image features is not accurate enough, and the number of people calculated with the regression model is therefore also inaccurate.
It can be seen that the existing methods for estimating crowd density are slow and give inaccurate results.
Summary of the Invention
In view of this, the object of the present invention is to provide a method and a device for calculating the number of people and the direction of crowd movement that can improve the calculation speed and the accuracy of the calculation results.
To achieve the above object, the technical solutions of the embodiments of the present invention are implemented as follows:
An embodiment of the present invention provides a method for calculating the number of people and the direction of crowd movement, the method including:
extracting feature points of a current frame image, and comparing the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image;
weighting and counting the motion feature points of the current frame image by direction to obtain the direction of crowd movement; extracting edge points of a pedestrian image from a foreground image of the current frame image, and jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to correction coefficients of their positions to obtain the number of pedestrians.
In the above solution, comparing the feature points of the current frame image with the selected historical frame image to obtain the motion feature points of the current frame image includes:
extracting the feature points of the current frame image one by one and selecting a template image around each feature point; selecting, from the selected historical frame image, a search image around the corresponding position of the feature point of the current frame image;
performing a matching search within the search image using the template image, and making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than a set threshold, the feature point of the current frame image is determined to be a motion feature point, until all motion feature points of the current frame image and their motion directions are obtained.
In the above solution, weighting and counting the motion feature points of the current frame image by direction according to the correction coefficients of their positions to obtain the direction of crowd movement includes:
looking up, for each motion feature point, the corresponding correction coefficient according to its position and using it as the weight of that point; accumulating the weights of all motion feature points of the current frame by direction to obtain a current frame motion histogram, and updating it into a motion histogram history;
counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total to obtain a historical motion histogram; binarizing the historical motion histogram to obtain a current frame motion direction record, and updating it into a motion direction history; and obtaining, from the motion direction history, the final motion feature point count for each direction, with the directions whose count exceeds a preset motion threshold taken as the directions of crowd movement.
In the above solution, jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians includes: looking up, point by point, the preset correction coefficient at the position of each edge point of the pedestrian image and each motion feature point of the current frame image, and taking the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
An embodiment of the present invention also provides a device for calculating the number of people and the direction of movement, the device including: an image acquisition module, a motion direction calculation module and a number-of-people calculation module; wherein
the image acquisition module is configured to provide a current frame image for the motion direction calculation module and the number-of-people calculation module;
the motion direction calculation module is configured to extract feature points of the current frame image, compare the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image, and weight and count the motion feature points of the current frame image by direction to obtain the direction of crowd movement;
the number-of-people calculation module is configured to extract edge points of a pedestrian image from a foreground image of the current frame image, and jointly weight and count the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians.
In the above solution, the motion direction calculation module is configured to extract the feature points of the current frame image one by one and select a template image around each feature point; to select, from the selected historical frame image, a search image around the corresponding position of the feature point of the current frame image; and to perform a matching search within the search image using the template image, making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than a set threshold, the feature point of the current frame image is determined to be a motion feature point, until all motion feature points of the current frame image and their motion directions are obtained. In the above solution, the motion direction calculation module is configured to look up, for each motion feature point, the corresponding correction coefficient according to its position and use it as the weight of that point; to accumulate the weights of all motion feature points of the current frame by direction to obtain a current frame motion histogram and update it into the motion histogram history; to count, from the motion histogram history, the total of motion feature points in each direction and calculate the ratio of each direction's total to the overall total, obtaining the historical motion histogram; to binarize the historical motion histogram according to a certain threshold to obtain the current frame motion direction record and update it into the motion direction history; and to obtain, from the motion direction history, the final motion feature point count for each direction, taking the directions whose count exceeds the preset motion threshold as the directions of crowd movement.
In the above solution, the number-of-people calculation module is configured to look up, according to the positions of the edge points of the pedestrian image and the motion feature points of the current frame image, the preset correction coefficient for each such point, and to take the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
The method and device for calculating the number of people and the direction of crowd movement provided by the embodiments of the present invention can obtain the direction of crowd movement by weighting and counting the motion feature points of the current frame image, extract the edge points of the pedestrian image from the foreground image of the current frame image, and obtain the number of pedestrians by jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions. In this way, compared with the prior art, the direction of crowd movement and the number of pedestrians can be obtained with fewer image features, which increases the calculation speed; and correction coefficients are used for weighted counting during the calculation, so that the different shooting angles and distances of the monitoring device can be compensated for, making the final result more accurate.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for calculating the number of people and the direction of crowd movement according to an embodiment of the present invention; FIG. 2 is a schematic diagram of the composition of a device for calculating the number of people and the direction of crowd movement according to an embodiment of the present invention.
Detailed Description of the Embodiments
The basic idea of the embodiments of the present invention is: extracting feature points of a current frame image, and comparing the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image; weighting and counting the motion feature points of the current frame image by direction to obtain the direction of crowd movement; extracting edge points of a pedestrian image from a foreground image of the current frame image, and jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians.
The present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
An embodiment of the present invention proposes a method for calculating the number of people and the direction of crowd movement which, as shown in FIG. 1, includes the following steps:
Step 101: Extract feature points of the current frame image, and compare the feature points of the current frame image with the selected historical frame image to obtain motion feature points of the current frame image.
Here, extracting the current frame image means setting a detection area according to the prior art and extracting the detection area image as the current frame image.
The method for extracting the feature points of the current frame image is prior art; the SURF algorithm may be used with default parameter settings, and the number of image feature points is set according to actual conditions. The specific calculation is not described here.
Comparing the feature points of the current frame image with the selected historical frame image to obtain the motion feature points of the current frame image includes: extracting the feature points of the current frame image one by one, and selecting an image of size N x M around each extracted feature point as a template image T; selecting, from the selected historical frame image, an image of size W x H around the corresponding position of the feature point as a search image S; performing a matching search within the search image S using the template image T, and making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than the set threshold, the feature point of the current frame image is determined to be a motion feature point; and so on, until all motion feature points of the current frame image and their motion directions are obtained. The set threshold is chosen according to actual needs and is not detailed here.
The selected historical frame image is a frame before the current frame chosen according to the actual configuration; for example, it may be set to select the image five frames before the current frame, or the image ten frames before the current frame. The search image S is larger than the template image T, that is, W is greater than N and H is greater than M.
Performing the matching search within the search image S using the template image T and making a decision according to the positional relationship between the feature point and the matching point may be: selecting, within the search image S and in a specified order, partial images of the same size as the template image T; calculating the sum of absolute differences between each partial image and the template image T; and, when the sum of absolute differences is less than a preset matching threshold, taking the center point of the partial image as the matching point in the search image S;
the relative displacement between the matching point and the feature point is then calculated, and it is judged whether the relative position is smaller than a preset motion threshold; if so, the corresponding feature point is determined to be a motion feature point, otherwise it is determined to be a non-motion feature point.
Here, both the matching threshold and the motion threshold are values preset according to the actual situation; the sum of absolute differences between the partial image and the template image T is calculated using the formula E(i, j) = Σ_m Σ_n | S_{i,j}(m, n) − T(m, n) |, where E(i, j) denotes the sum of absolute differences and (i, j) identifies the relative position of the template image T within the search image S.
Step 102: Weight and count the motion feature points of the current frame image by direction to obtain the direction of crowd movement.
Here, weighting and counting the motion feature points of the current frame image by direction means: for each motion feature point, looking up the corresponding correction coefficient according to its position and using it as the weight of that point; the weights of all motion feature points of the current frame are accumulated by direction to obtain the current frame motion histogram. The correction coefficient is a correction value preset for each feature point.
Obtaining the direction of crowd movement may include: adding the current frame motion histogram to the motion histogram history; counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total to obtain the historical motion histogram; binarizing the historical motion histogram to obtain the current frame motion direction record and updating it into the motion direction history; and obtaining, from the motion direction history, the final motion feature point count for each direction, with the directions whose count exceeds the preset motion threshold taken as the directions of crowd movement.
Here, the method for computing the motion histogram is prior art and is not described here; the motion histogram history is a set containing a specified number of saved motion histograms, the number being specified according to actual conditions, for example 50.
For example, suppose the current frame motion histogram is H[8]; the motion histogram history then contains 50 motion histograms, denoted HH[8][50]. Counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total gives the historical motion histogram, which may use the formula S[n] = (Σ_{j=1}^{50} HH[n][j]) / (Σ_{i=1}^{8} Σ_{j=1}^{50} HH[i][j]), where S denotes the historical motion histogram and n denotes one of the eight directions.
Binarizing the historical motion histogram to obtain the current frame motion direction record may use the formula HS[n] = 1 if S[n] > (1/8) Σ_{i=1}^{8} S[i], and HS[n] = 0 otherwise, where HS denotes the current frame motion direction record and n denotes one of the eight directions.
Obtaining the final count for each direction from the motion direction history may use the formula C[n] = Σ_{j=1}^{50} HS[n][j], where C[n] denotes the final count for direction n, HS here denotes the motion direction history, i.e. the set of a specified number of saved motion direction records, n denotes one of the eight directions, and j indexes a saved motion direction record.
Step 103: Extract edge points of the pedestrian image from the foreground image of the current frame image, and jointly weight and count the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians.
Here, the foreground image of the current frame image may be obtained as follows: a mixture-of-Gaussians background modeling algorithm is used to obtain a background image and a foreground image, and the foreground image is corrected.
The mixture-of-Gaussians background modeling algorithm is prior art and is not described here; the foreground image may be corrected by morphological filtering combined with computing a foreground confidence from the integral image, which is also prior art and is not repeated here.
Extracting the edge points of the pedestrian image includes: performing Canny edge detection on the current frame image to obtain an initial edge image, and ANDing it with the foreground image of the current frame image to obtain a corrected pedestrian edge image; the edge points in the corrected pedestrian edge image are counted, finally giving the edge points of the pedestrian image.
Jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians means: looking up, point by point, the preset correction coefficient at the position of each edge point of the pedestrian image and each motion feature point of the current frame image, and taking the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
As shown in FIG. 2, an embodiment of the present invention provides a device for calculating the number of people and the direction of crowd movement, the device including: an image acquisition module 21, a motion direction calculation module 22 and a number-of-people calculation module 23; wherein
the image acquisition module 21 is configured to provide a current frame image for the crowd motion direction calculation module 22 and the number-of-people calculation module 23;
the motion direction calculation module 22 is configured to extract feature points of the current frame image from the image acquisition module 21, compare the feature points of the current frame image with the selected historical frame image to obtain motion feature points of the current frame image, and weight and count the motion feature points of the current image frame by direction to obtain the direction of crowd movement;
the number-of-people calculation module 23 is configured to extract edge points of the pedestrian image from the foreground image of the current frame image held by the image acquisition module 21, and jointly weight and count the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions, obtaining the number of pedestrians.
The image acquisition module 21 is configured to set a detection area and extract the detection area image as the current frame image.
The motion direction calculation module 22 is configured to use the SURF algorithm with default parameter settings, the number of image feature points being set according to actual conditions; the specific calculation is not described here.
The motion direction calculation module 22 is configured to extract the feature points of the current frame image one by one and select an image of size N x M around each extracted feature point as the template image T; to select, from the selected historical frame image, an image of size W x H around the corresponding position of the feature point as the search image S; and to perform a matching search within the search image S using the template image T, making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than the set threshold, the feature point of the current frame image is determined to be a motion feature point; and so on, until all motion feature points of the current frame image and their motion directions are obtained.
The motion direction calculation module 22 is configured to select, according to the actual configuration, a frame before the current frame as the historical frame image; for example, it may be set to select the image five frames before the current frame, or the image ten frames before the current frame.
The motion direction calculation module 22 is configured to select, within the search image S and in a specified order, partial images of the same size as the template image T and calculate the sum of absolute differences between each partial image and the template image T; when the sum of absolute differences is less than the preset matching threshold, the center point of the partial image is taken as the matching point in the search image S; the relative displacement between the matching point and the feature point is calculated, and it is judged whether the relative position is smaller than the preset motion threshold; if so, the corresponding feature point is determined to be a motion feature point, otherwise it is determined to be a non-motion feature point. The calculation uses the formula E(i, j) = Σ_m Σ_n | S_{i,j}(m, n) − T(m, n) |, where E(i, j) denotes the sum of absolute differences and (i, j) identifies the relative position of the template image T within the search image S.
The motion direction calculation module 22 is configured to look up, for each motion feature point, the corresponding correction coefficient according to its position and use it as the weight of that point; the weights of all motion feature points of the current frame are accumulated by direction to obtain the current frame motion histogram.
The motion direction calculation module 22 is configured to update the current frame motion histogram into the motion histogram history; to count, from the motion histogram history, the total of motion feature points in each direction and calculate the ratio of each direction's total to the overall total, obtaining the historical motion histogram; to binarize the historical motion histogram to obtain the current frame motion direction record and update it into the motion direction history; and to obtain, from the motion direction history, the final motion feature point count for each direction, taking the directions whose count exceeds the preset motion threshold as the directions of crowd movement. The method for computing the motion histogram is prior art and is not described here.
The motion histogram history is a set containing a specified number of saved motion histograms; the number is specified according to actual conditions, for example 50. For example, if the current frame motion histogram is H[8], the motion histogram history containing 50 motion histograms is denoted HH[8][50].
Counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total gives the historical motion histogram, which may use the formula S[n] = (Σ_{j=1}^{50} HH[n][j]) / (Σ_{i=1}^{8} Σ_{j=1}^{50} HH[i][j]), where S denotes the historical motion histogram and n denotes one of the eight directions.
Binarizing the historical motion histogram to obtain the current frame motion direction record may use the formula HS[n] = 1 if S[n] > (1/8) Σ_{i=1}^{8} S[i], and HS[n] = 0 otherwise, where HS denotes the current frame motion direction record and n denotes one of the eight directions.
Obtaining the final count for each direction from the motion direction history may use the formula C[n] = Σ_{j=1}^{50} HS[n][j], where C[n] denotes the final count for direction n, HS here denotes the motion direction history, i.e. the set of a specified number of saved motion direction records, n denotes one of the eight directions, and j indexes a saved motion direction record.
The number-of-people calculation module 23 is configured to use a mixture-of-Gaussians background modeling algorithm to obtain a background image and a foreground image, and to correct the foreground image; the mixture-of-Gaussians background modeling algorithm is prior art and is not described here; the foreground image may be corrected by morphological filtering combined with computing a foreground confidence from the integral image, which is also prior art and is not repeated here.
The number-of-people calculation module 23 is configured to perform Canny edge detection on the current frame image to obtain an initial edge image, AND it with the foreground image of the current frame image to obtain a corrected pedestrian edge image, and count the edge points in the corrected pedestrian edge image, finally giving the edge points of the pedestrian image.
The number-of-people calculation module 23 is configured to look up, according to the positions of the edge points of the pedestrian image and the motion feature points of the current frame image, the preset correction coefficient for each such point, and to take the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
The image acquisition module described above may be implemented using a camera, and the motion direction calculation module and the number-of-people calculation module may be implemented using a processor, such as a central processing unit (CPU).
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims

Claims
1. A method for calculating the number of people and the direction of crowd movement, the method comprising:
extracting feature points of a current frame image, and comparing the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image;
weighting and counting the motion feature points of the current frame image by direction to obtain the direction of crowd movement; extracting edge points of a pedestrian image from a foreground image of the current frame image, and jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to correction coefficients of their positions to obtain the number of pedestrians.
2. The method according to claim 1, wherein comparing the feature points of the current frame image with the selected historical frame image to obtain the motion feature points of the current frame image comprises:
extracting the feature points of the current frame image one by one and selecting a template image around each feature point; selecting, from the selected historical frame image, a search image around the corresponding position of the feature point of the current frame image;
performing a matching search within the search image using the template image, and making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than a set threshold, the feature point of the current frame image is determined to be a motion feature point, until all motion feature points of the current frame image and their motion directions are obtained.
3. The method according to claim 2, wherein weighting and counting the motion feature points of the current frame image by direction according to the correction coefficients of their positions to obtain the direction of crowd movement comprises:
looking up, for each motion feature point, the corresponding correction coefficient according to its position and using it as the weight of that point; accumulating the weights of all motion feature points of the current frame by direction to obtain a current frame motion histogram, and updating it into a motion histogram history;
counting, from the motion histogram history, the total of motion feature points in each direction and calculating the ratio of each direction's total to the overall total to obtain a historical motion histogram; binarizing the historical motion histogram to obtain a current frame motion direction record, and updating it into a motion direction history; and obtaining, from the motion direction history, the final motion feature point count for each direction, with the directions whose count exceeds a preset motion threshold taken as the directions of crowd movement.
4. The method according to claim 1, wherein jointly weighting and counting the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians comprises: looking up, point by point, the preset correction coefficient at the position of each edge point of the pedestrian image and each motion feature point of the current frame image, and taking the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
5. A device for calculating the number of people and the direction of movement, the device comprising: an image acquisition module, a motion direction calculation module and a number-of-people calculation module; wherein
the image acquisition module is configured to provide a current frame image for the motion direction calculation module and the number-of-people calculation module;
the motion direction calculation module is configured to extract feature points of the current frame image, compare the feature points of the current frame image with a selected historical frame image to obtain motion feature points of the current frame image, and weight and count the motion feature points of the current frame image by direction to obtain the direction of crowd movement;
the number-of-people calculation module is configured to extract edge points of a pedestrian image from a foreground image of the current frame image, and jointly weight and count the edge points of the pedestrian image and the motion feature points of the current frame image according to the correction coefficients of their positions to obtain the number of pedestrians.
6. The device according to claim 5, wherein
the motion direction calculation module is configured to extract the feature points of the current frame image one by one and select a template image around each feature point; to select, from the selected historical frame image, a search image around the corresponding position of the feature point of the current frame image; and to perform a matching search within the search image using the template image, making a decision according to the positional relationship between the feature point and the matching point: when the distance is greater than a set threshold, the feature point of the current frame image is determined to be a motion feature point, until all motion feature points of the current frame image and their motion directions are obtained.
7. The device according to claim 6, wherein
the motion direction calculation module is configured to look up, for each motion feature point, the corresponding correction coefficient according to its position and use it as the weight of that point; to accumulate the weights of all motion feature points of the current frame by direction to obtain a current frame motion histogram and update it into the motion histogram history; to count, from the motion histogram history, the total of motion feature points in each direction and calculate the ratio of each direction's total to the overall total, obtaining the historical motion histogram; to binarize the historical motion histogram according to a certain threshold to obtain the current frame motion direction record and update it into the motion direction history; and to obtain, from the motion direction history, the final motion feature point count for each direction, taking the directions whose count exceeds the preset motion threshold as the directions of crowd movement.
8. The device according to claim 5, wherein
the number-of-people calculation module is configured to look up, according to the positions of the edge points of the pedestrian image and the motion feature points of the current frame image, the preset correction coefficient for each such point, and to take the weighted sum of the correction coefficients corresponding to all edge points of the pedestrian image and all motion feature points of the current frame image as the number of pedestrians.
PCT/CN2013/083687 2012-11-28 2013-09-17 一种人数及人群运动方向的计算方法及装置 WO2014082480A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP13857870.3A EP2927871A4 (en) 2012-11-28 2013-09-17 METHOD AND DEVICE FOR CALCULATING THE NUMBER OF PEDESTRIAN AND MOVING DIRECTIONS OF A QUANTITY
US14/648,030 US9576199B2 (en) 2012-11-28 2013-09-17 Method and device for calculating number and moving direction of pedestrians

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201210495159.6A CN103854292B (zh) 2012-11-28 2012-11-28 一种人数及人群运动方向的计算方法及装置
CN201210495159.6 2012-11-28

Publications (1)

Publication Number Publication Date
WO2014082480A1 true WO2014082480A1 (zh) 2014-06-05

Family

ID=50827151

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/083687 WO2014082480A1 (zh) 2012-11-28 2013-09-17 一种人数及人群运动方向的计算方法及装置

Country Status (4)

Country Link
US (1) US9576199B2 (zh)
EP (1) EP2927871A4 (zh)
CN (1) CN103854292B (zh)
WO (1) WO2014082480A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446922A (zh) * 2020-11-24 2021-03-05 厦门熵基科技有限公司 一种通道闸行人反向判断方法和装置

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844572B (zh) * 2016-03-25 2022-04-15 腾讯科技(深圳)有限公司 拥挤风险监控方法及拥挤风险监控装置
CN108346199A (zh) * 2017-01-22 2018-07-31 株式会社日立制作所 人数统计方法和人数统计装置
CN108197579B (zh) * 2018-01-09 2022-05-20 杭州智诺科技股份有限公司 防护舱中人数的检测方法
US11125800B1 (en) 2018-10-31 2021-09-21 United Services Automobile Association (Usaa) Electrical power outage detection system
US11789003B1 (en) 2018-10-31 2023-10-17 United Services Automobile Association (Usaa) Water contamination detection system
US11854262B1 (en) 2018-10-31 2023-12-26 United Services Automobile Association (Usaa) Post-disaster conditions monitoring system using drones
US11538127B1 (en) 2018-10-31 2022-12-27 United Services Automobile Association (Usaa) Post-disaster conditions monitoring based on pre-existing networks
CN109409318B (zh) * 2018-11-07 2021-03-02 四川大学 统计模型的训练方法、统计方法、装置及存储介质
CN110096959B (zh) * 2019-03-28 2021-05-28 上海拍拍贷金融信息服务有限公司 人流量计算方法、装置以及计算机存储介质
CN110263643B (zh) * 2019-05-20 2023-05-16 上海兑观信息科技技术有限公司 一种基于时序关系的快速视频人群计数方法
CN112257485A (zh) * 2019-07-22 2021-01-22 北京双髻鲨科技有限公司 一种对象检测的方法、装置、存储介质及电子设备
CN111260696B (zh) * 2020-01-21 2023-04-18 北京工业大学 一种面向边缘端行人跟踪和人数精确统计的方法
US11348338B2 (en) 2020-11-04 2022-05-31 Huawei Technologies Co., Ltd. Methods and systems for crowd motion summarization via tracklet based human localization
CN112415964A (zh) * 2020-11-13 2021-02-26 佛山市顺德区美的电子科技有限公司 一种控制方法、装置、家电设备及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101404086A (zh) * 2008-04-30 2009-04-08 浙江大学 基于视频的目标跟踪方法及装置
US20100021009A1 (en) * 2007-01-25 2010-01-28 Wei Yao Method for moving targets tracking and number counting
CN101976353B (zh) * 2010-10-28 2012-08-22 北京智安邦科技有限公司 低密度人群的统计方法及装置
CN102750710A (zh) * 2012-05-31 2012-10-24 信帧电子技术(北京)有限公司 一种图像中运动目标统计方法和装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7778466B1 (en) * 2003-12-02 2010-08-17 Hrl Laboratories, Llc System and method for processing imagery using optical flow histograms
WO2009102013A1 (ja) * 2008-02-14 2009-08-20 Nec Corporation 移動ベクトル検出装置
US9104909B2 (en) * 2010-12-15 2015-08-11 Canon Kabushiki Kaisha Image processing apparatus and method of processing image

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100021009A1 (en) * 2007-01-25 2010-01-28 Wei Yao Method for moving targets tracking and number counting
CN101404086A (zh) * 2008-04-30 2009-04-08 浙江大学 基于视频的目标跟踪方法及装置
CN101976353B (zh) * 2010-10-28 2012-08-22 北京智安邦科技有限公司 低密度人群的统计方法及装置
CN102750710A (zh) * 2012-05-31 2012-10-24 信帧电子技术(北京)有限公司 一种图像中运动目标统计方法和装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2927871A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112446922A (zh) * 2020-11-24 2021-03-05 厦门熵基科技有限公司 一种通道闸行人反向判断方法和装置

Also Published As

Publication number Publication date
US20150310275A1 (en) 2015-10-29
CN103854292B (zh) 2018-02-23
EP2927871A4 (en) 2016-11-23
US9576199B2 (en) 2017-02-21
EP2927871A1 (en) 2015-10-07
CN103854292A (zh) 2014-06-11

Similar Documents

Publication Publication Date Title
WO2014082480A1 (zh) 一种人数及人群运动方向的计算方法及装置
CN108846854B (zh) 一种基于运动预测与多特征融合的车辆跟踪方法
CN106886216B (zh) 基于rgbd人脸检测的机器人自动跟踪方法和***
JP6482195B2 (ja) 画像認識装置、画像認識方法及びプログラム
CN102982537B (zh) 一种检测场景变换的方法和***
CN105791774A (zh) 一种基于视频内容分析的监控视频传输方法
CN109708658B (zh) 一种基于卷积神经网络的视觉里程计方法
WO2014201971A1 (zh) 在线训练的目标检测方法及装置
CN113192105B (zh) 一种室内多人追踪及姿态估量的方法及装置
CN111353496B (zh) 一种红外弱小目标实时检测方法
CN116448019B (zh) 建筑节能工程质量平面度智能检测装置及方法
WO2023155482A1 (zh) 一种人群快速聚集行为的识别方法、***、设备及介质
WO2013075295A1 (zh) 低分辨率视频的服装识别方法及***
CN116977937A (zh) 一种行人重识别的方法及***
CN103400395A (zh) 一种基于haar特征检测的光流跟踪方法
CN103996199A (zh) 一种基于深度信息的运动检测方法
CN103905826A (zh) 一种自适应全局运动估计方法
CN107067411B (zh) 一种结合密集特征的Mean-shift跟踪方法
CN108573217B (zh) 一种结合局部结构化信息的压缩跟踪方法
CN111339824A (zh) 基于机器视觉的路面抛洒物检测方法
CN115588149A (zh) 基于匹配优先级的跨相机多目标级联匹配方法
CN104123569A (zh) 一种基于有监督学习的视频人数信息统计方法
CN108062861B (zh) 一种智能交通监控***
CN114022831A (zh) 一种基于双目视觉的牲畜体况监测方法及***
CN104463902B (zh) 一种基于nmi特征的静止目标消除方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13857870

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 14648030

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 2013857870

Country of ref document: EP