CN107274395B - Bus entrance and exit passenger head detection method based on empirical mode decomposition - Google Patents


Info

Publication number
CN107274395B
CN107274395B (application CN201710441730.9A)
Authority
CN
China
Prior art keywords
image
value
function
order
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710441730.9A
Other languages
Chinese (zh)
Other versions
CN107274395A (en)
Inventor
孙伟嘉
***
王春卓
陈科
彭真明
李美惠
黄苏琦
彭凌冰
饶行妹
何艳敏
王卓然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201710441730.9A priority Critical patent/CN107274395B/en
Publication of CN107274395A publication Critical patent/CN107274395A/en
Application granted granted Critical
Publication of CN107274395B publication Critical patent/CN107274395B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/136 - Segmentation; Edge detection involving thresholding
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30196 - Human being; Person
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30242 - Counting objects in image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30248 - Vehicle exterior or interior
    • G06T 2207/30268 - Vehicle interior

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for detecting the heads of passengers at bus entrances and exits based on empirical mode decomposition, belonging to the field of bus-video passenger statistics applications. First, empirical mode decomposition is performed on the j-th row f_j(i) of the image to obtain the intrinsic mode functions imf_j^p(i) of the j-th row; next, the intrinsic mode functions of the j-th row are used to obtain the objective function F_j(i) of the j-th row; finally, the objective function F_j(i) yields a threshold t_q of the image, and the image is segmented with the threshold t_q, completing the extraction of passenger heads from the video. The method eliminates the influence of the complex background during bus operation, improves the accuracy and reliability of head identification and extraction, and effectively suppresses false detections caused by passenger clothing.

Description

Bus entrance and exit passenger head detection method based on empirical mode decomposition
Technical Field
The invention relates to the extraction of moving targets in video analysis, and in particular to a method for detecting the heads of passengers at bus entrances and exits based on empirical mode decomposition, used to extract passenger heads from the surveillance video of a bus system.
Background
The key to accurately counting the passengers on a bus is to correctly identify the targets of interest in the video sequence, continuously observe their motion state, judge from it the number of targets and their direction of movement, and finally complete the passenger count by applying counting rules. A good target-extraction method greatly improves the efficiency of subsequent processing such as target tracking and target behavior understanding.
Because images and video sequences are susceptible to factors such as lighting, temperature, and shadow, the extraction of moving objects is a difficult task. Commonly used moving-object recognition techniques include the time difference method, the optical flow method, learning-based methods, and the background subtraction method.
The time difference method detects a moving target by differencing the pixel gray levels of two consecutive frames of a video sequence and judging the target region with a threshold, exploiting the fact that a moving target produces an obvious gray-level change between adjacent frames while a static target changes little. The method requires little computation, detects quickly, is easy to implement, and is insensitive to illumination. However, for a uniformly colored moving object it can detect only the object's edges, which hampers subsequent segmentation and extraction of the image. Moreover, it can identify only moving targets, not static ones: once a target stops moving, the time difference method can no longer track it, which may cause the track to be lost.
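The two-frame differencing just described can be sketched as follows; this is a minimal illustration, and the threshold value of 25 is an arbitrary choice for the example, not a value taken from the patent.

```python
import numpy as np

def frame_difference(prev_gray, curr_gray, thresh=25):
    """Temporal differencing: flag pixels whose gray level changed by
    more than `thresh` between two consecutive frames (255 = moving)."""
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

# Static scene in which a small bright object appears between frames.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 2:4] = 200
mask = frame_difference(prev, curr)
```

Only pixels whose intensity actually changed are flagged, which is why a uniformly colored object that overlaps its previous position leaves just its edges in the mask.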
Moving object detection based on the optical flow method determines the motion direction of each pixel from the temporal variation and correlation of pixel gray levels in an image sequence; that is, it separates the foreground target from the background by studying how the image gray levels change over time. Its advantages are that the optical flow does not depend on the background, carries the motion information of the moving object, still detects the object when the camera itself moves, and is algorithmically stable. However, optical flow computation is time-consuming, giving poor real-time performance and practicality, and the method is highly sensitive to lighting disturbances and cannot detect stationary targets.
Recognition methods based on machine learning build positive and negative samples from the features of the foreground target to be detected; by training on these samples, a classifier learns to recognize sample features similar to the target. For recognition and tracking, a sliding search window is moved over the image, the features of the image inside the window are computed, the classifier checks whether the window contains foreground, and the window center then slides to the next position and the process repeats. Such methods depend heavily on sample training and feature-point selection, and their computational cost is high and operating efficiency low.
The background subtraction method detects moving objects by differencing the current image or image sequence against a background image and then selecting and extracting the moving-object region with a threshold. Under suitable conditions it yields the region where the moving target lies together with a complete outline; it suits dynamic target detection with a static camera and is a common target-detection technique in video surveillance. However, it is susceptible to lighting and weather, and hence to the shadows of passengers' bodies and to light intensity.
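A minimal sketch of the background difference just described, assuming a fixed camera and a precomputed background frame (the threshold 30 is illustrative, not from the patent):

```python
import numpy as np

def background_subtract(frame, background, thresh=30):
    """Foreground mask: 255 where the frame deviates from the
    reference background by more than `thresh` gray levels."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > thresh, 255, 0).astype(np.uint8)

background = np.full((6, 6), 100, dtype=np.uint8)  # empty doorway
frame = background.copy()
frame[1:3, 1:3] = 180                              # passenger enters
fg = background_subtract(frame, background)
```

A body shadow darkens background pixels by more than the threshold just as a passenger brightens them, which is exactly the failure mode the patent attributes to this method.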
Existing common methods cannot efficiently and accurately extract passengers' heads from surveillance footage and are easily disturbed by illumination, passenger clothing, and passenger body shadows during detection. As a result, bus boarding and alighting statistics systems that count human heads suffer high missed- and false-detection rates, poor robustness to illumination, and low operating efficiency.
Disclosure of Invention
In view of the above technical problems, the invention provides a method for detecting the heads of passengers at bus entrances and exits based on empirical mode decomposition, which solves the problems of missed detections, a high false-detection rate, poor robustness to illumination, and low operating efficiency caused by illumination, passenger clothing, and passenger body shadows in passenger boarding and alighting statistics systems.
The technical scheme adopted by the invention is as follows:
a bus entrance and exit passenger head detection method based on empirical mode decomposition comprises the following steps:
step 1: for image fj(i) Performing empirical mode decomposition on the jth line of image to obtain an inherent mode function of the jth line of image
Figure GDA0002733256440000021
Wherein j represents a row number of the image, i represents a column number of the image, and p represents an order number of the natural mode function;
step 2: using the natural mode function of line j
Figure GDA0002733256440000022
Get the objective function F of the j rowj(i);
And step 3: using said objective function Fj(i) Obtaining a threshold t of the imageqUsing said threshold value tqThe image is segmented.
Further, the empirical mode decomposition of the j-th row of the image that yields the intrinsic mode functions imf_j^p(i) of the j-th row proceeds as follows:

S201: graying the image to obtain the gray value of each pixel of the image, calculating the average gray value of the image, and subtracting the average gray value from the gray value of each pixel to obtain the processed image f_j(i);

S202: determining the maxima and minima of the j-th row of the image f_j(i) by the following formulas:

maximum: f_j(i) - f_j(i-1) > 0 && f_j(i+1) - f_j(i) < 0 (1),

minimum: f_j(i) - f_j(i-1) < 0 && f_j(i+1) - f_j(i) > 0 (2);
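Formulas (1) and (2) translate directly into a scan over one image row; the sketch below uses plain Python lists and is only an illustration of the extremum test, with the same strict inequalities as the formulas:

```python
def find_extrema(f):
    """Interior maxima and minima of a 1-D gray sequence f, per
    formulas (1) and (2): strict rise-then-fall marks a maximum,
    strict fall-then-rise marks a minimum."""
    maxima, minima = [], []
    for i in range(1, len(f) - 1):
        if f[i] - f[i - 1] > 0 and f[i + 1] - f[i] < 0:
            maxima.append(i)
        elif f[i] - f[i - 1] < 0 and f[i + 1] - f[i] > 0:
            minima.append(i)
    return maxima, minima

row = [3, 7, 2, 5, 9, 4, 4, 8, 1]
maxima, minima = find_extrema(row)  # plateaus (equal neighbors) match neither test
```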
S203: calculating an upper envelope formed by the maximum value in S202 and a lower envelope formed by the minimum value by cubic spline interpolation, and calculating the average value of the upper envelope and the lower envelope
Figure GDA0002733256440000031
Wherein p represents the order number of the natural mode function;
s204: order to
Figure GDA0002733256440000032
Judgment of
Figure GDA0002733256440000033
Whether the condition for becoming the inherent mode function is satisfied, if so, the order is given
Figure GDA0002733256440000034
If not, then order
Figure GDA0002733256440000035
Repeating steps S202-S204; wherein n is
Figure GDA0002733256440000036
The number of cycles of (c);
S205: letting r_j^p(i) = f_j(i) - imf_j^p(i) and letting f_j(i) = r_j^p(i), and repeating steps S202-S205 until r_j^p(i) is a monotone sequence or has only one extreme point, thereby obtaining the P-order intrinsic mode functions imf_j^p(i) of the j-th row of the image, wherein p ∈ [1, P] and P is the total number of orders of the mode functions.
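The sifting loop S202-S205 can be sketched end to end as below. This is a simplified illustration: the patent specifies cubic-spline envelopes, while np.interp (linear interpolation) is substituted here to keep the sketch dependency-free, and the intrinsic-mode-function test is replaced by a fixed number of sifting passes.

```python
import numpy as np

def sift_once(f):
    """One sifting pass: h(i) = f(i) - m(i), where m is the mean of the
    upper and lower envelopes through the extrema (S202-S204).
    Returns None when too few extrema remain to build envelopes."""
    i_max = [i for i in range(1, len(f) - 1) if f[i - 1] < f[i] > f[i + 1]]
    i_min = [i for i in range(1, len(f) - 1) if f[i - 1] > f[i] < f[i + 1]]
    if len(i_max) < 2 or len(i_min) < 2:
        return None
    x = np.arange(len(f))
    upper = np.interp(x, i_max, [f[i] for i in i_max])
    lower = np.interp(x, i_min, [f[i] for i in i_min])
    return f - (upper + lower) / 2.0

def emd_row(f, max_imfs=4, max_sift=10):
    """Decompose one mean-removed image row into IMFs; stop when the
    residual is monotone or has a single extremum (S205)."""
    f = np.asarray(f, dtype=float)
    f = f - f.mean()                      # S201: subtract the average gray value
    imfs, residual = [], f.copy()
    for _ in range(max_imfs):
        h = residual.copy()
        for _ in range(max_sift):         # fixed-pass stand-in for the IMF test
            h_new = sift_once(h)
            if h_new is None:
                break
            h = h_new
        imfs.append(h)
        residual = residual - h           # S205: r(i) = f(i) - imf(i)
        if sift_once(residual) is None:   # no more envelopes: stop
            break
    return imfs, residual

rng = np.random.default_rng(0)
row = 50 * np.sin(np.linspace(0, 6 * np.pi, 200)) + rng.normal(0, 5, 200)
imfs, residual = emd_row(row)
```

By construction the IMFs and residual sum back to the mean-removed row, which is the invariant that makes the later objective function a faithful low-frequency reconstruction.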
Further, the objective function F_j(i) is calculated as follows:

F_j(i) = Σ_{p=m}^{P} imf_j^p(i),

where m denotes the m-th of the P orders, chosen so that the intrinsic mode functions with p ∈ [m, P] are low-frequency functions.
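Under the assumption that the IMF list is ordered from highest frequency (p = 1) to lowest frequency (p = P), the objective function is just a partial sum over the low-frequency orders; the cut-off m = 2 below is purely illustrative:

```python
import numpy as np

def objective_function(imfs, m):
    """F_j(i) = sum of the intrinsic mode functions of orders m..P,
    i.e. the low-frequency part of the decomposition (m is 1-based)."""
    return np.sum(imfs[m - 1:], axis=0)

x = np.linspace(0.0, 2.0 * np.pi, 100)
imfs = [np.sin(10 * x), np.sin(2 * x), 0.1 * x]  # toy high-, mid-, low-frequency modes
F = objective_function(imfs, m=2)                # drop the highest-frequency mode
```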
Further, the threshold t_q is determined as follows:

S401: determining the maxima and minima of the objective function F_j(i) of the j-th row of the image by the following formulas:

maximum: F_j(i) - F_j(i-1) > 0 && F_j(i+1) - F_j(i) < 0 (7),

minimum: F_j(i) - F_j(i-1) < 0 && F_j(i+1) - F_j(i) > 0 (8),

the objective function F_j(i) containing Q pairs of adjacent extrema, each pair consisting of one maximum and one minimum; in the q-th pair of adjacent extrema, the gray value of the maximum is f1 and its column index is i1, the gray value of the minimum is f2 and its column index is i2, an interval exists between i1 and i2, and q ∈ [1, Q];

S402: obtaining the slope k_q between the adjacent extrema from their gray values and column indices:

k_q = |(f1 - f2)/(i1 - i2)|,

and letting k_max be the maximum of the slopes k_q;

S403: using the slope k_q to set the threshold t_q:

t_q = α(f1 - f2) + f2 (0 < α < 1) (11),

wherein, when k_q < μ·k_max (0 < μ < 1), 0 < α ≤ 0.5; when k_q > μ·k_max (0 < μ < 1), 0.5 < α < 1;

when f1 - f2 < T, t_q = t_{q-1}, wherein T is a preset fluctuation threshold;

S404: using the threshold t_q to threshold the pixels between the adjacent extrema: when f_j(i) > t_q, letting f_j(i) = 255; when f_j(i) ≤ t_q, letting f_j(i) = 0.
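S401-S404 can be sketched for a single pair of adjacent extrema as below. The concrete weights (α = 0.25 below μ·k_max, α = 0.75 above it) and the parameter mu are illustrative choices satisfying the stated constraints, not values fixed by the patent, and the fluctuation fallback t_q = t_{q-1} is omitted from this sketch.

```python
def segment_between_extrema(f, f1, f2, i1, i2, k_max, mu=0.5):
    """Adaptive local threshold for one adjacent max/min pair:
    t = a*(f1 - f2) + f2, with a <= 0.5 for gentle slopes and
    a > 0.5 for steep ones (S402-S403), then binarize the pixels
    between the two extrema (S404)."""
    k = abs((f1 - f2) / (i1 - i2))        # slope between the extrema
    a = 0.75 if k > mu * k_max else 0.25  # illustrative weights
    t = a * (f1 - f2) + f2
    lo, hi = sorted((i1, i2))
    for i in range(lo, hi + 1):
        f[i] = 255 if f[i] > t else 0
    return t

row = [10, 10, 40, 90, 140, 140]          # one dark-to-bright transition
t = segment_between_extrema(row, f1=140, f2=10, i1=4, i2=1, k_max=50)
```

With k ≈ 43.3 above mu*k_max = 25 the steep branch fires, giving t = 0.75*130 + 10 = 107.5, so only the pixel at the maximum survives the binarization.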
In summary, due to the adoption of the technical scheme, the invention has the beneficial effects that:
1. Empirical mode decomposition flattens the gray-level sequence of the image, preserving its dominant features, namely the features of the human head, while removing most irrelevant information such as shadows and passenger clothing.
2. An objective function is constructed from the intrinsic mode functions produced by empirical mode decomposition, and the image segmentation threshold is determined from this objective function, i.e. at the places where the image gray level changes sharply, so that the head features are segmented out and irrelevant information in the image is removed.
3. The proposed threshold segmentation algorithm is a local thresholding method; it effectively filters out small gray-level fluctuations, has low computational cost, high accuracy, and good real-time performance, and is suitable for real-time video analysis.
4. The method effectively removes the influence of passenger clothing, shadows, and illumination on head feature extraction, reduces the false- and missed-detection rates of head extraction, is robust to illumination, and runs efficiently.
Drawings
FIG. 1 is a flow chart of the invention;
FIG. 2 is a gray scale distribution before and after empirical mode decomposition of the jth line of images;
FIG. 3 is a gray scale distribution before and after empirical mode decomposition of an entire image;
FIG. 4 is an original video image;
fig. 5 is a detection diagram of a passenger head portrait;
FIG. 6 is an image of a video sequence after being processed by a general threshold segmentation algorithm;
FIG. 7 is a comparison graph of the same frame of image in the video sequence after being processed by the background difference method and the method of the present invention;
fig. 8 is an image resulting from processing a video sequence using the present invention.
Detailed Description
All features disclosed in this specification may be combined in any combination, except combinations of features and/or steps that are mutually exclusive.
The present invention will be described in detail with reference to the accompanying drawings.
A bus entrance and exit passenger head detection method based on empirical mode decomposition comprises the following steps:
step 1: for image fj(i) Performing empirical mode decomposition on the jth line of image to obtain an inherent mode function of the jth line of image
Figure GDA0002733256440000051
The method comprises the following specific steps:
s201: graying the image to obtain the gray value of each pixel of the image, calculating the average gray value of the image, and subtracting the average gray value from the gray value of each pixel in the image to obtain the processed image fj(i);
S202: determining an image fj(i) The maximum and minimum values in the j-th row are as follows:
maximum value: f. ofj(i)-fj(i-1)>0&&fj(i+1)-fj(i)<0 (12),
Minimum value: f. ofj(i)-fj(i-1)<0&&fj(i+1)-fj(i)>0 (13);
S203: calculating an upper envelope formed by the maximum value in S202 and a lower envelope formed by the minimum value by cubic spline interpolation, and calculating the average value of the upper envelope and the lower envelope
Figure GDA0002733256440000052
Wherein p represents the order number of the natural mode function;
s204: order to
Figure GDA0002733256440000053
Judgment of
Figure GDA0002733256440000054
Whether a condition to become a natural mode function is satisfied, the condition being:
(1)
Figure GDA0002733256440000055
the number of the extreme points and the number of the zero-crossing points are equal or different by one;
(2) at any one of the time points,
Figure GDA0002733256440000056
the difference from zero is less than 0.1;
if the condition is satisfied, order
Figure GDA0002733256440000057
If the condition is not satisfied, then order
Figure GDA0002733256440000058
Repeating steps S202-S204; wherein n is
Figure GDA0002733256440000059
The number of cycles of (c);
S205: letting r_j^p(i) = f_j(i) - imf_j^p(i) and letting f_j(i) = r_j^p(i), and repeating steps S202-S205 until r_j^p(i) is a monotone sequence or has only one extreme point, thereby obtaining the P-order intrinsic mode functions imf_j^p(i) of the j-th row of the image, wherein p ∈ [1, P] and P is the total number of orders of the mode functions.
Step 2: using the intrinsic mode functions imf_j^p(i) of the j-th row to obtain the objective function F_j(i) of the j-th row, calculated as follows:

F_j(i) = Σ_{p=m}^{P} imf_j^p(i),

where m denotes the m-th of the P orders, chosen so that the intrinsic mode functions with p ∈ [m, P] are low-frequency functions.
Step 3: using the objective function F_j(i) to obtain a threshold t_q of the image, and segmenting the image using the threshold t_q; the threshold t_q is determined as follows:

S401: determining the maxima and minima of the objective function F_j(i) of the j-th row of the image by the following formulas:

maximum: F_j(i) - F_j(i-1) > 0 && F_j(i+1) - F_j(i) < 0 (18),

minimum: F_j(i) - F_j(i-1) < 0 && F_j(i+1) - F_j(i) > 0 (19),

the objective function F_j(i) containing Q pairs of adjacent extrema, each pair consisting of one maximum and one minimum; in the q-th pair of adjacent extrema, the gray value of the maximum is f1 and its column index is i1, the gray value of the minimum is f2 and its column index is i2, an interval exists between i1 and i2, and q ∈ [1, Q];

S402: obtaining the slope k_q between the adjacent extrema from their gray values and column indices:

k_q = |(f1 - f2)/(i1 - i2)|,

and letting k_max be the maximum of the slopes k_q;

S403: using the slope k_q to set the threshold t_q:

t_q = α(f1 - f2) + f2 (0 < α < 1) (22),

wherein, when k_q < μ·k_max (0 < μ < 1), 0 < α ≤ 0.5; when k_q > μ·k_max (0 < μ < 1), 0.5 < α < 1; when f1 - f2 < T, t_q = t_{q-1}, wherein T is a preset fluctuation threshold;

S404: using the threshold t_q to threshold the pixels between the adjacent extrema: when f_j(i) > t_q, letting f_j(i) = 255; when f_j(i) ≤ t_q, letting f_j(i) = 0.
The above is an embodiment of the present invention. The invention is not limited to this embodiment: any change made under its teaching whose technical solution is identical or similar to that of the invention falls within its protection scope.

Claims (3)

1. A method for detecting the heads of passengers at bus entrances and exits based on empirical mode decomposition, characterized by comprising the following steps:

step 1: performing empirical mode decomposition on the j-th row f_j(i) of the image to obtain the intrinsic mode functions imf_j^p(i) of the j-th row, wherein j represents the row index of the image, i represents the column index of the image, and p represents the order index of the intrinsic mode function;

step 2: using the intrinsic mode functions imf_j^p(i) of the j-th row to obtain the objective function F_j(i) of the j-th row;

step 3: using the objective function F_j(i) to obtain a threshold t_q of the image, and segmenting the image using the threshold t_q;

the threshold t_q being determined as follows:

S401: determining the maxima and minima of the objective function F_j(i) of the j-th row of the image by the following formulas:

maximum: F_j(i) - F_j(i-1) > 0 && F_j(i+1) - F_j(i) < 0,

minimum: F_j(i) - F_j(i-1) < 0 && F_j(i+1) - F_j(i) > 0,

the objective function F_j(i) containing Q pairs of adjacent extrema, each pair consisting of one maximum and one minimum; in the q-th pair of adjacent extrema, the gray value of the maximum is f1 and its column index is i1, the gray value of the minimum is f2 and its column index is i2, an interval exists between i1 and i2, and q ∈ [1, Q];

S402: obtaining the slope k_q between the adjacent extrema from their gray values and column indices:

k_q = |(f1 - f2)/(i1 - i2)|,

and letting k_max be the maximum of the slopes k_q;

S403: using the slope k_q to set the threshold t_q:

t_q = α(f1 - f2) + f2 (0 < α < 1),

wherein, when k_q < μ·k_max (0 < μ < 1), 0 < α ≤ 0.5; when k_q > μ·k_max (0 < μ < 1), 0.5 < α < 1;

when f1 - f2 < T, t_q = t_{q-1}, wherein T is a preset fluctuation threshold;

S404: using the threshold t_q to threshold the pixels between the adjacent extrema: when f_j(i) > t_q, letting f_j(i) = 255; when f_j(i) ≤ t_q, letting f_j(i) = 0.
2. The method for detecting the heads of passengers at bus entrances and exits based on empirical mode decomposition according to claim 1, characterized in that the specific steps of step 1 are as follows:

S201: graying the image to obtain the gray value of each pixel of the image, calculating the average gray value of the image, and subtracting the average gray value from the gray value of each pixel to obtain the processed image f_j(i);

S202: determining the maxima and minima of the j-th row of the image f_j(i) by the following formulas:

maximum: f_j(i) - f_j(i-1) > 0 && f_j(i+1) - f_j(i) < 0,

minimum: f_j(i) - f_j(i-1) < 0 && f_j(i+1) - f_j(i) > 0;

S203: obtaining by cubic spline interpolation the upper envelope formed by the maxima of S202 and the lower envelope formed by the minima, and calculating the mean m_j^p(i) of the upper and lower envelopes, wherein p represents the order index of the intrinsic mode function;

S204: letting h_j^n(i) = f_j(i) - m_j^p(i) and judging whether h_j^n(i) satisfies the conditions for an intrinsic mode function; if satisfied, letting imf_j^p(i) = h_j^n(i); if not satisfied, letting f_j(i) = h_j^n(i) and repeating steps S202-S204, wherein n is the number of sifting cycles of h_j^n(i);

S205: letting r_j^p(i) = f_j(i) - imf_j^p(i) and letting f_j(i) = r_j^p(i), and repeating steps S202-S205 until r_j^p(i) is a monotone sequence or has only one extreme point, thereby obtaining the P-order intrinsic mode functions imf_j^p(i) of the j-th row of the image, wherein p ∈ [1, P] and P is the total number of orders of the mode functions.
3. The method for detecting the heads of passengers at bus entrances and exits based on empirical mode decomposition according to claim 1, characterized in that the objective function F_j(i) is calculated as follows:

F_j(i) = Σ_{p=m}^{P} imf_j^p(i),

where m denotes the m-th of the P orders, chosen so that the intrinsic mode functions with p ∈ [m, P] are low-frequency functions.
CN201710441730.9A 2017-06-13 2017-06-13 Bus entrance and exit passenger head detection method based on empirical mode decomposition Active CN107274395B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710441730.9A CN107274395B (en) 2017-06-13 2017-06-13 Bus entrance and exit passenger head detection method based on empirical mode decomposition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710441730.9A CN107274395B (en) 2017-06-13 2017-06-13 Bus entrance and exit passenger head detection method based on empirical mode decomposition

Publications (2)

Publication Number Publication Date
CN107274395A CN107274395A (en) 2017-10-20
CN107274395B true CN107274395B (en) 2020-12-29

Family

ID=60066936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710441730.9A Active CN107274395B (en) 2017-06-13 2017-06-13 Bus entrance and exit passenger head detection method based on empirical mode decomposition

Country Status (1)

Country Link
CN (1) CN107274395B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872425A (en) * 2010-07-29 2010-10-27 哈尔滨工业大学 Empirical mode decomposition based method for acquiring image characteristics and measuring corresponding physical parameters
CN102129676A (en) * 2010-01-19 2011-07-20 中国科学院空间科学与应用研究中心 Microscopic image fusing method based on two-dimensional empirical mode decomposition
CN102855623A (en) * 2012-07-19 2013-01-02 哈尔滨工业大学 Method for measuring myocardium ultrasonic angiography image physiological parameters based on empirical mode decomposition (EMD)
CN106446870A (en) * 2016-10-21 2017-02-22 湖南文理学院 Human body contour feature extracting method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120184825A1 (en) * 2011-01-17 2012-07-19 Meir Ben David Method for detecting and analyzing sleep-related apnea, hypopnea, body movements, and snoring with non-contact device
CN102184529B (en) * 2011-05-12 2012-07-25 西安电子科技大学 Empirical-mode-decomposition-based edge detecting method
US10242126B2 (en) * 2012-01-06 2019-03-26 Technoimaging, Llc Method of simultaneous imaging of different physical properties using joint inversion of multiple datasets
CN103871047A (en) * 2013-12-31 2014-06-18 江南大学 Gray level fluctuation threshold segmentation method of image with non-uniform illumination
CN105160674A (en) * 2015-08-28 2015-12-16 北京联合大学 Improved quick bidimensional empirical mode decomposition method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102129676A (en) * 2010-01-19 2011-07-20 中国科学院空间科学与应用研究中心 Microscopic image fusing method based on two-dimensional empirical mode decomposition
CN101872425A (en) * 2010-07-29 2010-10-27 哈尔滨工业大学 Empirical mode decomposition based method for acquiring image characteristics and measuring corresponding physical parameters
CN102855623A (en) * 2012-07-19 2013-01-02 哈尔滨工业大学 Method for measuring myocardium ultrasonic angiography image physiological parameters based on empirical mode decomposition (EMD)
CN106446870A (en) * 2016-10-21 2017-02-22 湖南文理学院 Human body contour feature extracting method and device

Also Published As

Publication number Publication date
CN107274395A (en) 2017-10-20

Similar Documents

Publication Publication Date Title
CN107563347B (en) Passenger flow counting method and device based on TOF camera
CN102598057B (en) Method and system for automatic object detection and subsequent object tracking in accordance with the object shape
US20230289979A1 (en) A method for video moving object detection based on relative statistical characteristics of image pixels
AU2009295350B2 (en) Detection of vehicles in an image
CN102903119B (en) A kind of method for tracking target and device
CN106022231A (en) Multi-feature-fusion-based technical method for rapid detection of pedestrian
Pan et al. Robust abandoned object detection using region-level analysis
Xia et al. A novel sea-land segmentation algorithm based on local binary patterns for ship detection
CN103745216B (en) A kind of radar image clutter suppression method based on Spatial characteristic
CN105741319B (en) Improvement visual background extracting method based on blindly more new strategy and foreground model
CN104408482A (en) Detecting method for high-resolution SAR (Synthetic Aperture Radar) image object
Momin et al. Vehicle detection and attribute based search of vehicles in video surveillance system
CN106204594A (en) A kind of direction detection method of dispersivity moving object based on video image
CN103413149B (en) Method for detecting and identifying static target in complicated background
Karpagavalli et al. Estimating the density of the people and counting the number of people in a crowd environment for human safety
Surkutlawar et al. Shadow suppression using rgb and hsv color space in moving object detection
CN108985216B (en) Pedestrian head detection method based on multivariate logistic regression feature fusion
CN109271902B (en) Infrared weak and small target detection method based on time domain empirical mode decomposition under complex background
CN107274395B (en) Bus entrance and exit passenger head detection method based on empirical mode decomposition
CN110502968A (en) The detection method of infrared small dim moving target based on tracing point space-time consistency
Yang et al. A hierarchical approach for background modeling and moving objects detection
El Baf et al. Fuzzy foreground detection for infrared videos
Sujatha et al. An innovative moving object detection and tracking system by using modified region growing algorithm
CN106250859B (en) The video flame detecting method spent in a jumble is moved based on characteristic vector
Bartoli et al. Unsupervised scene adaptation for faster multi-scale pedestrian detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant