CN106778528A - A kind of method for detecting fatigue driving based on gaussian pyramid feature

Info

Publication number
CN106778528A
CN106778528A
Authority
CN
China
Prior art keywords: gaussian, fatigue driving, gaussian pyramid, feature, image
Prior art date
2016-11-24
Legal status
Pending
Application number
CN201611062106.XA
Other languages
Chinese (zh)
Inventor
张卫华 (Zhang Weihua)
周激流 (Zhou Jiliu)
周琳琳 (Zhou Linlin)
张意 (Zhang Yi)
林峰 (Lin Feng)
Current Assignee
Sichuan University
Original Assignee
Sichuan University
Priority date
2016-11-24
Filing date
2016-11-24
Publication date
2017-05-31
Application filed by Sichuan University filed Critical Sichuan University
Priority to CN201611062106.XA
Publication of CN106778528A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597: Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443: Local feature extraction by matching or filtering
    • G06V 10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The invention discloses a fatigue driving detection method based on Gaussian pyramid features, comprising the following steps: S1, preprocessing the captured driving image of the driver; S2, down-sampling the preprocessed image data to obtain multi-resolution Gaussian pyramid features; S3, matching the features against a pre-stored feature database and judging whether the driver is driving while fatigued. The method adopts Gaussian pyramid feature analysis; compared with the prior art, it has low algorithm complexity, high computational efficiency and high recognition accuracy.

Description

Fatigue driving detection method based on Gaussian pyramid features
Technical Field
The invention relates to the field of image recognition, and in particular to a fatigue driving detection method based on Gaussian pyramid features.
Background
With social and economic development, the automobile has become central to freight transport and daily travel; unfortunately, as automobile use has grown, the number of deaths and serious injuries from traffic accidents has risen year by year. Road traffic safety therefore attracts increasing attention, and improving automobile safety and reducing road traffic accidents through scientific and technological means is both a pressing social concern and an urgent problem to solve. Active driving safety is closely related to a person's behavior and habits: by monitoring and recognizing driving actions, their safety can be analyzed in advance and unsafe actions flagged to avoid accidents. During driving, the hand and foot actions that operate the vehicle and the driver's head movements are direct expressions of driving behavior, such as glancing at the instrument panel or checking the rearview mirror. There are also actions irrelevant to driving, such as looking down at the gear lever, staring out a side window for a long time, operating in-vehicle electronic devices for a long time, using a mobile phone while driving, or bending down to pick up objects; these create serious hidden dangers to driving safety. Developing a fast action recognition algorithm that sacrifices a little recognition rate while guaranteeing real-time performance as far as possible has therefore become another line of action recognition research.
Motion is important information for segmenting objects from background in a scene; such methods first assume the periodicity of pedestrian motion and separate the pedestrian from the background accordingly. Human action recognition is a challenging subject in computer vision: differences in people's appearance, unstable backgrounds, moving cameras, and scene lighting changes all add difficulty to action recognition. Under conventional visual sensor conditions, human motion can be represented by the continuous changes between adjacent frames of a video sequence. The analysis methods fall into three main categories. The first is spatio-temporal feature analysis, such as MHI (motion history images), SI (silhouette images), optical flow analysis, and spatio-temporal feature volumes. The second models the human trunk and limbs, detects the human body in each frame of the video sequence, and obtains model parameters to describe the action changes. The third applies image statistics to the low-level information of video frames; for example, in the research of Ke Yan, images in a video sequence are color-segmented, and action recognition is then performed using statistics of matching color blocks across consecutive images.
However, the first category (spatio-temporal feature analysis) has low computational complexity and is easy to implement, but is sensitive to image noise; the second (modeling the human trunk and limbs) describes motion accurately, but is computationally heavy and less suited to real-time use; the third (statistical identification of low-level frame information) cannot meet the requirements of real-time operation and ease of implementation. In summary, traditional recognition algorithms suffer from high computational complexity, low efficiency, and low recognition accuracy.
Disclosure of Invention
Provided is a fatigue driving detection method based on Gaussian pyramid features, which reduces algorithm complexity while achieving high computational efficiency and high recognition accuracy.
To achieve this purpose, the invention adopts the following technical scheme:
A fatigue driving detection method based on Gaussian pyramid features comprises the following steps:
S1, preprocessing the captured driving image of the driver;
S2, down-sampling the preprocessed image data to obtain multi-resolution Gaussian pyramid features;
S3, matching the features against a pre-stored feature database and judging whether the driver is driving while fatigued.
Further, step S1 comprises performing optical flow, Gaussian smoothing, and normalization calculations on the image in sequence.
Further, step S1 specifically comprises performing optical flow calculation on the current frame image f(i, j, t) and the previous frame image f(i, j, t-1) of the original video sequence to obtain the X-direction velocity U = {U(x, y), (x, y) ∈ I} and the Y-direction velocity V = {V(x, y), (x, y) ∈ I}. Separating the velocities by direction yields four features, rightward U⁺, leftward U⁻, downward V⁺, and upward V⁻, and Gaussian smoothing is performed on all features:

$$\hat U^{+}(x,y)=U^{+}(x,y)\otimes g(i,j,0,\delta),\qquad \hat U^{-}(x,y)=U^{-}(x,y)\otimes g(i,j,0,\delta),$$

$$\hat V^{+}(x,y)=V^{+}(x,y)\otimes g(i,j,0,\delta),\qquad \hat V^{-}(x,y)=V^{-}(x,y)\otimes g(i,j,0,\delta),$$

after which the results are normalized:

$$\hat{\hat U}^{+}=\hat U^{+}/|\hat U^{+}|,\quad \hat{\hat U}^{-}=\hat U^{-}/|\hat U^{-}|,\quad \hat{\hat V}^{+}=\hat V^{+}/|\hat V^{+}|,\quad \hat{\hat V}^{-}=\hat V^{-}/|\hat V^{-}|.$$

The similarity S(i, j) of the optical flow features obtained from the video sequences is then calculated:

$$S(i,j)=\sum_{c=1}^{4}\sum_{x,y\in I}a_{c}^{i}(x,y)\cdot b_{c}^{j}(x,y),$$

where $a_{c}^{i}$ and $b_{c}^{j}$ denote the computed values of the corresponding features at each pixel in the two optical flow feature sequences being compared, c indexes the four direction channels after Gaussian smoothing and normalization, and i and j are frame numbers in the respective sequences.

Finally, S(i, j) is convolved with an identity-matrix kernel to obtain:

$$S_{T}(i,j)=S(i,j)\otimes I(T).$$
further, the step S2 includes,
s21, performing multi-level down-sampling on the video sequence to form a video sequence pyramid with L levels, and performing multi-level down-sampling on an initialized video sequence f with a given resolution of M × N0(i, j, t), each layer fl(i, j, t) is calculated recursively by the following formula:
wherein f isl(i, j, t) represents that each frame of image f (i, j) is at the t th frame of pyramid level L (L is more than or equal to 0 and less than or equal to L), r (m, n) is a Gaussian filter,
wherein,
r(m,n)=r(m)r(n)
r(0)=a,r(1)=r(-1)=1/4,r(2)=r(-2)=1/4-2/a;
s22, starting from the level with the lowest resolution, calculating the motion characteristic sequence f of the l-th layerlHierarchically computing the test samples and each sample in the training setSelecting K candidates according to K neighbors according to the result of the similarity; f is then calculated at level l-1l-1(i, j, t) corresponds to the similarity between the motion feature and the K candidates previously selected in the l layer, and the result of K neighbors is used for comparison of the l-2 layer with higher resolution until the l with the highest resolution is equal to 0 layer.
Further, the pre-stored feature database stores abnormal driving behavior features.
Compared with the prior art, the invention has the following beneficial effects:
the fatigue driving detection method based on Gaussian pyramid features adopts Gaussian pyramid feature analysis, and compared with the prior art it has low algorithm complexity, high computational efficiency and high recognition accuracy.
Drawings
Fig. 1 is a flowchart of the fatigue driving detection method based on Gaussian pyramid features according to the present invention.
Fig. 2 shows the test results in one embodiment.
Detailed Description
The present invention will be described in further detail with reference to specific embodiments. It should be understood that the scope of the claimed subject matter is not limited to the following examples, and any technique implemented based on the disclosure of the present invention falls within the scope of the present invention.
Example 1:
A fatigue driving detection method based on Gaussian pyramid features comprises the following steps:
S1, preprocessing the captured driving image of the driver;
S2, down-sampling the preprocessed image data to obtain multi-resolution Gaussian pyramid features;
S3, matching the features against a pre-stored feature database and judging whether the driver is driving while fatigued.
The fatigue driving detection method based on Gaussian pyramid features adopts Gaussian pyramid feature analysis; compared with the prior art, it has low algorithm complexity, high computational efficiency and high recognition accuracy.
In a specific embodiment, step S1 comprises performing optical flow, Gaussian smoothing, and normalization calculations on the image in sequence.
In one embodiment, step S1 specifically comprises performing optical flow calculation on the current frame image f(i, j, t) and the previous frame image f(i, j, t-1) of the original video sequence to obtain the X-direction velocity U = {U(x, y), (x, y) ∈ I} and the Y-direction velocity V = {V(x, y), (x, y) ∈ I}. Separating the velocities by direction yields four features, rightward U⁺, leftward U⁻, downward V⁺, and upward V⁻, and Gaussian smoothing is performed on all features:

$$\hat U^{+}(x,y)=U^{+}(x,y)\otimes g(i,j,0,\delta),\qquad \hat U^{-}(x,y)=U^{-}(x,y)\otimes g(i,j,0,\delta),$$

$$\hat V^{+}(x,y)=V^{+}(x,y)\otimes g(i,j,0,\delta),\qquad \hat V^{-}(x,y)=V^{-}(x,y)\otimes g(i,j,0,\delta),$$

after which the results are normalized:

$$\hat{\hat U}^{+}=\hat U^{+}/|\hat U^{+}|,\quad \hat{\hat U}^{-}=\hat U^{-}/|\hat U^{-}|,\quad \hat{\hat V}^{+}=\hat V^{+}/|\hat V^{+}|,\quad \hat{\hat V}^{-}=\hat V^{-}/|\hat V^{-}|.$$

The similarity S(i, j) of the optical flow features obtained from the video sequences is then calculated:

$$S(i,j)=\sum_{c=1}^{4}\sum_{x,y\in I}a_{c}^{i}(x,y)\cdot b_{c}^{j}(x,y),$$

where $a_{c}^{i}$ and $b_{c}^{j}$ denote the computed values of the corresponding features at each pixel in the two optical flow feature sequences being compared, c indexes the four direction channels after Gaussian smoothing and normalization, and i and j are frame numbers in the respective sequences.

Finally, S(i, j) is convolved with an identity-matrix kernel to obtain:

$$S_{T}(i,j)=S(i,j)\otimes I(T).$$
In one embodiment, step S2 comprises:
S21, performing multi-level down-sampling on the video sequence to form an L-level video sequence pyramid: given an initial video sequence $f_{0}(i,j,t)$ with resolution M × N, each layer $f_{l}(i,j,t)$ is computed recursively by

$$f_{l}(i,j,t)=\sum_{m=-2}^{2}\sum_{n=-2}^{2}r(m,n)\,f_{l-1}(2i+m,\,2j+n,\,t),$$

where $f_{l}(i,j,t)$ denotes frame t of the image f(i, j) at pyramid level l (0 ≤ l ≤ L), and r(m, n) is a Gaussian filter with

$$r(m,n)=r(m)\,r(n),\qquad r(0)=a,\quad r(1)=r(-1)=1/4,\quad r(2)=r(-2)=1/4-a/2;$$

S22, starting from the lowest-resolution level, computing the similarity between the level-l motion feature sequence $f_{l}$ of the test sample and that of each sample in the training set, and selecting K candidates by K-nearest neighbours; then computing, at level l-1, the similarity between the motion features of $f_{l-1}(i,j,t)$ and the K candidates selected at level l, and carrying the K-nearest-neighbour result to the higher-resolution level l-2, and so on until the highest-resolution level l = 0 is reached.
To recognize driving actions in real time, identify and raise alarms for actions irrelevant to driving, overcome the inherent difficulties of driving action recognition, and recognize abnormal driving behaviors more accurately, the method proposes an optical-flow-based action feature pyramid and implements a fast action recognition algorithm constrained by coarse-to-fine DTW (Dynamic Time Warping). The image is first subjected to optical flow, Gaussian smoothing, and normalization calculations in sequence; the optical flow sequence data generated from the action video sequence is then down-sampled repeatedly to form multi-resolution Gaussian pyramid features; finally, actions are classified and recognized over the several pyramid levels, with the goal of improving overall recognition speed. The overall framework of action feature extraction, action recognition, and the algorithm is validated on databases commonly used in action recognition research and then applied to the practical problem of driving action recognition to judge abnormal driving behaviors. The proposed DTW algorithm automatically registers two sequences and recovers the registration path, without requiring prior knowledge of where the action begins and ends, and obtains accurate detection results under real-time requirements. The procedure is as follows: search a path in the similarity matrix such that every point on the path is computed from the most similar frames of the two actions, saving the coordinates and the direction of the previous point during the path computation; after one DTW pass is finished, return along the original path and remove the points on the boundary to obtain the true matching path.
Compared with the prior art, the method greatly increases the speed of the similarity calculation while the recognition rate drops only slightly, because adjacent feature-sequence layers of the pyramid are highly similar. With a suitable number of pyramid levels and a suitable K value for the K-nearest-neighbour step, the amount of computation can be reduced substantially with the recognition rate essentially unchanged, so abnormal driving actions can be recognized reasonably and efficiently. In particular, the algorithm solves the difficulty of determining the starting and ending points of a sequence in the traditional DTW method by aligning them automatically during the computation, which improves the recognition rate; meanwhile, the preprocessing of the optical flow features reduces the influence of noise on the recognition result and further improves the recognition rate.
In one embodiment, the pre-stored feature database stores abnormal driving behavior features.
Common undesirable driving behaviors include: (1) watching a non-driving direction for a long time, such as looking at the scenery outside the side windows; (2) looking down at the gear lever while shifting; (3) answering a mobile phone; (4) operating electronic devices or watching the instruments for a long time; and the like.
One embodiment of the invention recognizes and detects seven undesirable driving behaviors: bending the head down to pick up an object, turning the head to look back, looking at the left rearview mirror, looking at the center rearview mirror, looking at the right rearview mirror, answering a mobile phone, and looking down to check the gear.
A sample library is trained in advance on these seven behaviors, and the trained samples are stored to form the feature database. The similarity of behaviors during actual driving is then calculated with the algorithm of the invention, and the behavior corresponding to the sequence with the maximum similarity is selected as the recognition result. Fig. 2 shows the results of testing 126 action sequences in this embodiment: 120 actions (95.2%) were recognized completely correctly, and the 6 misrecognitions were: bending down to pick up an object, recognized as looking at the left rearview mirror; turning the head to look back, recognized as looking at the center rearview mirror; looking down to check the gear, recognized as looking at the right rearview mirror; looking at the right rearview mirror, recognized as turning the head to look back; answering the mobile phone, recognized as looking at the right rearview mirror; and turning the head to look back, recognized as looking at the right rearview mirror. Analysis of these errors shows that the confused actions are related to one another: when the action amplitude is too small, their optical flow features are very close and can be misidentified. The reliability of the method is therefore relatively high, and the algorithm completes recognition within 3 seconds on an ordinarily configured machine (in this embodiment, a 2.5 GHz dual-core CPU with 2 GB of memory).
After an abnormal driving action is recognized, a voice prompt is issued to remind the driver to stop the dangerous driving action.
While the present invention has been described in detail with reference to the embodiments shown in the drawings, the present invention is not limited to the above embodiments, and various modifications or alterations can be made by those skilled in the art without departing from the spirit and scope of the claims of the present application.

Claims (5)

1. A fatigue driving detection method based on Gaussian pyramid features, characterized by comprising the following steps:
S1, preprocessing the captured driving image of the driver;
S2, down-sampling the preprocessed image data to obtain multi-resolution Gaussian pyramid features;
S3, matching the features against a pre-stored feature database and judging whether the driver is driving while fatigued.
2. The fatigue driving detection method based on Gaussian pyramid features of claim 1, wherein said step S1 comprises performing optical flow, Gaussian smoothing and normalization calculations on said image in sequence.
3. The method of claim 2, wherein step S1 specifically comprises performing optical flow calculation on the current frame image f(i, j, t) and the previous frame image f(i, j, t-1) of the original video sequence to obtain the X-direction velocity U = {U(x, y), (x, y) ∈ I} and the Y-direction velocity V = {V(x, y), (x, y) ∈ I}; separating the velocities by direction yields four features, rightward U⁺, leftward U⁻, downward V⁺ and upward V⁻, and Gaussian smoothing is performed on all features:

$$\hat U^{+}(x,y)=U^{+}(x,y)\otimes g(i,j,0,\delta),\qquad \hat U^{-}(x,y)=U^{-}(x,y)\otimes g(i,j,0,\delta),$$

$$\hat V^{+}(x,y)=V^{+}(x,y)\otimes g(i,j,0,\delta),\qquad \hat V^{-}(x,y)=V^{-}(x,y)\otimes g(i,j,0,\delta),$$

the results of which are normalized:

$$\hat{\hat U}^{+}=\hat U^{+}/|\hat U^{+}|,\quad \hat{\hat U}^{-}=\hat U^{-}/|\hat U^{-}|,\quad \hat{\hat V}^{+}=\hat V^{+}/|\hat V^{+}|,\quad \hat{\hat V}^{-}=\hat V^{-}/|\hat V^{-}|;$$

the similarity S(i, j) of the optical flow features obtained from the video sequences is then calculated:

$$S(i,j)=\sum_{c=1}^{4}\sum_{x,y\in I}a_{c}^{i}(x,y)\cdot b_{c}^{j}(x,y),$$

wherein $a_{c}^{i}$ and $b_{c}^{j}$ denote the computed values of the corresponding features at each pixel in the two optical flow feature sequences being compared, c indexes the four direction channels after Gaussian smoothing and normalization, and i and j are frame numbers in the respective sequences;

finally, S(i, j) is convolved with an identity-matrix kernel to obtain:

$$S_{T}(i,j)=S(i,j)\otimes I(T).$$
4. The fatigue driving detection method based on Gaussian pyramid features of claim 1, wherein step S2 comprises:
S21, performing multi-level down-sampling on the video sequence to form an L-level video sequence pyramid: given an initial video sequence $f_{0}(i,j,t)$ with resolution M × N, each layer $f_{l}(i,j,t)$ is computed recursively by

$$f_{l}(i,j,t)=\sum_{m=-2}^{2}\sum_{n=-2}^{2}r(m,n)\,f_{l-1}(2i+m,\,2j+n,\,t),$$

wherein $f_{l}(i,j,t)$ denotes frame t of the image f(i, j) at pyramid level l (0 ≤ l ≤ L), and r(m, n) is a Gaussian filter with

$$r(m,n)=r(m)\,r(n),\qquad r(0)=a,\quad r(1)=r(-1)=1/4,\quad r(2)=r(-2)=1/4-a/2;$$

S22, starting from the lowest-resolution level, computing the similarity between the level-l motion feature sequence $f_{l}$ of the test sample and that of each sample in the training set, and selecting K candidates by K-nearest neighbours; then computing, at level l-1, the similarity between the motion features of $f_{l-1}(i,j,t)$ and the K candidates selected at level l, and carrying the K-nearest-neighbour result to the higher-resolution level l-2, and so on until the highest-resolution level l = 0 is reached.
5. The fatigue driving detection method based on Gaussian pyramid features of claim 1, wherein the pre-stored feature database stores abnormal driving behavior features.
CN201611062106.XA 2016-11-24 2016-11-24 A kind of method for detecting fatigue driving based on gaussian pyramid feature Pending CN106778528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611062106.XA CN106778528A (en) 2016-11-24 2016-11-24 A kind of method for detecting fatigue driving based on gaussian pyramid feature

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611062106.XA CN106778528A (en) 2016-11-24 2016-11-24 A kind of method for detecting fatigue driving based on gaussian pyramid feature

Publications (1)

Publication Number Publication Date
CN106778528A 2017-05-31

Family

ID=58910961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611062106.XA Pending CN106778528A (en) 2016-11-24 2016-11-24 A kind of method for detecting fatigue driving based on gaussian pyramid feature

Country Status (1)

Country Link
CN (1) CN106778528A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101770568A (en) * 2008-12-31 2010-07-07 南京理工大学 Target automatically recognizing and tracking method based on affine invariant point and optical flow calculation
US20140226913A1 (en) * 2009-01-14 2014-08-14 A9.Com, Inc. Method and system for matching an image using image patches
CN103489010A (en) * 2013-09-25 2014-01-01 吉林大学 Fatigue driving detecting method based on driving behaviors
CN103514448A (en) * 2013-10-24 2014-01-15 北京国基科技股份有限公司 Method and system for navicular identification
CN104331151A (en) * 2014-10-11 2015-02-04 中国传媒大学 Optical flow-based gesture motion direction recognition method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WEIHUA ZHANG et al.: "Action Recognition by Joint Spatial-Temporal Motion Feature", Journal of Applied Mathematics *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108334876A (en) * 2018-05-09 2018-07-27 华南理工大学 Tired expression recognition method based on image pyramid local binary pattern
CN110543848A (en) * 2019-08-29 2019-12-06 交控科技股份有限公司 Driver action recognition method and device based on three-dimensional convolutional neural network
CN110543848B (en) * 2019-08-29 2022-02-15 交控科技股份有限公司 Driver action recognition method and device based on three-dimensional convolutional neural network
CN110718067A (en) * 2019-09-23 2020-01-21 浙江大华技术股份有限公司 Violation behavior warning method and related device

Similar Documents

Publication Publication Date Title
CN110033002B (en) License plate detection method based on multitask cascade convolution neural network
Anagnostopoulos et al. A license plate-recognition algorithm for intelligent transportation system applications
CN109784150B (en) Video driver behavior identification method based on multitasking space-time convolutional neural network
US8320643B2 (en) Face authentication device
Tsai et al. Vehicle detection using normalized color and edge map
TWI384408B (en) Method and system for identifying image and outputting identification result
Ansari et al. Human detection techniques for real time surveillance: a comprehensive survey
Nowosielski et al. Embedded night-vision system for pedestrian detection
CN106384345B (en) A kind of image detection and flow statistical method based on RCNN
KR102132407B1 (en) Method and apparatus for estimating human emotion based on adaptive image recognition using incremental deep learning
CN108288047A (en) A kind of pedestrian/vehicle checking method
CN106778687A (en) Method for viewing points detecting based on local evaluation and global optimization
CN108764096B (en) Pedestrian re-identification system and method
Lee et al. Near-infrared-based nighttime pedestrian detection using grouped part models
Nurhadiyatna et al. Gabor filtering for feature extraction in real time vehicle classification system
CN106778528A (en) A kind of method for detecting fatigue driving based on gaussian pyramid feature
CN115861981A (en) Driver fatigue behavior detection method and system based on video attitude invariance
Xue et al. Nighttime pedestrian and vehicle detection based on a fast saliency and multifeature fusion algorithm for infrared images
CN111461181A (en) Vehicle fine-grained classification method and device
CN116740792A (en) Face recognition method and system for sightseeing vehicle operators
CN108647679B (en) Car logo identification method based on car window coarse positioning
Srivastava et al. Driver’s Face Detection in Poor Illumination for ADAS Applications
CN115352454A (en) Interactive auxiliary safe driving system
Qasim et al. Abandoned Object Detection and Classification Using Deep Embedded Vision
Lollett et al. Towards a driver's gaze zone classifier using a single camera robust to temporal and permanent face occlusions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2017-05-31)