CN106980823A - Action recognition method based on inter-frame self-similarity - Google Patents
Action recognition method based on inter-frame self-similarity
- Publication number
- CN106980823A CN106980823A CN201710150592.9A CN201710150592A CN106980823A CN 106980823 A CN106980823 A CN 106980823A CN 201710150592 A CN201710150592 A CN 201710150592A CN 106980823 A CN106980823 A CN 106980823A
- Authority
- CN
- China
- Prior art keywords
- action
- sample
- interframe
- self
- identification method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
- G06V40/25—Recognition of walking or running movements, e.g. gait recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an action recognition method based on inter-frame self-similarity, comprising the following steps: extracting a global optical-flow feature from each processed action sample, and representing each action sample by a feature matrix; computing the self-similarity between the features of each pair of frames from the sample's feature matrix to obtain a self-similarity matrix; extracting a new feature vector for each action sample from the self-similarity matrix; learning a classification model with a support vector machine from the new feature vectors, classifying the new feature vector of a test sample with the learned model, and thereby completing the action recognition task under an unknown viewing angle. By extracting the inter-frame self-similarity features of action samples, embodiments of the present invention learn from action samples at known viewing angles an action recognition model with high robustness, realizing action recognition at unknown viewing angles.
Description
Technical field
The present invention relates to the field of action recognition, and in particular to an action recognition method based on inter-frame self-similarity.
Background art
Human action recognition is a highly important problem in computer vision and is both a focus and a difficulty of computer vision research. Its applications are wide-ranging and closely tied to everyday life, for example digital entertainment, smart furniture, video surveillance, and the prevention of crime. However, owing to the large differences between individuals, the flexibility and diversity of actions, camera motion, uncontrollable environments, occlusions between people and between people and objects, and complex backgrounds, human action and behaviour recognition remains a complex and challenging problem.
Early action recognition research mainly focused on how to recognize actions from a single viewpoint in a controlled environment. To study this problem, researchers proposed extracting local features from video so as to convert visual information into quantifiable information for recognition. Early methods of this kind considered only the spatial information extracted from the colour information of the video, so their recognition performance was unsatisfactory. Researchers then proposed combining local spatial information with temporal information, introducing features such as spatio-temporal interest points and scale-invariant transforms in order to extract more action information from the video. The recognition accuracy of such methods improved over earlier work, but because they exploit only local spatio-temporal information and ignore the global information of the action, their robustness is still unsatisfactory.
In recent years, action recognition algorithms based on deep learning have attracted the attention of many researchers. Because these methods build learning models on complex neural networks, they can obtain classification models with good robustness through training. At present such methods mainly address action recognition from a single viewpoint; since many environmental factors in such tasks are manually controlled and relatively simple, a large number of action recognition algorithms achieve good results on this problem.
More recent work on human action recognition focuses on how to recognize actions in complex environments and across different viewing angles. Unlike recognition from a single viewpoint, action samples of the same type have different characteristics at different viewing angles. A model learned directly from action samples at a known viewing angle is therefore unlikely to perform well at another viewing angle. At the same time, collecting action samples at all viewing angles to train one general action recognition model is unrealistic. First, the number of training samples required would be enormous, making it difficult to learn a classification model with good robustness. Second, the action categories present at different viewing angles are not necessarily the same, so it is difficult to guarantee sufficient training samples for every action category.
To solve this problem, researchers have proposed methods such as cross-view learning and cross-domain learning, which learn the features of the corresponding action samples at two viewing angles and thereby map samples from one viewing angle to the other. Such methods achieve fairly good results in cross-view action recognition, but they require action samples from the test viewing angle during training. In practice, when action recognition is needed at an unknown viewing angle, samples from that viewing angle are generally unavailable.
Summary of the invention
The invention provides an action recognition method based on inter-frame self-similarity. By learning the self-similarity features of the different stages of an action itself, the invention achieves action recognition under an unknown viewing angle, as described below:
An action recognition method based on inter-frame self-similarity, comprising the following steps:
extracting a global optical-flow feature from each processed action sample, and representing each action sample by a feature matrix;
computing the self-similarity between the features of each pair of frames from the sample's feature matrix to obtain a self-similarity matrix, and extracting a new feature vector for each action sample from the self-similarity matrix;
learning a classification model with a support vector machine from the new feature vectors, classifying the new feature vector of a test sample with the learned model, and completing the action recognition task under an unknown viewing angle.
The action recognition method further comprises the following steps:
collecting motion information of the human body recorded at different viewing angles, and building a multi-view action database; extracting the action samples at the training viewing angles and at the test viewing angle;
unifying the frame length of the extracted action samples so that every sample has the same number of video frames.
The step of unifying the frame length of the extracted action samples is specifically:
computing the average frame count j of the videos of all action samples, then extracting m = j/2 frames from each action sample video at equal intervals in the temporal order of the image sequence, and reassembling them into the processed action sample video.
The step of extracting a global optical-flow feature from each processed action sample and representing each action sample by a feature matrix is specifically:
after obtaining the optical flow of every point, concatenating the optical flows of all pixels in each frame image into the optical-flow feature vector of that frame;
combining the optical-flow feature vectors of all frames in temporal order to obtain the feature matrix of the action video sample.
The beneficial effects of the technical scheme provided by the invention are:
1. By learning the inherent self-similarity features of the action itself, action information unrelated to the viewing angle can be extracted using only the global optical-flow feature, facilitating action recognition at unknown viewing angles;
2. The feature vector of an action sample is obtained without complicated dictionary learning or feature coding, saving the time needed to complete action recognition;
3. The proposed inter-frame self-similarity recognition procedure is simple and easily generalized to practical use.
Brief description of the drawings
Fig. 1 is a flow chart of the action recognition method based on inter-frame self-similarity.
Detailed description
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.
In order to better solve the problems described in the background art, embodiments of the present invention propose an action recognition method based on inter-frame self-similarity. Although different actions have different appearances at different viewing angles, the inter-frame self-similarity of an action itself does not change with the viewing angle. By extracting the inter-frame self-similarity features of action samples, an action recognition model with high robustness can be learned from action samples at known viewing angles alone, and this model can perform action recognition at unknown viewing angles.
Embodiment 1
An action recognition method based on inter-frame self-similarity, referring to Fig. 1, comprises the following steps:
101: extract a global optical-flow feature from each processed action sample, and represent each action sample by a feature matrix;
102: compute the self-similarity between the features of each pair of frames from the sample's feature matrix to obtain a self-similarity matrix, and extract a new feature vector for each action sample from the self-similarity matrix;
103: learn a classification model with a support vector machine from the new feature vectors, classify the new feature vector of a test sample with the learned model, and complete the action recognition task under an unknown viewing angle.
The action recognition method further comprises the following steps:
collecting motion information of the human body recorded at different viewing angles, and building a multi-view action database; extracting the action samples at the training viewing angles and at the test viewing angle;
unifying the frame length of the extracted action samples so that every sample has the same number of video frames.
The step of unifying the frame length of the extracted action samples is specifically:
computing the average frame count j of the videos of all action samples, then extracting m = j/2 frames from each action sample video at equal intervals in the temporal order of the image sequence, and reassembling them into the processed action sample video.
In step 101, the step of extracting a global optical-flow feature from each processed action sample and representing each action sample by a feature matrix is specifically:
after obtaining the optical flow of every point, concatenating the optical flows of all pixels in each frame image into the optical-flow feature vector of that frame;
combining the optical-flow feature vectors of all frames in temporal order to obtain the feature matrix of the action video sample.
In summary, by extracting the inter-frame self-similarity features of action samples, the embodiment of the present invention learns from action samples at known viewing angles an action recognition model with high robustness, realizing action recognition at unknown viewing angles.
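The three steps of Embodiment 1 can be sketched compactly in Python. This is an illustrative sketch only: as a simplifying assumption, the raw flattened frames stand in for the per-frame optical-flow vectors of step 101, and the SVM of step 103 is omitted; the function name and data layout are ours, not part of the original disclosure.

```python
import numpy as np

def self_similarity_features(frames, m):
    """Steps 101-102 in simplified form: sample m frames at equal
    intervals, build per-frame feature vectors (here the flattened
    frames themselves stand in for optical-flow vectors), and return
    the pairwise Euclidean distances between frames as one vector."""
    idx = np.linspace(0, len(frames) - 1, m).astype(int)  # equal-interval sampling
    F = np.stack([np.asarray(frames[i], float).ravel() for i in idx])
    D = np.linalg.norm(F[:, None, :] - F[None, :, :], axis=2)  # m x m distance matrix
    return D[~np.eye(m, dtype=bool)]  # off-diagonal entries d_ij, i != j
```

The resulting m(m-1)-dimensional vectors would then be used to train and query the support vector machine of step 103.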
Embodiment 2
The scheme of Embodiment 1 is further described below with reference to Fig. 1, Table 1 and the specific calculation formulas:
201: collect and record motion information of the human body at different viewing angles, and build a multi-view action database;
During recording, each action is recorded separately at each viewing angle, ensuring that the action samples at different viewing angles have no direct temporal or spatial correlation. Table 1 gives the action list of the database built.
Table 1: Action list
In specific implementation, the embodiment of the present invention is not limited to the above specific actions; other actions may be used, set according to the needs of the practical application.
202: extract the action samples at the training viewing angles and at the test viewing angle respectively;
The action samples at the training viewing angles are mainly used to train the action classification model, and the action samples at the test viewing angle are used to test action recognition.
203: unify the frame length of the above action samples so that every action sample has the same number of video frames;
First compute the average frame count j of the videos of all action samples, then extract m = j/2 frames from each action sample video at equal intervals in the temporal order of the image sequence, and reassemble them into the processed action sample video.
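Step 203 can be sketched as follows, assuming each action sample is given as a list of frames; the function name and data layout are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def unify_frame_length(samples):
    """Resample every action sample to the same frame count.

    Each sample is a list of frames (e.g. numpy arrays). Following the
    described step, m = j/2 frames are taken at equal intervals in time
    order, where j is the average frame count over all samples."""
    j = sum(len(s) for s in samples) / len(samples)  # average frame count j
    m = int(j // 2)                                  # target length m = j/2
    unified = []
    for s in samples:
        # pick m frame indices evenly spaced over the sample, in time order
        idx = np.linspace(0, len(s) - 1, m).astype(int)
        unified.append([s[i] for i in idx])
    return unified
```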
204: extract a global optical-flow feature from each processed action sample, and represent each action sample by a feature matrix;
For a point (x, y) in each frame image of a sample video, let I(x, y, t) denote the gray level of the point at frame t. In another frame, when this point has moved to (x+Δx, y+Δy), its gray value is I(x+Δx, y+Δy, t+Δt). For the same point, the optical-flow constraint equation shown in formula (1) must hold:
I(x, y, t) = I(x+Δx, y+Δy, t+Δt)    (1)
Expanding formula (1) by Taylor's formula and neglecting higher-order terms gives formula (2):
(∂I/∂x)·Δx + (∂I/∂y)·Δy + (∂I/∂t)·Δt = 0    (2)
Dividing formula (2) by Δt shows that it is equivalent to formula (3):
I_x·u + I_y·v + I_t = 0    (3)
where u = Δx/Δt is the motion component of the point in the x direction and v = Δy/Δt its motion component in the y direction, while I_x = ∂I/∂x and I_y = ∂I/∂y are the gray gradients of the point along x and y, and I_t = ∂I/∂t is its gradient along t. The optical flow (u, v) is assumed to be a constant within a pixel window of size l × l (the embodiment of the present invention takes l = 3 as an example; specific implementations are not limited to this value), so these z = l² pixels give the system of equations shown in formula (4):
I_xi·u + I_yi·v = −I_ti,   i = 1, 2, …, z    (4)
Formula (4) is written in matrix form as formula (5):
A·p = b    (5)
where A is the z × 2 matrix whose i-th row is (I_xi, I_yi), p = (u, v)^T, and b = (−I_t1, …, −I_tz)^T.
Applying the least-squares method to formula (5) gives formula (6), which solves for the corresponding optical flow (u, v):
(u, v)^T = (A^T·A)^(−1)·A^T·b    (6)
where I_xi is the gray gradient of pixel i along the x axis, I_yi its gray gradient along the y axis, and I_ti its gray gradient along the t axis.
After the optical flow (u, v) of every point is obtained, the optical flows of all pixels in each frame image are concatenated into the optical-flow feature vector o of that frame image. The optical-flow feature vectors of all frames are combined in temporal order into the feature matrix H = [o_1, o_2, …, o_m] of the action video sample.
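The per-pixel least-squares solution of formula (6) and the assembly of the feature matrix H can be sketched as follows. The gradient approximations and function names are our own assumptions; note that optical flow is computed between consecutive frame pairs, so m frames yield m−1 flow-feature vectors in this sketch, and border pixels are left with zero flow.

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, l=3):
    """Dense Lucas-Kanade optical flow between two grayscale frames.

    For each pixel, solves the least-squares system of formula (6) over
    an l x l window (z = l*l constraint equations per pixel)."""
    Ix = np.gradient(frame1.astype(float), axis=1)    # gray gradient along x
    Iy = np.gradient(frame1.astype(float), axis=0)    # gray gradient along y
    It = frame2.astype(float) - frame1.astype(float)  # gradient along t
    r = l // 2
    h, w = frame1.shape
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            # stack the z = l*l constraint equations A p = b of formula (5)
            A = np.stack([Ix[y-r:y+r+1, x-r:x+r+1].ravel(),
                          Iy[y-r:y+r+1, x-r:x+r+1].ravel()], axis=1)
            b = -It[y-r:y+r+1, x-r:x+r+1].ravel()
            # least-squares solution (u, v) of formula (6)
            uv, *_ = np.linalg.lstsq(A, b, rcond=None)
            flow[y, x] = uv
    return flow

def feature_matrix(frames):
    """Concatenate the per-pixel flow of each consecutive frame pair into
    one column vector o_i, giving the feature matrix H = [o_1, ..., o_{m-1}]."""
    cols = [lucas_kanade_flow(frames[i], frames[i + 1]).ravel()
            for i in range(len(frames) - 1)]
    return np.stack(cols, axis=1)
```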
205: from the sample feature matrix extracted in step 204, compute the self-similarity between the features of each pair of frames to obtain the self-similarity matrix, and extract the new feature vector of each action sample from the self-similarity matrix;
For the i-th frame and the j-th frame in the feature matrix, the Euclidean distance d_ij between their optical-flow feature vectors o_i and o_j is computed. The value of the distance expresses the similarity between the two frames: the smaller the value, the greater the similarity. When i = j, the distance between the current frame and itself is computed; since the two frames are identical, this distance is 0. The calculation yields the inter-frame self-similarity matrix shown in formula (7):
D = [ 0, d_12, …, d_1m ; d_21, 0, …, d_2m ; … ; d_m1, d_m2, …, 0 ]    (7)
Taking out the values of the Euclidean distances between different frames from formula (7) in turn, from left to right and from top to bottom, and concatenating them gives the vector d = [d_12, d_13, …, d_1m, d_21, d_23, …, d_m(m-1)]; this vector is the inter-frame self-similarity feature vector of the video sample, and d_12, d_13, …, d_m(m-1) are its elements.
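Step 205 can be sketched as follows, under the assumption that the feature matrix H stores the optical-flow vectors o_i as columns (the function name is ours):

```python
import numpy as np

def self_similarity_vector(H):
    """Inter-frame self-similarity feature of formula (7).

    H is the feature matrix whose columns o_i are per-frame optical-flow
    vectors. Returns the vector d = [d_12, d_13, ..., d_m(m-1)] of pairwise
    Euclidean distances, read row by row with the zero diagonal skipped."""
    m = H.shape[1]
    # full m x m distance matrix; D[i, i] = 0 since each frame matches itself
    D = np.linalg.norm(H[:, :, None] - H[:, None, :], axis=0)
    off_diagonal = ~np.eye(m, dtype=bool)
    return D[off_diagonal]  # row-major order: d_12, d_13, ..., d_m(m-1)
```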
206: learn a classification model with a support vector machine from the new feature vectors of the training samples obtained in step 205, classify the new feature vector of a test sample with the learned model, and complete the action recognition task under an unknown viewing angle.
A linear support vector machine is a supervised method for learning a classifier model. In this classification technique the original feature points d are projected into a new feature space that separates the feature points of different samples as far as possible. For n action classes in total, n classification functions must be learned; for the i-th action class, the classification function is shown in formula (8).
F_i(d) = w_i^T·d + b    (8)
where w_i is the hyperplane to be learned, d is the characterizing vector of the action sample, and b is the bias. Through training, for a given sample characterizing vector d, F_i(d) > 0 should hold when d belongs to the i-th class and F_i(d) < 0 when it does not.
Given the characterizing vector d_test of a test sample, it is substituted into the n classification functions, and the decision function i* = argmax_{i = 1, …, n} F_i(d_test) judges the sample to belong to the corresponding i-th class, where F_i(d_test) is the score that the test sample belongs to the i-th class.
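The decision rule built from the n classification functions of formula (8) can be sketched as follows; training the hyperplanes w_i themselves would be done with any linear-SVM solver and is omitted here. The matrix layout (one hyperplane per row of W, biases in b) is an assumption for illustration.

```python
import numpy as np

def classify(d_test, W, b):
    """One-vs-rest decision over the n classification functions of
    formula (8): F_i(d) = w_i^T d + b_i, predicting the class whose
    score is largest."""
    scores = W @ d_test + b        # F_1(d), ..., F_n(d)
    return int(np.argmax(scores))  # class index with the highest score
```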
In summary, by extracting the inter-frame self-similarity features of action samples, the embodiment of the present invention learns from action samples at known viewing angles an action recognition model with high robustness, realizing action recognition at unknown viewing angles.
Those skilled in the art will appreciate that the accompanying drawing is a schematic diagram of a preferred embodiment, and that the serial numbers of the embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (4)
1. An action recognition method based on inter-frame self-similarity, characterised in that the action recognition method comprises the following steps:
extracting a global optical-flow feature from each processed action sample, and representing each action sample by a feature matrix;
computing the self-similarity between the features of each pair of frames from the sample's feature matrix to obtain a self-similarity matrix; extracting a new feature vector for each action sample from the self-similarity matrix;
learning a classification model with a support vector machine from the new feature vectors, classifying the new feature vector of a test sample with the learned model, and completing the action recognition task under an unknown viewing angle.
2. The action recognition method based on inter-frame self-similarity according to claim 1, characterised in that the method further comprises the following steps:
collecting motion information of the human body recorded at different viewing angles, and building a multi-view action database; extracting the action samples at the training viewing angles and at the test viewing angle;
unifying the frame length of the extracted action samples so that every sample has the same number of video frames.
3. The action recognition method based on inter-frame self-similarity according to claim 2, characterised in that the step of unifying the frame length of the extracted action samples is specifically:
computing the average frame count j of the videos of all action samples, then extracting m = j/2 frames from each action sample video at equal intervals in the temporal order of the image sequence, and reassembling them into the processed action sample video.
4. The action recognition method based on inter-frame self-similarity according to claim 1, characterised in that the step of extracting a global optical-flow feature from each processed action sample and representing each action sample by a feature matrix is specifically:
after obtaining the optical flow of every point, concatenating the optical flows of all pixels in each frame image into the optical-flow feature vector of that frame;
combining the optical-flow feature vectors of all frames in temporal order to obtain the feature matrix of the action video sample.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710150592.9A CN106980823A (en) | 2017-03-14 | 2017-03-14 | A kind of action identification method based on interframe self similarity |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106980823A true CN106980823A (en) | 2017-07-25 |
Family
ID=59338829
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710150592.9A Pending CN106980823A (en) | 2017-03-14 | 2017-03-14 | A kind of action identification method based on interframe self similarity |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106980823A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107590477A (en) * | 2017-09-22 | 2018-01-16 | 成都考拉悠然科技有限公司 | A kind of detection means and its method of monitor video anomalous event |
CN108629301A (en) * | 2018-04-24 | 2018-10-09 | 重庆大学 | A kind of human motion recognition method based on moving boundaries dense sampling and movement gradient histogram |
CN110569702A (en) * | 2019-02-14 | 2019-12-13 | 阿里巴巴集团控股有限公司 | Video stream processing method and device |
CN116630639A (en) * | 2023-07-20 | 2023-08-22 | 深圳须弥云图空间科技有限公司 | Object image identification method and device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310233A (en) * | 2013-06-28 | 2013-09-18 | 青岛科技大学 | Similarity mining method of similar behaviors between multiple views and behavior recognition method |
CN104268827A (en) * | 2014-09-24 | 2015-01-07 | 三星电子(中国)研发中心 | Method and device for amplifying local area of video image |
CN104978561A (en) * | 2015-03-25 | 2015-10-14 | 浙江理工大学 | Gradient and light stream characteristics-fused video motion behavior identification method |
- 2017-03-14: CN application CN201710150592.9A filed; published as patent CN106980823A (en); status: Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103310233A (en) * | 2013-06-28 | 2013-09-18 | 青岛科技大学 | Similarity mining method of similar behaviors between multiple views and behavior recognition method |
CN104268827A (en) * | 2014-09-24 | 2015-01-07 | 三星电子(中国)研发中心 | Method and device for amplifying local area of video image |
CN104978561A (en) * | 2015-03-25 | 2015-10-14 | 浙江理工大学 | Gradient and light stream characteristics-fused video motion behavior identification method |
Non-Patent Citations (1)
Title |
---|
You Hongxia (尤鸿霞): "Self-similarity fusion recognition of high-level human behaviour features with local-evidence RBF", Computer Engineering and Design (计算机工程与设计) * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107590477A (en) * | 2017-09-22 | 2018-01-16 | 成都考拉悠然科技有限公司 | A kind of detection means and its method of monitor video anomalous event |
CN108629301A (en) * | 2018-04-24 | 2018-10-09 | 重庆大学 | A kind of human motion recognition method based on moving boundaries dense sampling and movement gradient histogram |
CN108629301B (en) * | 2018-04-24 | 2022-03-08 | 重庆大学 | Human body action recognition method |
CN110569702A (en) * | 2019-02-14 | 2019-12-13 | 阿里巴巴集团控股有限公司 | Video stream processing method and device |
US10943126B2 (en) | 2019-02-14 | 2021-03-09 | Advanced New Technologies Co., Ltd. | Method and apparatus for processing video stream |
CN110569702B (en) * | 2019-02-14 | 2021-05-14 | 创新先进技术有限公司 | Video stream processing method and device |
CN116630639A (en) * | 2023-07-20 | 2023-08-22 | 深圳须弥云图空间科技有限公司 | Object image identification method and device |
CN116630639B (en) * | 2023-07-20 | 2023-12-12 | 深圳须弥云图空间科技有限公司 | Object image identification method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110119703B (en) | Human body action recognition method fusing attention mechanism and spatio-temporal graph convolutional neural network in security scene | |
Bhagat et al. | Indian sign language gesture recognition using image processing and deep learning | |
CN106033435B (en) | Item identification method and device, indoor map generation method and device | |
CN107506740A (en) | A kind of Human bodys' response method based on Three dimensional convolution neutral net and transfer learning model | |
CN105160310A (en) | 3D (three-dimensional) convolutional neural network based human body behavior recognition method | |
CN106874826A (en) | Face key point-tracking method and device | |
CN106980823A (en) | A kind of action identification method based on interframe self similarity | |
CN106687989A (en) | Method and system of facial expression recognition using linear relationships within landmark subsets | |
Xu et al. | Fully-coupled two-stream spatiotemporal networks for extremely low resolution action recognition | |
CN107624061A (en) | Machine vision with dimension data reduction | |
CN104063721B (en) | A kind of human behavior recognition methods learnt automatically based on semantic feature with screening | |
CN110490136A (en) | A kind of human body behavior prediction method of knowledge based distillation | |
CN112488229B (en) | Domain self-adaptive unsupervised target detection method based on feature separation and alignment | |
CN104021381B (en) | Human movement recognition method based on multistage characteristics | |
CN104881662A (en) | Single-image pedestrian detection method | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequence | |
CN106228109A (en) | A kind of action identification method based on skeleton motion track | |
CN114241422A (en) | Student classroom behavior detection method based on ESRGAN and improved YOLOv5s | |
Gao et al. | Counting dense objects in remote sensing images | |
CN104794446B (en) | Human motion recognition method and system based on synthesis description | |
CN110163567A (en) | Classroom roll calling system based on multitask concatenated convolutional neural network | |
CN114332911A (en) | Head posture detection method and device and computer equipment | |
Chen et al. | Multi-modality gesture detection and recognition with un-supervision, randomization and discrimination | |
Elharrouss et al. | Drone-SCNet: Scaled cascade network for crowd counting on drone images | |
CN108416795A (en) | The video actions recognition methods of space characteristics is merged based on sequence pondization |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170725 |