CN107392131A - An action recognition method based on human skeleton node distances - Google Patents

An action recognition method based on human skeleton node distances

Info

Publication number
CN107392131A
CN107392131A
Authority
CN
China
Prior art keywords
distance
frame
bone
sequence
skeleton
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710573316.3A
Other languages
Chinese (zh)
Inventor
侯永宏 (Hou Yonghong)
杨梦頔 (Yang Mengdi)
李传坤 (Li Chuankun)
王利伟 (Wang Liwei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University
Priority to CN201710573316.3A
Publication of CN107392131A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/254 Fusion techniques of classification results, e.g. of results related to same input data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an action recognition method based on human skeleton node distances. The steps are as follows: (1) project each frame of the skeleton sequence onto the three planes of a Cartesian coordinate system to generate skeleton distribution maps; (2) extract the skeletal joint points from the skeleton distribution maps; (3) compute the Euclidean distances between the skeletal joints in each frame of the skeleton sequence and combine the distances of all node pairs into a frame distance vector; (4) assemble all frame distance vectors of the skeleton sequence, in temporal order, into a sequence distance matrix; (5) color-code the two-dimensional distance matrix with a pseudo-color encoding method to obtain a color texture map; (6) classify the resulting pictures with a deep learning method, thereby completing the human action detection and recognition task. The action recognition method based on skeleton node distances of the invention is unaffected by the environment, handles interactive actions, and is suitable for recognizing interactive actions in complex scenes, so it has broad application value.

Description

An action recognition method based on human skeleton node distances
Technical field
The invention belongs to the field of multimedia signal processing and relates to computer intelligence, pattern recognition and machine learning; specifically, it is an action recognition method based on human skeleton node distances.
Background technology
With the continuous development of computer intelligence technology, human action recognition has broad application prospects in everyday life, for example intelligent surveillance, somatosensory games for human-computer interaction, and video retrieval. Research on action recognition methods therefore has far-reaching value. In recent years, with the maturing of computer vision technology, easy-to-use, low-cost depth sensors such as the Kinect camera have been widely adopted in a variety of applications. Compared with conventional color cameras, depth cameras are insensitive to illumination changes and can reliably estimate body contours and skeletons, so human action detection and recognition based on depth skeleton video sequences has attracted increasing attention. However, skeleton-based action recognition technology is still immature.
The two patents CN106203503A and CN106228109A previously filed by the present inventors are the publications most closely related to this application. Their shortcoming is that they can only recognize actions in single-subject scenes and cannot meet the need to recognize interactive actions in complex scenes.
Summary of the invention
This method uses the human skeleton sequences captured by a Kinect camera, extracts the distance information between skeleton nodes, color-codes it, and trains an action classifier with a deep learning method. It proposes an action recognition method based on the distances between human joint points that meets the need to recognize interactive actions in complex scenes.
The technical scheme that realizes the object of the invention is as follows:
An action recognition method based on human skeleton node distances, with the following steps:
(1) project each frame of the skeleton sequence onto the three planes of a Cartesian coordinate system to generate skeleton distribution maps;
(2) extract the skeletal joint points from the skeleton distribution maps;
(3) compute the Euclidean distances between the skeletal joints in each frame of the skeleton sequence and combine the distances of all node pairs into a frame distance vector;
(4) assemble all frame distance vectors of the skeleton sequence, in temporal order, into a sequence distance matrix;
(5) color-code the two-dimensional distance matrix with a pseudo-color encoding method to obtain a color texture map;
(6) classify the resulting pictures with a deep learning method, thereby completing the human action detection and recognition task.
The deep learning method comprises the following steps: (1) use four channels to train the parameters of convolutional neural networks separately on the color texture maps of the front view, side view, top view and three-dimensional distances; (2) tune the convolutional neural network parameters; (3) fuse the action recognition results of the four channels and classify the input test data. The classification probability vectors v1, v2, v3, v4 output by the four channels are fused according to the following formula:
Label = Fin(max(v1 ∘ v2 ∘ v3 ∘ v4))
where ∘ denotes element-wise multiplication of vectors, and Fin(·) takes the action with the highest probability value as the action recognition result.
The concrete steps are:
In the first step, the number of frames of every action's skeleton sequence is fixed to t using bilinear interpolation;
In the second step, suppose that m skeleton nodes are extracted from each frame skeleton map V_xyz, and use P_j^i = (x_j^i, y_j^i, z_j^i) to represent the three-dimensional position of the j-th skeleton node in the i-th frame; according to this three-dimensional information, all skeleton nodes in the whole video sequence can be projected onto the three planes of the Cartesian coordinate system, the front, side and top planes, giving the front view S_xy, side view S_yz and top view S_xz; after projection, each frame of the skeleton sequence yields three black-and-white pictures describing the skeleton distribution of the action;
In the third step, on the front view S_xy, side view S_yz, top view S_xz and the skeleton map V_xyz, compute the Euclidean distances d_jk^i from each skeleton node P_j^i to the other m-1 skeleton nodes:
d_jk^i = ||P_j^i - P_k^i||_2, j ∈ {1, ..., m-1}; k ∈ {j+1, ..., m}
In the fourth step, each frame has a total of m(m-1)/2 skeleton node pairs and therefore yields m(m-1)/2 distances, which are relabeled d_l^i with l ∈ {1, ..., m(m-1)/2}; these m(m-1)/2 skeleton node distances form the frame distance vector D_i:
D_i = {d_1^i, d_2^i, ..., d_l^i}
In the fifth step, the frame distance vectors D_i of all frames are combined in temporal order to obtain the sequence distance matrix H:
H = {D_1, D_2, ..., D_t}
In the sixth step, pseudo-color coding is applied to the sequence distance matrices to obtain the color texture maps of the front view S_xy, side view S_yz, top view S_xz and the three-dimensional distances V_xyz.
The advantages of the present invention:
1. The action recognition method based on skeleton node distances is unaffected by the environment, handles interactive actions, and is suitable for recognizing interactive actions in complex scenes, so it has broad application value.
2. The present invention preserves the temporal characteristics of the action in the skeleton sequence by recording the changes of the distances between skeleton nodes. Different inter-node distances correspond to different actions, which makes the action features more distinct and easier for the neural network to learn, thereby improving the accuracy of action recognition.
3. The present invention uses four channels, trains on the extracted features with deep learning technology, and finally fuses the classification results of the four channels with a fusion technique, realizing objective, accurate and efficient recognition of human actions.
4. The present invention extracts key skeleton nodes at fixed positions on each frame of the skeleton sequence, thereby extracting spatial features effectively. Compared with traditional hand-crafted action features, this is objective and generally applicable, and it reduces the interference of randomness and subjective selection, making the recognition results more objective and credible and the method more widely applicable.
Brief description of the drawings
Fig. 1 is the overall framework diagram of action recognition based on human skeleton nodes.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings and specific embodiments. The following embodiments are descriptive, not restrictive, and cannot limit the protection scope of the present invention.
An action recognition method based on human skeleton node distances, with the following steps:
1) Mapping the skeleton sequence to pictures
Suppose there is a skeleton sequence for a series of human actions to be recognized; in general, the number of frames t_x of each action's skeleton sequence is not fixed.
In the first step, the number of frames of every action's skeleton sequence is fixed to t using bilinear interpolation.
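As an illustrative sketch (not part of the patent), the first step can be implemented as follows. Along a single temporal axis, bilinear interpolation reduces to linear interpolation; the (frames, joints, 3) array layout and the function name are assumptions for the example.

```python
import numpy as np

def resample_sequence(seq, t):
    """Resample a skeleton sequence of shape (t_x, m, 3) to exactly t
    frames by linear interpolation along the time axis."""
    t_x = seq.shape[0]
    src = np.linspace(0.0, t_x - 1.0, t)        # fractional source frame indices
    lo = np.floor(src).astype(int)
    hi = np.minimum(lo + 1, t_x - 1)
    w = (src - lo)[:, None, None]               # per-frame interpolation weight
    return (1.0 - w) * seq[lo] + w * seq[hi]

# Example: a 50-frame sequence of 20 joints resampled to t = 32 frames.
seq = np.random.rand(50, 20, 3)
fixed = resample_sequence(seq, 32)
assert fixed.shape == (32, 20, 3)
```

After this step, every action in the data set has the same number of frames t, so the sequence distance matrices built later all have the same height.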
In the second step, suppose that m skeleton nodes are extracted from each frame skeleton map V_xyz, and use P_j^i = (x_j^i, y_j^i, z_j^i) to represent the three-dimensional position of the j-th skeleton node in the i-th frame. According to this three-dimensional information, all skeleton nodes in the whole video sequence can be projected onto the three planes of the Cartesian coordinate system (front, side and top), giving the front view S_xy, side view S_yz and top view S_xz. After such projection, each frame of the skeleton sequence yields three black-and-white pictures describing the skeleton distribution of the action, with rich spatial information but no description of temporal information.
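The three projections of the second step can be sketched as below. The image resolution, the per-axis normalization and marking one pixel per joint are illustrative assumptions; the patent does not fix these details.

```python
import numpy as np

def project_skeleton(joints, size=64):
    """Project one frame of 3-D joints (m, 3) onto the front (x-y),
    side (y-z) and top (x-z) planes as binary skeleton distribution
    images; normalization and resolution are illustrative choices."""
    mins = joints.min(axis=0)
    span = joints.max(axis=0) - mins
    span = np.where(span > 0, span, 1.0)          # avoid division by zero
    norm = (joints - mins) / span                 # each axis scaled to [0, 1]
    px = np.clip((norm * (size - 1)).astype(int), 0, size - 1)
    planes = {}
    for name, (a, b) in {"front": (0, 1), "side": (1, 2), "top": (0, 2)}.items():
        img = np.zeros((size, size), dtype=np.uint8)
        img[px[:, b], px[:, a]] = 1               # mark one pixel per joint
        planes[name] = img
    return planes

frame = np.random.rand(20, 3)                     # 20 joints of one frame
maps = project_skeleton(frame)
assert set(maps) == {"front", "side", "top"}
```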
In the third step, on the front view S_xy, side view S_yz, top view S_xz and the skeleton map V_xyz, compute the Euclidean distances d_jk^i from each skeleton node P_j^i to the other m-1 skeleton nodes:
d_jk^i = ||P_j^i - P_k^i||_2, j ∈ {1, ..., m-1}; k ∈ {j+1, ..., m}
In the fourth step, each frame has a total of m(m-1)/2 skeleton node pairs and therefore yields m(m-1)/2 distances, which are relabeled d_l^i with l ∈ {1, ..., m(m-1)/2}; these m(m-1)/2 skeleton node distances form the frame distance vector D_i:
D_i = {d_1^i, d_2^i, ..., d_l^i}
In the fifth step, the frame distance vectors D_i of all frames are combined in temporal order to obtain the sequence distance matrix H:
H = {D_1, D_2, ..., D_t}
In the sixth step, pseudo-color coding is applied to the sequence distance matrices to obtain color texture maps. Four groups of color texture maps are now obtained, namely those of the front view S_xy, side view S_yz, top view S_xz and the three-dimensional distances V_xyz.
2) Classifying the pictures with deep learning, thereby completing the human action detection and recognition task
Here, under a Linux system, we use the publicly available deep learning framework Caffe, accelerate the convolutional neural network training with an Nvidia GTX TITAN X graphics card, and train on the color-coded color texture pictures using the AlexNet network structure provided with the toolbox.
In the first step, using the training subset of the data set, the parameters of the convolutional neural networks are trained separately in four channels on the color texture maps of the front view S_xy, side view S_yz, top view S_xz and the three-dimensional distances V_xyz.
In the second step, the network parameters are optimized by fine-tuning the model.
In the third step, the classification probability vectors v1, v2, v3, v4 output by the four channels are fused according to the following formula:
Label = Fin(max(v1 ∘ v2 ∘ v3 ∘ v4))
where ∘ denotes element-wise multiplication of vectors, and Fin(·) takes the action with the highest probability value as the action recognition result.
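The fusion rule Label = Fin(max(v1 ∘ v2 ∘ v3 ∘ v4)) can be sketched as below; the three class labels and probability values are hypothetical and only illustrate the element-wise product followed by the arg-max.

```python
import numpy as np

def fuse_predictions(v1, v2, v3, v4, labels):
    """Fuse the four channels' classification probability vectors by
    element-wise multiplication and return the highest-scoring label,
    i.e. Label = Fin(max(v1 o v2 o v3 o v4))."""
    fused = v1 * v2 * v3 * v4                     # o : element-wise product
    return labels[int(np.argmax(fused))]          # Fin : pick the arg-max action

# Hypothetical three-class example (label names are not from the patent).
labels = ["wave", "kick", "handshake"]
v1 = np.array([0.6, 0.3, 0.1])                    # front-view channel
v2 = np.array([0.5, 0.2, 0.3])                    # side-view channel
v3 = np.array([0.7, 0.2, 0.1])                    # top-view channel
v4 = np.array([0.4, 0.4, 0.2])                    # 3-D distance channel
assert fuse_predictions(v1, v2, v3, v4, labels) == "wave"
```

Multiplying the per-channel probabilities rewards actions on which all four views agree, so a class must score well in every channel to win.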
The experimental results of the present invention on data sets such as NTU RGB-D and UTD-MHAD are described below.
The simulation of the algorithm was carried out on the Matlab-2014b platform, computing the skeleton node distance maps of the training and test sets on the front, side and top views and in 3-D space, and converting them into color texture maps using pseudo-color coding. Under a Linux system we used the publicly available deep learning framework Caffe and accelerated the convolutional neural network training with an Nvidia GTX TITAN X graphics card. The final probability vector fusion step was again completed on the Matlab-2014b platform.
The method was tested on internationally recognized human action recognition data sets. On the NTU RGB-D data set, the training and test sets were split using the cross-subject and cross-view protocols. On this data set, which contains 60 action classes (including 11 interactive actions), recognition accuracies of 76.2% and 82.3% were obtained under the two protocols, respectively. On the UTD-MHAD data set, the training and test sets were split using the cross-subject protocol; on this data set, which contains 27 action classes, a recognition accuracy of 88.10% was obtained. Compared with internationally leading human action detection and recognition algorithms based on skeleton sequences, the classification accuracy obtained is higher than that of the other algorithms.
What has been described above is only a preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these all belong to the protection scope of the present invention.

Claims (4)

1. An action recognition method based on human skeleton node distances, comprising the following steps:
(1) project each frame of the skeleton sequence onto the three planes of a Cartesian coordinate system to generate skeleton distribution maps;
(2) extract the skeletal joint points from the skeleton distribution maps;
characterized in that:
(3) compute the Euclidean distances between the skeletal joints in each frame of the skeleton sequence and combine the distances of all node pairs into a frame distance vector;
(4) assemble all frame distance vectors of the skeleton sequence, in temporal order, into a sequence distance matrix;
(5) color-code the two-dimensional distance matrix with a pseudo-color encoding method to obtain a color texture map;
(6) classify the resulting pictures with a deep learning method, thereby completing the human action detection and recognition task.
2. The action recognition method based on human skeleton node distances according to claim 1, characterized in that the deep learning method comprises the following steps: (1) use four channels to train the parameters of convolutional neural networks separately on the color texture maps of the front view, side view, top view and three-dimensional distances; (2) tune the convolutional neural network parameters; (3) fuse the action recognition results of the four channels and classify the input test data.
3. The action recognition method based on human skeleton node distances according to claim 1, characterized in that the concrete steps are:
In the first step, the number of frames of every action's skeleton sequence is fixed to t using bilinear interpolation;
In the second step, suppose that m skeleton nodes are extracted from each frame skeleton map V_xyz, and use P_j^i = (x_j^i, y_j^i, z_j^i) to represent the three-dimensional position of the j-th skeleton node in the i-th frame; according to this three-dimensional information, all skeleton nodes in the whole video sequence can be projected onto the three planes of the Cartesian coordinate system, the front, side and top planes, giving the front view S_xy, side view S_yz and top view S_xz; after projection, each frame of the skeleton sequence yields three black-and-white pictures describing the skeleton distribution of the action;
In the third step, on the front view S_xy, side view S_yz, top view S_xz and the skeleton map V_xyz, compute the Euclidean distances d_jk^i from each skeleton node P_j^i to the other m-1 skeleton nodes:
d_jk^i = ||P_j^i - P_k^i||_2, j ∈ {1, ..., m-1}; k ∈ {j+1, ..., m}
In the fourth step, each frame has a total of m(m-1)/2 skeleton node pairs and therefore yields m(m-1)/2 distances, which are relabeled d_l^i with l ∈ {1, ..., m(m-1)/2}; these m(m-1)/2 skeleton node distances form the frame distance vector D_i:
D_i = {d_1^i, d_2^i, ..., d_l^i}
In the fifth step, the frame distance vectors D_i of all frames are combined in temporal order to obtain the sequence distance matrix H:
H = {D_1, D_2, ..., D_t}
In the sixth step, pseudo-color coding is applied to the sequence distance matrices to obtain the color texture maps of the front view S_xy, side view S_yz, top view S_xz and the three-dimensional distances V_xyz.
4. The action recognition method based on human skeleton node distances according to claim 2, characterized in that the classification probability vectors v1, v2, v3, v4 output by the four channels are fused according to the following formula:
Label = Fin(max(v1 ∘ v2 ∘ v3 ∘ v4))
where ∘ denotes element-wise multiplication of vectors, and Fin(·) takes the action with the highest probability value as the action recognition result.
CN201710573316.3A 2017-07-14 2017-07-14 An action recognition method based on human skeleton node distances Pending CN107392131A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710573316.3A CN107392131A (en) 2017-07-14 2017-07-14 An action recognition method based on human skeleton node distances

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710573316.3A CN107392131A (en) 2017-07-14 2017-07-14 An action recognition method based on human skeleton node distances

Publications (1)

Publication Number Publication Date
CN107392131A true CN107392131A (en) 2017-11-24

Family

ID=60340130

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710573316.3A Pending CN107392131A (en) An action recognition method based on human skeleton node distances

Country Status (1)

Country Link
CN (1) CN107392131A (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257139A (en) * 2018-02-26 2018-07-06 University of Chinese Academy of Sciences RGB-D three-dimensional object detection method based on deep learning
CN108898064A (en) * 2018-06-05 2018-11-27 East China Normal University A 3D skeleton action recognition method based on spatial matching
CN109614899A (en) * 2018-11-29 2019-04-12 Chongqing University of Posts and Telecommunications A human action recognition method based on Lie group features and convolutional neural networks
CN109670401A (en) * 2018-11-15 2019-04-23 Tianjin University An action recognition method based on skeleton motion maps
CN109685037A (en) * 2019-01-08 2019-04-26 Beijing Hanvon Zhiyuan Technology Co., Ltd. A real-time action recognition method, device and electronic equipment
CN110490034A (en) * 2018-05-14 2019-11-22 Omron Corporation Motion analysis device, motion analysis method, recording medium and motion analysis system
CN110717384A (en) * 2019-08-30 2020-01-21 Shenzhen University Video interactive behavior recognition method and device
CN111223549A (en) * 2019-12-30 2020-06-02 East China Normal University Mobile-end system and method for disease prevention based on posture correction
WO2020107847A1 (en) * 2018-11-28 2020-06-04 Ping An Technology (Shenzhen) Co., Ltd. Bone point-based fall detection method and fall detection device therefor
CN115131301A (en) * 2022-06-15 2022-09-30 Beijing Changmugu Medical Technology Co., Ltd. Method and system for intelligently identifying osteoarthritis based on deep learning
CN115457518A (en) * 2022-08-30 2022-12-09 Huaiyin Institute of Technology Driver behavior recognition method and system based on posture perception and geometric constraints

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529944A (en) * 2013-10-17 2014-01-22 Hefei Jinnuo Digital Technology Co., Ltd. Human body movement recognition method based on Kinect
CN105138995A (en) * 2015-09-01 2015-12-09 Chongqing University of Technology Time-invariant and view-invariant human action recognition method based on skeleton information
US20160086350A1 * 2014-09-22 2016-03-24 Foundation for Research and Technology - Hellas (FORTH) (acting through its Institute of Computer Apparatuses, methods and systems for recovering a 3-dimensional skeletal model of the human body
CN105930767A (en) * 2016-04-06 2016-09-07 Nanjing Huajie IMI Software Technology Co., Ltd. Human body skeleton-based action recognition method
CN106156714A (en) * 2015-04-24 2016-11-23 Beijing Leidong Yunhe Intelligent Technology Co., Ltd. Human behavior recognition method fusing skeletal joint features and surface features
CN106203363A (en) * 2016-07-15 2016-12-07 Institute of Automation, Chinese Academy of Sciences Human skeleton motion sequence behavior recognition method
CN106203503A (en) * 2016-07-08 2016-12-07 Tianjin University An action recognition method based on skeleton sequences
CN106845386A (en) * 2017-01-16 2017-06-13 Sun Yat-sen University An action recognition method based on dynamic time warping and multiple kernel learning
US20170168586A1 * 2015-12-15 2017-06-15 Purdue Research Foundation Method and System for Hand Pose Detection
CN106897714A (en) * 2017-03-23 2017-06-27 Peking University Shenzhen Graduate School A video action detection method based on convolutional neural networks
CN106897670A (en) * 2017-01-19 2017-06-27 Nanjing University of Posts and Telecommunications An express-delivery violent-sorting recognition method based on computer vision
CN106909887A (en) * 2017-01-19 2017-06-30 Yancheng Big Data Research Institute of Nanjing University of Posts and Telecommunications An action recognition method based on CNN and SVM

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529944A (en) * 2013-10-17 2014-01-22 Hefei Jinnuo Digital Technology Co., Ltd. Human body movement recognition method based on Kinect
US20160086350A1 * 2014-09-22 2016-03-24 Foundation for Research and Technology - Hellas (FORTH) (acting through its Institute of Computer Apparatuses, methods and systems for recovering a 3-dimensional skeletal model of the human body
CN106156714A (en) * 2015-04-24 2016-11-23 Beijing Leidong Yunhe Intelligent Technology Co., Ltd. Human behavior recognition method fusing skeletal joint features and surface features
CN105138995A (en) * 2015-09-01 2015-12-09 Chongqing University of Technology Time-invariant and view-invariant human action recognition method based on skeleton information
US20170168586A1 * 2015-12-15 2017-06-15 Purdue Research Foundation Method and System for Hand Pose Detection
CN105930767A (en) * 2016-04-06 2016-09-07 Nanjing Huajie IMI Software Technology Co., Ltd. Human body skeleton-based action recognition method
CN106203503A (en) * 2016-07-08 2016-12-07 Tianjin University An action recognition method based on skeleton sequences
CN106203363A (en) * 2016-07-15 2016-12-07 Institute of Automation, Chinese Academy of Sciences Human skeleton motion sequence behavior recognition method
CN106845386A (en) * 2017-01-16 2017-06-13 Sun Yat-sen University An action recognition method based on dynamic time warping and multiple kernel learning
CN106897670A (en) * 2017-01-19 2017-06-27 Nanjing University of Posts and Telecommunications An express-delivery violent-sorting recognition method based on computer vision
CN106909887A (en) * 2017-01-19 2017-06-30 Yancheng Big Data Research Institute of Nanjing University of Posts and Telecommunications An action recognition method based on CNN and SVM
CN106897714A (en) * 2017-03-23 2017-06-27 Peking University Shenzhen Graduate School A video action detection method based on convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHUANKUN LI ET AL.: "Joint Distance Maps Based Action Recognition With Convolutional Neural Networks", IEEE Signal Processing Letters *
CHEN YUANBO: "Human action recognition in video sequences", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108257139A (en) * 2018-02-26 2018-07-06 University of Chinese Academy of Sciences RGB-D three-dimensional object detection method based on deep learning
CN108257139B (en) * 2018-02-26 2020-09-08 University of Chinese Academy of Sciences RGB-D three-dimensional object detection method based on deep learning
CN110490034A (en) * 2018-05-14 2019-11-22 Omron Corporation Motion analysis device, motion analysis method, recording medium and motion analysis system
CN108898064A (en) * 2018-06-05 2018-11-27 East China Normal University A 3D skeleton action recognition method based on spatial matching
CN109670401A (en) * 2018-11-15 2019-04-23 Tianjin University An action recognition method based on skeleton motion maps
WO2020107847A1 (en) * 2018-11-28 2020-06-04 Ping An Technology (Shenzhen) Co., Ltd. Bone point-based fall detection method and fall detection device therefor
CN109614899A (en) * 2018-11-29 2019-04-12 Chongqing University of Posts and Telecommunications A human action recognition method based on Lie group features and convolutional neural networks
CN109614899B (en) * 2018-11-29 2022-07-01 Chongqing University of Posts and Telecommunications Human body action recognition method based on Lie group features and convolutional neural network
CN109685037A (en) * 2019-01-08 2019-04-26 Beijing Hanvon Zhiyuan Technology Co., Ltd. A real-time action recognition method, device and electronic equipment
CN109685037B (en) * 2019-01-08 2021-03-05 Beijing Hanvon Zhiyuan Technology Co., Ltd. Real-time action recognition method and device and electronic equipment
CN110717384A (en) * 2019-08-30 2020-01-21 Shenzhen University Video interactive behavior recognition method and device
CN111223549A (en) * 2019-12-30 2020-06-02 East China Normal University Mobile-end system and method for disease prevention based on posture correction
CN115131301A (en) * 2022-06-15 2022-09-30 Beijing Changmugu Medical Technology Co., Ltd. Method and system for intelligently identifying osteoarthritis based on deep learning
WO2023241032A1 (en) * 2022-06-15 2023-12-21 Beijing Changmugu Medical Technology Co., Ltd. Deep learning-based method and system for intelligently identifying osteoarthritis
CN115457518A (en) * 2022-08-30 2022-12-09 Huaiyin Institute of Technology Driver behavior recognition method and system based on posture perception and geometric constraints
CN115457518B (en) * 2022-08-30 2024-01-26 Huaiyin Institute of Technology Driver behavior recognition method and system based on posture perception and geometric constraints

Similar Documents

Publication Publication Date Title
CN107392131A (en) An action recognition method based on human skeleton node distances
CN108932500B (en) A dynamic gesture recognition method and system based on deep neural networks
CN107358257B (en) An incrementally learnable image classification training method for big-data scenarios
CN106778604B (en) Pedestrian re-identification method based on a matching convolutional neural network
CN108830252A (en) A convolutional neural network human action recognition method fusing global spatio-temporal features
CN106203503B (en) An action recognition method based on skeleton sequences
CN109961006A (en) A low-resolution multi-target face detection and key-point localization and alignment method
CN105160310A (en) 3D convolutional neural network based human behavior recognition method
CN107292813A (en) A multi-pose face generation method based on generative adversarial networks
CN112800906B (en) Improved YOLOv3-based cross-domain target detection method for autonomous vehicles
CN107808129A (en) A facial multi-feature-point localization method based on a single convolutional neural network
CN109543632A (en) A deep network pedestrian detection method guided by shallow feature fusion
CN112686928B (en) Moving target visual tracking method based on multi-source information fusion
CN108898063A (en) A human posture recognition device and method based on fully convolutional neural networks
Li et al. Sign language recognition based on computer vision
CN110728183A (en) Human action recognition method based on an attention-mechanism neural network
CN112347861A (en) Human posture estimation method based on motion feature constraints
CN106228109A (en) An action recognition method based on skeleton motion trajectories
CN109858407A (en) A video behavior recognition method based on multi-information-stream features and asynchronous fusion
CN105069745A (en) Face-changing system and method based on a common image sensor and augmented reality technology
CN110163567A (en) Classroom roll-call system based on a multi-task cascaded convolutional neural network
CN111881716A (en) Pedestrian re-identification method based on a multi-view generative adversarial network
CN109670401B (en) Action recognition method based on skeleton motion maps
CN113128424A (en) Attention-mechanism-based graph convolutional neural network action recognition method
Estrela et al. Sign language recognition using partial least squares and RGB-D information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20171124