CN108764059A - Human behavior recognition method and system based on neural network - Google Patents
Human behavior recognition method and system based on neural network
- Publication number
- CN108764059A CN108764059A CN201810422265.9A CN201810422265A CN108764059A CN 108764059 A CN108764059 A CN 108764059A CN 201810422265 A CN201810422265 A CN 201810422265A CN 108764059 A CN108764059 A CN 108764059A
- Authority
- CN
- China
- Prior art keywords
- image
- neural network
- recognition
- information
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
Abstract
The present invention discloses a human behavior recognition method based on a neural network, addressing the insufficient accuracy of human behavior recognition from wearable sensor data. Image information collected by a wearable image sensor is first converted to grayscale, then histogram equalization is applied to the image data, and scene recognition is performed on the processed sensor images using an LSTM-RNN neural network. For the motion data input from a wearable motion sensor, action recognition is performed on the accelerometer data using an LSTM-RNN neural network. The recognized action sequence, labeled with the scene, is then matched in a behavior database to obtain specific behavior information. An alarm module notifies the user of emergencies. Through the application of these methods and the support of the accompanying modules, the system performs behavior recognition on data from sensors worn on the human body, improving the accuracy and stability of human behavior recognition, with good practicality and real-world effectiveness.
Description
Technical field
The present invention relates to a human behavior recognition method based on neural networks, belonging to the intersecting fields of activity recognition, sensor technology, and machine learning, and further relates to a multi-sensor, multi-module human behavior recognition system with interactive functions.
Background technology
Activity recognition and classification in video is an important research topic in the field of computer vision, with both theoretical significance and practical application value.

With the development of China's economy and society and the progress of science and technology, recognition and analysis of activities in video has become an important subject in both the social and natural sciences; daily human behavior is closely tied to indicators of bodily function. For example, rest time can be estimated by monitoring lying-down behavior, and energy expenditure can be estimated by monitoring behaviors such as walking and running, which reflects the technology's applications in fields such as sports and health. The technology also has wide application in areas such as safety monitoring and smart-city construction. Compared with analyzing data from a single image sensor or a single-axis accelerometer, tracking dynamic objects and processing high-dimensional data are more complex and therefore more challenging.
Existing human activity recognition relies mainly on two kinds of sensor input. One uses single or multiple wearable devices to obtain acceleration and displacement information of the human body. The other uses single or multiple image sensors to obtain video of the person and outputs human action information by pattern matching or neural-network inference on the video. Both approaches have drawbacks: data obtained from gyroscopes and acceleration sensors are difficult to analyze and cannot be correctly matched to the person's scene, while data acquired by image sensors are strongly affected by environmental variation, so precision in the behavior-matching process is low.
Action recognition can be viewed as a classification problem, and many classification methods have been applied to it, including logistic regression analysis, decision-tree models, naive Bayes classifiers, and support vector machines. Each of these methods has pros and cons in practical applications.
Research on human behavior recognition systems, both domestically and abroad, is still immature. Most systems depend on manually labeled data that is then fed into a model for recognition. They are strongly data-dependent, run inefficiently, and are unsuitable for industrial and commercial demands.

To date, human behavior recognition methods and systems for sensor data still require substantial research work.
Invention content
Technical problem: The technical problem to be solved by the invention is to build a system that acquires multi-sensor motion information and image information, extracts scene information and action information, fuses the two, and uses neural network algorithms to improve the accuracy of human behavior recognition.
Technical solution: The human behavior recognition method based on a neural network of the present invention uses multiple sensors and multiple modules and includes the following steps:
Step 1) Turn on the image sensing module worn by the monitored person to obtain continuous image information centered on the wearer, where each image is an n*m region, n being the number of pixels per row and m the number of pixels per column;
Step 2) Turn on the motion sensing module worn by the monitored person to obtain x-axis, y-axis, and z-axis acceleration data recording the wearer's motion, where the x-axis is perpendicular to the vertical axis of the body with its positive direction pointing forward, the y-axis is parallel to the vertical axis with its positive direction pointing toward the head, and the z-axis is perpendicular to the vertical axis with its positive direction pointing to the left of the body;
Step 3) The scene recognition module receives the information from the image sensor. The received n*m-pixel image is converted to grayscale and histogram-equalized to reduce error and disturbance in the incoming image. The grayscale, equalized n*m-pixel image is used as neural-network input, and scene classification is performed with a long short-term memory recurrent neural network (LSTM-RNN);
Step 4) The action evaluation module receives the information from the motion sensor and stores the wearer's x-, y-, and z-axis accelerations as a three-dimensional vector group V = {(x_1,y_1,z_1),(x_2,y_2,z_2),...,(x_n,y_n,z_n)}. Define t_m as the time granularity; the vector group within time t_m is used as neural-network input, action assessment is performed with a long short-term memory recurrent neural network (LSTM-RNN) trained on motion datasets, and the atomic action within that time granularity is obtained;
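The grouping of acceleration samples into time-granularity windows described in step 4) can be sketched as follows (a minimal illustration assuming a fixed sampling rate; the function name and the 2 Hz rate are hypothetical, not taken from the patent):

```python
def window_samples(samples, rate_hz, t_m):
    """Split a sequence of (x, y, z) acceleration triples into
    consecutive windows of duration t_m seconds; each window is one
    three-dimensional vector group fed to the LSTM-RNN."""
    per_window = int(rate_hz * t_m)  # samples per time-granularity window
    return [samples[i:i + per_window]
            for i in range(0, len(samples) - per_window + 1, per_window)]

# Example: 2 Hz sampling and t_m = 3 s give 6 samples per window.
samples = [(i, 0.0, 9.8) for i in range(12)]
windows = window_samples(samples, rate_hz=2, t_m=3)
```

Each resulting window then plays the role of one input sequence for the action-assessment network.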
Step 5) The micro intelligent server module receives the information from the scene recognition module and the action evaluation module and, after integrating the two, performs behavior recognition. It takes k_m consecutive atomic actions, i.e. k_m·t_m as the recognition time unit, uses the scene element as a label to retrieve a sub-table of the motion information database, performs fuzzy matching, and, if the match succeeds, returns the behavior recognition result to the micro intelligent server;
Step 6) The micro intelligent server module classifies the behavior recognition results of the behavior recognition module according to the user's settings. According to the user's needs and configuration, it determines whether the current action reaches a warning level. If the event reaches a warning level, the micro intelligent server sends an alert command to the alarm module, and the alarm module warns the user through one or more channels;
Step 7) The system interface is the user's entry point for adjusting and configuring the system. Through the interface, the user can configure a specific server, bind image sensors and motion sensors to the system, monitor and observe the system's operating status, and set the system's operating mode;
Step 8) The operation log module monitors and records all operating states of the micro intelligent server and records warning information of different levels during system operation; the operation log is stored in a relational database. Through the system interface the user can query the log module's data and maintain the system.
Wherein,
Step 3) is specified as follows:
Step 31) The scene recognition module receives the n*m-pixel image information from the image sensor and describes the low-level scene features mainly by preprocessing the video scene for scene recognition. To facilitate subsequent scene recognition, the scene image must be converted to grayscale and equalized;
Step 32) Convert the original scene image to grayscale. For each pixel, since the human eye has different sensitivities to red, green, and blue light, a weighted-average method assigns different weights to the channels, giving the gray value Gray = c_r·R + c_g·G + c_b·B, where c_r, c_g, c_b are the weights of red, green, and blue light in the conversion, with c_r + c_g + c_b = 1;
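The weighted-average conversion of step 32) can be sketched as follows (a minimal pure-Python illustration; the default weights are the empirical values given later in the document):

```python
def to_gray(pixel, c_r=0.30, c_g=0.59, c_b=0.11):
    """Gray = c_r*R + c_g*G + c_b*B, with c_r + c_g + c_b = 1."""
    r, g, b = pixel
    return c_r * r + c_g * g + c_b * b

def grayscale_image(image):
    """Apply the conversion to every pixel of an n*m RGB image
    (a list of rows of (R, G, B) tuples)."""
    return [[to_gray(p) for p in row] for row in image]

# Pure white maps to ~255, pure black to 0.
```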
Step 33) Apply histogram equalization to the original scene image. Under insufficient lighting, scene recognition exhibits large errors, so the image histogram must be equalized to improve contrast and brightness. Histogram equalization is a method in image processing that adjusts contrast using the image histogram. The transformed gray level is defined as s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / n, where T(r) is the gray transformation function, r_k is the k-th gray level, p_r(r_k) ≈ n_k / n is the probability of gray level r_k appearing, n_k is the number of pixels of gray level r_k in the image (k = 0, 1, 2, ..., n−1), and n is the total number of pixels in the image;
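The transform of step 33) can be sketched for 8-bit gray levels as follows (a minimal pure-Python illustration; the patent gives only the formula s_k = T(r_k), not an implementation):

```python
def equalize(gray_image, levels=256):
    """Histogram equalization: s_k = T(r_k) = sum_{j<=k} n_j / n,
    rescaled to the [0, levels-1] output range."""
    flat = [p for row in gray_image for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution -> lookup table from old to new gray level
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total / n)
    lut = [round((levels - 1) * c) for c in cdf]
    return [[lut[p] for p in row] for row in gray_image]
```

A half-black, half-white image is stretched so that the dark half moves to mid-gray while the bright half stays at the maximum level.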
Step 34) After the preprocessing of steps 32) and 33), the scene image is grayscale and equalized. A recurrent neural network (RNN) is then trained on multiple datasets, and the output of its fully connected layer is taken as the extracted scene feature vector;
Step 35) Classify the scene features with an LSTM-type RNN. First compute the candidate memory cell value at the current time, c̃_t = tanh(W_xc x_t + W_hc h_{t-1} + b_c), where tanh is the hyperbolic tangent function, x_t is the current input, h_{t-1} is the LSTM unit output at the previous time step, W_xc and W_hc are the weights of the input and of the previous LSTM unit output respectively, and b_c is a bias;
Step 36) Use the input gate to control how the current input affects the memory cell state value. The input gate is i_t = σ(W_xi x_t + W_hi h_{t-1} + W_ci c_{t-1} + b_i), where σ is the activation function, c_{t-1} is the memory cell value at the previous time step, W_xi, W_hi, W_ci are the weights of the input-gate input, the previous LSTM unit output, and the previous memory cell value respectively, and b_i is a bias;
Step 37) Use the forget gate to control how historical information affects the current memory cell state value. The forget gate is f_t = σ(W_xf x_t + W_hf h_{t-1} + W_cf c_{t-1} + b_f), where W_xf, W_hf, W_cf are the weights of the forget-gate input, the previous LSTM unit output, and the previous memory cell value respectively, and b_f is a bias;
Step 38) Compute the current memory cell state value c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t, where ⊙ denotes the element-wise product; the cell-state update depends on the previous state c_{t-1} and the current candidate memory cell value c̃_t, adjusted respectively by the input gate and the forget gate;
Step 39) The output gate controls the output of the memory cell state value and is defined as o_t = σ(W_xo x_t + W_ho h_{t-1} + W_co c_t + b_o), where W_xo, W_ho, W_co are the weights of the output-gate input, the previous LSTM unit output, and the memory cell value respectively, and b_o is a bias;
Step 310) Compute the output of the LSTM unit, h_t = o_t ⊙ tanh(c_t). The LSTM network is trained with backpropagation through time;
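Steps 35)~310) together define one forward step of a peephole-style LSTM cell, which can be sketched with NumPy as below (a minimal illustration; the weight-dictionary layout and the zero-parameter helper are our own conventions, not the patent's):

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One forward step of the LSTM cell of steps 35)-310).
    W maps gate names to (W_x, W_h[, W_c]) weight tuples; b maps to biases."""
    sigma = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic sigmoid

    # Step 35: candidate memory cell value (no peephole term)
    c_tilde = np.tanh(W['c'][0] @ x_t + W['c'][1] @ h_prev + b['c'])
    # Step 36: input gate (peephole on previous cell state)
    i_t = sigma(W['i'][0] @ x_t + W['i'][1] @ h_prev + W['i'][2] @ c_prev + b['i'])
    # Step 37: forget gate (peephole on previous cell state)
    f_t = sigma(W['f'][0] @ x_t + W['f'][1] @ h_prev + W['f'][2] @ c_prev + b['f'])
    # Step 38: new cell state  c_t = f_t . c_{t-1} + i_t . c~_t
    c_t = f_t * c_prev + i_t * c_tilde
    # Step 39: output gate (peephole on the current cell state)
    o_t = sigma(W['o'][0] @ x_t + W['o'][1] @ h_prev + W['o'][2] @ c_t + b['o'])
    # Step 310: unit output  h_t = o_t . tanh(c_t)
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

def zero_params(n_in, n_hidden):
    """Convenience: all-zero weights and biases (for testing)."""
    Z = lambda *s: np.zeros(s)
    W = {'c': (Z(n_hidden, n_in), Z(n_hidden, n_hidden)),
         'i': (Z(n_hidden, n_in), Z(n_hidden, n_hidden), Z(n_hidden, n_hidden)),
         'f': (Z(n_hidden, n_in), Z(n_hidden, n_hidden), Z(n_hidden, n_hidden)),
         'o': (Z(n_hidden, n_in), Z(n_hidden, n_hidden), Z(n_hidden, n_hidden))}
    b = {k: Z(n_hidden) for k in 'cifo'}
    return W, b
```

With all-zero parameters every gate opens halfway (σ(0) = 0.5) and the candidate is tanh(0) = 0, so both h_t and c_t stay zero.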
Step 4) is specified as follows:
Step 41) The action evaluation module receives the three-dimensional data from the motion sensor. The preprocessed x-, y-, z-axis acceleration data are treated as the analog of an image's three RGB channels. Define t_m as the time granularity; the three-dimensional vector group within time t_m is used as neural-network input;
Step 42) A recurrent neural network (RNN) is trained on multiple datasets, and the output of its fully connected layer is taken as the extracted action feature vector;
Step 43) Perform action assessment with an LSTM-type RNN: execute steps 35)~310) and output the action assessment result for the t_m period.
Step 5) is specified as follows:
Step 51) The micro intelligent server module receives the information from the scene recognition module and the action evaluation module and takes k_m·t_m as the recognition time unit. Within k_m·t_m the scene recognition result is constant, and an action sequence set A = {Action_1, Action_2, ..., Action_{k_m}} is obtained, where Action_i is the i-th atomic action within k_m·t_m;
Step 52) Using the scene element as a label, retrieve the sub-table of the motion information database and perform fuzzy matching; if the match succeeds, return the behavior recognition result to the micro intelligent server.
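The retrieval and fuzzy matching of steps 51)~52) can be sketched as follows (a minimal illustration using difflib's similarity ratio; the database contents, scene labels, and 0.6 threshold are hypothetical, since the patent does not specify the matching algorithm):

```python
from difflib import SequenceMatcher

# Hypothetical behavior database: scene label -> sub-table of
# (atomic-action sequence pattern, behavior) entries.
BEHAVIOR_DB = {
    'living_room': [
        (['sit', 'sit', 'sit', 'sit', 'sit'], 'watching TV'),
        (['stand', 'fall', 'lie', 'lie', 'lie'], 'falling'),
    ],
}

def recognize(scene, actions, threshold=0.6):
    """Retrieve the sub-table for the scene label, fuzzy-match the
    k_m atomic actions against each pattern, and return the best
    behavior if it clears the threshold, else None."""
    best, best_score = None, threshold
    for pattern, behavior in BEHAVIOR_DB.get(scene, []):
        score = SequenceMatcher(None, actions, pattern).ratio()
        if score >= best_score:
            best, best_score = behavior, score
    return best
```

An unknown scene label or a sequence below the threshold yields no match, mirroring the "if the match succeeds" condition of step 52).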
Wherein:
In step 1), n is empirically set to 352 and m to 288.
In step 32), c_r is empirically set to 0.30, c_g to 0.59, and c_b to 0.11.
In step 36), σ is empirically taken to be the logistic sigmoid function.
In step 4), t_m is empirically set to 3 s.
In step 5), k_m is empirically set to 5.
Advantageous effects: Compared with the prior art, the above technical solution has the following technical effects:

The present invention applies grayscale conversion and histogram equalization to the image sensor data, performs scene recognition on the processed images with an LSTM-RNN neural network, and performs action recognition on the motion sensor's acceleration data with an LSTM-RNN neural network. The action sequence, labeled with the scene, is matched in the behavior database to obtain specific behavior information, and the alarm module notifies the user of emergencies. The user interface adjusts and configures the system and its operating state, and the log module monitors and records the operating status of the whole system. Through the application of these methods and the support of the accompanying modules, behavior recognition on body-worn sensor data achieves good accuracy and stability. Specifically:
(1) The present invention uses an LSTM-RNN neural network, which effectively accounts for the influence of action continuity on action recognition; by exploiting the contextual relations in the sensor data, it increases the accuracy of motion behavior recognition.
(2) The present invention combines scene information with action information: the scene information is used as a label to match the action sequence in the behavior database, completing the recognition of human behavior more accurately.
(3) The present invention proposes an effective, highly practical system architecture and provides a user interface module and an operation log module, improving the stability of human behavior recognition and facilitating industrial application of the invention.
Description of the drawings
Fig. 1 is the flow of the human behavior recognition method based on a neural network.
Fig. 2 is the functional module diagram of the system for the human behavior recognition method based on a neural network.
Fig. 3 is a schematic diagram of the LSTM-RNN neural network.
Specific implementation mode
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings:
In a specific implementation, Fig. 1 shows the flow of the human behavior recognition method based on a neural network. This example assumes the system is applied to health monitoring. Through the system interface the user configures the system for home health care, binds a set of sensor devices to the system, and sets the warning level and warning mode.
The monitored person first wears a set of sensors, including an image recognition sensor and a motion information sensor. In this example the image sensing module and the action sensing module connect to a routing node via low-power ZigBee communication based on IEEE 802.15.4. The routing node connects to a host over a USB serial port and transmits the captured images and motion data, in software form, to the scene recognition module and the motion evaluation module residing there.
The scene recognition module processes the images from the image sensing module, first converting the original scene image to grayscale. For each pixel, since the human eye has different sensitivities to red, green, and blue light, the weighted-average method assigns different weights, giving the gray value Gray = c_r·R + c_g·G + c_b·B, with c_r = 0.30, c_g = 0.59, c_b = 0.11.
After grayscale conversion, the scene recognition module applies histogram equalization to the original scene image, with the gray transformation function defined by s_k = T(r_k) = Σ_{j=0..k} n_j / n, where r_k is the k-th gray level, n_k is the number of pixels of gray level r_k in the image (k = 0, 1, 2, ..., n−1), and n is the total number of pixels in the image.
The scene recognition module feeds the 352*288 image data, after grayscale conversion and equalization, to the neural network and performs scene classification with the LSTM-type RNN, obtaining the image's scene label.
The action evaluation module receives the information from the motion sensor and stores the x-, y-, and z-axis accelerations of the wearer's motion as the three-dimensional vector group V = {(x_1,y_1,z_1),(x_2,y_2,z_2),...,(x_n,y_n,z_n)}. The vector group within t_m = 3 s is used as neural-network input, action assessment is performed with the LSTM-type RNN, and the atomic action within the 3 s window is obtained.
In this example the micro intelligent server, the scene recognition module, and the action evaluation module reside on the same host. Every 15 s the micro intelligent server obtains an atomic action sequence of length 5 together with the scene label, matches this label and sequence against the sub-table of human behavior data, and obtains the behavior with the highest matching degree.
The micro intelligent server checks the detected behavior against the warning levels configured in the home monitoring system. If the recognized behavior is a fall, the server dials an alarm call and sends an alarm SMS to notify the client; if the result is recognized as a safety-level behavior such as watching TV, the alarm module's warning function is not triggered. Finally, the logging system records the atomic action sequence within the 15 s window, the concrete scene, and the behavior recognition result to the database.
Fig. 3 shows the specific structure of each cell unit of the LSTM-type RNN neural network. In this model the core module consists of an input gate, a forget gate, an output gate, and a memory cell. At time point t the action feature sequence X_t is input; the hidden layer h_t = o_t ⊙ tanh(c_t) and the cell state c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t are computed and passed on to the next LSTM step. The LSTM-type RNN has a chain of repeating modules, linked in this model through tanh layers.
Claims (9)
1. A human behavior recognition method based on a neural network, characterized by using multiple sensors and multiple modules and comprising the following steps:
Step 1) Turn on the image sensing module worn by the monitored person to obtain continuous image information centered on the wearer, where each image is an n*m region, n being the number of pixels per row and m the number of pixels per column;
Step 2) Turn on the motion sensing module worn by the monitored person to obtain x-axis, y-axis, and z-axis acceleration data recording the wearer's motion, where the x-axis is perpendicular to the vertical axis of the body with its positive direction pointing forward, the y-axis is parallel to the vertical axis with its positive direction pointing toward the head, and the z-axis is perpendicular to the vertical axis with its positive direction pointing to the left of the body;
Step 3) The scene recognition module receives the information from the image sensor; the received n*m-pixel image is converted to grayscale and histogram-equalized to reduce error and disturbance in the incoming image; the processed n*m-pixel image is used as neural-network input, and scene classification is performed with a long short-term memory recurrent neural network LSTM-RNN;
Step 4) The action evaluation module receives the information from the motion sensor and stores the wearer's x-, y-, and z-axis accelerations as a three-dimensional vector group V = {(x_1,y_1,z_1),(x_2,y_2,z_2),...,(x_n,y_n,z_n)}; defining t_m as the time granularity, the vector group within time t_m is used as neural-network input, action assessment is performed with a long short-term memory recurrent neural network LSTM-RNN trained on motion datasets, and the atomic action within that time granularity is obtained;
Step 5) The micro intelligent server module receives the information from the scene recognition module and the action evaluation module and performs behavior recognition after integrating the two: it takes k_m consecutive atomic actions, i.e. k_m·t_m as the recognition time unit, uses the scene element as a label to retrieve a sub-table of the motion information database, performs fuzzy matching, and, if the match succeeds, returns the behavior recognition result to the micro intelligent server;
Step 6) The micro intelligent server module classifies the behavior recognition results according to the user's settings; according to the user's needs and configuration, it determines whether the current action reaches a warning level; if the event reaches a warning level, the micro intelligent server sends an alert command to the alarm module, and the alarm module warns the user through one or more channels;
Step 7) The system interface is the user's entry point for adjusting and configuring the system; through it the user can configure a specific server, bind image sensors and motion sensors to the system, monitor and observe the system's operating status, and set the system's operating mode;
Step 8) The operation log module monitors and records all operating states of the micro intelligent server and records warning information of different levels during system operation; the operation log is stored in a relational database, through which the user queries the log module's data via the system interface and maintains the system.
2. The human behavior recognition method based on a neural network according to claim 1, characterized in that step 3) uses multiple sensors and multiple modules, as follows:
Step 31) The scene recognition module receives the n*m-pixel image information from the image sensor and describes the low-level scene features mainly by preprocessing the video scene for scene recognition; to facilitate subsequent scene recognition, the scene image is converted to grayscale and equalized;
Step 32) Convert the original scene image to grayscale: for each pixel, since the human eye has different sensitivities to red, green, and blue light, a weighted-average method assigns different weights, giving the gray value Gray = c_r·R + c_g·G + c_b·B, where c_r, c_g, c_b are the weights of red, green, and blue light in the conversion, with c_r + c_g + c_b = 1;
Step 33) Apply histogram equalization to the original scene image: under insufficient lighting scene recognition exhibits large errors, so the image histogram must be equalized to improve contrast and brightness; histogram equalization is a method in image processing that adjusts contrast using the image histogram, and the transformed gray level is defined as s_k = T(r_k) = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / n, where T(r) is the gray transformation function, r_k is the k-th gray level, p_r(r_k) ≈ n_k / n is the probability of gray level r_k appearing, n_k is the number of pixels of gray level r_k in the image (k = 0, 1, 2, ..., n−1), and n is the total number of pixels in the image;
Step 34) After the preprocessing of steps 32) and 33), the scene image is grayscale and equalized; a recurrent neural network RNN is then trained on multiple datasets, and the output of its fully connected layer is taken as the extracted scene feature vector;
Step 35) Classify the scene features with an LSTM-type RNN: first compute the candidate memory cell value at the current time, c̃_t = tanh(W_xc x_t + W_hc h_{t-1} + b_c), where tanh is the hyperbolic tangent function, x_t is the current input, h_{t-1} is the LSTM unit output at the previous time step, W_xc and W_hc are the weights of the input and of the previous LSTM unit output respectively, and b_c is a bias;
Step 36) Use the input gate to control how the current input affects the memory cell state value; the input gate is i_t = σ(W_xi x_t + W_hi h_{t-1} + W_ci c_{t-1} + b_i), where σ is the activation function, c_{t-1} is the memory cell value at the previous time step, W_xi, W_hi, W_ci are the weights of the input-gate input, the previous LSTM unit output, and the previous memory cell value respectively, and b_i is a bias;
Step 37) Use the forget gate to control how historical information affects the current memory cell state value; the forget gate is f_t = σ(W_xf x_t + W_hf h_{t-1} + W_cf c_{t-1} + b_f), where W_xf, W_hf, W_cf are the weights of the forget-gate input, the previous LSTM unit output, and the previous memory cell value respectively, and b_f is a bias;
Step 38) Compute the current memory cell state value c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t, where ⊙ denotes the element-wise product; the cell-state update depends on the previous state c_{t-1} and the current candidate memory cell value c̃_t, adjusted respectively by the input gate and the forget gate;
Step 39) The output gate controls the output of the memory cell state value and is defined as o_t = σ(W_xo x_t + W_ho h_{t-1} + W_co c_t + b_o), where W_xo, W_ho, W_co are the weights of the output-gate input, the previous LSTM unit output, and the memory cell value respectively, and b_o is a bias;
Step 310) Compute the output of the LSTM unit, h_t = o_t ⊙ tanh(c_t); the LSTM network is trained with backpropagation through time.
3. The human behavior recognition method based on a neural network according to claim 1, characterized in that step 4) uses multiple sensors and multiple modules, as follows:
Step 41) The action evaluation module receives the three-dimensional data from the motion sensor; the preprocessed x-, y-, z-axis acceleration data are treated as the analog of an image's three RGB channels; defining t_m as the time granularity, the three-dimensional vector group within time t_m is used as neural-network input;
Step 42) A recurrent neural network RNN is trained on multiple datasets, and the output of its fully connected layer is taken as the extracted action feature vector;
Step 43) Perform action assessment with an LSTM-type RNN: execute steps 35)~310) and output the action assessment result for the t_m period.
4. The human behavior recognition method based on a neural network according to claim 1, characterized in that step 5) uses multiple sensors and multiple modules, as follows:
Step 51) The micro intelligent server module receives the information from the scene recognition module and the action evaluation module and takes k_m·t_m as the recognition time unit; within k_m·t_m the scene recognition result is constant, and an action sequence set A = {Action_1, Action_2, ..., Action_{k_m}} is obtained, where Action_i is the i-th atomic action within k_m·t_m;
Step 52) Using the scene element as a label, retrieve the sub-table of the motion information database and perform fuzzy matching; if the match succeeds, return the behavior recognition result to the micro intelligent server.
5. The neural-network-based human behavior recognition method and system according to claim 1, characterized in that in step 1), n is empirically set to 352 and m to 288.
6. The neural-network-based human behavior recognition method and system according to claim 2, characterized in that in step 32), cr is empirically set to 0.30, cg to 0.59 and cb to 0.11.
7. The neural-network-based human behavior recognition method and system according to claim 2, characterized in that in step 36), σ is empirically chosen as the logistic sigmoid function.
8. The neural-network-based human behavior recognition method and system according to claim 1, characterized in that in step 4), tm is empirically set to 3.
9. The neural-network-based human behavior recognition method and system according to claim 1, characterized in that in step 5), km is empirically set to 5.
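Claims 5–9 fix the empirical constants. Assuming cr, cg and cb weight the R, G and B channels in step 32's grayscale conversion (the stated 0.30/0.59/0.11 values match the classic luma approximation), the constants can be exercised with this illustrative sketch:

```python
import numpy as np

# Empirical constants as stated in claims 5-9.
N, M = 352, 288                    # image dimensions n x m (claim 5)
C_R, C_G, C_B = 0.30, 0.59, 0.11   # grayscale weights cr, cg, cb (claim 6)
T_M = 3                            # time granularity tm (claim 8)
K_M = 5                            # recognition-unit multiple km (claim 9)

def to_gray(rgb):
    """Weighted grayscale conversion: gray = cr*R + cg*G + cb*B."""
    return rgb @ np.array([C_R, C_G, C_B])

image = np.ones((N, M, 3))         # dummy all-white n x m RGB frame
gray = to_gray(image)              # (352, 288) grayscale frame
recognition_unit = K_M * T_M       # km*tm = 15: span of one recognition unit
```

Since the three weights sum to 1.00, a white pixel maps to gray value 1.0, and a kmtm recognition unit spans 15 of the tm time units.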
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810422265.9A CN108764059B (en) | 2018-05-04 | 2018-05-04 | Human behavior recognition method and system based on neural network |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810422265.9A CN108764059B (en) | 2018-05-04 | 2018-05-04 | Human behavior recognition method and system based on neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108764059A true CN108764059A (en) | 2018-11-06 |
CN108764059B CN108764059B (en) | 2021-01-01 |
Family
ID=64009051
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810422265.9A Active CN108764059B (en) | 2018-05-04 | 2018-05-04 | Human behavior recognition method and system based on neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108764059B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850846A (en) * | 2015-06-02 | 2015-08-19 | 深圳大学 | Human behavior recognition method and human behavior recognition system based on deep neural network |
US20160026914A1 (en) * | 2011-11-26 | 2016-01-28 | Microsoft Technology Licensing, Llc | Discriminative pretraining of deep neural networks |
CN106446876A (en) * | 2016-11-17 | 2017-02-22 | 南方科技大学 | Sensing behavior identification method and device |
CN106951852A (en) * | 2017-03-15 | 2017-07-14 | 深圳汇创联合自动化控制有限公司 | An effective human behavior recognition system |
CN107145878A (en) * | 2017-06-01 | 2017-09-08 | 重庆邮电大学 | Abnormal behavior detection method for the elderly based on deep learning |
- 2018-05-04: application CN201810422265.9A filed in China; granted as CN108764059B (legal status: Active)
Non-Patent Citations (1)
Title |
---|
ZHU Yu et al.: "A survey of human behavior recognition algorithms based on deep learning" (基于深度学习的人体行为识别算法综述), Acta Automatica Sinica (《自动化学报》) *
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727270A (en) * | 2018-12-10 | 2019-05-07 | 杭州帝视科技有限公司 | Motion mechanism and texture analysis method and system for cardiac magnetic resonance images |
CN109670548A (en) * | 2018-12-20 | 2019-04-23 | 电子科技大学 | Multi-size input HAR algorithm based on improved LSTM-CNN |
CN109670548B (en) * | 2018-12-20 | 2023-01-06 | 电子科技大学 | Multi-size input HAR algorithm based on improved LSTM-CNN |
CN109726662A (en) * | 2018-12-24 | 2019-05-07 | 南京师范大学 | Multi-class human posture recognition method based on combined convolutional and recurrent neural networks |
CN111796980A (en) * | 2019-04-09 | 2020-10-20 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment and storage medium |
CN110276380A (en) * | 2019-05-22 | 2019-09-24 | 杭州电子科技大学 | Real-time motion online guidance system based on a deep model framework |
CN110503684A (en) * | 2019-08-09 | 2019-11-26 | 北京影谱科技股份有限公司 | Camera position and orientation estimation method and device |
WO2021081768A1 (en) * | 2019-10-29 | 2021-05-06 | 深圳市欢太科技有限公司 | Interface switching method and apparatus, wearable electronic device and storage medium |
CN111050266A (en) * | 2019-12-20 | 2020-04-21 | 朱凤邹 | Method and system for performing function control based on earphone detection action |
CN111203878A (en) * | 2020-01-14 | 2020-05-29 | 北京航空航天大学 | Robot sequence task learning method based on visual simulation |
CN111203878B (en) * | 2020-01-14 | 2021-10-01 | 北京航空航天大学 | Robot sequence task learning method based on visual simulation |
CN114402575B (en) * | 2020-03-25 | 2023-12-12 | 株式会社日立制作所 | Action recognition server, action recognition system, and action recognition method |
CN114402575A (en) * | 2020-03-25 | 2022-04-26 | 株式会社日立制作所 | Action recognition server, action recognition system and action recognition method |
CN111898524A (en) * | 2020-07-29 | 2020-11-06 | 江苏艾什顿科技有限公司 | 5G edge computing gateway and application thereof |
CN112732071A (en) * | 2020-12-11 | 2021-04-30 | 浙江大学 | Calibration-free eye movement tracking system and application |
CN112732071B (en) * | 2020-12-11 | 2023-04-07 | 浙江大学 | Calibration-free eye movement tracking system and application |
CN112926553B (en) * | 2021-04-25 | 2021-08-13 | 北京芯盾时代科技有限公司 | Training method and device for motion detection network |
CN112926553A (en) * | 2021-04-25 | 2021-06-08 | 北京芯盾时代科技有限公司 | Training method and device for motion detection network |
CN113673328A (en) * | 2021-07-14 | 2021-11-19 | 南京邮电大学 | Crowd area monitoring method based on feature aggregation network |
CN113673328B (en) * | 2021-07-14 | 2023-08-18 | 南京邮电大学 | Crowd area monitoring method based on feature aggregation network |
CN116229581A (en) * | 2023-03-23 | 2023-06-06 | 珠海市安克电子技术有限公司 | Intelligent interconnection first-aid system based on big data |
CN116229581B (en) * | 2023-03-23 | 2023-09-19 | 珠海市安克电子技术有限公司 | Intelligent interconnection first-aid system based on big data |
CN116756354A (en) * | 2023-08-24 | 2023-09-15 | 北京电子科技学院 | Photo archive analysis management system |
Also Published As
Publication number | Publication date |
---|---|
CN108764059B (en) | 2021-01-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108764059A (en) | Human behavior recognition method and system based on neural network | |
CN104063719B (en) | Pedestrian detection method and device based on deep convolutional network | |
CN110110707A (en) | Artificial intelligence CNN and LSTM neural network dynamic recognition system | |
CN110414305A (en) | Artificial intelligence convolutional neural network face recognition system | |
CN108427921A (en) | Face recognition method based on convolutional neural networks | |
CN108932479A (en) | Human body anomaly detection method | |
CN106570477A (en) | Vehicle model recognition model construction method based on deep learning and vehicle model recognition method based on deep learning | |
CN106846729A (en) | Fall detection method and system based on convolutional neural networks | |
CN110046550A (en) | Pedestrian attribute recognition system and method based on multilayer feature learning | |
CN107967941A (en) | Unmanned aerial vehicle health monitoring method and system based on intelligent vision reconstruction | |
Banjarey et al. | Human activity recognition using 1D convolutional neural network | |
CN114743678A (en) | Intelligent bracelet physiological index anomaly analysis method and system based on improved GDN algorithm | |
CN114612813A (en) | Identity recognition method, model training method, device, equipment and storage medium | |
US20220125359A1 (en) | Systems and methods for automated monitoring of human behavior | |
CN107967944A (en) | Outdoor environment big data human health measurement method and platform based on Hadoop | |
CN113627326A (en) | Behavior identification method based on wearable device and human skeleton | |
CN107967455A (en) | Transparent learning method and system for intelligent human multidimensional physical feature big data | |
Liu et al. | Action Recognition with PIR Sensor Array and Bidirectional Long Short-term Memory Neural Network | |
CN115188031A (en) | Fingerprint identification method, computer program product, storage medium and electronic device | |
Saputra et al. | Car Classification Based on Image Using Transfer Learning Convolutional Neural Network | |
Belmir et al. | Plant Leaf Disease Prediction and Classification Using Deep Learning | |
Mahmoodzadeh | Human Activity Recognition based on Deep Belief Network Classifier and Combination of Local and Global Features | |
CN114694245A (en) | Real-time behavior recognition and sign state monitoring method and system based on capsules and GRUs | |
Venu et al. | Disease Identification in Plant Leaf Using Deep Convolutional Neural Networks | |
Sharma et al. | Towards Improving Human Activity Recognition Using Artificial Neural Network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | | Address after: No. 66, New Model Road, Gulou District, Nanjing City, Jiangsu Province, 210000; Applicant after: NANJING University OF POSTS AND TELECOMMUNICATIONS; Address before: 210023, 9 Wen Yuan Road, Ya Dong new town, Nanjing, Jiangsu; Applicant before: NANJING University OF POSTS AND TELECOMMUNICATIONS |
GR01 | Patent grant | |