CN112102358B - Non-invasive animal behavior characteristic observation method - Google Patents

Non-invasive animal behavior characteristic observation method

Info

Publication number
CN112102358B
CN112102358B (application CN202011055481.8A; also published as CN112102358A)
Authority
CN
China
Prior art keywords
animal
joint
point
coordinates
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011055481.8A
Other languages
Chinese (zh)
Other versions
CN112102358A (en)
Inventor
Duan Feng (段峰)
Yang Zhenyu (杨振宇)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University filed Critical Nankai University
Priority claimed from CN202011055481.8A
Publication of CN112102358A
Application granted
Publication of CN112102358B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a non-invasive animal behavior characteristic observation method, belonging to the technical fields of artificial intelligence, zoology, and behavior analysis. Limb movement analysis of an animal is carried out according to marked feature point coordinates: the movement states of the fore- and hindlimb joints of a mouse are analyzed through the coordinates of each joint point. A data set is randomly divided into a training set and a test set; the training set is used to train a neural network, and the test set is then used to calculate and evaluate the prediction error of the neural network. The invention also uses transfer learning: when another animal is analyzed, a small labeled data set can be combined with the previously trained model to achieve a similar training effect, thereby reducing the cost and time of manual marking.

Description

Non-invasive animal behavior characteristic observation method
Technical Field
The invention relates to the fields of artificial intelligence, zoology, and behavior analysis, and in particular to a non-invasive animal behavior characteristic observation method.
Background
Animal behavior refers to the activity, vocalization, and body posture that an animal expresses under its own control; these changes are externally visible and convey information to the outside world. With human exploration and development in neuroscience, medicine, pharmacology, social science, and related fields, animal behavior is often observed and studied experimentally. Taking rats as an example, researchers often verify the reliability of a drug or treatment by observing the behavior of the rat. As research has progressed, the precision required of mouse behavior observation has grown, so invasive sensors and markers have been widely applied to observing the social behavior of mice. However, both approaches have serious limitations. Invasive sensors are expensive, require complex operations, and inflict some damage on the experimental subject, which can bias the experimental results. Markers are usually applied directly to the mouse epidermis, which is simple and cheap, but in practice the joints and bones shift considerably relative to the epidermis during movement, so the experimental precision is poor. A mouse observation system that causes no harm to the animal while keeping accuracy at a high level is therefore a problem the present invention aims to solve.
Owing to the constraints of observation requirements, animal behavior detection has traditionally been performed by placing the animal in a large, loosely constrained behavior box and observing it manually. In recent years, traditional image processing technology and animal behavior observation have made considerable progress, but when this mode of observation is combined with machine learning and behavioral kinematics, a camera struggles to capture the animal's specific behaviors and the observation quality is poor. In particular, when observing the limbs and head, the wide observation space of a traditional behavior box allows the animal's limb movements to be occluded by its body, and the lack of a motion-excitation means leaves those movements largely random, which greatly interferes with the machine learning process. Accurately observing the behavioral characteristics of the animal's limbs and head and combining them with machine learning is therefore another problem the present invention aims to solve.
Disclosure of Invention
Therefore, there is a need for a non-invasive animal behavior feature observation method that can observe specific features of an animal by limiting its range of motion, improve training efficiency, and perform behavior analysis through changes in the offset angles of the animal's limbs and head.
In order to achieve the above object, the present invention provides a non-invasive animal behavior feature observation method, which specifically comprises the following steps:
acquiring an animal motion video;
processing the video image, marking the characteristic point data of the animal in the video image, and storing the data into a data set;
analyzing the limb movement of the animal according to the feature point data of the animal, namely analyzing the movement states of front and rear limb joints of the animal through the change of each joint of the animal;
randomly dividing the data set into a training set and a testing set, training a neural network model by using the training set, and evaluating the neural network model by using the testing set;
and analyzing the animal behaviors by adopting a neural network model.
In a further refinement of the technical solution, the neural network model estimates the feature point coordinates in four cascaded stages, and a loss function is used to improve the estimation accuracy.
In a further refinement of the technical solution, evaluating the neural network model with the test set specifically includes: generating a heat map of Gaussian probability for each marked feature point; after estimating the feature point coordinates, restoring the Gaussian heat map to the size of the original input image using bicubic interpolation; calculating the error between the marked heat map and the estimated heat map with the mean square error; converting the position of the heat-map maximum into feature point coordinates with the soft-argmax method; and calculating the error between the marked feature points and the estimated feature points with a loss function.
In a further refinement of the technical solution, the soft-argmax formula is:

$$\operatorname{soft\text{-}argmax}(x) = \sum_i \frac{e^{x_i}}{\sum_j e^{x_j}} \, i,$$

where $x$ is a vector of probability values and $i$ is the position of $x_i$.
In a further refinement of the technical solution, the loss function is a piecewise loss function; the structure of the piecewise loss function smooth L1 is:

$$\operatorname{smooth}_{L1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise,} \end{cases}$$

where $x$ is the L1 difference between the marked point and the estimated point.
In a further refinement of the technical solution, the method also includes transfer learning: a transfer learning method is used to perform behavior analysis on another animal.
In a further refinement of the technical solution, the transfer learning method is as follows.

Let the two labeled training sets be $T_a$ (with $n$ samples) and $T_b$ (with $m$ samples), and let the combined data set be $T = T_a \cup T_b$. Let the unlabeled test data set be $S$ and the total number of iterations of the system be $N$. The initial weight vector is

$$w^1 = \left(w_1^1, \ldots, w_{n+m}^1\right),$$

where

$$w_i^1 = \begin{cases} 1/n, & 1 \le i \le n \\ 1/m, & n+1 \le i \le n+m. \end{cases}$$

Then set

$$\beta = \frac{1}{1 + \sqrt{2\ln n / N}},$$

and for each iteration $t$ let $P^t$ satisfy

$$P^t = \frac{w^t}{\sum_{i=1}^{n+m} w_i^t}.$$

According to the combined training data set $T$, the weight distribution $P^t$ on $T$, and the unlabeled data set $S$, a classifier $h_t$ on $S$ is obtained. The error rate of $h_t$ on $T_b$ is calculated as

$$\varepsilon_t = \sum_{i=n+1}^{n+m} \frac{w_i^t \left| h_t(x_i) - c(x_i) \right|}{\sum_{j=n+1}^{n+m} w_j^t},$$

where $c(x_i)$ is the true label of $x_i$. Set

$$\beta_t = \frac{\varepsilon_t}{1 - \varepsilon_t}.$$

The latest weight vector is obtained as

$$w_i^{t+1} = \begin{cases} w_i^t \, \beta^{\left|h_t(x_i) - c(x_i)\right|}, & 1 \le i \le n \\ w_i^t \, \beta_t^{-\left|h_t(x_i) - c(x_i)\right|}, & n+1 \le i \le n+m, \end{cases}$$

and the final classifier is derived as

$$h_f(x) = \begin{cases} 1, & \displaystyle\prod_{t=\lceil N/2 \rceil}^{N} \beta_t^{-h_t(x)} \ge \prod_{t=\lceil N/2 \rceil}^{N} \beta_t^{-1/2} \\ 0, & \text{otherwise.} \end{cases}$$
the technical scheme is further optimized, joint point coordinates are calibrated according to limb moving images of the animal, and meanwhile an animal limb kinematic model for calculating the angular velocity and the angular acceleration of the joint points is established.
Different from the prior art, the above technical solution has the following advantages: the method focuses on observing the behavioral characteristics of the animal's head and limbs, and uses feature point marking and neural network training to extract richer features from the images; because the task is detection and estimation of image content, the feature points are marked manually.
Drawings
FIG. 1 is a flow chart of a method for non-invasive observation of animal behavior characteristics;
FIG. 2 is a flow chart of a transfer learning method;
FIG. 3 is a coordinate diagram of animal limb movement;
fig. 4 is a diagram for analyzing the movement of the joint point B.
Detailed Description
In order to explain technical contents, structural features, objects and effects of the technical solutions in detail, the following detailed description is given with reference to the accompanying drawings in combination with the embodiments.
As shown in fig. 1, a flow chart of a non-invasive animal behavior feature observation method is provided, and the method specifically comprises the following steps:
s1, acquiring an animal motion video, and capturing a video of the animal performing uniform motion on a track by using a high frame rate camera.
The moving pictures of the animal are obtained frame by frame, several feature points are selected and marked in each picture, and the corresponding data set is made and stored. To improve robustness during training, the resolution of the training pictures is adjusted according to the shape of the animal so as to reduce the proportion of background in the picture. For example, when observing a healthy crawling mouse without a suspension device, the resolution of the captured pictures is set to 512 × 256; when observing a paraplegic mouse using a suspension device, the resolution is set to 256 × 512. The video images are processed, the feature point data of the animal in the video images are marked, and the data are stored into a data set.
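As a sketch of this acquisition step (assuming OpenCV; the file names, output directory, and helper name are hypothetical, not part of the patent), frames can be extracted and resized to the orientation-dependent resolutions described above:

```python
import os

import cv2


def extract_frames(video_path, out_dir, size=(512, 256)):
    """Read a motion video frame by frame, resize each frame to the
    chosen training resolution, and save it as a numbered image."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # size is (width, height): 512x256 for a crawling mouse,
        # 256x512 for a suspended (vertical) posture, as in the text.
        frame = cv2.resize(frame, size, interpolation=cv2.INTER_AREA)
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:05d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# Hypothetical usage with a 100 fps recording of a mouse on the treadmill:
# n_frames = extract_frames("mouse_run.mp4", "frames/", size=(512, 256))
```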
s2, analyzing the limb movement of the animal according to the feature point data of the animal, namely analyzing the movement states of front and rear limb joints of the animal through the change of each joint of the animal;
s3, randomly dividing the data set into a training set and a testing set, training the neural network model by using the training set, evaluating the neural network model by using the testing set, and calculating the prediction accuracy of the neural network by using the testing set;
The training set is fed into a convolutional neural network that estimates the feature point coordinates in four stages, and a loss function is used to improve the estimation precision; the prediction error of the neural network is then calculated on the test set to evaluate its performance. Estimation of the marked feature points is treated as a probability problem: each marked feature point generates a heat map of Gaussian probability. After the feature point coordinates are estimated, bicubic interpolation restores the Gaussian heat map to the size of the original input image, the mean square error measures the error between the marked heat map and the estimated heat map, the soft-argmax method converts the position of the heat-map maximum into feature point coordinates, and a loss function then measures the error between the marked and estimated feature points.
A basic feature extraction network is used. To reduce the amount of computation, the three-channel color animal images captured by the camera first pass through two consecutive 3 × 3 convolution layers with 64 convolution kernels, which increase the number of channels without changing the image size; a pooling layer with stride 2 then halves the image size; two more 3 × 3 convolution layers followed by another stride-2 pooling layer reduce the size again; finally, four consecutive 3 × 3 convolution layers produce a 128-channel basic feature map.
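A minimal PyTorch sketch of this backbone follows; the layer sequence is taken from the text, while the channel width of the middle convolution block and the use of padding-1 ReLU convolutions are assumptions made for illustration:

```python
import torch
import torch.nn as nn


def conv3x3(in_ch, out_ch):
    # 3x3 convolution with padding 1 so the spatial size is unchanged.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(inplace=True))


class BaseFeatureExtractor(nn.Module):
    """Backbone sketch: two 3x3/64 convs, stride-2 pooling, two more 3x3
    convs plus pooling, then four 3x3 convs giving a 128-channel map."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv3x3(3, 64), conv3x3(64, 64),
            nn.MaxPool2d(2, stride=2),              # halve spatial size
            conv3x3(64, 128), conv3x3(128, 128),    # width of this block assumed
            nn.MaxPool2d(2, stride=2),
            conv3x3(128, 128), conv3x3(128, 128),
            conv3x3(128, 128), conv3x3(128, 128),   # -> 128-channel base features
        )

    def forward(self, x):
        return self.features(x)
```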
A convolutional neural network with four stages for estimating feature point coordinates is used, which is referenced to a cascaded convolutional network in human pose estimation. The method comprises the following steps that basic feature mapping is received in the first stage, animal feature point position prediction of n channels is obtained through six layers of convolutional neural networks using 3 x 3 convolutional kernels, wherein n represents the number of categories of current feature points, in the following three stages, the basic feature mapping is fused with estimated output of the previous section to obtain input of 128+ n channels, deep feature extraction is carried out in the first-stage convolutional network formed by nine continuous layers of convolutional neural networks using 3 x 3 convolutional kernels to obtain feature point coordinates, and then output of the next section is obtained, namely the animal feature point position prediction of the n channels.
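Continuing the sketch above (it reuses `conv3x3` and `BaseFeatureExtractor` from the previous block), the four-stage cascade can be expressed as follows; the stage depths of six and nine layers follow the text, while the 128-channel internal width of each stage is an assumption:

```python
import torch
import torch.nn as nn


class Stage(nn.Module):
    """One refinement stage: a 3x3 conv stack ending in n heatmap channels."""

    def __init__(self, in_ch, n_points, depth):
        super().__init__()
        layers = [conv3x3(in_ch, 128)]
        layers += [conv3x3(128, 128) for _ in range(depth - 2)]
        layers.append(nn.Conv2d(128, n_points, 3, padding=1))  # per-point heatmaps
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class CascadedPoseNet(nn.Module):
    """Four-stage cascade: stage 1 maps the 128-channel base features to n
    heatmaps; stages 2-4 take base features concatenated with the previous
    stage's output (128 + n channels) through nine conv layers."""

    def __init__(self, n_points):
        super().__init__()
        self.backbone = BaseFeatureExtractor()
        self.stage1 = Stage(128, n_points, depth=6)
        self.refine = nn.ModuleList(
            [Stage(128 + n_points, n_points, depth=9) for _ in range(3)])

    def forward(self, x):
        base = self.backbone(x)
        out = self.stage1(base)
        outputs = [out]
        for stage in self.refine:
            out = stage(torch.cat([base, out], dim=1))  # fuse base + previous output
            outputs.append(out)
        return outputs  # every intermediate output can be supervised
```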
A piecewise loss function is used to improve the estimation accuracy of the feature points. The choice of loss function has a direct influence on the estimation accuracy, and different error intervals place different requirements on it. The L1 norm loss function, with standard form

$$L_1 = \sum_i \left| Y_i - f(x_i) \right|,$$

where $Y_i$ is the target value and $f(x_i)$ is the estimated value, has a constant gradient and is robust to outliers but converges slowly near the optimum. The L2 norm loss function, with standard form

$$L_2 = \sum_i \left( Y_i - f(x_i) \right)^2,$$

has a gradient that grows with the error, so it converges quickly but is not robust to outliers. The piecewise loss function smooth L1 combines the two, applying the quadratic term when the estimate is close to the label and the linear term otherwise:

$$\operatorname{smooth}_{L1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise,} \end{cases}$$

where $x$ is the L1 difference between the marked point and the estimated point.
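A direct rendering of this piecewise function in PyTorch might look as follows; note that PyTorch's built-in `nn.SmoothL1Loss` implements the same curve (with its default beta of 1.0):

```python
import torch


def smooth_l1(x):
    """Piecewise smooth L1 over the elementwise residual x:
    0.5*x^2 where |x| < 1 (quadratic near zero), |x| - 0.5 elsewhere."""
    absx = torch.abs(x)
    return torch.where(absx < 1, 0.5 * x ** 2, absx - 0.5)

# Equivalent built-in: torch.nn.SmoothL1Loss(reduction="none")
```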
The error between label points and estimated points is calculated using Gaussian heat maps, and the performance of the neural network is evaluated. Estimation of the feature points is treated as a probability problem, with each feature point generating a heat map of Gaussian probability:

$$G(x, y) = A \exp\!\left(-\left(\frac{(x - x_c)^2}{2\sigma_x^2} + \frac{(y - y_c)^2}{2\sigma_y^2}\right)\right),$$

where $A$ is a fixed amplitude of 1, $\sigma_x$ and $\sigma_y$ are the variances with value 3, and $(x_c, y_c)$ are the center coordinates of the feature point. After the feature point coordinates are estimated, the Gaussian heat map can be restored to the size of the original input image using bicubic interpolation, and the position of the maximum probability value is selected as the coordinates of the landmark point. On this basis, the mean square error is used to calculate the error between the label heat map and the estimated heat map. In addition, the maximum position of the heat map is converted into feature point coordinates using the soft-argmax method, and the error between the label feature points and the estimated feature points is then calculated with the L1 loss function. The one-dimensional soft-argmax formula is:

$$\operatorname{soft\text{-}argmax}(x) = \sum_i \frac{e^{x_i}}{\sum_j e^{x_j}} \, i,$$

where $x$ is a vector of probability values and $i$ is the position of $x_i$.
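The heat-map construction and the soft-argmax readout can be sketched in NumPy as follows, with amplitude 1 and sigma 3 as in the text; equal variances in x and y are assumed here, and reading 2-D coordinates off the two marginals is one possible extension of the one-dimensional formula:

```python
import numpy as np


def gaussian_heatmap(h, w, xc, yc, sigma=3.0, amplitude=1.0):
    """Heat map of Gaussian probability centered on a labeled point,
    with amplitude A = 1 and sigma_x = sigma_y = 3 as in the text."""
    ys, xs = np.mgrid[0:h, 0:w]
    return amplitude * np.exp(
        -((xs - xc) ** 2 + (ys - yc) ** 2) / (2.0 * sigma ** 2))


def soft_argmax_1d(x):
    """Differentiable argmax: softmax-weighted sum of positions."""
    p = np.exp(x - x.max())   # subtract max for numerical stability
    p /= p.sum()
    return (p * np.arange(len(x))).sum()


def soft_argmax_2d(heatmap):
    # Apply the 1-D operator to each marginal to recover (x, y).
    return soft_argmax_1d(heatmap.sum(axis=0)), soft_argmax_1d(heatmap.sum(axis=1))

# mse = np.mean((label_heatmap - estimated_heatmap) ** 2)  # heat-map error
```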
When the prediction error of the neural network is calculated on the test set and its performance is evaluated, a normalization standard is used to eliminate random errors caused by the animal's shape and size and by the camera distance, and an evaluation function finally judges whether each estimated feature coordinate point is correct.
Standardized evaluation criteria are used to reduce errors caused by individual differences. Generally, an estimate is evaluated by directly computing the difference between the estimated result and the actual result and comparing it with a corresponding threshold; in actual behavior observation, however, the result is affected by many conditions, such as differences in animal size and in the distance between the camera and the subject, so the raw distance between the estimated coordinates and the label coordinates is too subjective to serve as an evaluation criterion. The eyes and the nose of the animal are highly distinctive features whose marked coordinates are more accurate than those of an ordinary joint point, so the Euclidean distance between the eye and the nose is used as the normalizing denominator. The evaluation parameter has the structure:

$$e_i = \frac{\sqrt{(x_i' - x_i)^2 + (y_i' - y_i)^2}}{\sqrt{(x_{i,\mathrm{nose}} - x_{i,\mathrm{eye}})^2 + (y_{i,\mathrm{nose}} - y_{i,\mathrm{eye}})^2}} - \varepsilon,$$

where $(x_i', y_i')$ are the estimated coordinates of the i-th picture, $(x_i, y_i)$ are the marked coordinates of the i-th picture, $(x_{i,\mathrm{nose}}, y_{i,\mathrm{nose}})$ are the coordinates of the animal's nose in the i-th picture, $(x_{i,\mathrm{eye}}, y_{i,\mathrm{eye}})$ are the coordinates of the animal's eye in the i-th picture, and the evaluation parameter $\varepsilon$ is 0.1. If the normalized result is less than 0, the estimate is correct; otherwise it is wrong.
S4, analyzing the animal behaviors by adopting the neural network model.
In order to assist behavior observation and complement the non-invasive animal behavior observation system, the invention also establishes a limb kinematics model of the animal, obtaining the corresponding angular velocities and angular accelerations by analyzing the animal's joint point coordinates. The model has the following features:
Corresponding joint point coordinates are obtained from the features of each joint point in the marked photos. Let the feature points from bottom to top be $A_1, A_2, \ldots, A_n$ with corresponding coordinates $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$; the joint angles $\theta_1, \theta_2, \ldots, \theta_n$ corresponding to the feature points are then derived in turn. With the reference vector $\vec{e}$ taken as a unit vector of length 1 pointing horizontally to the right, each angle satisfies

$$\cos\theta_k = \frac{(A_{k+1} - A_k) \cdot \vec{e}}{\left\| A_{k+1} - A_k \right\|} = \frac{X_{k+1} - X_k}{\sqrt{(X_{k+1} - X_k)^2 + (Y_{k+1} - Y_k)^2}}.$$
After each joint angle is obtained, the instantaneous angular velocity and instantaneous angular acceleration over the period can be obtained in turn from the difference of the joint angles corresponding to the same feature point in two adjacent photos:

$$\omega_k^i = \frac{\theta_k^{i+1} - \theta_k^i}{\Delta t}, \qquad \alpha_k^i = \frac{\omega_k^{i+1} - \omega_k^i}{\Delta t},$$

where $\theta_k^i$ represents the joint angle corresponding to the k-th feature point in the i-th picture; since the frame rate $p$ of the camera is 100 frames per second, the time interval between two adjacent pictures is $\Delta t = 1/p = 0.01\,\mathrm{s}$.
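As a worked illustration of these formulas, the following NumPy sketch (array layout and the 100 fps default follow the text; the function names are hypothetical) computes per-segment joint angles against the horizontal reference vector and differentiates them into angular velocity and acceleration:

```python
import numpy as np


def joint_angles(points):
    """Angle of each limb segment against the horizontal unit vector (1, 0).
    points: array of shape (n, 2) holding joint coordinates bottom-to-top.
    Returns n-1 angles in [0, pi] (arccos drops the sign of the angle)."""
    seg = np.diff(points, axis=0)                  # segment vectors A_k -> A_{k+1}
    cos = seg[:, 0] / np.linalg.norm(seg, axis=1)  # dot with (1, 0) over |segment|
    return np.arccos(np.clip(cos, -1.0, 1.0))


def angular_velocity_acceleration(theta_frames, fps=100):
    """Finite differences of per-frame joint angles; dt = 1/fps = 0.01 s.
    theta_frames: array of shape (n_frames, n_angles)."""
    dt = 1.0 / fps
    omega = np.diff(theta_frames, axis=0) / dt     # instantaneous angular velocity
    alpha = np.diff(omega, axis=0) / dt            # instantaneous angular acceleration
    return omega, alpha
```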
In order to reduce the labor consumed by retraining and accelerate the operating efficiency of the non-invasive animal behavior characteristic observation system, the invention also uses transfer learning: when behavior analysis is performed on another animal, a small labeled data set can be combined with the previously trained model to achieve a similar training effect. Referring to fig. 2, a flow chart of the transfer learning method is shown, which has the following features:
set the two trained datasets as T a And T b The combined data set is set as T = T a ∪T b Setting the unmarked test data set as S, taking the whole iterative times of the system as N, and setting the initial weight vector as
Figure BDA0002710735430000094
Among them are:
Figure BDA0002710735430000095
is then set up
Figure BDA0002710735430000096
Let P t The following formula is satisfied:
Figure BDA0002710735430000101
according to the combined training data set T and the weight distribution P on T t And unmarked data set S to obtain a classifier h on S t Calculate h t At T b Error rate of (2):
Figure BDA0002710735430000102
is provided with
Figure BDA0002710735430000103
The latest weight vector is obtained as:
Figure BDA0002710735430000104
the final classifier is derived as:
Figure BDA0002710735430000105
Referring to fig. 3 and 4, a coordinate diagram of the animal's limb movement and a diagram analyzing the movement of joint point B are shown, respectively. A preferred embodiment of the invention provides a behavior observation system for observing animal behavior characteristics, comprising the following steps:
the method comprises the following steps: placing the animal on the conveyor belt of a runway, switching on the drive motor, and recording the mouse's movement video with a high-definition camera, with the frame rate set to 100 frames per second, for 20 seconds;
step two: taking the middle 10 seconds of the video as the data set, and extracting the video frame by frame to obtain complete animal behavior photos;
step three: preprocessing the collected data by checking whether the collected animal photos contain obvious errors such as discontinuous motion, blurred images, or overlapping joint positions; if such problems exist, the data set is discarded and the data re-collected; if the data are correct, the feature points are calibrated to obtain a training set containing feature point coordinates;
step four: the training set is fed into a convolutional neural network that estimates the feature point coordinates in four stages, and the smooth L1 loss function is used to improve the estimation precision;
step five: generating a heat map of Gaussian probability from the feature points, after estimating the coordinates of the feature points, restoring the Gaussian heat map to the size of an original input image by using bicubic interpolation, calculating the error between the tag heat map and the estimated heat map by using a mean square error, converting the maximum position of the heat map into the coordinates of the feature points by using a soft-argmax method, and calculating the error between the tag feature points and the estimated feature points by using an L1 loss function;
step six: during test set verification, a normalization standard is used to eliminate random errors caused by the animal's shape and size and by the camera distance, and an evaluation function finally judges the correctness of the estimated feature coordinate points.
In a preferred embodiment of the present invention, the joint coordinates are calibrated from the limb movement images of the animal, and an animal limb kinematics model for calculating the angular velocity and angular acceleration of the joint points is established, comprising the following steps:
step one: obtaining the coordinates $(X_1, Y_1)$, $(X_2, Y_2)$, $(X_3, Y_3)$ of the animal's wrist, elbow, and shoulder joints from the three joint feature points marked in the labeled picture, corresponding to the shoulder joint point A, the elbow joint point B, and the wrist joint point C;
step two: from the coordinates of the three joint points, obtaining the included angle α between the shoulder joint and the horizontal vector, the included angle β between the elbow joint and the wrist joint, and the included angle γ between the wrist joint and the horizontal vector;
step three: from the frame rate set by the camera, the time interval between two corresponding adjacent pictures is calculated to be 0.01 s, and the angular velocities and angular accelerations of the three joint angles are obtained from the differences of the same joint angles in adjacent pictures.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to it. Without further limitation, an element introduced by the phrase "comprising a..." does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises it. Further, herein, "greater than," "less than," "exceeding," and the like are understood to exclude the stated number; "above," "below," "within," and the like are understood to include it.
Although the embodiments have been described, those skilled in the art can, once the basic inventive concept is grasped, make other variations and modifications to them. The above embodiments are therefore only examples of the present invention and do not limit its scope; all equivalent structures or equivalent processes derived from the contents of the present specification and drawings, whether applied directly or indirectly in other related technical fields, are included in the scope of the present invention.

Claims (6)

1. A non-invasive animal behavior characteristic observation method, characterized by comprising the following steps:
acquiring an animal motion video;
processing the video image, marking the characteristic point data of the animal in the video image, and storing the data into a data set;
analyzing the limb movement of the animal according to the feature point data of the animal, namely analyzing the movement states of front and rear limb joints of the animal through the change of each joint of the animal;
randomly dividing the data set into a training set and a testing set, training a neural network model by using the training set, and evaluating the neural network model by using the testing set;
the evaluation of the neural network model by adopting the test set specifically comprises the following steps: generating a heat map of Gaussian probability for each marking feature point, after estimating the coordinates of the feature points, restoring the Gaussian heat map to the size of an original input image by using bicubic interpolation, calculating the error between the marking heat map and the estimated heat map by using a mean square error, converting the maximum position of the heat map into the coordinates of the feature points by using a soft-argmax method, and calculating the errors between the marking feature points and the estimated feature points by using a loss function;
analyzing the animal behaviors by adopting a neural network model;
the method also establishes a limb kinematics model of the animal:
corresponding joint point coordinates are obtained from the features of each joint point in the marked photos; let the feature points from bottom to top be $A_1, A_2, \ldots, A_n$ with corresponding coordinates $(X_1, Y_1), (X_2, Y_2), \ldots, (X_n, Y_n)$; the joint angles $\theta_1, \theta_2, \ldots, \theta_n$ corresponding to the feature points are then derived in turn by the formula

$$\cos\theta_k = \frac{(A_{k+1} - A_k) \cdot \vec{e}}{\left\| A_{k+1} - A_k \right\|} = \frac{X_{k+1} - X_k}{\sqrt{(X_{k+1} - X_k)^2 + (Y_{k+1} - Y_k)^2}},$$

where the reference vector $\vec{e}$ is a unit vector of length 1 pointing horizontally to the right;
after each joint angle is obtained, the instantaneous angular velocity and instantaneous angular acceleration over the period are obtained in turn from the difference of the joint angles corresponding to the same feature point in two adjacent photos:

$$\omega_k^i = \frac{\theta_k^{i+1} - \theta_k^i}{\Delta t}, \qquad \alpha_k^i = \frac{\omega_k^{i+1} - \omega_k^i}{\Delta t},$$

where $\theta_k^i$ represents the joint angle corresponding to the k-th feature point in the i-th picture; since the frame rate $p$ of the camera is 100 frames per second, the time interval between two adjacent pictures is $\Delta t = 1/p = 0.01\,\mathrm{s}$.
2. A non-invasive animal behavior characteristic observation method according to claim 1, characterized in that: the neural network model estimates the feature point coordinates in four cascaded stages, and a loss function is used to improve the estimation accuracy.
3. A non-invasive animal behavior characteristic observation method according to claim 1, characterized in that: the soft-argmax formula is:

$$\operatorname{soft\text{-}argmax}(x) = \sum_i \frac{e^{x_i}}{\sum_j e^{x_j}} \, i,$$

where $x$ is a vector of probability values and $i$ is the position of $x_i$.
4. A non-invasive animal behavior characteristic observation method according to claim 1, characterized in that: the loss function is a piecewise loss function, and the structure of the piecewise loss function smooth L1 is:

$$\operatorname{smooth}_{L1}(x) = \begin{cases} 0.5x^2, & |x| < 1 \\ |x| - 0.5, & \text{otherwise,} \end{cases}$$

where $x$ is the L1 difference between the marked point and the estimated point.
5. A non-invasive animal behavioral characteristic observation method according to claim 1, characterized in that: also comprises transfer learning, and behavior analysis is carried out on another animal by adopting a transfer learning method.
6. A non-invasive animal behavioral characteristic observation method according to claim 1, characterized in that: the method also comprises the steps of calibrating the coordinates of the joint points according to the limb moving images of the animals, and simultaneously establishing an animal limb kinematic model for calculating the angular velocity and the angular acceleration of the joint points.
CN202011055481.8A 2020-09-29 2020-09-29 Non-invasive animal behavior characteristic observation method Active CN112102358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011055481.8A CN112102358B (en) 2020-09-29 2020-09-29 Non-invasive animal behavior characteristic observation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011055481.8A CN112102358B (en) 2020-09-29 2020-09-29 Non-invasive animal behavior characteristic observation method

Publications (2)

Publication Number Publication Date
CN112102358A CN112102358A (en) 2020-12-18
CN112102358B (en) 2023-04-07

Family

ID=73782592

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011055481.8A Active CN112102358B (en) 2020-09-29 2020-09-29 Non-invasive animal behavior characteristic observation method

Country Status (1)

Country Link
CN (1) CN112102358B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989728A (en) * 2021-12-06 2022-01-28 北京航空航天大学 Animal behavior analysis method and device and electronic equipment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107194344A (en) * 2017-05-16 2017-09-22 西安电子科技大学 The Human bodys' response method at adaptive bone center
CN110659675A (en) * 2019-09-05 2020-01-07 南开大学 Welding seam defect detection method based on AdaBoost algorithm
CN111008583A (en) * 2019-11-28 2020-04-14 清华大学 Pedestrian and rider posture estimation method assisted by limb characteristics
CN111626211A (en) * 2020-05-27 2020-09-04 大连成者云软件有限公司 Sitting posture identification method based on monocular video image sequence

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
[Transfer Learning] The TrAdaBoost algorithm; zephyrji96; CSDN, https://blog.csdn.net/qq_36552489/article/details/102612483; 2019-10-17; pp. 1-5 *
DeepLabCut: markerless pose estimation of user-defined body parts with deep learning; Alexander Mathis et al.; Nature Neuroscience; 2018-09-30; pp. 1281-1289 *
Rat Behavior Observation System Based on Transfer Learning; Tianlei Jin et al.; IEEE Access; 2019-05-13 *

Also Published As

Publication number Publication date
CN112102358A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2017133009A1 (en) Method for positioning human joint using depth image of convolutional neural network
WO2020042419A1 (en) Gait-based identity recognition method and apparatus, and electronic device
CN106127125B (en) Distributed DTW Human bodys' response method based on human body behavioural characteristic
CN107516127B (en) Method and system for service robot to autonomously acquire attribution semantics of human-worn carried articles
CN105512621A (en) Kinect-based badminton motion guidance system
CN111160294B (en) Gait recognition method based on graph convolution network
WO2021051526A1 (en) Multi-view 3d human pose estimation method and related apparatus
CN111199207B (en) Two-dimensional multi-human body posture estimation method based on depth residual error neural network
CN110555408A (en) Single-camera real-time three-dimensional human body posture detection method based on self-adaptive mapping relation
CN110555975A (en) Drowning prevention monitoring method and system
CN110633004A (en) Interaction method, device and system based on human body posture estimation
CN117671738B (en) Human body posture recognition system based on artificial intelligence
CN110738650A (en) infectious disease infection identification method, terminal device and storage medium
Ansar et al. Robust hand gesture tracking and recognition for healthcare via Recurent neural network
CN112102358B (en) Non-invasive animal behavior characteristic observation method
CN109544632B (en) Semantic SLAM object association method based on hierarchical topic model
CN109993116B (en) Pedestrian re-identification method based on mutual learning of human bones
JP7488674B2 (en) OBJECT RECOGNITION DEVICE, OBJECT RECOGNITION METHOD, AND OBJECT RECOGNITION PROGRAM
TWI812053B (en) Positioning method, electronic equipment and computer-readable storage medium
CN116543462A (en) Method for identifying and judging dairy cow health condition based on dairy cow behaviors of video bones
CN112099330B (en) Holographic human body reconstruction method based on external camera and wearable display control equipment
JP2022090760A (en) Learning device, creation method of learning data, learning method and learning program
Zhou et al. Visual tracking using improved multiple instance learning with co-training framework for moving robot
CN113963434A (en) Target behavior characteristic detection and motion trail prediction method based on human body
CN111507185A (en) Tumble detection method based on stack cavity convolution network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant