CN112184791A - Yak weight prediction method based on CNN-LSTM neural network - Google Patents


Info

Publication number
CN112184791A
Authority
CN
China
Prior art keywords
yak
point cloud
cloud data
neural network
cnn
Prior art date
Legal status
Pending
Application number
CN202011045785.6A
Other languages
Chinese (zh)
Inventor
彭飞
陈颖
周齐朋
廖勇
Current Assignee
SIMUTECH Inc
Original Assignee
SIMUTECH Inc
Priority date
Application filed by SIMUTECH Inc filed Critical SIMUTECH Inc
Priority to CN202011045785.6A priority Critical patent/CN112184791A/en
Publication of CN112184791A publication Critical patent/CN112184791A/en
Pending legal-status Critical Current

Classifications

    • G06T 7/50 — Image analysis; depth or shape recovery
    • G01B 11/00 — Measuring arrangements characterised by the use of optical techniques
    • G06N 3/044 — Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 — Combinations of networks
    • G06N 3/049 — Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08 — Learning methods
    • G06T 5/77 — Retouching; inpainting; scratch removal
    • G06T 2207/10028 — Range image; depth image; 3D point clouds
    • G06T 2207/10048 — Infrared image


Abstract

The invention discloses a yak weight prediction method based on a CNN-LSTM neural network, which comprises the following steps. S1: acquire side views of a yak with a camera to obtain and store yak point cloud data. S2: preprocess the stored yak point cloud data to obtain point cloud data with the horizontal railing removed. S3: repair the point cloud data without the horizontal railing to obtain restored yak point cloud data. S4: take the repaired yak point cloud data as input and predict the yak's weight with a CNN-LSTM neural network. By combining a neural network model with three-dimensional visualization technology, the method constructs a yak weight prediction model that realizes non-contact measurement of yak weight and facilitates large-scale, standardized yak breeding.

Description

Yak weight prediction method based on CNN-LSTM neural network
Technical Field
The invention belongs to the technical field of livestock breeding, and particularly relates to a yak weight prediction method based on a CNN-LSTM neural network.
Background
Precision livestock farming is one of the important research directions of modern agriculture and an important component of smart agriculture. It refers to a complete set of scientific breeding and management methods that apply animal science and information technology to individual animals at fixed times and in measured quantities, guaranteeing the quality and safety of livestock products while promoting high-benefit, low-cost and sustainable development of animal husbandry.
Body weight is an important component of yak body-type evaluation, and weight measurement plays an important role in breeding, feed rationing, determining treatment dosages and judging a yak's health condition. The traditional approach is direct measurement: a platform weighbridge is used, and the yak stands on the platform to be weighed. The measured value is the most accurate, but for large-scale, standardized breeding this approach consumes a great deal of manpower, material resources, money and time. The yaks must be driven to the weighing platform, and once a yak has a stress reaction, the measurement accuracy is hard to control, the operation becomes complicated, and errors arise. In actual production, weighing yaks becomes increasingly difficult because cattle farms lack weighing equipment or the equipment is far from the cowshed. Alternatively, body weight can be estimated from body-size measurements, mainly by the empirical formula method or the multiple linear regression method. However, measuring body size requires manual work with calipers, tape measures and the like, which is time-consuming and labor-intensive, and the accuracy is also affected by factors such as the measurer's skill and the yak's degree of cooperation. With the rapid development and wide application of artificial intelligence, AI technology provides a new approach to yak weight measurement: through neural network models, three-dimensional visualization and related technologies, non-contact measurement of yak weight parameters can be realized without stress stimulation to the animal. By analyzing and processing three-dimensional yak point cloud data, a neural network model can extract the weight-related features hidden in the data and, by training on those features, construct a yak weight prediction model, realizing non-contact measurement of yak weight and facilitating large-scale, standardized yak breeding.
Disclosure of Invention
The invention aims to solve the problem of yak weight prediction, and provides a yak weight prediction method based on a CNN-LSTM neural network.
The technical scheme of the invention is as follows: a yak weight prediction method based on a CNN-LSTM neural network comprises the following steps:
s1: acquiring a side view of a yak by using a camera to obtain and store yak point cloud data;
s2: preprocessing stored yak point cloud data to obtain point cloud data with horizontal railings removed;
s3: repairing the point cloud data with the horizontal railing removed by utilizing a cubic B-spline curve method to obtain repaired yak point cloud data;
s4: taking the repaired yak point cloud data as input and predicting the weight of the yak with a CNN-LSTM neural network.
The invention has the beneficial effects that:
(1) The yak weight prediction method based on the CNN-LSTM neural network combines a neural network model with three-dimensional visualization technology to construct a yak weight prediction model, realizing non-contact measurement of yak weight and facilitating large-scale, standardized yak breeding.
(2) The yak weight prediction method based on the CNN-LSTM neural network avoids the heavy workload and lack of objectivity of traditional manual yak weight measurement and prevents problems such as stress responses in the yaks.
Further, step S1 includes the following sub-steps:
s11: installing a camera at a position 2.15m away from the axis of the horizontal railing and 0.9m away from the ground;
s12: collecting a side view of a yak by using a camera and storing the side view in a xef file form;
s13: for each yak, 5 frames of depth images containing complete yak bodies in the xef file are selected;
s14: extracting depth data in each frame of depth image and storing the depth data in a txt file;
s15: reading the depth data in the txt file, converting the depth data into yak point cloud data using the m_pCoordinateMapper->MapDepthFrameToCameraSpace() function, and storing the yak point cloud data into a text file line by line.
The beneficial effects of the further scheme are as follows: in the invention, the camera is mounted on a bracket on one side of the railing channel between the yak exercise yard and the milking parlor, so that multi-frame whole-body point clouds of a yak can be captured effectively while the influence of the environmental point cloud is reduced; the field of view should therefore cover the railing channel. The camera is placed with its center 2.15 m from the axis of the nearest railing (diameter 0.05 m), at a height of 0.9 m above the ground. Because infrared light in sunlight causes missing regions and heavy noise in the acquired point cloud, acquisition is performed between 5 and 7 o'clock, around sunset, to reduce interference from sunlight.
Further, in step S15, the depth data is converted into yak point cloud data as follows: using the m_pCoordinateMapper->MapDepthFrameToCameraSpace() function, the depth data is converted into three-dimensional point cloud data in an o-xyz coordinate system whose origin o is the center of the infrared camera's photosensitive element, with the x axis pointing horizontally left, the y axis vertically upward, and the z axis along the shooting direction, yielding the yak point cloud data.
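Where the Kinect SDK's MapDepthFrameToCameraSpace() call is unavailable, the same depth-to-point-cloud conversion can be sketched with a generic pinhole camera model. The intrinsics fx, fy, cx, cy below are placeholders, not the patent's calibration values, and the patent's o-xyz sign conventions (x left, y up) are noted in a comment rather than enforced:

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) to camera-space 3-D points.

    Stand-in for m_pCoordinateMapper->MapDepthFrameToCameraSpace(): the
    pinhole model maps pixel (u, v) at depth z to ((u-cx)z/fx, (v-cy)z/fy, z).
    Note the patent's o-xyz frame has x pointing left and y pointing up,
    which would flip the signs of x and y relative to image coordinates.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid zero-depth pixels

# Each point row can then be written out line by line, as in step S15:
# np.savetxt("yak_cloud.txt", pts)
```

The resulting N × 3 array matches the line-by-line text format described in step S15.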
Further, step S2 includes the following sub-steps:
s21: visually measuring the direction vector of the horizontal railing;
s22: taking the direction vector of the horizontal railing as the given axis, extracting the central straight-line parameters of the horizontal railing using the straight-line template matching method, with the distance threshold set to 0.025 m and the angle threshold set to 30°, to obtain the straight-line equation l0;
s23: taking the straight line l0 as the axis, constructing a series of surrounding cylinders, whose expressions along the positive and negative directions of the z axis are respectively:
[k(y − y0) − n(z − z0 − 0.02i)]² + [m(z − z0 − 0.02i) − k(x − x0)]² + [n(x − x0) − m(y − y0)]² = r²(m² + n² + k²)

[k(y − y0) − n(z − z0 + 0.02j)]² + [m(z − z0 + 0.02j) − k(x − x0)]² + [n(x − x0) − m(y − y0)]² = r²(m² + n² + k²)

where r denotes the cylinder radius;
wherein m, n and k denote the three components of the line direction vector (m, n, k); x0, y0 and z0 denote the coordinates of a point P0(x0, y0, z0) on the line; i denotes the number of search steps of size 0.02 taken in the positive z direction; j denotes the number of search steps of size 0.02 taken in the negative z direction; and (x, y, z) denotes any point on the cylinder;
s24: and removing the points contained in the series of surrounding cylinders to obtain the point cloud data of the removed horizontal railing.
The beneficial effects of the further scheme are as follows: in the invention, the initial three-dimensional point cloud obtained by the camera is large and contains, besides the yak, the background point cloud of the railing. To extract yak body-size parameter data, the yak must be separated from the background, and unnecessary background point cloud data must be removed to reduce the data volume. The horizontal railing is straight and approximately horizontal, so its approximate direction vector is first measured by visualization. Because there are noise points in the railing point cloud, the angle threshold of the straight-line template is set to 30°. The extracted straight-line equation l0 only approximates the axis of the horizontal railing and by itself cannot remove the railing point cloud, so a cylinder is constructed around this line as its axis to contain the railing points. To remove as much of the railing point cloud as possible, l0 is taken as the dividing line and a series of surrounding cylinders is constructed along the positive and negative z directions; removing the points contained in these cylinders yields the point cloud data with the horizontal railing removed.
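As a minimal sketch of the cylinder-removal idea — assuming a single infinite cylinder rather than the patent's series of z-offset cylinders — points within the railing radius (0.025 m) of the fitted axis l0 can be discarded with a point-to-line distance test:

```python
import numpy as np

def remove_rail_points(points, p0, d, radius=0.025):
    """Drop points inside a cylinder of the given radius around the line
    through p0 with direction d (the fitted railing axis l0).

    The patent sweeps a series of cylinders offset in steps of 0.02 along
    the z axis; this sketch collapses them into one cylinder for clarity.
    0.025 m matches the railing radius (0.05 m diameter).
    """
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)
    rel = np.asarray(points, dtype=float) - np.asarray(p0, dtype=float)
    along = rel @ d                       # projection onto the axis
    perp = rel - np.outer(along, d)       # perpendicular component
    dist = np.linalg.norm(perp, axis=1)   # point-to-line distance
    return points[dist > radius]
```

Points on or near the axis are removed; everything farther than the radius, including the yak body, is kept.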
Further, step S3 includes the following sub-steps:
s31: fitting the point cloud data without the horizontal railing by utilizing a cubic B-spline curve method to obtain a cubic B-spline curve;
s32: and fitting and taking points for the cubic B-spline curve to obtain restored yak point cloud data.
The beneficial effects of the further scheme are as follows: in the invention, the preprocessed point cloud data has a region with larger loss, which is shielded by the horizontal railing, and has important influence on the subsequent extraction of the body size parameters. Therefore, the missing area needs to be repaired by a graph obtained by calculating point cloud data of the removed horizontal railing, and the yak point cloud slice projection is fitted by using a cubic B-spline curve.
Further, in step S31, the point cloud data without the horizontal railing is fitted by the cubic B-spline method as follows: calculate the k-th cubic B-spline curve segment Pk,3(t) by the formula:
Pk,3(t) = Σ (i = 0 to 3) Pi+k · Gi,3(t),  t ∈ [0, 1],  k = 0, 1, …, m
wherein Pi+k denotes the vertices constituting the curve segment, Gi,3(t) denotes a basis function, m denotes the number of cubic B-spline curve segments, i indexes the basis functions of the curve segment, and t denotes the given parameter; the basis functions Gi,3(t) are:
G0,3(t) = (1/6)(1 − t)³
G1,3(t) = (1/6)(3t³ − 6t² + 4)
G2,3(t) = (1/6)(−3t³ + 3t² + 3t + 1)
G3,3(t) = (1/6)t³,  t ∈ [0, 1]
so that the k-th cubic B-spline curve segment Pk,3(t) can be written in matrix form as:

Pk,3(t) = (1/6) · [1  t  t²  t³] · |  1   4   1  0 |
                                   | −3   0   3  0 |
                                   |  3  −6   3  0 |
                                   | −1   3  −3  1 | · [Pk  Pk+1  Pk+2  Pk+3]ᵀ
further, in step S32, the method for fitting the cubic B-spline curve to take points includes: and uniformly taking values of the given parameter t, completing fitting and point taking, and obtaining restored yak point cloud data.
The beneficial effects of the further scheme are as follows: in the invention, when the curve fitted by the cubic B-spline method is used for fitting and point taking, the fitting effect depends on the value of the parameter t. In order to make the fitting point array uniform, given parameters t can be uniformly valued, the repair of the area with large loss of the yak point cloud is realized, and a repair map of the yak point cloud data is obtained.
Further, S4 includes the following sub-steps:
s41: selecting samples from the repaired yak point cloud data set as a training set and a testing set of the model respectively;
s42: training the training set data as the input of a CNN-LSTM neural network to obtain a weight prediction model;
s43: taking the test set data as input to the weight prediction model, which outputs the final yak weight prediction result.
The beneficial effects of the further scheme are as follows: in the present invention, the CNN-LSTM neural network is composed of a CNN and an LSTM. The CNN has 6 convolutional layers, each with 3 × 3 kernels and stride 1; the LSTM consists of 3 LSTM units.
Drawings
Fig. 1 is a flow chart of a yak weight prediction method;
FIG. 2 is a schematic diagram of a linear template matching method;
FIG. 3 is a result graph of the point cloud data before removal of the horizontal railing;
FIG. 4 is a graph of a horizontal railing point cloud data removal result;
FIG. 5 is a block diagram of cubic B-spline method repair.
Detailed Description
The embodiments of the present invention will be further described with reference to the accompanying drawings.
As shown in figure 1, the invention provides a yak weight prediction method based on a CNN-LSTM neural network, which comprises the following steps:
s1: acquiring a side view of a yak by using a camera to obtain and store yak point cloud data;
s2: preprocessing stored yak point cloud data to obtain point cloud data with horizontal railings removed;
s3: repairing the point cloud data with the horizontal railing removed by utilizing a cubic B-spline curve method to obtain repaired yak point cloud data;
s4: taking the repaired yak point cloud data as input and predicting the weight of the yak with a CNN-LSTM neural network.
In the embodiment of the present invention, as shown in fig. 1, step S1 includes the following sub-steps:
s11: installing a camera at a position 2.15m away from the axis of the horizontal railing and 0.9m away from the ground;
s12: collecting a side view of a yak by using a camera and storing the side view in a xef file form;
s13: for each yak, 5 frames of depth images containing complete yak bodies in the xef file are selected;
s14: extracting depth data in each frame of depth image and storing the depth data in a txt file;
s15: reading the depth data in the txt file, converting the depth data into yak point cloud data using the m_pCoordinateMapper->MapDepthFrameToCameraSpace() function, and storing the yak point cloud data into a text file line by line.
In the invention, the camera is mounted on a bracket on one side of the railing channel between the yak exercise yard and the milking parlor, so that multi-frame whole-body point clouds of a yak can be captured effectively while the influence of the environmental point cloud is reduced; the field of view should therefore cover the railing channel. The camera is placed with its center 2.15 m from the axis of the nearest railing (diameter 0.05 m), at a height of 0.9 m above the ground. Because infrared light in sunlight causes missing regions and heavy noise in the acquired point cloud, acquisition is performed between 5 and 7 o'clock, around sunset, to reduce interference from sunlight.
In the embodiment of the present invention, as shown in fig. 1, in step S15, the depth data is converted into yak point cloud data as follows: using the m_pCoordinateMapper->MapDepthFrameToCameraSpace() function, the depth data is converted into three-dimensional point cloud data in an o-xyz coordinate system whose origin o is the center of the infrared camera's photosensitive element, with the x axis pointing horizontally left, the y axis vertically upward, and the z axis along the shooting direction, yielding the yak point cloud data.
Fig. 3 shows the point cloud data before the horizontal railing is removed.
In the embodiment of the present invention, as shown in fig. 2, step S2 includes the following sub-steps:
s21: visually measuring the direction vector of the horizontal railing;
s22: taking the direction vector of the horizontal railing as the given axis, extracting the central straight-line parameters of the horizontal railing using the straight-line template matching method, with the distance threshold set to 0.025 m and the angle threshold set to 30°, to obtain the straight-line equation l0;
s23: taking the straight line l0 as the axis, constructing a series of surrounding cylinders, whose expressions along the positive and negative directions of the z axis are respectively:
[k(y − y0) − n(z − z0 − 0.02i)]² + [m(z − z0 − 0.02i) − k(x − x0)]² + [n(x − x0) − m(y − y0)]² = r²(m² + n² + k²)

[k(y − y0) − n(z − z0 + 0.02j)]² + [m(z − z0 + 0.02j) − k(x − x0)]² + [n(x − x0) − m(y − y0)]² = r²(m² + n² + k²)

where r denotes the cylinder radius;
wherein m, n and k denote the three components of the line direction vector (m, n, k); x0, y0 and z0 denote the coordinates of a point P0(x0, y0, z0) on the line; i denotes the number of search steps of size 0.02 taken in the positive z direction; j denotes the number of search steps of size 0.02 taken in the negative z direction; and (x, y, z) denotes any point on the cylinder;
s24: and removing the points contained in the series of surrounding cylinders to obtain the point cloud data of the removed horizontal railing.
In the invention, the initial three-dimensional point cloud obtained by the camera is large and contains, besides the yak, the background point cloud of the railing. To extract yak body-size parameter data, the yak must be separated from the background, and unnecessary background point cloud data must be removed to reduce the data volume. The horizontal railing is straight and approximately horizontal, so its approximate direction vector is first measured by visualization. Because there are noise points in the railing point cloud, the angle threshold of the straight-line template is set to 30°. The extracted straight-line equation l0 only approximates the axis of the horizontal railing and by itself cannot remove the railing point cloud, so a cylinder is constructed around this line as its axis to contain the railing points. To remove as much of the railing point cloud as possible, l0 is taken as the dividing line and a series of surrounding cylinders is constructed along the positive and negative z directions; removing the points contained in these cylinders yields the point cloud data with the horizontal railing removed.
Fig. 4 shows the point cloud data after the horizontal railing is removed.
In the embodiment of the present invention, as shown in fig. 1, step S3 includes the following sub-steps:
s31: fitting the point cloud data without the horizontal railing by utilizing a cubic B-spline curve method to obtain a cubic B-spline curve;
s32: and fitting and taking points for the cubic B-spline curve to obtain restored yak point cloud data.
In the invention, the preprocessed point cloud data has a region with larger loss, which is shielded by the horizontal railing, and has important influence on the subsequent extraction of the body size parameters. Therefore, the missing area needs to be repaired by a graph obtained by calculating point cloud data of the removed horizontal railing, and the yak point cloud slice projection is fitted by using a cubic B-spline curve.
Fig. 5 shows the repair result obtained by the cubic B-spline method.
In the embodiment of the present invention, as shown in fig. 1, in step S31, the point cloud data without the horizontal railing is fitted by the cubic B-spline method as follows: calculate the k-th cubic B-spline curve segment Pk,3(t) by the formula:
Pk,3(t) = Σ (i = 0 to 3) Pi+k · Gi,3(t),  t ∈ [0, 1],  k = 0, 1, …, m
wherein Pi+k denotes the vertices constituting the curve segment, Gi,3(t) denotes a basis function, m denotes the number of cubic B-spline curve segments, i indexes the basis functions of the curve segment, and t denotes the given parameter; the basis functions Gi,3(t) are:
G0,3(t) = (1/6)(1 − t)³
G1,3(t) = (1/6)(3t³ − 6t² + 4)
G2,3(t) = (1/6)(−3t³ + 3t² + 3t + 1)
G3,3(t) = (1/6)t³,  t ∈ [0, 1]
so that the k-th cubic B-spline curve segment Pk,3(t) can be written in matrix form as:

Pk,3(t) = (1/6) · [1  t  t²  t³] · |  1   4   1  0 |
                                   | −3   0   3  0 |
                                   |  3  −6   3  0 |
                                   | −1   3  −3  1 | · [Pk  Pk+1  Pk+2  Pk+3]ᵀ
in the embodiment of the present invention, as shown in fig. 1, in step S32, the method for fitting the cubic B-spline curve to take points includes: and uniformly taking values of the given parameter t, completing fitting and point taking, and obtaining restored yak point cloud data.
In the invention, when the curve fitted by the cubic B-spline method is used for fitting and point taking, the fitting effect depends on the value of the parameter t. In order to make the fitting point array uniform, given parameters t can be uniformly valued, the repair of the area with large loss of the yak point cloud is realized, and a repair map of the yak point cloud data is obtained.
In the embodiment of the present invention, as shown in fig. 1, step S4 includes the following sub-steps:
s41: selecting samples from the repaired yak point cloud data set as a training set and a testing set of the model respectively;
s42: training the training set data as the input of a CNN-LSTM neural network to obtain a weight prediction model;
s43: taking the test set data as input to the weight prediction model, which outputs the final yak weight prediction result.
The CNN-LSTM neural network is composed of a CNN and an LSTM. The CNN has 6 convolutional layers, each with 3 × 3 kernels and stride 1; the LSTM consists of 3 LSTM units.
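A forward-pass sketch of this architecture in plain NumPy with random, untrained weights: the 6 convolutional layers with 3 × 3 kernels and stride 1 and the LSTM over the 5-frame sequence follow the text, while the filter count (4), hidden size (8), global average pooling and the single unrolled LSTM cell (the patent specifies 3 LSTM units) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Valid 3x3 convolution, stride 1, with ReLU. x: (C,H,W), w: (F,C,3,3)."""
    _, H, W = x.shape
    F = w.shape[0]
    out = np.zeros((F, H - 2, W - 2))
    for f in range(F):
        for i in range(H - 2):
            for j in range(W - 2):
                out[f, i, j] = np.sum(x[:, i:i + 3, j:j + 3] * w[f])
    return np.maximum(out, 0.0)

def lstm_step(x, h, c, Wx, Wh, b):
    """One LSTM step; gate order i, f, g, o."""
    z = Wx @ x + Wh @ h + b
    H = h.size
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    i, f = sig(z[:H]), sig(z[H:2 * H])
    g, o = np.tanh(z[2 * H:3 * H]), sig(z[3 * H:])
    c = f * c + i * g
    return np.tanh(c) * o, c

def predict_weight(frames, channels=4, hidden=8):
    """frames: (5, H, W) rasters of one yak's repaired point cloud.
    CNN features per frame -> LSTM over the 5-frame sequence -> scalar weight."""
    convs = [rng.normal(0, 0.1, (channels, 1 if layer == 0 else channels, 3, 3))
             for layer in range(6)]                  # 6 conv layers
    feats = []
    for fr in frames:
        x = fr[None]                                 # (1, H, W)
        for w in convs:
            x = conv2d(x, w)
        feats.append(x.mean(axis=(1, 2)))            # global average pool
    h, c = np.zeros(hidden), np.zeros(hidden)
    Wx = rng.normal(0, 0.1, (4 * hidden, channels))
    Wh = rng.normal(0, 0.1, (4 * hidden, hidden))
    b = np.zeros(4 * hidden)
    for f in feats:
        h, c = lstm_step(f, h, c, Wx, Wh, b)
    return float(rng.normal(0, 0.1, hidden) @ h)     # linear head -> weight
```

With 16 × 16 input frames, six valid 3 × 3 convolutions reduce each frame to 4 × 4 before pooling; in practice the weights would of course be learned, not sampled.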
In the embodiment of the invention: (1) 750 groups of samples are selected from 1000 groups of yak weight data samples as the training set for establishing the prediction model, and the remaining 250 groups serve as the test set. (2) The extracted yak point cloud data are arranged into a matrix as input, with the weight value as output, and used to train the CNN-LSTM model that directly predicts yak weight, yielding the weight prediction model. (3) The test set data are fed into the trained weight prediction model to obtain the final yak weight prediction results. On the test set, the mean absolute error is 5.4% and the mean square error is 9.3%, an accuracy that meets the basic requirements of weight prediction.
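The patent does not spell out its error formulas; assuming the reported 5.4% is a mean absolute percentage error, the test-set evaluation can be sketched as:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error — one plausible reading of the
    patent's 'average absolute error' reported as a percentage."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs(y_pred - y_true) / y_true)
```

For example, a prediction of 94.6 kg against a true weight of 100 kg gives a 5.4% error.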
The working principle and process of the invention are as follows. First, side views of a yak are acquired with the image acquisition equipment and the data are stored. Then, to extract yak body-size parameter data, the yak must be separated from the background point cloud and unnecessary background data removed to reduce the data volume, so the railing point cloud data is removed by preprocessing. The preprocessed yak point cloud contains a large missing region occluded by the horizontal railing; this region is repaired by fitting the projections of yak point cloud slices with cubic B-spline curves. Finally, the repaired yak point cloud data is fed as input to the CNN-LSTM neural network to predict the yak's weight and obtain the final prediction result.
The invention has the beneficial effects that:
(1) The yak weight prediction method based on the CNN-LSTM neural network combines a neural network model with three-dimensional visualization technology to construct a yak weight prediction model, realizing non-contact measurement of yak weight and facilitating large-scale, standardized yak breeding.
(2) The yak weight prediction method based on the CNN-LSTM neural network avoids the heavy workload and lack of objectivity of traditional manual yak weight measurement and prevents problems such as stress responses in the yaks.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention and are to be construed as being without limitation to such specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (8)

1. A yak weight prediction method based on a CNN-LSTM neural network is characterized by comprising the following steps:
S1: acquiring a side view of a yak by using a camera to obtain and store yak point cloud data;
S2: preprocessing the stored yak point cloud data to obtain point cloud data with the horizontal railing removed;
S3: repairing the point cloud data with the horizontal railing removed by a cubic B-spline curve method to obtain repaired yak point cloud data;
S4: taking the repaired yak point cloud data as input, and predicting the weight of the yak by using a CNN-LSTM neural network.
2. The method for predicting the weight of a yak based on the CNN-LSTM neural network as claimed in claim 1, wherein the step S1 comprises the following sub-steps:
S11: installing a camera 2.15 m from the axis of the horizontal railing and 0.9 m above the ground;
S12: collecting a side view of the yak with the camera and storing it as an .xef file;
S13: for each yak, selecting from the .xef file 5 frames of depth images that contain the complete yak body;
S14: extracting the depth data in each frame of depth image and storing them in a txt file;
S15: reading the depth data in the txt file, converting them into yak point cloud data by using the m_pCoordinateMapper->MapDepthFrameToCameraSpace() function, and storing the yak point cloud data line by line in a text file.
3. The method for predicting the weight of a yak based on the CNN-LSTM neural network as claimed in claim 2, wherein the method for converting the depth data into the yak point cloud data in step S15 is as follows: using the m_pCoordinateMapper->MapDepthFrameToCameraSpace() function, the depth data are converted into three-dimensional point cloud data in an o-xyz coordinate system whose origin o is the center of the photosensitive element of the infrared camera, with the x axis horizontal to the left, the y axis vertically upward, and the z axis along the shooting direction, thereby obtaining the yak point cloud data.
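The conversion in claims 2 and 3 relies on the Kinect SDK's MapDepthFrameToCameraSpace(). As an illustration only, the same back-projection can be sketched with a standard pinhole camera model; the intrinsics fx, fy, cx, cy below are hypothetical placeholders, not values from the patent, and the axis signs may need flipping to match the patent's o-xyz convention (x left, y up):

```python
import numpy as np

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into a camera-space point
    cloud: origin o at the sensor, z along the viewing direction.
    Pinhole stand-in for MapDepthFrameToCameraSpace(); the intrinsics
    fx, fy, cx, cy are assumed, not taken from the patent."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx          # may need a sign flip to match the
    y = (v - cy) * z / fy          # patent's x-left / y-up convention
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]      # drop invalid zero-depth pixels
```

Each row of the result is one (x, y, z) point, matching the line-by-line storage of step S15.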
4. The method for predicting the weight of a yak based on the CNN-LSTM neural network as claimed in claim 1, wherein the step S2 comprises the following sub-steps:
S21: visually measuring the direction vector of the horizontal railing;
S22: taking the direction vector of the horizontal railing as the given axis, extracting the center straight-line parameters of the horizontal railing by a straight-line template matching method, with the distance threshold set to 0.025 m and the angle threshold set to 30 degrees, to obtain the straight-line equation l0;
S23: taking the straight line l0 as the axis, constructing a series of surrounding cylinders, whose expressions along the positive and negative z directions are respectively:
(n(z − z0) − k(y − y0))² + (k(x − x0) − m(z − z0))² + (m(y − y0) − n(x − x0))² ≤ 0.025² · (m² + n² + k²),  z0 ≤ z ≤ z0 + 0.02i
(n(z − z0) − k(y − y0))² + (k(x − x0) − m(z − z0))² + (m(y − y0) − n(x − x0))² ≤ 0.025² · (m² + n² + k²),  z0 − 0.02j ≤ z ≤ z0
wherein m, n and k denote the x-, y- and z-components of the direction vector (m, n, k) of the straight line; x0, y0 and z0 denote the coordinates of a point P0(x0, y0, z0) on the straight line; i denotes the number of search steps, of step size 0.02, along the positive z direction; j denotes the number of search steps, of step size 0.02, along the negative z direction; and (x, y, z) denotes any point on the cylinder;
S24: removing the points contained in the series of surrounding cylinders to obtain the point cloud data with the horizontal railing removed.
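A minimal sketch of the removal in S21–S24, assuming the claim's 0.025 m distance threshold; for brevity it tests the point-to-line distance directly instead of building the stepped surrounding cylinders, which deletes the same union of points along the whole rail:

```python
import numpy as np

def remove_rail_points(points, p0, direction, radius=0.025):
    """Drop every point whose distance to the rail centre line (through
    p0, along `direction`) is within `radius` metres. Equivalent to
    deleting the union of the surrounding cylinders of claim 4, without
    the 0.02 stepping along z."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    rel = np.asarray(points, dtype=float) - np.asarray(p0, dtype=float)
    # distance from each point to the line is |rel x d| for unit d
    dist = np.linalg.norm(np.cross(rel, d), axis=1)
    return points[dist > radius]
```

The stepped cylinders of the claim bound the search region along z; a production version would add that z-band test before deleting points.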
5. The method for predicting the weight of a yak based on the CNN-LSTM neural network as claimed in claim 1, wherein the step S3 comprises the following sub-steps:
S31: fitting the point cloud data with the horizontal railing removed by a cubic B-spline curve method to obtain a cubic B-spline curve;
S32: fitting and taking points on the cubic B-spline curve to obtain the repaired yak point cloud data.
6. The method for predicting the weight of the yak based on the CNN-LSTM neural network as claimed in claim 5, wherein in step S31, the method for fitting the point cloud data with the horizontal railing removed by the cubic B-spline method comprises: calculating the k-th cubic B-spline curve segment Pk,3(t), the calculation formula being:
Pk,3(t) = Σ (i = 0 to 3) Pi+k · Gi,3(t),  t ∈ [0, 1],  k = 0, 1, …, m − 1
wherein Pi+k denotes the vertices constituting the curve segment, Gi,3(t) denotes the basis functions, m denotes the number of cubic B-spline curve segments, i denotes the index of the basis function of the cubic B-spline curve segment, and t denotes the given parameter; the expressions of the basis functions Gi,3(t) are:
G0,3(t) = (1/6)(1 − t)³
G1,3(t) = (1/6)(3t³ − 6t² + 4)
G2,3(t) = (1/6)(−3t³ + 3t² + 3t + 1)
G3,3(t) = (1/6)t³,  t ∈ [0, 1]
the k-th cubic B-spline curve segment Pk,3(t) can accordingly be expressed in matrix form as:
Pk,3(t) = (1/6) · [t³, t², t, 1] · [[−1, 3, −3, 1], [3, −6, 3, 0], [−3, 0, 3, 0], [1, 4, 1, 0]] · [Pk, Pk+1, Pk+2, Pk+3]ᵀ,  t ∈ [0, 1]
7. The method for predicting the weight of a yak based on the CNN-LSTM neural network as claimed in claim 5, wherein the method for fitting and taking points on the cubic B-spline curve in step S32 comprises: uniformly sampling values of the given parameter t to complete the fitting and point taking, obtaining the repaired yak point cloud data.
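The segment formula of claim 6 and the fit-and-take-points step of claim 7 can be illustrated with the standard uniform cubic B-spline matrix form; the control points and the number of samples per segment below are arbitrary illustration values:

```python
import numpy as np

# Basis matrix of the uniform cubic B-spline (matrix form of claim 6).
M = (1.0 / 6.0) * np.array([[-1.0,  3.0, -3.0, 1.0],
                            [ 3.0, -6.0,  3.0, 0.0],
                            [-3.0,  0.0,  3.0, 0.0],
                            [ 1.0,  4.0,  1.0, 0.0]])

def bspline_segment(ctrl4, t):
    """Evaluate Pk,3(t) on one segment from its 4 control vertices
    (Pk .. Pk+3), for a given parameter t in [0, 1]."""
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ np.asarray(ctrl4, dtype=float)

def sample_curve(ctrl, samples_per_seg=10):
    """'Fitting and taking points' of claim 7: evaluate each segment at
    uniformly spaced values of t; ctrl has shape (n, 3) with n >= 4."""
    ctrl = np.asarray(ctrl, dtype=float)
    ts = np.linspace(0.0, 1.0, samples_per_seg, endpoint=False)
    return np.array([bspline_segment(ctrl[k:k + 4], t)
                     for k in range(len(ctrl) - 3) for t in ts])
```

Because the basis functions sum to 1 for every t, a segment whose four control vertices coincide evaluates to exactly that point, which is a quick sanity check on the matrix.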
8. The method for predicting the weight of a yak based on the CNN-LSTM neural network as claimed in claim 1, wherein the step S4 comprises the following sub-steps:
S41: selecting samples from the repaired yak point cloud data set as the training set and the test set of the model, respectively;
S42: training with the training set data as the input of the CNN-LSTM neural network to obtain a weight prediction model;
S43: performing yak weight prediction with the test set data as the input of the weight prediction model, the weight prediction model outputting the final yak weight prediction result.
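Claim 8 feeds the repaired point clouds to a CNN-LSTM for weight regression. The patent does not disclose the architecture, so the following numpy sketch is an illustrative forward pass only: per-frame feature vectors (a hypothetical featurisation of the 5 point-cloud frames of claim 2) pass through a 1-D convolution, a single LSTM layer, and a linear head; all layer sizes and weights are assumptions, and the weights are random, so the output is not a real weight estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d(x, w, b):
    """Valid 1-D convolution: x (T, Cin), w (K, Cin, Cout) -> (T-K+1, Cout)."""
    K = w.shape[0]
    return np.stack([np.einsum('kc,kco->o', x[t:t + K], w) + b
                     for t in range(x.shape[0] - K + 1)])

def lstm_last(x, Wx, Wh, b, H):
    """Single-layer LSTM over x (T, C); returns the final hidden state."""
    h, c = np.zeros(H), np.zeros(H)
    for xt in x:
        z = xt @ Wx + h @ Wh + b                 # gate pre-activations, (4H,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

def predict_weight(frames, H=8, K=3, Cout=4):
    """frames: (T, C) per-frame features, e.g. descriptors extracted
    from the repaired point-cloud frames; untrained illustration."""
    T, C = frames.shape
    w1 = rng.normal(0.0, 0.1, (K, C, Cout))      # CNN stage
    b1 = np.zeros(Cout)
    Wx = rng.normal(0.0, 0.1, (Cout, 4 * H))     # LSTM stage
    Wh = rng.normal(0.0, 0.1, (H, 4 * H))
    b = np.zeros(4 * H)
    wo = rng.normal(0.0, 0.1, H)                 # regression head
    h = lstm_last(np.tanh(conv1d(frames, w1, b1)), Wx, Wh, b, H)
    return float(h @ wo)                         # predicted weight (scalar)
```

In practice steps S41–S43 would train these weights by gradient descent against measured yak weights before the model is used for prediction.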
CN202011045785.6A 2020-09-29 2020-09-29 Yak weight prediction method based on CNN-LSTM neural network Pending CN112184791A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011045785.6A CN112184791A (en) 2020-09-29 2020-09-29 Yak weight prediction method based on CNN-LSTM neural network

Publications (1)

Publication Number Publication Date
CN112184791A true CN112184791A (en) 2021-01-05

Family

ID=73946821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011045785.6A Pending CN112184791A (en) 2020-09-29 2020-09-29 Yak weight prediction method based on CNN-LSTM neural network

Country Status (1)

Country Link
CN (1) CN112184791A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107180438A (en) * 2017-04-26 2017-09-19 清华大学 Estimate yak body chi, the method for body weight and corresponding portable computer device
CN108647261A (en) * 2018-04-27 2018-10-12 中国人民解放军91977部队 Global isoplethes drawing method based on meteorological data discrete point gridding processing
CN110823138A (en) * 2019-10-24 2020-02-21 安徽磐彩装饰工程有限公司 Insulation board detection method and mechanism

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
He Dongjian et al.: "Method for repairing missing regions of cow point clouds based on improved cubic B-spline curves", Transactions of the Chinese Society for Agricultural Machinery *
Ye Minlü: "Research on extracting deformation information of historic buildings based on point clouds", Geomatics & Spatial Information Technology
Zhang Mingkai; Liang Jin; Liu Liejin; Liang Yu; Wang Xiaoguang: "Denoising method for human-body scan point clouds from the SR300 somatosensory sensor", Journal of Central South University (Science and Technology)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112990014A (en) * 2021-03-15 2021-06-18 深圳喜为智慧科技有限公司 Pig weight estimation method, system, device and storage medium
CN113344001A (en) * 2021-07-02 2021-09-03 河南牧原智能科技有限公司 Organism weight estimation method, device, equipment and storage medium
CN114972165A (en) * 2022-03-24 2022-08-30 中山大学孙逸仙纪念医院 Method and device for measuring time-average shearing force
CN114972165B (en) * 2022-03-24 2024-03-15 中山大学孙逸仙纪念医院 Method and device for measuring time average shearing force
CN116090094A (en) * 2022-12-27 2023-05-09 武汉理工大学 Hull thermal model building method, device and equipment based on infrared thermal imaging
CN116090094B (en) * 2022-12-27 2024-06-04 武汉理工大学 Hull thermal model building method, device and equipment based on infrared thermal imaging

Similar Documents

Publication Publication Date Title
CN107180438B (en) Method for estimating size and weight of yak and corresponding portable computer device
CN112184791A (en) Yak weight prediction method based on CNN-LSTM neural network
CN105784083B (en) Dairy cow's conformation measuring method and system based on stereovision technique
CN109141248B (en) Pig weight measuring and calculating method and system based on image
CN109785337B (en) In-column mammal counting method based on example segmentation algorithm
CN108961330B (en) Pig body length measuring and calculating method and system based on image
CN104344877B (en) The field enumeration Weighing method of a kind of chicken and system
CN110530477B (en) Replacement gilt weight estimation method
CN108961269A (en) Pig weight measuring method and system based on image
CN109632059A (en) A kind of intelligence method for culturing pigs, system, electronic equipment and storage medium
CN111141653B (en) Tunnel leakage rate prediction method based on neural network
CN113096178A (en) Pig weight estimation method, device, equipment and storage medium
CN115294185B (en) Pig weight estimation method and related equipment
CN116052211A (en) Knowledge distillation-based YOLOv5s lightweight sheep variety identification method and system
CN115512215A (en) Underwater biological monitoring method and device and storage medium
CN113706512A (en) Live pig weight measurement method based on deep learning and depth camera
CN112907546A (en) Beef body ruler non-contact measuring device and method
CN112508890A (en) Dairy cow body fat rate detection method based on secondary evaluation model
CN104517236A (en) Automatic animal shape phenotype measuring system
CN110554406B (en) Method for inverting secondary forest structure parameters based on unmanned aerial vehicle stereo photogrammetry point cloud
CN111507432A (en) Intelligent weighing method and system for agricultural insurance claims, electronic equipment and storage medium
CN116740704B (en) Wheat leaf phenotype parameter change rate monitoring method and device based on deep learning
CN115294181B (en) Cow body type assessment index measurement method based on two-stage key point positioning
CN113628182B (en) Automatic fish weight estimation method and device, electronic equipment and storage medium
CN114403023B (en) Pig feeding method, device and system based on terahertz fat thickness measurement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210105