CN107330403B - Yak counting method based on video data - Google Patents

Yak counting method based on video data

Info

Publication number
CN107330403B
CN107330403B (application CN201710524645.9A)
Authority
CN
China
Prior art keywords
image
yak
region
yaks
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710524645.9A
Other languages
Chinese (zh)
Other versions
CN107330403A (en)
Inventor
赵洪文
罗晓林
安添午
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Academy of Grassland Science
Original Assignee
Sichuan Academy of Grassland Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Academy of Grassland Science filed Critical Sichuan Academy of Grassland Science
Priority to CN201710524645.9A priority Critical patent/CN107330403B/en
Publication of CN107330403A publication Critical patent/CN107330403A/en
Application granted granted Critical
Publication of CN107330403B publication Critical patent/CN107330403B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the technical field of information automation and discloses a yak counting method based on video data, which comprises the following steps: acquiring high-resolution image data of the yaks as they enter and exit the colony house; applying mean filtering to the image and building an image pyramid; using the pyramid top-layer image data as the data source, graying the image and performing histogram equalization; convolving the image with the Sobel gradient operator; labeling the connected components of the filtered image and extracting suspected target regions; extracting the features of each region and establishing a feature space; predicting online, from the shape index, the standard deviation of the region gray values, and the region gray mean, with a Bayes classifier based on the minimum-classification-error-rate criterion, whether each region is a yak region; and compensating the result with a region growing algorithm, which effectively improves the counting performance of the algorithm. The method effectively solves the problem of counting yaks and reduces the time and labor costs required.

Description

Yak counting method based on video data
Technical Field
The invention belongs to the technical field of information automation, and particularly relates to a yak counting method based on video data.
Background
Yak breeding is currently at a development stage within China's animal husbandry, and yak-related technology and software strongly influence the economic level of the industry, especially for the Tibetan pastoral areas centered on the Qinghai-Tibet Plateau. For a long time, yak breeding in China has relied mainly on extensive, traditional grazing. Yaks are regarded by Tibetan herdsmen as valuable material wealth and as the foundation of their livelihood, so each household keeps a large herd over long periods: smaller herds number two to three hundred head, while larger ones run into the thousands. Knowing the exact head count of the herd is therefore very important for herdsmen. For a long time, many herdsmen have counted their animals by driving the herd into a pen or a narrow laneway and surrounding it with several people, and even then only an approximate count can be obtained. Moreover, yaks retain characteristics of the original wild species: they have a certain wildness and a certain alertness toward humans, and under normal circumstances they are easily startled by the presence of strangers and scatter, which further increases the difficulty of counting them.
Among conventional beef-cattle counting techniques, for example, patent document CN105160394A discloses a method for counting cattle by reading the signal states of a pair of front infrared correlation photoelectric sensors S1, a pair of rear infrared correlation photoelectric sensors S2, and a pair of bottom infrared correlation photoelectric sensors S3 arranged directly below or behind the rear sensors.
However, that technology is limited to animals with mild temperaments such as beef cattle. Yaks have a certain wildness, so fitting sensor equipment on them is difficult and dangerous; yaks are also grazed over long periods, and for counting to work a signal receiver must be installed at a fixed point, which imposes limitations. Furthermore, when a large number of yaks crowd together they can interfere with the signal equipment, so the approach is not suitable for yak herders on the plateau.
There is therefore a need to count the herd conveniently, quickly, and accurately in settings such as barn management and herd management, so that yak herds can be grouped according to indexes such as growth stage, breeding stage, and milk production performance, circulation among groups can be managed in the future, and a scientific basis can be provided for fine management such as accurate herd counting.
In summary, the problems of the prior art are as follows: existing methods are not suited to the biological characteristics of yaks, their counting results are affected by the signal sensors they depend on, they waste time and labor, and their results are inaccurate.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a yak counting method based on video data.
The invention is realized in such a way that a yak counting method based on video data comprises the following steps:
step one, acquiring high-resolution image data of the yaks when they enter and exit a colony house;
step two, performing mean filtering on the image and building an image pyramid;
step three, using the pyramid top-layer image data as the data source, graying the image, and performing histogram equalization;
step four, performing convolution filtering on the image with the Sobel gradient operator, and binarizing the image;
step five, filtering the image with a morphological operation using a 7x7 template, labeling the connected components of the filtered image, and extracting suspected target regions;
step six, extracting the features of each region and establishing a feature space; meanwhile, using an off-line learning method, training a Bayes classifier based on the minimum-error-rate criterion on yak samples divided into positive and negative samples;
step seven, predicting online, from the shape index, the standard deviation of the region gray values, and the region gray mean, with the Bayes classifier based on the minimum-classification-error-rate criterion, whether a region is a yak region;
step eight, if it is a yak region, performing region growing segmentation based on color features, connecting the fractured yak regions, counting the target regions, outputting the current number of yaks, and returning to step one; if not, returning to step one and restarting;
and step nine, compensating the image with a region growing algorithm in the later stage of identification, which effectively improves the counting performance of the algorithm.
Further, the probability density function used to model the yak and non-yak features, adopting the normal-distribution probability density of a multidimensional random variable, is:
P(X \mid \omega_i) = \frac{1}{(2\pi)^{n/2}\, |S_i|^{1/2}} \exp\!\left( -\frac{1}{2} (X - \mu_i)^{T} S_i^{-1} (X - \mu_i) \right)
In the formula, ω_i denotes the yak feature class or the non-yak feature class, with i = 0 for the yak class and i = 1 for the non-yak class; P(X|ω_i) denotes the conditional probability, i.e., the probability density of the feature vector X occurring under class ω_i. X denotes a feature vector in the feature space, μ_i the mean vector of class i, and S_i the covariance matrix of class i.
The logarithmic form of the discriminant function is defined as:
g_i(X) = -\frac{1}{2} (X - \mu_i)^{T} S_i^{-1} (X - \mu_i) - \frac{n}{2} \ln 2\pi - \frac{1}{2} \ln |S_i| + \ln P(\omega_i)
further, selecting a region cut as an algorithm for image segmentation, and aggregating pixels into a larger area according to a predefined criterion; starting from a group of growing points, combining the adjacent pixels with similar properties to the growing points with the growing points to form new growing points, and repeating the process until the growing points cannot grow.
Further, in step two, a weighted mean filter is adopted in the process of down-sampling the image:
f_{n+1}(x, y) = \frac{1}{4} \sum_{i=0}^{1} \sum_{j=0}^{1} f_n(2x + i,\, 2y + j)
where n denotes the n-th layer of the image pyramid and f_n(x, y) denotes the pixel value at position (x, y) of the n-th pyramid level. If the image is a color image, each channel is down-sampled separately.
Further, the third step converts the color image into a gray image according to the following formula:
f_G(x, y) = 0.3*f_r(x, y) + 0.59*f_g(x, y) + 0.11*f_b(x, y);
where f_G(x, y) denotes the gray value of the converted grayscale image at row x, column y, and f_r(x, y), f_g(x, y), f_b(x, y) denote the pixel values of the red, green, and blue channels of the color image at (x, y), respectively.
The transformation formula for histogram equalization is shown below:
N(l) = \mathrm{round}\!\left( (L - 1) \cdot \frac{\sum_{k=0}^{l} O(k)}{\sum_{k=0}^{L-1} O(k)} \right)
where O(l) denotes the number of pixels at gray level l of the histogram before the transformation, N(l) denotes the gray level to which level l is mapped by the equalization, and L denotes the number of gray levels.
Further, the formula of the Sobel operator in the fourth step is as follows:
M_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad M_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad G(x, y) = \sqrt{(M_x * f)^2(x, y) + (M_y * f)^2(x, y)}
where M_x denotes the gradient template in the vertical direction, M_y the gradient template in the horizontal direction, G(x, y) the gradient magnitude obtained from the two filtered responses, f the grayscale image, and * convolution.
After the original image has been convolved with the Sobel templates, the image region where the yaks are located is selected by binarization, with the mathematical formula:
F(x, y) = \begin{cases} 1, & G(x, y) \le T \\ 0, & G(x, y) > T \end{cases}
where F(x, y) denotes the state of the binary image at (x, y), G(x, y) the gradient magnitude at (x, y), and T the binarization threshold.
Further, the seventh step adopts a criterion based on the minimum error rate, and the mathematical expression is as follows:
X \in \omega_0 \ (\text{yak}) \quad \text{if} \quad P(X \mid \omega_0)\, P(\omega_0) > P(X \mid \omega_1)\, P(\omega_1); \quad \text{otherwise } X \in \omega_1 \ (\text{non-yak})
the invention has the advantages and positive effects that: by utilizing the method, the problem of yak number statistics can be effectively solved after the image video data of the yak group is collected, and compared with the traditional method that after the herdsman is blocked by a fence or manpower, the yak belongs to the original wild species through manual number statistics, a large amount of manpower resources in a pasturing area are saved; install infrared signal sensor for every ox than the tradition utilization, practiced thrift a large amount of equipment cost, because the yak quantity is great in the herdsman in the Tibetan region, and purchase infrared signal sensor and not liked by the common people. A convenient processing method is provided for the masses of the Tibetan and herdsmen in the Tibetan region, and the required time cost and labor cost are greatly reduced.
Drawings
Fig. 1 is a flowchart of a yak counting method based on video data according to an embodiment of the present invention.
Fig. 2 is a flowchart of an implementation of a yak counting method based on video data according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of data distribution provided by the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the yak counting method based on video data provided by the embodiment of the present invention includes the following steps:
s101: acquiring high-resolution image data of the yaks as they enter and exit the colony house;
s102: performing mean filtering on the image and building an image pyramid;
s103: utilizing pyramid top layer image data as a data source, graying an image, and performing histogram equalization operation;
s104: carrying out convolution filtering on the image by using a sobel gradient operator, and carrying out binarization processing on the image;
s105: performing morphological operation filtering on the image by adopting a 7x7 template, performing connected domain marking on the filtered image, and extracting a suspected target area;
s106: extracting the features of each region and establishing a feature space (shape index, standard deviation of the region gray values, and region gray mean); meanwhile, using an off-line learning method, training a Bayes classifier based on the minimum-error-rate criterion on yak samples divided into positive and negative samples;
s107: predicting online, from the shape index, the standard deviation of the region gray values, and the region gray mean, with the Bayes classifier based on the minimum-classification-error-rate criterion, whether a region is a yak region;
s108: if it is a yak region, performing region growing segmentation based on color features, connecting the fractured yak regions, counting the target regions, outputting the current number of yaks, and returning to S101; if not, returning to S101 and restarting;
s109: compensating the image with a region growing algorithm in the later stage of identification, which effectively improves the counting performance of the algorithm.
The invention specifically comprises the following steps:
Step one, acquiring high-resolution image data of the yaks when they enter and exit the colony house;
in order to accurately count the number of yaks in a long-distance large scene and balance the complexity of the whole algorithm, a gun camera (Hai Kan Wei Shi) with the resolution of 400 ten thousand pixels, the focal length of 4mm and the depth of field of 30 meters is selected. The hard disk video recorder can be selected to record the shot video for later searching or analysis according to specific requirements, the video is not recorded in the algorithm, the data is transmitted back to the analysis host and then analyzed in real time, and the counting result of the number of the cattle herds is given. The analysis host is configured to: intel core i52.3GHz, 500G hard disk, windows 7 system.
Step two, performing mean filtering on the image and building an image pyramid;
since the time complexity of the algorithm is affected by too high image resolution, a low-resolution rough positioning and high-resolution precise identification strategy is adopted. In the process of down-sampling the image, if a simple method of counting rows of odd number of the image is adopted, the sawtooth effect can occur, so an average weighted average filter is adopted:
f_{n+1}(x, y) = \frac{1}{4} \sum_{i=0}^{1} \sum_{j=0}^{1} f_n(2x + i,\, 2y + j)
where n denotes the n-th layer of the image pyramid and f_n(x, y) denotes the pixel value at position (x, y) of the n-th pyramid level. The benefits of this processing are that (1) it eliminates aliasing during down-sampling and (2) it smooths the image and removes some single-point noise. If the image is a color image, each channel is down-sampled separately.
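As a concrete illustration, the following NumPy sketch builds such a pyramid, interpreting the weighted mean filter as a plain 2x2 block average (the exact window is not specified here, so that choice is an assumption); color images are handled per channel as described:

```python
import numpy as np

def pyr_down_mean(img):
    """Halve the resolution by averaging non-overlapping 2x2 blocks.
    Works for grayscale (H, W) and color (H, W, C) arrays alike."""
    h, w = img.shape[:2]
    h, w = h - h % 2, w - w % 2                    # crop to an even size
    x = img[:h, :w].astype(np.float32)
    return (x[0::2, 0::2] + x[1::2, 0::2] +
            x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

def build_pyramid(img, levels=4):
    """Level 0 is the input image; each further level halves the previous one."""
    pyramid = [np.asarray(img, dtype=np.float32)]
    for _ in range(levels - 1):
        pyramid.append(pyr_down_mean(pyramid[-1]))
    return pyramid
```

OpenCV's cv2.pyrDown would give a comparable result with a Gaussian-weighted kernel instead of the plain average.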
Step three, using the pyramid top-layer image data as the data source, graying the image, and performing histogram equalization;
if color image data is used as a data source for yak counting, the counting result will inevitably be disturbed by brightness variations. Therefore, the conversion formula for converting the color image into the gray image is as follows:
f_G(x, y) = 0.3*f_r(x, y) + 0.59*f_g(x, y) + 0.11*f_b(x, y)
where f_G(x, y) denotes the gray value of the converted grayscale image at row x, column y, and f_r(x, y), f_g(x, y), f_b(x, y) denote the pixel values of the red, green, and blue channels of the color image at (x, y), respectively.
To further enhance the image contrast, histogram equalization is adopted, which increases the contrast between image contents. Histogram equalization is a non-linear operation whose purpose is image enhancement suitable for human visual analysis; after this operator is applied, the histogram of the image becomes flat and all brightness levels appear with approximately equal probability. The transformation formula is as follows:
N(l) = \mathrm{round}\!\left( (L - 1) \cdot \frac{\sum_{k=0}^{l} O(k)}{\sum_{k=0}^{L-1} O(k)} \right)
where O(l) denotes the number of pixels at gray level l of the histogram before the transformation, N(l) denotes the gray level to which level l is mapped by the equalization, and L denotes the number of gray levels.
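A minimal NumPy sketch of the graying formula and of histogram equalization as described above (OpenCV's cvtColor and equalizeHist would serve equally well); the 8-bit RGB input and the 256 gray levels are assumptions:

```python
import numpy as np

def to_gray(rgb):
    """f_G = 0.3*R + 0.59*G + 0.11*B, following the graying formula above."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.clip(0.3 * r + 0.59 * g + 0.11 * b, 0, 255).astype(np.uint8)

def equalize_hist(gray):
    """Map each gray level through the normalized cumulative histogram (256 levels)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                  # normalize to [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)    # level -> equalized level
    return lut[gray]
```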
Step four, performing convolution filtering on the image with the Sobel gradient operator, and binarizing the image;
a large number of yak videos are observed, and yaks are found to be areas with uniform gray values in the images, so that the suspected target areas in the images can be preliminarily screened out by using a gradient detection operator. The mathematical description of the Sobel operator is as follows:
M_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad M_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}, \qquad G(x, y) = \sqrt{(M_x * f)^2(x, y) + (M_y * f)^2(x, y)}
where M_x denotes the gradient template in the vertical direction, M_y the gradient template in the horizontal direction, G(x, y) the gradient magnitude obtained from the two filtered responses, f the grayscale image, and * convolution.
After the original image has been convolved with the Sobel templates, the resulting gradient image responds weakly in regions of uniform brightness, so binarization is adopted to select the image region where the yaks are located. The mathematical description is:
F(x, y) = \begin{cases} 1, & G(x, y) \le T \\ 0, & G(x, y) > T \end{cases}
where F(x, y) denotes the state of the binary image at (x, y), G(x, y) the gradient magnitude at (x, y), and T the binarization threshold.
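A short sketch of the Sobel filtering and binarization step, assuming SciPy for the convolution; the threshold value used below is illustrative only, not a value given in the patent:

```python
import numpy as np
from scipy.ndimage import convolve

# Standard 3x3 Sobel templates
MX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
MY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float32)

def sobel_binary(gray, threshold=40.0):
    """Gradient magnitude G(x, y) via Sobel, then keep low-gradient (uniform)
    pixels as candidate yak pixels; `threshold` plays the role of T."""
    g = gray.astype(np.float32)
    gx = convolve(g, MX)
    gy = convolve(g, MY)
    grad = np.hypot(gx, gy)                         # G(x, y)
    return (grad <= threshold).astype(np.uint8)     # 1 = candidate yak pixel
```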
Step five, filtering the image with a morphological operation using a 7x7 template, labeling the connected components of the filtered image, and extracting suspected target regions;
isolated noise is inevitably generated in the binarization process, so that the filtering processing is carried out on the image by adopting morphological open operation, and a complete yak region can be extracted after the processing; and marking a target area in the image by adopting a method of scanning the image twice to form a suspected target chain.
Step six, extracting the features of each region and establishing the feature space; meanwhile, by an off-line learning method, a Bayes classifier based on the minimum-error-rate criterion is applied, trained with manually labeled yak samples (divided into positive and negative samples);
the shape index, the area gray standard deviation and the gray mean value of the yak area in the image are used for establishing a feature space, and due to the fact that the shape index data, the standard deviation and the gray mean value are not in the same order of magnitude, the gray and the standard deviation are subjected to normalization processing in the training process. The data distribution is shown in fig. 3:
the positive samples and the negative samples are well classified by using an artificial marking method, the bag classification training actually estimates the parameters of the prior probability density functions of the two classes by using a statistical method, and the characteristics of the yak region and the non-yak region are assumed to accord with positive-over distribution. The expression for the probability density function is as follows:
P(X \mid \omega_i) = \frac{1}{(2\pi)^{n/2}\, |S_i|^{1/2}} \exp\!\left( -\frac{1}{2} (X - \mu_i)^{T} S_i^{-1} (X - \mu_i) \right)
In the formula, ω_i denotes the yak feature class or the non-yak feature class; i = 0 denotes the yak class and i = 1 the non-yak class. P(X|ω_i) denotes the conditional probability, i.e., the probability density of the feature vector X occurring under class ω_i. X denotes a feature vector in the feature space, μ_i the mean vector of class i, and S_i the covariance matrix of class i.
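Under this normality assumption, the offline training step amounts to estimating the mean vector μ_i, the covariance matrix S_i, and the prior P(ω_i) of each class from the labeled feature vectors; a minimal sketch (function and key names are illustrative):

```python
import numpy as np

def fit_gaussian_classes(X, y):
    """Estimate mean vector, covariance matrix S_i and prior P(w_i) per class.
    X: (N, d) feature matrix; y: (N,) labels, 0 = yak, 1 = non-yak."""
    params = {}
    for cls in (0, 1):
        Xc = X[y == cls]
        params[cls] = {
            "mean": Xc.mean(axis=0),
            "cov": np.cov(Xc, rowvar=False),        # S_i
            "prior": len(Xc) / len(X),              # P(w_i)
        }
    return params
```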
Step seven, predicting online, from the shape index, the standard deviation of the region gray values, and the region gray mean, with the Bayes classifier based on the minimum-classification-error-rate criterion, whether a region is a yak region;
After the Bayes classifier has been trained offline, video is streamed back to the video analysis host through the camera and video capture card and serves as the data source of the yak counting algorithm for counting the yaks in real time. The yak feature extraction algorithm extracts the features of each suspected yak region (shape index, region gray mean, and standard deviation of the region gray values) in real time and passes them to the Bayes classifier, which decides whether the current suspected target is a yak. To improve the accuracy of the algorithm, a criterion based on the minimum error rate is adopted; its mathematical expression is as follows:
X \in \omega_0 \ (\text{yak}) \quad \text{if} \quad P(X \mid \omega_0)\, P(\omega_0) > P(X \mid \omega_1)\, P(\omega_1); \quad \text{otherwise } X \in \omega_1 \ (\text{non-yak})
step eight, if the yak region is the yak region, further selecting a region cut as an algorithm for image segmentation, and aggregating pixels into a larger region according to a predefined criterion; starting from a group of growing points, combining the adjacent pixels with similar properties to the growing points with the growing points to form new growing points, and repeating the process until the growing points cannot grow. Growing and dividing the areas based on the color characteristics, connecting the broken yak areas, counting the target areas, outputting the current number of yaks, and returning to the first step; if not, returning to the step one and restarting.
The application of the principles of the present invention is described further below.
1 interface information description
After analyzing the characteristics of yaks in video images collected in the field at a large number of cattle farms, the problem is formulated as a supervised machine learning problem. Since "yak" is a subjective concept, labeling samples is the process of transferring human subjective probability to the machine; samples are therefore selected and labeled manually as positive and negative samples. A yak is a region-level concept, so pixel-level features are not chosen, and because grazing always takes place under good illumination, the region color can be chosen as a feature. The Bayes classifier is one of the most widely studied classifiers: its statistical theoretical basis is well developed, it is easy to implement, its training is fast, and its performance is stable, so it is chosen for the design of the yak counting algorithm. Thresholding and morphological opening of the gradient image during yak feature extraction inevitably introduce fractures of the target regions caused by noise, so a region growing algorithm is used to compensate the target regions in the later stage of identification, which effectively improves the counting performance of the algorithm.
1.1 importance of feature selection
Determining a proper feature space, i.e., one in which the samples of the different classes are distributed in mutually separated regions, provides a good basis for a successful classifier design. The flow is shown in Fig. 2.
1.2 Bayes decision criterion based on minimum classification error Rate
Suppose there are M pattern classes and their statistical distributions in the n-dimensional feature space are known, i.e., the prior probabilities P(ω_i), i = 1, 2, 3, …, M, and the class-conditional probability density functions P(X|ω_i) are known. For a sample to be classified, the Bayes formula gives the probability that the sample belongs to each class, i.e., the posterior probability; the class to which the pattern X most likely belongs is estimated, X is assigned to that class, and the posterior probability serves as the criterion for deciding the class of the pattern to be identified. Bayes formula (1):
P(\omega_i \mid X) = \frac{P(X \mid \omega_i)\, P(\omega_i)}{\sum_{j=1}^{M} P(X \mid \omega_j)\, P(\omega_j)}, \quad i = 1, 2, \ldots, M \qquad (1)
in practical engineering problems, statistical data tends to exhibit characteristics that are too distributed. Assuming that the characteristics of both yak and non-yak regions obey normal distribution, the problem of model training becomes how to estimate the parameters that are distributed too much using labeled samples. The probability density function (2) for simulating the characteristics of yaks and non-yaks by adopting the normal distribution probability density of the multidimensional random variable is shown as follows:
P(X \mid \omega_i) = \frac{1}{(2\pi)^{n/2}\, |S_i|^{1/2}} \exp\!\left( -\frac{1}{2} (X - \mu_i)^{T} S_i^{-1} (X - \mu_i) \right) \qquad (2)
the yak identification problem is converted into a two-classification problem, and omega in the formula (2)iRepresents yak characteristics or non-yak characteristics, in the experiment, i is 0 for yak and i is 1 for non-yak. P (X | ω)i) What is represented is the conditional probability, i.e. the probability density of the occurrence of the feature vector X under class conditions. X represents an eigenvector in the eigenspace, and represents a covariance matrix of class i. The logarithmic form of the discriminant function is defined as shown in (3):
g_i(X) = -\frac{1}{2} (X - \mu_i)^{T} S_i^{-1} (X - \mu_i) - \frac{n}{2} \ln 2\pi - \frac{1}{2} \ln |S_i| + \ln P(\omega_i) \qquad (3)
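A minimal sketch of this decision rule: each candidate feature vector is assigned to the class with the larger log discriminant g_i(X), using the class parameters estimated during training (the parameter dictionary layout is an assumption carried over from the training sketch above):

```python
import numpy as np

def log_discriminant(x, mean, cov, prior):
    """g_i(X) = -1/2 (X-mu_i)^T S_i^{-1} (X-mu_i) - 1/2 ln|S_i| + ln P(w_i);
    the term -n/2 ln(2*pi), common to both classes, is dropped."""
    d = x - mean
    return (-0.5 * d @ np.linalg.solve(cov, d)
            - 0.5 * np.log(np.linalg.det(cov))
            + np.log(prior))

def classify(x, params):
    """Minimum-error-rate Bayes decision: return the class with the largest g_i(X)."""
    scores = {cls: log_discriminant(x, p["mean"], p["cov"], p["prior"])
              for cls, p in params.items()}
    return max(scores, key=scores.get)              # 0 = yak, 1 = non-yak
```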
1.3 region growing-based image segmentation
Region growing is selected as the image segmentation algorithm. Region growing is a process of aggregating pixels into larger regions according to a predefined criterion. The idea is to start from a group of seed points (in the invention, the yak region identified by the Bayes classifier), merge neighboring pixels with properties similar to the seeds into the region to form new seeds, and repeat the process until the region can grow no further. The similarity criterion used is the color information between pixels.
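A minimal region-growing sketch consistent with this description: growth starts from seed pixels (e.g. those of a region accepted by the classifier) and a neighbor joins the region when its color distance to the running region mean is below a tolerance; the tolerance value and 4-connectivity are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, tol=15.0):
    """Grow a region from `seeds` (list of (row, col) pixels); a 4-connected
    neighbor joins when its distance to the running region mean is below tol."""
    h, w = image.shape[:2]
    grown = np.zeros((h, w), dtype=bool)
    for y, x in seeds:
        grown[y, x] = True
    queue = deque(seeds)
    region_sum = np.sum([np.atleast_1d(image[y, x]) for y, x in seeds],
                        axis=0).astype(np.float64)
    count = len(seeds)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                diff = np.atleast_1d(image[ny, nx]) - region_sum / count
                if np.linalg.norm(diff) < tol:      # similarity criterion
                    grown[ny, nx] = True
                    region_sum += np.atleast_1d(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return grown                                    # boolean mask of the grown region
```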
1.4 software installation and use
The software can process two kinds of data sources: (1) real-time video stream data and (2) static video files in various formats. The running environment is a Windows operating system or a Linux system; a Mac system is not supported at present. The following description takes the Windows 7 system as an example.
The effect of the present invention will be described in detail with reference to the test.
1. The invention has passed tests including data and database integrity testing, stress testing, integration testing, functional testing, user interface testing, performance evaluation, and load testing, reaching a high level of quality.
The algorithm proposed in the invention is implemented with Qt and the MSVC compiler on 64-bit Windows 7. The hardware environment is: Intel(R) Core(TM) i5-4200U CPU @ 1.6 GHz (2.3 GHz), 4.00 GB memory. In the experiment, 22 positive samples and 31 negative samples were selected; the sample data are shown in Table 1, of which only 10 rows are shown.
TABLE 1 training sample data
2. Real-time performance testing
The resolution of the test image is 50 megapixels (8688 × 5792). The image pyramid is built by down-sampling with the mean filter; the fourth-level image has a resolution of 0.8 megapixels (1086 × 724), and the algorithm's processing time is 38 ms per frame, which reaches the frame rate of real-time video sampling. Therefore, the algorithm proposed in the invention can be burned into embedded equipment and used as stock-taking equipment on a cattle farm to give the herd analysis result in real time.
3. Recognition performance testing
53 sample images are randomly extracted to form a test sample library, and classification performance test is performed on the algorithm provided by the invention.
TABLE 2 Bayes yak identification algorithm performance analysis
On the test image set, the classification results of 80 targets were recorded as shown in Table 2. The correct recognition rate of the algorithm is (TP + TN)/(P + N) = 76/80 = 95%, the recognition error rate is 5%, the sensitivity is TP/P = 94.35%, the specificity is TN/N = 95.56%, and the precision is TP/(TP + FP) = 94.29%. From these data it can be seen that the algorithm of the invention achieves 95% recognition accuracy on the test library.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (2)

1. A yak counting method based on video data is characterized by comprising the following steps:
step one, acquiring high-resolution image data of the yaks when they enter and exit a colony house;
step two, performing mean filtering on the image and building an image pyramid; in the process of down-sampling the image, a weighted mean filter is adopted:
f_{n+1}(x, y) = \frac{1}{4} \sum_{i=0}^{1} \sum_{j=0}^{1} f_n(2x + i,\, 2y + j)
where n denotes the n-th layer of the image pyramid and f_n(x, y) denotes the pixel value at position (x, y) of the n-th pyramid level;
step three, using the pyramid top-layer image data as the data source, graying the image, and performing histogram equalization;
the conversion formula for converting a color image into a grayscale image is as follows:
f_G(x, y) = 0.3*f_r(x, y) + 0.59*f_g(x, y) + 0.11*f_b(x, y);
where f_G(x, y) denotes the gray value of the converted grayscale image at row x, column y; f_r(x, y), f_g(x, y), f_b(x, y) denote the pixel values of the red, green, and blue channels of the color image at (x, y), respectively;
the histogram equalization transform formula is shown below:
N(l) = \mathrm{round}\!\left( (L - 1) \cdot \frac{\sum_{k=0}^{l} O(k)}{\sum_{k=0}^{L-1} O(k)} \right)
wherein O(l) denotes the number of pixels at gray level l of the histogram before the transformation, N(l) denotes the gray level to which level l is mapped by the equalization, and L denotes the number of gray levels;
step four, performing convolution filtering on the image with the Sobel gradient operator, and binarizing the image;
the formula of the Sobel gradient operator is as follows:
M_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}
M_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
wherein Mx represents a gradient template in the vertical direction, and My represents a gradient template in the horizontal direction;
after convolution operation is carried out on an original image through a Sobel template, an image area where a yak is located is selected through binarization, and the mathematical formula is as follows:
F(x, y) = \begin{cases} 1, & G(x, y) \le T \\ 0, & G(x, y) > T \end{cases}
wherein F (x, y) represents the state of the binary image at x, y; g (x, y) represents a gradient response value of the input image at x, y, and T represents a threshold value for binarizing the gradient image;
step five, performing morphological operation filtering on the image by adopting a 7x7 template, performing connected domain marking on the filtered image, and extracting a suspected target area;
step six, extracting the features of each region and establishing a feature space; meanwhile, using an off-line learning method, a Bayes classifier based on the minimum-error-rate criterion is trained on yak samples divided into positive and negative samples;
in the course of the Bayes classification training, the parameters of the probability density functions of the two classes are estimated with statistical methods; assuming that the features of the yak regions and non-yak regions obey a normal distribution, the probability density function modeling the yak and non-yak features with the normal-distribution probability density of a multidimensional random variable is:
P(X \mid \omega_i) = \frac{1}{(2\pi)^{n/2}\, |S_i|^{1/2}} \exp\!\left( -\frac{1}{2} (X - \mu_i)^{T} S_i^{-1} (X - \mu_i) \right)
in the formula, ω_i denotes the yak feature class or the non-yak feature class, with i = 0 for the yak class and i = 1 for the non-yak class; P(X|ω_i) denotes the conditional probability, i.e., the probability density of the feature vector X occurring under class ω_i; X denotes a feature vector in the feature space, μ_i the mean vector of class i, and S_i the covariance matrix of class i;
step seven, predicting online, from the shape index, the standard deviation of the region gray values, and the region gray mean, with the Bayes classifier based on the minimum-classification-error-rate criterion, whether a region is a yak region;
the seventh step adopts a criterion based on the minimum error rate, and the mathematical expression is as follows:
X \in \omega_0 \ (\text{yak}) \quad \text{if} \quad P(X \mid \omega_0)\, P(\omega_0) > P(X \mid \omega_1)\, P(\omega_1); \quad \text{otherwise } X \in \omega_1 \ (\text{non-yak})
step eight, if it is a yak region, performing region growing segmentation based on color features, connecting the fractured yak regions, counting the target regions, outputting the current number of yaks, and returning to step one; if not, returning to step one and restarting;
and step nine, compensating the image by using a region growing algorithm at the later stage of identification, and improving the counting performance of the algorithm.
2. The yak counting method based on video data according to claim 1, wherein region growing is selected as the algorithm for image segmentation, a process of aggregating pixels into larger regions according to a predefined criterion; starting from a group of seed points, neighboring pixels with properties similar to the seeds are merged with them to form new seeds, and the process is repeated until the region can grow no further.
CN201710524645.9A 2017-06-30 2017-06-30 Yak counting method based on video data Active CN107330403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710524645.9A CN107330403B (en) 2017-06-30 2017-06-30 Yak counting method based on video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710524645.9A CN107330403B (en) 2017-06-30 2017-06-30 Yak counting method based on video data

Publications (2)

Publication Number Publication Date
CN107330403A CN107330403A (en) 2017-11-07
CN107330403B true CN107330403B (en) 2020-11-17

Family

ID=60198735

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710524645.9A Active CN107330403B (en) 2017-06-30 2017-06-30 Yak counting method based on video data

Country Status (1)

Country Link
CN (1) CN107330403B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826581B (en) * 2018-08-10 2023-11-07 京东科技控股股份有限公司 Animal number identification method, device, medium and electronic equipment
CN109670398A (en) * 2018-11-07 2019-04-23 北京农信互联科技集团有限公司 Pig image analysis method and pig image analysis equipment
CN109658414A (en) * 2018-12-13 2019-04-19 北京小龙潜行科技有限公司 A kind of intelligent checking method and device of pig
CN110414385B (en) * 2019-07-12 2021-06-25 淮阴工学院 Lane line detection method and system based on homography transformation and characteristic window
CN111461117A (en) * 2020-03-30 2020-07-28 西藏自治区农牧科学院畜牧兽医研究所 Yak calf growth environment monitoring system and method
CN112528962B (en) * 2021-01-01 2021-07-20 生态环境部卫星环境应用中心 Pasturing area cattle and horse group monitoring method based on high-resolution satellite remote sensing image
CN114219767B (en) * 2021-11-24 2022-08-19 慧之安信息技术股份有限公司 Sheep flock counting management method based on Internet of things edge box

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057785A1 (en) * 2000-02-01 2001-08-09 Chromavision Medical Systems, Inc. Method and apparatus for automated image analysis of biological specimens
CN102324016A (en) * 2011-05-27 2012-01-18 郝红卫 Statistical method for high-density crowd flow
CN104897556A (en) * 2015-05-29 2015-09-09 河北工业大学 Milk somatic cell counting device and method based on intelligent terminal and micro-fluidic chip
CN105303568A (en) * 2015-10-15 2016-02-03 陕西科技大学 Method for counting somatic cells of milk based on image processing
CN106023231A (en) * 2016-06-07 2016-10-12 首都师范大学 Method for automatically detecting cattle and sheep in high resolution image
EP3271054A1 (en) * 2015-03-19 2018-01-24 Arctic Nutrition AS Methods for obtaining phospholipids and compositions thereof

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001057785A1 (en) * 2000-02-01 2001-08-09 Chromavision Medical Systems, Inc. Method and apparatus for automated image analysis of biological specimens
CN102324016A (en) * 2011-05-27 2012-01-18 郝红卫 Statistical method for high-density crowd flow
EP3271054A1 (en) * 2015-03-19 2018-01-24 Arctic Nutrition AS Methods for obtaining phospholipids and compositions thereof
CN104897556A (en) * 2015-05-29 2015-09-09 河北工业大学 Milk somatic cell counting device and method based on intelligent terminal and micro-fluidic chip
CN105303568A (en) * 2015-10-15 2016-02-03 陕西科技大学 Method for counting somatic cells of milk based on image processing
CN106023231A (en) * 2016-06-07 2016-10-12 首都师范大学 Method for automatically detecting cattle and sheep in high resolution image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Research on Multi-scale Methods in Object-Oriented Image Analysis"; Huang Zhijian; China Doctoral Dissertations Full-text Database, Information Science and Technology; 2016-12-31; full text *

Also Published As

Publication number Publication date
CN107330403A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN107330403B (en) Yak counting method based on video data
CN106778902B (en) Dairy cow individual identification method based on deep convolutional neural network
CN107292298B (en) Ox face recognition method based on convolutional neural networks and sorter model
CN111178197B (en) Mass R-CNN and Soft-NMS fusion based group-fed adherent pig example segmentation method
CN109325431B (en) Method and device for detecting vegetation coverage in feeding path of grassland grazing sheep
CN109410184B (en) Live broadcast pornographic image detection method based on dense confrontation network semi-supervised learning
Lainez et al. Automated fingerlings counting using convolutional neural network
Masood et al. Plants disease segmentation using image processing
Pinto et al. Crop disease classification using texture analysis
Liang et al. Low-cost weed identification system using drones
Hasan et al. Fish diseases detection using convolutional neural network (CNN)
CN107273815A (en) A kind of individual behavior recognition methods and system
Sabri et al. Nutrient deficiency detection in maize (Zea mays L.) leaves using image processing
CN114581948A (en) Animal face identification method
CN112883915A (en) Automatic wheat ear identification method and system based on transfer learning
Li et al. Y-BGD: Broiler counting based on multi-object tracking
Grbovic et al. Wheat ear detection in RGB and thermal images using deep neural networks
Wang et al. Pig face recognition model based on a cascaded network
Pauzi et al. A review on image processing for fish disease detection
CN111160422B (en) Analysis method for detecting attack behaviors of group-raised pigs by adopting convolutional neural network and long-term and short-term memory
Miranda et al. Pest identification using image processing techniques in detecting image pattern through neural network
CN107341456B (en) Weather sunny and cloudy classification method based on single outdoor color image
Saxena et al. Disease detection in plant leaves using deep learning models: AlexNet and GoogLeNet
Thavamani et al. GLCM and K-means based chicken gender classification
CN112800968A (en) Method for identifying identity of pig in drinking area based on feature histogram fusion of HOG blocks

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant