CN113177564B - Computer vision pig key point identification method - Google Patents

Computer vision pig key point identification method

Info

Publication number
CN113177564B
CN113177564B (application CN202110531027.3A)
Authority
CN
China
Prior art keywords
pig
target
picture
pigs
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110531027.3A
Other languages
Chinese (zh)
Other versions
CN113177564A (en)
Inventor
张玉良
李攀鹏
黄煜
尤园
刘兴宇
黄晓晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Henan Muyuan Intelligent Technology Co Ltd
Original Assignee
Henan Muyuan Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Henan Muyuan Intelligent Technology Co Ltd filed Critical Henan Muyuan Intelligent Technology Co Ltd
Priority to CN202110531027.3A priority Critical patent/CN113177564B/en
Publication of CN113177564A publication Critical patent/CN113177564A/en
Application granted granted Critical
Publication of CN113177564B publication Critical patent/CN113177564B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462Salient features, e.g. scale invariant feature transforms [SIFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a computer vision method for identifying key points on pigs, belonging to the technical field of machine vision. The method comprises the following steps: acquiring image information from collected pig image data and removing abnormal pictures to obtain target images; processing the target images with an open-source object detection model to detect the bounding-box coordinates of all target pigs within the field of view in a dense scene; cropping each detected pig into a single-pig picture; and running a deep neural network on each single-pig picture to obtain feature maps, from which the coordinates of the pig's key points are derived. The method works in dense pig pens, automatically detects the key points of pig body parts, and, through accurate identification of key-point states, provides technical support for developing subsequent methods that identify specific kinds of sick pigs.

Description

Computer vision pig key point identification method
Technical Field
The invention relates to the technical field of machine vision, in particular to a computer vision method for identifying key points on pigs.
Background
In machine vision research on pigs, key points of the characteristic areas on the pig body generally need to be extracted, and the different key points can, to a certain extent, reflect the pig's health condition.
Computer vision offers a way to identify sick pigs automatically, without a person watching video. The most basic and critical step shared by existing computer vision methods for identifying sick pigs is identifying the key points of the pig's body parts, which greatly reduces the technical difficulty of later sick-pig identification (for example, of body trauma or ear biting). Technologies currently on the market, and those in the research literature, only monitor limited, simple environments, such as a single pig per pen or pens with at most 4 pigs, and cannot handle the complex scenes actually faced (pens generally hold more than 5 pigs). Moreover, the key points they identify are few, generally the head, ears, nose, legs and tail, whose characteristic areas are large and not refined, so the key points of the pig's body parts cannot be accurately identified.
A computer vision pig key point recognition algorithm for dense scenes can automatically detect the body-part key points of pigs in dense pens, and can reduce the technical difficulty of developing subsequent methods for identifying specific kinds of sick pigs.
Disclosure of Invention
In view of the above, the invention provides a computer vision pig key point identification method which not only works in dense pig scenes but also automatically detects the body-part key points of pigs in dense pens, and which, through accurate identification of key-point states, provides technical support for developing subsequent methods for identifying specific kinds of sick pigs.
In order to achieve the above purpose, the present invention provides the following technical solutions:
the invention provides a computer vision pig key point identification method, which comprises the following steps:
step 1, collecting image data of pigs;
step 2, eliminating abnormal pictures from the image data to obtain target images;
step 3, processing the target image with an open-source object detection model to obtain the bounding-box coordinates of all target pigs within the field of view in a dense scene;
step 4, cropping out each target pig's bounding box to form single-pig pictures one by one;
and step 5, processing each single-pig picture with a deep neural network to obtain feature maps, and obtaining the coordinates of the pig's key points from the feature maps.
Further, the abnormal pictures comprise illumination-abnormal pictures, blurred pictures, foggy pictures and angle-abnormal pictures.
Further, the step 2 includes:
1) processing pictures with opencv using the image's HSV values, to remove illumination-abnormal pictures;
2) converting pictures to gray scale, dividing each into 4 equal regions, and computing a Laplacian blur value against a fixed threshold in each region, to remove blurred pictures;
3) applying minimum-value filtering, to remove foggy pictures;
4) applying FastLineDetector straight-line detection, to remove angle-abnormal pictures.
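As a rough illustration of substep 1), the illumination screen can be sketched in numpy alone: for RGB input, the HSV V channel is simply the per-pixel channel maximum, so a mean-brightness gate needs no full colour-space conversion. The thresholds `v_low` and `v_high` are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def is_light_abnormal(img_rgb, v_low=40, v_high=215):
    """Flag over/under-exposed frames from the mean HSV brightness (V).

    For RGB input the V channel is the per-pixel channel maximum, so no
    full colour-space conversion is needed. Thresholds are illustrative.
    """
    v = img_rgb.max(axis=2).astype(np.float64)  # V channel, 0..255
    mean_v = float(v.mean())
    return mean_v < v_low or mean_v > v_high

dark = np.zeros((4, 4, 3), dtype=np.uint8)      # all black: too dark
ok = np.full((4, 4, 3), 128, dtype=np.uint8)    # mid grey: acceptable
```

In practice the thresholds would be tuned on the pen's actual lighting conditions.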
Further, the step 3 includes:
1) feeding the target image data to a yolov4 object detection model;
2) processing the target image with the yolov4 object detection model to obtain an inference result, from which the bounding-box coordinates and confidence probabilities of all pigs are obtained;
3) filtering the pig bounding-box coordinates by confidence probability, to obtain the bounding-box coordinates of the target pigs.
Further, when filtering by confidence probability, bounding boxes whose confidence probability is smaller than 0.3 are discarded.
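A minimal sketch of this confidence gate, assuming each detection is an (x0, y0, x1, y1, confidence) tuple in the top-left/bottom-right box format the patent describes:

```python
def filter_detections(detections, min_conf=0.3):
    """Drop candidate pig boxes whose confidence is below min_conf.

    Each detection is (x0, y0, x1, y1, confidence), matching the
    top-left / bottom-right box format described in the patent.
    """
    return [d for d in detections if d[4] >= min_conf]

# hypothetical raw yolov4 output for one frame
raw = [(10, 10, 50, 60, 0.92), (5, 5, 20, 25, 0.12), (30, 40, 80, 90, 0.55)]
kept = filter_detections(raw)  # the 0.12 box is discarded
```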
Further, the step 4 includes:
back-calculating the target pigs' bounding-box coordinates into the matrix of the target image, and taking out pixel values indexed by the target box positions;
using the coordinates computed by yolov4 to index pixels in the target image and obtain the target regions, so that each target pig detected in any picture forms a single-pig picture from its bounding-box coordinates.
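The pixel-indexing step amounts to array slicing; a hedged numpy sketch, assuming boxes are (x0, y0, x1, y1) in original-frame pixel coordinates:

```python
import numpy as np

def crop_targets(frame, boxes):
    """Index each detection box back into the full frame, producing one
    single-pig picture per box; boxes are (x0, y0, x1, y1) in pixels.
    Note numpy indexes rows first, so y comes before x in the slice."""
    return [frame[y0:y1, x0:x1].copy() for x0, y0, x1, y1 in boxes]

frame = np.zeros((100, 120, 3), dtype=np.uint8)   # dummy 120x100 RGB frame
crops = crop_targets(frame, [(10, 20, 40, 60)])   # one hypothetical pig box
```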
Further, the step 5 includes:
converting each single-pig picture to a square size with a resize picture transformation algorithm;
inputting the single-pig picture into an open-source resnet50 deep neural network for calculation, obtaining the pig key point feature maps, and obtaining the pig key point position coordinates from the feature maps.
Further, a single-pig picture of size 224x224 is obtained and input into a trained resnet50 deep neural network, which, through a series of convolution and pooling operations, computes the picture into a 20x24x48 feature map, where the 20 in 20x24x48 corresponds to 20 key points and each 24x48 sub-array is the feature map of one key point;
a peak is then found in each feature map, and a feature map whose peak exceeds the threshold 0.5 marks a target key point; the target key point's coordinates are back-calculated to the target image size, giving the pig key point position coordinates.
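The peak search over the 20x24x48 feature map can be sketched as follows. The direct bin-to-pixel scaling back to the 224x224 crop is a simplifying assumption; the embodiment resizes each heat map with the picture-size algorithm instead.

```python
import numpy as np

def heatmaps_to_keypoints(fmap, crop_h=224, crop_w=224, thresh=0.5):
    """Read keypoint coordinates out of a 20x24x48 feature map.

    For each of the 20 channels, take the peak of its 24x48 heat map;
    if the peak confidence exceeds `thresh`, scale its (row, col) back
    to the 224x224 single-pig crop. Returns a dict channel -> (x, y).
    """
    n, h, w = fmap.shape
    keypoints = {}
    for k in range(n):
        r, c = divmod(int(np.argmax(fmap[k])), w)  # peak bin of channel k
        if fmap[k, r, c] > thresh:
            x = c * crop_w / w   # heat-map column -> x in the crop
            y = r * crop_h / h   # heat-map row    -> y in the crop
            keypoints[k] = (x, y)
    return keypoints

fmap = np.zeros((20, 24, 48), dtype=np.float32)
fmap[0, 12, 24] = 0.9   # a confident peak on channel 0
fmap[1, 5, 5] = 0.2     # below threshold: channel 1 yields no keypoint
kps = heatmaps_to_keypoints(fmap)
```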
Further, back-calculating the target key point coordinates to the target image size comprises: with the key point at coordinates (x, y) in the single-pig picture and the top-left corner of the corresponding pig detection box at (x0, y0) in the target image, the pig key point's position in the target image is (x0+x, y0+y).
Further, the pig key points comprise the pig's left ear, right ear, back force center, back front point, back middle point, back tail point, left rear hip joint, left rear knee joint, left rear ankle joint, left front knee joint, left front hip joint, right rear ankle joint, right rear knee joint, right rear hip joint, right front knee joint, right front ankle joint and abdomen center.
Compared with the prior art, the invention has the following beneficial effects: image information is first acquired from the collected pig image data and abnormal pictures are removed to obtain target images; the target images are then processed with an open-source object detection model to detect the bounding-box coordinates of all target pigs within the field of view in a dense scene; finally, a deep neural network is run on each single-pig picture to obtain feature maps, from which the pig key point coordinates are obtained. The computer vision pig key point identification method provided by the invention automatically detects pig body-part key points, and these key points reduce the technical difficulty of developing subsequent methods for identifying specific conditions. For example: to detect ear biting, the positions of pig ears in the dense scene must first be detected, and the pig key points include the left and right ears; to detect belly biting, the position of the pig's abdomen in the dense scene must first be detected, and the pig key points include the abdomen; and ear trauma can be detected directly from the ear region recognized by the key point detection.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic illustration of the key points on one side of a pig according to the invention;
FIG. 2 is a schematic illustration of the key points on the other side of the pig according to the invention;
fig. 3 is a flowchart of a computer vision pig key point identification method according to an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In connection with the technical solution of the invention, the terms used are explained as follows:
opencv: an open source cross-platform computer vision and machine learning software library;
HSV: HSV (Hue, Saturation, Value) is a color space created according to the visual characteristics of colors; its parameters are hue (H), saturation (S) and brightness (V);
gray scale map: the logarithmic relationship between white and black is divided into a number of levels, called gray scales, commonly 256 of them; an image represented with gray scales is called a gray scale image;
Laplacian operator: used in the invention to compute the second derivative and find regions of the picture where pixel values change rapidly; a normal picture has clear boundaries and therefore a larger variance, while a blurred picture contains less boundary information and therefore a smaller variance. The pipeline is: Gaussian blur -> graying -> Laplacian -> absolute value (convertScaleAbs) -> variance of the output image, and the degree of blur is judged from the variance;
minimum value filtering: taking the minimum of the target pixel and its surrounding pixels and writing it back to the target pixel;
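The Laplacian blur measure described above reduces to the variance of a 4-neighbour Laplacian response; below is a pure-numpy stand-in for `cv2.Laplacian(...).var()`. The pass/fail threshold would be the empirically tuned fixed value the patent mentions.

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour Laplacian response: sharp pictures have
    strong edges and hence a large variance, blurred ones a small one."""
    g = gray.astype(np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]    # up + down neighbours
           + g[1:-1, :-2] + g[1:-1, 2:])   # left + right neighbours
    return float(lap.var())

flat = np.full((32, 32), 100.0)                      # no edges at all
sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0    # one hard edge
```

A blurred copy of the same picture would land between the two extremes, which is why the patent compares the per-region value against a tuned threshold.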
LineSegmentDetector: a method of detecting a straight line;
neural network: an algorithmic mathematical model that imitates the behavioural characteristics of animal neural networks and performs distributed parallel information processing; the network depends on the complexity of the system and processes information by adjusting the interconnections among a large number of internal nodes;
feedforward neural network: the simplest neural network; neurons are arranged in layers, each neuron connects only to neurons of the previous layer, receives the previous layer's output and passes its own output to the next layer, with no feedback between layers;
convolution: a mathematical operator that generates a third function from two functions f and g, representing the integral of the product of f and a flipped, translated g over their overlapping region;
convolutional neural network: a class of feedforward neural networks that include convolution calculations and have a deep structure;
fully connected layer: each node of a fully connected layer is connected to all nodes of the previous layer and integrates the features extracted earlier;
mask: for example, given a picture containing a circular object, cut a circle of the same size out of a piece of paper and lay the paper over the picture so that only the circular object remains visible; the paper is the mask;
image classification: given a fixed set of classification labels, find the label from the set that applies to an input image and assign that label to the image;
resize picture transform algorithm: assuming the original dimensions are HxWx3 (height, width, channels) and the converted size is H1xW1x3, the conversion traverses each channel and fills the pixels of the original picture into the new-size picture matrix; where a position in the new matrix has no exactly corresponding pixel in the original picture, the average of the neighbouring pixels is used;
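A nearest-source-pixel version of such a resize can be sketched in numpy; the patent's variant additionally averages neighbours when no source pixel lines up exactly, which is omitted here for brevity.

```python
import numpy as np

def resize_nearest(img, h1, w1):
    """Nearest-source-pixel resize, HxWxC -> h1 x w1 x C; a simplified
    stand-in for the resize picture transform algorithm above."""
    h, w = img.shape[:2]
    rows = np.arange(h1) * h // h1   # source row for each target row
    cols = np.arange(w1) * w // w1   # source column for each target column
    return img[rows][:, cols]

img = np.arange(6 * 8 * 3, dtype=np.uint8).reshape(6, 8, 3)
sq = resize_nearest(img, 4, 4)   # squared-off, as the pipeline requires
```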
target detection and confidence probability: first, the target to be detected is defined, which in this patent is the pig; detection then means that a computer algorithm draws a box around each target in the picture (i.e. gives the top-left and bottom-right corner coordinates) and at the same time gives the rectangular box's confidence probability (a value from 0 to 1); in this patent the yolo network head computes this value from the feature map;
Yolov4 open source project and inference architecture: the yolov4 model is divided into three sub-models: the darknet backbone network; the spp (spatial pyramid pooling) + PANet (path aggregation network) neck; and the yolo network head with non-maximum suppression post-processing. At inference time, a picture (e.g. of size 1920x1080) is first converted to 608x608x3 and passed through the darknet backbone, which computes the 608x608x3 picture into a 19x19x1024 floating point array; the spp + PANet neck then computes a 76x76x256 floating point array; finally, the yolo network head and non-maximum suppression post-processing yield the target detection boxes and target names;
feature map: a special multidimensional array stored in the computer; its data type is floating point, and each value in the array stores a confidence probability in the range 0 to 1.
Figs. 1 and 2 schematically show the pig key points to be obtained in the invention. Fever pigs can be identified through three key points, the left ear 1, right ear 2 and back force center 3, which locate the approximate position of the auricle so that the auricle temperature can be extracted; pig weight can be estimated through three key points, namely the back front point 4, back middle point 6 and back tail point 5; six key points, the left rear hip joint 11, left rear knee joint 12, left rear ankle joint 13, left front ankle joint 7, left front knee joint 8 and left front hip joint 9, identify abnormal behaviour such as lameness and paralysis and can also assist the weight estimation; seven key points, the right rear ankle joint 13, right rear knee joint 12, right rear hip joint 11, right front hip joint, right front knee joint, right front ankle joint and abdomen center 10, can be used to detect whether a pig's leg has pustules, or whether the leg turns while the pig walks.
In accordance with the present invention, a computer vision pig key point identification method is provided. It is noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, such as with a set of computer-executable instructions, and, although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from that shown here.
Fig. 3 is a flowchart of a computer vision pig key point recognition method according to an embodiment of the present invention. As shown in fig. 3, the method includes the following steps:
step 1, collecting image data of pigs;
step 2, eliminating abnormal pictures from the image data to obtain target images;
step 3, processing the target image with an open-source object detection model to obtain the bounding-box coordinates of all target pigs within the field of view in a dense scene;
step 4, cropping out each target pig's bounding box to form single-pig pictures one by one;
and step 5, processing each single-pig picture with a deep neural network to obtain feature maps, and obtaining the coordinates of the pig's key points from the feature maps.
More specifically, according to step 2, the abnormal pictures include illumination-abnormal, blurred, foggy and angle-abnormal pictures. Illumination-abnormal pictures are processed with opencv using image HSV values and removed; blurred pictures are converted to gray scale, divided into 4 equal regions, and the Laplacian edge-blur value of each region is computed and compared with an empirically tuned fixed threshold to remove them; foggy pictures are removed with minimum-value filtering; and angle-abnormal pictures are removed with FastLineDetector straight-line processing. Removing the abnormal pictures from the image data yields the target images.
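The minimum-value-filter fog screen can be sketched as follows; the idea (a dark-channel-style prior) is that haze lifts even the darkest pixels toward grey, so a high mean of the min-filtered picture suggests fog. The window size and threshold here are illustrative assumptions, not the patent's values.

```python
import numpy as np

def min_filter(gray, k=3):
    """Replace each pixel by the minimum over its k x k neighbourhood
    (edge pixels use the valid part of the window)."""
    h, w = gray.shape
    r = k // 2
    out = np.empty_like(gray)
    for i in range(h):
        for j in range(w):
            out[i, j] = gray[max(0, i - r):i + r + 1,
                             max(0, j - r):j + r + 1].min()
    return out

def looks_foggy(gray, thresh=120):
    """Haze lifts the darkest pixels toward grey, so a high mean of the
    min-filtered picture suggests fog; thresh is illustrative."""
    return float(min_filter(gray).mean()) > thresh

clear = np.zeros((8, 8), dtype=np.uint8)        # deep shadows survive
hazy = np.full((8, 8), 200, dtype=np.uint8)     # everything washed out
```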
More specifically, according to step 3, training is carried out with an in-house annotated pig object detection data set: the pig detection data (picture data and manual labels) are fed into the Yolov4 open-source project, the training program is run, and a model is obtained, so that a yolov4 object detection model capable of detecting pigs is available in-house. The yolov4 object detection model is then used to run inference on the target pictures; from the inference results, the bounding-box coordinates and confidence probabilities of all pigs are obtained; the confidence probabilities are then filtered to obtain the target pigs' bounding-box coordinates, boxes with confidence probability smaller than 0.3 being discarded.
More specifically, according to step 4, the target pigs' bounding-box coordinates are back-calculated into the matrix of the original picture (i.e. the target image remaining after abnormal pictures have been removed), and pixel values are taken out indexed by the target box positions; using the coordinates computed by yolov4 to index pixels in the target image gives the target regions, so that each target pig detected in any picture forms a single-pig picture from its bounding-box coordinates.
More specifically, according to step 5, each single-pig picture obtained is converted to a square size with the resize picture transformation algorithm; the single-pig picture is then input into the trained open-source resnet50 deep neural network, the pig key point feature maps are obtained, and the pig key point position coordinates are obtained from the feature maps.
In the above embodiment, computing the trained open-source resnet50 deep neural network on the single-pig picture input to obtain the pig key point feature maps includes: first obtaining a single-pig picture of size 224x224 and inputting it into the trained resnet50 deep neural network, which, through a series of convolution and pooling operations, computes the picture into a 20x24x48 feature map, where the leading 20 corresponds to 20 key points and each 24x48 sub-array is the feature map of one key point; each key point's feature map is then resized with the picture-size algorithm to a 224x224 array, the size of the original single-pig picture; in each feature probability map the peak is found, and a peak above a certain threshold (0.5) marks the corresponding key point; the key point's coordinates are back-calculated to the target image size, giving the target image key point coordinates.
In the above embodiment, back-calculating the key point coordinates to the target image size to obtain the target image key point coordinates includes: assuming the key point coordinates in the single-pig picture are (x, y) and the top-left corner of the corresponding pig detection box is at (x0, y0), the pig key point's position coordinates in the target image are (x0+x, y0+y).
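That back-calculation is a plain translation by the detection box's top-left corner; the coordinates below are hypothetical:

```python
def to_frame_coords(kp, box_top_left):
    """Translate a keypoint from single-pig-picture coordinates into the
    full target image by adding the detection box's top-left corner
    (x0, y0), as the embodiment specifies."""
    x, y = kp
    x0, y0 = box_top_left
    return (x0 + x, y0 + y)

# hypothetical keypoint at (35, 80) inside a crop whose box starts at (400, 260)
frame_kp = to_frame_coords((35, 80), (400, 260))
```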
The workflow of the invention first reads pictures and then detects the key points of the pig herd in the pictures, specifically as follows:
step 1: collecting image data of pigs;
the specific steps of the step 2 are as follows:
step 2.1: calculating an HSV value of the picture, and eliminating the illumination abnormal picture;
step 2.2: calculating an edge blurring degree value of a Laplace operator of the picture, and eliminating a blurred picture;
step 2.3: removing pictures containing fog based on minimum value filtering;
step 2.4: removing pictures with abnormal column angles based on a FastLineDetector linear detector;
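The column-angle check in step 2.4 relies on FastLineDetector. As a simplified stand-in, the dominant gradient orientation already flags a tilted camera: if pen rails are normally vertical, their edges produce horizontal gradients, and a large shift in this angle marks an angle-abnormal picture. This numpy sketch is an assumption-laden substitute, not the patent's detector.

```python
import numpy as np

def dominant_gradient_angle(gray):
    """Median orientation (degrees, 0..180) of the strongest gradients;
    a simplified stand-in for line-based angle checks such as
    FastLineDetector."""
    g = gray.astype(np.float64)
    gx = (g[:, 1:] - g[:, :-1])[:-1, :]   # horizontal differences
    gy = (g[1:, :] - g[:-1, :])[:, :-1]   # vertical differences, aligned
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return None                        # featureless picture
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    strong = mag > 0.5 * mag.max()
    return float(np.median(ang[strong]))

v_stripes = np.zeros((16, 16)); v_stripes[:, ::2] = 255.0   # vertical edges
h_stripes = np.zeros((16, 16)); h_stripes[::2, :] = 255.0   # horizontal edges
```

Vertical edges yield purely horizontal gradients (angle 0), horizontal edges yield vertical gradients (angle 90); a frame whose dominant angle drifts far from the expected value would be rejected.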
the specific steps of the step 3 are as follows:
step 3.1: training by using a pig target detection data set marked in a company based on a yolov4 open source project to obtain a yolov4 target detection model capable of detecting pigs;
step 3.2: using the yolov4 target detection model to infer a target image;
step 3.3: using the yolov4 inference result on the target image, obtain the bounding-box coordinates and confidence probability of each pig;
step 3.4: filter out targets with low confidence probability to obtain the bounding-box coordinates of the successfully detected target pigs;
the specific steps of the step 4 are as follows:
step 4.1: back calculating the original picture size coordinate by using the target frame coordinate obtained in the previous step;
step 4.2: performing pixel index in the original picture by using the coordinates after back calculation to obtain a target area;
step 4.3: repeating the operation on each target frame obtained in any picture, so that each target pig in the target image forms a single pig picture;
the specific steps of the step 5 are as follows:
step 5.1: using the single-pig pictures obtained above, convert them to a square size such as 224x224 with the resize picture conversion algorithm;
step 5.2: compute the trained open-source resnet50 deep neural network on the picture input to obtain the pig-body key point feature maps, one feature probability map for each of the 20 key points; the resnet50 key point computation uses an open-source project;
step 5.3: find the peak of each feature probability map; a peak above a certain threshold (0.5) gives the corresponding key point;
step 5.4: back-calculate the key point coordinates to the original image size to obtain the original-image key point coordinates.
In the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts between embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (6)

1. The computer vision pig key point identification method is characterized by comprising the following steps of:
step 1, collecting image data of pigs, the key points to be detected lying inside the pigs' outlines in the image data;
step 2, eliminating abnormal pictures in the image data to obtain a target image;
step 3, processing the target image with an open-source object detection model to obtain the bounding-box coordinates of all target pigs within the field of view in a dense scene, specifically:
feeding the target image data to a yolov4 object detection model;
processing the target image with the yolov4 object detection model to obtain an inference result, from which the bounding-box coordinates and confidence probabilities of all pigs are obtained;
filtering the pig bounding boxes by confidence probability, discarding boxes whose confidence probability is smaller than 0.3, to obtain the bounding-box coordinates of the target pigs;
step 4, extracting the frame coordinates of all target pigs to form single pig pictures one by one;
step 5, processing each single-pig picture with a deep neural network to obtain feature maps, obtaining the coordinates of the pig's key points from the feature maps, and judging the pig's health condition from the key point coordinates so as to identify sick pigs; specifically, the single-pig picture is first converted with a resize picture transformation algorithm into a 224x224 single-pig picture, which is then input into a trained resnet50 deep neural network and computed, through a series of convolution and pooling operations, into a 20x24x48 feature map, where the 20 in 20x24x48 corresponds to 20 key points and each 24x48 sub-array is the feature map of one key point; a peak is found in each feature map, a feature map whose peak exceeds the threshold 0.5 marking a target key point, and the target key point's coordinates are back-calculated to the target image size, giving the pig key point position coordinates.
2. The computer vision pig key point identification method according to claim 1, wherein the abnormal pictures comprise abnormal-illumination pictures, blurred pictures, foggy pictures and abnormal-angle pictures.
3. The computer vision pig key point identification method according to claim 2, wherein step 2 comprises:
processing the abnormal pictures through opencv using the HSV values of the image, to remove abnormal-illumination pictures;
converting the abnormal picture into a grayscale picture and dividing it evenly into 4 regions, and comparing the Laplacian of each region against a fixed value, to remove blurred pictures;
processing the abnormal pictures through minimum-value filtering, to remove foggy pictures;
processing the abnormal pictures through FastLineDetector line detection, to eliminate abnormal-angle pictures.
4. The computer vision pig key point identification method according to claim 1, wherein step 4 comprises:
back-calculating the bounding-box coordinates of the target pigs into the matrix of the target image, and taking out the pixel values indexed by the target box positions;
performing pixel indexing on the target image with the coordinates computed by yolov4 to obtain the target region, so that each target pig within the bounding-box coordinates of any picture forms a single-pig picture.
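The cropping in claim 4 amounts to slicing the image matrix with each detected box. A minimal sketch, assuming boxes are given as (x1, y1, x2, y2) pixel coordinates (the format is an assumption):

```python
import numpy as np

# Sketch of claim 4: index back into the target-image matrix with each
# bounding box computed by yolov4 and slice out the pixel values, so
# every target pig yields its own single-pig picture.

def crop_single_pigs(image, boxes):
    """image: HxWx3 array; boxes: list of (x1, y1, x2, y2) pig boxes."""
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append(image[y1:y2, x1:x2].copy())  # pixel index -> target region
    return crops

image = np.zeros((480, 640, 3), dtype=np.uint8)
boxes = [(10, 20, 110, 140), (300, 50, 420, 200)]
pigs = crop_single_pigs(image, boxes)
print([p.shape for p in pigs])  # [(120, 100, 3), (150, 120, 3)]
```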
5. The computer vision pig key point identification method according to claim 1, wherein the process of back-calculating the target image size from the coordinates of the target key points comprises: denoting the coordinates of a pig key point as (x, y), the corresponding coordinates of the pig key point in the target image being (x0, y0).
6. The method of claim 1, wherein the pig keypoints comprise a left ear, a right ear, a back center, a back front point, a back midpoint, a back tail point, a left back hip joint, a left back knee joint, a left back ankle joint, a left front knee joint, a left front hip joint, a right back ankle joint, a right back knee joint, a right back hip joint, a right front knee joint, a right front ankle joint, and an abdomen center.
CN202110531027.3A 2021-05-16 2021-05-16 Computer vision pig key point identification method Active CN113177564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110531027.3A CN113177564B (en) 2021-05-16 2021-05-16 Computer vision pig key point identification method

Publications (2)

Publication Number Publication Date
CN113177564A CN113177564A (en) 2021-07-27
CN113177564B true CN113177564B (en) 2023-07-25

Family

ID=76929181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110531027.3A Active CN113177564B (en) 2021-05-16 2021-05-16 Computer vision pig key point identification method

Country Status (1)

Country Link
CN (1) CN113177564B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113763429A (en) * 2021-09-08 2021-12-07 广州市健坤网络科技发展有限公司 Pig behavior recognition system and method based on video
CN114543674B (en) * 2022-02-22 2023-02-07 成都睿畜电子科技有限公司 Detection method and system based on image recognition

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110598658A (en) * 2019-09-18 2019-12-20 华南农业大学 Convolutional network identification method for sow lactation behaviors

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN108961330B (en) * 2018-06-22 2021-04-30 深源恒际科技有限公司 Pig body length measuring and calculating method and system based on image
CN109141248B (en) * 2018-07-26 2020-09-08 深源恒际科技有限公司 Pig weight measuring and calculating method and system based on image
US11631266B2 (en) * 2019-04-02 2023-04-18 Wilco Source Inc Automated document intake and processing system
CN111709287A (en) * 2020-05-15 2020-09-25 南京农业大学 Weaned piglet target tracking method based on deep learning
CN111814860A (en) * 2020-07-01 2020-10-23 浙江工业大学 Multi-target detection method for garbage classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant