CN115222666A - Shrimp larvae counting method, system, equipment and storage medium based on key point detection - Google Patents

Shrimp larvae counting method, system, equipment and storage medium based on key point detection

Info

Publication number
CN115222666A
Authority
CN
China
Prior art keywords
shrimp
key point
point detection
network model
larvae
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210664948.1A
Other languages
Chinese (zh)
Inventor
李西明
吴精乙
高月芳
邵楚琪
郭玉彬
赵泽勇
劳慧雯
梁宇君
吴子彤
关颖盈
严家美
温嘉勇
刘瑞祥
吴颖琪
史东博
胡东晴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University filed Critical South China Agricultural University
Priority to CN202210664948.1A priority Critical patent/CN115222666A/en
Publication of CN115222666A publication Critical patent/CN115222666A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30242Counting objects in image
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/80Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in fisheries management
    • Y02A40/81Aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a shrimp larvae counting method, system, device and storage medium based on key point detection, wherein the method comprises the following steps: acquiring a shrimp larvae image set; constructing a key point detection network model; inputting the shrimp larvae image set into the key point detection network model for training to obtain a shrimp larvae key point detection network model; acquiring a shrimp larvae image to be counted; inputting the shrimp larvae image to be counted into the shrimp larvae key point detection network model to obtain a key point detection result; and counting the shrimp larvae according to the key point detection result. The key point detection network model constructed by the invention is suitable not only for accurately counting Penaeus vannamei larvae but also for accurately counting elongated, strip-shaped shrimp larvae; it improves the counting accuracy while extending the method to more types of shrimp larvae, and has wide application scenes.

Description

Shrimp larvae counting method, system, equipment and storage medium based on key point detection
Technical Field
The invention relates to a shrimp larva counting method, a shrimp larva counting system, shrimp larva counting equipment and a storage medium based on key point detection, and belongs to the technical field of aquatic product image processing.
Background
At present, researchers count Penaeus vannamei larvae using image processing and state-of-the-art computer vision techniques. For example, Ji Yuyao (2018) preprocessed Penaeus vannamei larvae pictures with erosion and dilation according to the characteristics of the species, performed binarization and threshold segmentation on the images based on an improved TV-L1 model, and proposed a connected-region counting method to count adhered larvae. Qiu Yu (2021) selected a Penaeus vannamei image dataset, labeled it with the LabelImg annotation tool, and carried out intelligent recognition and counting of the larvae based on an improved YOLOv4 and K-means clustering.
It is worth noting that, because Penaeus vannamei larvae have a black hepatopancreas, a transparent body and clear body boundaries, the background and the target are easy to separate when threshold segmentation and connected-region counting are applied; the segmented results are therefore easy to count and the counting accuracy is high. However, for shrimp larvae with larger and more slender bodies (such as Penaeus monodon larvae), threshold segmentation and connected-region counting suffer from severe background interference and the segmented results are difficult to count, so the counting accuracy drops sharply. Similarly, the improved YOLOv4 and K-means algorithms are not suitable for shrimp larvae with larger and more slender bodies (such as Penaeus monodon larvae). It can be seen that the above methods are too limited.
In view of the above, a shrimp larvae counting method with high universality and high accuracy is urgently needed.
References:
[1] Ji Yuyao, Wei Weibo, Zhao Zengfang, Yang Zhenyu. Shrimp fry counting method based on an improved TV-L1 model [J]. Journal of Qingdao University (Natural Science Edition), 2018, 31(04): 62-68+82.
[2] Qiu Yu. Research on an intelligent shrimp fry recognition algorithm based on improved YOLOv4 [J]. Henan Science, 2021, 40(06): 25-28.
Disclosure of Invention
In view of the above, the invention provides a shrimp larvae counting method, system, computer device and storage medium based on key point detection. The constructed key point detection network model is suitable not only for accurately counting Penaeus vannamei larvae but also for accurately counting elongated, strip-shaped shrimp larvae; it improves the counting accuracy while extending the method to more types of shrimp larvae, and has wide application scenes.
The invention aims to provide a shrimp larva counting method based on key point detection.
The second purpose of the invention is to provide a shrimp fry counting system based on key point detection.
It is a third object of the invention to provide a computer apparatus.
It is a fourth object of the invention to provide a storage medium.
The first purpose of the invention can be achieved by adopting the following technical scheme:
a method for shrimp fry counting based on keypoint detection, the method comprising:
acquiring a shrimp larva image set;
constructing a key point detection network model;
inputting the shrimp larvae image set into a key point detection network model for training to obtain a shrimp larvae key point detection network model;
acquiring a shrimp larva image to be counted;
inputting the shrimp larvae image to be counted into a shrimp larvae key point detection network model to obtain key point detection results;
and counting the shrimp larvae according to the detection result of the key points.
Further, the key point detection network model comprises a first subnet stage, a second subnet stage, a third subnet stage, a fourth subnet stage and a fifth subnet stage which are connected in sequence.
Further, the first subnet stage includes two first convolution layers;
the second subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence, and the last first module is connected with the second module;
the third subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence, and the last first module is connected with the second module;
the fourth subnet stage comprises eight first modules and eight second modules, wherein the eight first modules are connected in sequence, and the first module at the last is connected with the second module;
the fifth subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence, and the last first module is connected with the second module;
the first module comprises four bottleneck submodules, and the second module is a transition fusion module.
Further, the bottleneck sub-module comprises a second convolution layer, a third convolution layer and a fourth convolution layer;
the second convolution Layer is followed by a Layer Normalization regularization Layer and a ReLu activation Layer;
the Layer Normalization regularization Layer and a ReLu activation Layer are connected behind the third convolution Layer;
a Layer Normalization regularization Layer is connected behind the fourth convolution Layer;
the convolution kernel size of the first convolution layer and the third convolution layer is 3 x 3;
the convolution kernel size of the second convolution layer and the fourth convolution layer is 1 × 1.
Further, the transition fusion module comprises one or more of an up-sampling operation, a down-sampling operation and a convolution operation;
the up-sampling operation uses a linear interpolation algorithm, followed by a convolution layer with a 1×1 convolution kernel for smoothing;
the down-sampling operation performs convolution with a convolution layer whose convolution kernel size is 3×3;
the convolution operation uses a convolution layer with a convolution kernel size of 3×3.
Further, the inputting the shrimp larvae image set into the key point detection network model for training to obtain the shrimp larvae key point detection network model specifically includes:
carrying out key point labeling on the shrimp larvae image set to obtain a shrimp larvae key point labeling image set;
converting the shrimp larvae key point annotation image set into label file data;
converting the label file data into key point detection format data;
generating a Gaussian heatmap set according to the key point detection format data;
and training the key point detection network model by using the Gaussian heat map collection and the shrimp larvae image collection so as to obtain the shrimp larvae key point detection network model.
Further, the key point labeling of the shrimp larvae image set specifically comprises:
marking key points on the middle part of each shrimp larva in the shrimp larva image set;
or marking key points on the head and the tail of each shrimp larva in the shrimp larva image set;
or labeling key points on the head, the middle and the tail of each young shrimp in the young shrimp image set.
The second purpose of the invention can be achieved by adopting the following technical scheme:
a shrimp fry counting system based on keypoint detection, the system comprising:
the first acquisition unit is used for acquiring a shrimp larva image set;
the construction unit is used for constructing a key point detection network model;
the training unit is used for inputting the shrimp larvae image set into the key point detection network model for training to obtain a shrimp larvae key point detection network model;
the second acquisition unit is used for acquiring the shrimp larvae image to be counted;
the detection unit is used for inputting the shrimp larvae images to be counted into the shrimp larvae key point detection network model to obtain key point detection results;
and the counting unit is used for counting the shrimp larvae according to the key point detection result.
The third purpose of the invention can be achieved by adopting the following technical scheme:
a computer device comprises a processor and a memory for storing programs executable by the processor, wherein the processor executes the programs stored in the memory to realize the shrimp fry counting method.
The fourth purpose of the invention can be achieved by adopting the following technical scheme:
a storage medium stores a program which, when executed by a processor, implements the shrimp fry counting method described above.
Compared with the prior art, the invention has the following beneficial effects:
1. The key point detection network model constructed by the invention is suitable not only for accurately counting Penaeus vannamei larvae but also for accurately counting elongated, strip-shaped shrimp larvae; it improves the counting accuracy while extending the method to more types of shrimp larvae, and has wide application scenes.
2. The key point detection result obtained by the embodiments of the invention contains high-precision positioning information, which makes it easy to produce excellent visualization results.
3. The shrimp larvae container designed in the embodiments of the invention is easy to disassemble and makes it convenient to put in and take out shrimp: the rectangular box without a cover or bottom prevents the shrimp larvae from jumping out of the tray, and the transparent plastic on the upper portion of the box increases light transmission, ensuring the shooting quality of shrimp larvae pictures under indoor light or natural light.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a detailed flowchart of the shrimp larvae counting method based on the key point detection according to embodiments 1 and 2 of the present invention.
Fig. 2 is a schematic flowchart of a shrimp larvae counting method based on the key point detection according to examples 1 and 2 of the present invention.
Fig. 3 is a design view of the shrimp larvae holding tool of examples 1 and 2 of the present invention.
Fig. 4 is a structural diagram of a key point detection network model according to embodiments 1 and 2 of the present invention.
Fig. 5 (a) - (d) are key point selection graphs of the young litopenaeus vannamei according to example 1 of the present invention.
Fig. 6 is a low-density key point labeling diagram of the young penaeus vannamei boone in example 1 of the present invention.
Fig. 7 is a schematic diagram of key points and a visual counting result of the young penaeus vannamei boone in embodiment 1 of the present invention.
Fig. 8 (a) - (b) are key point selection graphs of the young penaeus monodon in example 2 of the present invention.
Fig. 9 is a high-density key point labeling diagram of a shrimp fry of penaeus monodon in example 2 of the present invention.
Fig. 10 is a schematic diagram of the key points and the visual counting result of the penaeus monodon fries in embodiment 2 of the present invention.
Fig. 11 is a block diagram of a shrimp larvae counting system based on the keypoint detection in embodiment 3 of the present invention.
Fig. 12 is a block diagram of a computer device according to embodiment 4 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts based on the embodiments of the present invention belong to the protection scope of the present invention.
Example 1:
as shown in fig. 1 and 2, the present embodiment provides a shrimp larvae counting method based on key point detection, which includes the following steps:
s101, acquiring a shrimp larva image set.
In this embodiment, a number of shrimp larvae are placed into a shrimp larvae container holding a small amount of water, and the larvae in the container are photographed under indoor light or natural light with a smartphone camera or an industrial camera, so as to collect shrimp larvae images of different densities from high to low.
In this embodiment, the collected shrimp larvae images are taken as the shrimp larvae image set; each shrimp larvae image is an RGB image.
All shrimp larvae involved in this embodiment are Penaeus vannamei larvae.
As shown in fig. 3, this embodiment further provides a shrimp larvae container, which comprises a white tray (which may also be a water tank, a ladle, a basin or the like), a transparent plastic box and a rectangular box without a cover or bottom, wherein: the bottom of the coverless, bottomless rectangular box rests on the white tray, the transparent plastic box is arranged at the top of the rectangular box, and the size of the rectangular box matches that of the white tray; the lower part of the rectangular box is enclosed by white plastic 5 cm high and its upper part by transparent plastic 10 cm high.
Transparent plastic is used in this embodiment to ensure light transmission, thereby ensuring the shooting quality of the shrimp larvae pictures under indoor light or natural light.
And S102, constructing a key point detection network model.
It is worth noting that, in the present embodiment, accurate key point detection is a necessary condition for high-accuracy counting. Therefore, this embodiment designs a key point detection network model (PCNet) based on the design concept of the high-resolution HRNet backbone network.
As shown in fig. 4, the network model includes a first sub-network stage, a second sub-network stage, a third sub-network stage, a fourth sub-network stage, and a fifth sub-network stage, which are connected in sequence.
Further, the first sub-network stage comprises two first convolution layers for scaling the resolution of the input picture to one quarter of its size; the second subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence and the last first module is connected with the second module; the third subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence and the last first module is connected with the second module; the fourth subnet stage comprises eight first modules and eight second modules, wherein the eight first modules are connected in sequence and the last first module is connected with the second module; the fifth subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence and the last first module is connected with the second module; the first module comprises four bottleneck submodules (Bottleneck submodules), and the second module is a transition fusion module.
In the process from the high resolution subnet stage (first subnet stage) to the low resolution subnet stage (fifth subnet stage), the number of convolution output channels in the second subnet stage is 64, the number of convolution output channels in the third subnet stage is 64 and 128 respectively, the number of convolution output channels in the fourth subnet stage is 64, 128 and 256 respectively, and the number of convolution output channels in the fifth subnet stage is 64, 128, 256 and 512 respectively.
Further, the bottleneck submodule comprises a second convolution layer, a third convolution layer and a fourth convolution layer; the second convolution layer is followed by a Layer Normalization regularization layer and a ReLU activation layer; the third convolution layer is followed by a Layer Normalization regularization layer and a ReLU activation layer; the fourth convolution layer is followed by a Layer Normalization regularization layer. The transition fusion module comprises one or more of an up-sampling operation, a down-sampling operation and a convolution operation, wherein: the up-sampling operation uses a linear interpolation algorithm followed by a convolution layer with a 1×1 convolution kernel for smoothing; the down-sampling operation performs convolution with a convolution layer whose convolution kernel size is 3×3 and whose stride is 2, and the convolution operation uses a convolution layer whose convolution kernel size is 3×3 and whose stride is 1; after the branches are brought to the same resolution by the up-sampling or down-sampling operation, the resulting features are added element-wise and fused.
It is worth noting that, in the whole network model, the strides of the first convolution layers and the down-sampling convolution layers are 2, while the strides of all other convolution layers are 1.
The convolution unit in this embodiment generally refers to a first convolution layer, a second convolution layer, a third convolution layer, a fourth convolution layer, a fifth convolution layer, a convolution layer that performs upsampling, a convolution layer that performs downsampling, and a convolution layer that performs convolution operations in the key point detection network model.
Further, the convolution kernel size of the first convolution layer and the third convolution layer is 3 × 3; the convolution kernel size of the second convolution layer and the fourth convolution layer is 1 × 1.
The keypoint detection network model in this embodiment further includes a fifth convolution layer and a post-processing module.
Further, the convolution kernel size of the fifth convolution layer is 1 × 1.
The method for constructing the key point detection network model in this embodiment is specifically: connecting the first subnet stage, the second subnet stage, the third subnet stage, the fourth subnet stage, the fifth subnet stage, the fifth convolution layer and the post-processing module in sequence, thereby constructing the key point detection network model.
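For illustration, the following is a minimal PyTorch sketch of the bottleneck submodule and of the up-sampling and down-sampling branches of the transition fusion module described above. It is a sketch under the stated layer descriptions only: the class names, the channel arguments and the residual connection are assumptions introduced for readability, not the actual implementation of this embodiment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelLayerNorm(nn.Module):
    """Layer Normalization applied over the channel dimension of an NCHW feature map."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        # N, C, H, W -> N, H, W, C -> LayerNorm over C -> back to N, C, H, W
        return self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)

class BottleneckSubmodule(nn.Module):
    """Bottleneck submodule: 1x1 conv -> LN -> ReLU, 3x3 conv -> LN -> ReLU, 1x1 conv -> LN."""
    def __init__(self, channels, mid_channels):
        super().__init__()
        self.conv2 = nn.Conv2d(channels, mid_channels, kernel_size=1, bias=False)               # second conv layer
        self.ln2 = ChannelLayerNorm(mid_channels)
        self.conv3 = nn.Conv2d(mid_channels, mid_channels, kernel_size=3, padding=1, bias=False)  # third conv layer
        self.ln3 = ChannelLayerNorm(mid_channels)
        self.conv4 = nn.Conv2d(mid_channels, channels, kernel_size=1, bias=False)               # fourth conv layer
        self.ln4 = ChannelLayerNorm(channels)

    def forward(self, x):
        out = F.relu(self.ln2(self.conv2(x)))
        out = F.relu(self.ln3(self.conv3(out)))
        out = self.ln4(self.conv4(out))
        return out + x  # residual connection is an assumption; the text only lists the layers

def make_downsample(in_channels, out_channels):
    """Down-sampling branch of the transition fusion module: 3x3 conv with stride 2."""
    return nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1)

class UpsampleBranch(nn.Module):
    """Up-sampling branch: linear (bilinear) interpolation followed by a 1x1 smoothing conv."""
    def __init__(self, in_channels, out_channels, scale):
        super().__init__()
        self.scale = scale
        self.smooth = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode='bilinear', align_corners=False)
        return self.smooth(x)
```

Branches that have been brought to the same resolution by these up-sampling or down-sampling operations are then added element-wise, as described above.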
S103, inputting the shrimp larvae image set into the key point detection network model for training to obtain the shrimp larvae key point detection network model.
Step S103, specifically including:
and S1031, carrying out key point labeling on the shrimp fry image set to obtain a shrimp fry key point labeling image set.
In this embodiment, young Litopenaeus vannamei are selected as the implementation target. The characteristics of the larvae must first be considered before key points are selected. Specifically: first, the morphology of P5 (5-day-old) and P10 (10-day-old) Penaeus vannamei larvae is consistent in that the hepatopancreas and intestinal tract are both grayish black; second, the eyes of P5 larvae are not obvious and their bodies are shorter, whereas P10 larvae have a pair of obvious black eyes and longer bodies. Therefore, for this category of shrimp larvae, in which the hepatopancreas is the most obvious feature, this embodiment represents each larva with a single key point placed at the center of the shrimp hepatopancreas, as shown in fig. 5 (a)-(d).
After the key points are selected, carrying out key point labeling on the shrimp larva image set by using an open source CVAT labeling tool to obtain a shrimp larva key point labeling image set; wherein, the shrimp larvae key points are marked on one of the images of the image set, as shown in fig. 6.
S1032, converting the shrimp larvae key point labeling image set into label file data.
Specifically, after the shrimp larvae key point annotation image set is obtained, it is converted into label file data in XML format; the XML label file data stores the coordinate information of the hepatopancreas key points of all shrimp larvae in each shrimp larvae picture.
And S1033, converting the label file data into key point detection format data.
Specifically, by writing a script, tag file data in an XML format is converted into JSON-format file data, that is, key point detection format data.
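For illustration, a minimal Python sketch of this conversion script is given below, turning CVAT-style XML point annotations into a COCO-like JSON key point file. The XML tag and attribute names follow CVAT's image-annotation export format and, together with the category and key point names, are assumptions rather than the actual script of this embodiment.

```python
import json
import xml.etree.ElementTree as ET

def xml_to_keypoint_json(xml_path, json_path):
    """Convert CVAT-style point annotations (XML) to a COCO-like keypoint JSON file."""
    root = ET.parse(xml_path).getroot()
    images, annotations = [], []
    ann_id = 0
    for img_id, image in enumerate(root.iter("image")):
        images.append({"id": img_id,
                       "file_name": image.get("name"),
                       "width": int(float(image.get("width"))),
                       "height": int(float(image.get("height")))})
        for points in image.iter("points"):
            # CVAT stores one or more points as "x1,y1;x2,y2;..."
            for xy in points.get("points").split(";"):
                x, y = (float(v) for v in xy.split(","))
                annotations.append({"id": ann_id,
                                    "image_id": img_id,
                                    "category_id": 1,
                                    "keypoints": [x, y, 2],  # v = 2: labeled and visible
                                    "num_keypoints": 1})
                ann_id += 1
    coco = {"images": images,
            "annotations": annotations,
            "categories": [{"id": 1, "name": "shrimp",
                            "keypoints": ["hepatopancreas"], "skeleton": []}]}
    with open(json_path, "w") as f:
        json.dump(coco, f)
```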
And S1034, generating a Gaussian heatmap set according to the key point detection format data.
In step S1034, the formula for generating a Gaussian heatmap (Heatmap) is as follows:
H(x, y) = exp( −((x − x*)² + (y − y*)²) / (2σ²) )
where (x, y) denotes each pixel point, (x*, y*) denotes the ground-truth key point, and σ is the standard deviation of the Gaussian kernel.
It is worth noting that: the numerical value on the Gaussian heatmap represents the confidence coefficient of the image pixel point belonging to the key point; when the pixel point coincides with the key point of the true value, the output value of the Gaussian kernel is close to 1; conversely, when the difference is large, the output value of the gaussian kernel approaches 0.
In this embodiment, a plurality of Gaussian heatmaps are generated from the key point detection format data according to the above formula; these Gaussian heatmaps constitute the Gaussian heatmap set.
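For illustration, a short Python sketch of the heatmap rendering in step S1034 is given below. The heatmap resolution, the Gaussian standard deviation sigma and the use of a per-pixel maximum where neighbouring larvae overlap are illustrative assumptions.

```python
import numpy as np

def gaussian_heatmap(keypoints, height, width, sigma=2.0):
    """Render one Gaussian heatmap; keypoints is an iterable of (x, y) ground-truth points."""
    ys, xs = np.mgrid[0:height, 0:width]
    heatmap = np.zeros((height, width), dtype=np.float32)
    for kx, ky in keypoints:
        g = np.exp(-((xs - kx) ** 2 + (ys - ky) ** 2) / (2.0 * sigma ** 2))
        # A pixel coinciding with a key point gets confidence close to 1; pixels far
        # from every key point stay close to 0. Overlaps keep the maximum response.
        heatmap = np.maximum(heatmap, g)
    return heatmap
```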
And S1035, training the key point detection network model by utilizing the Gaussian heat map collection and the shrimp larvae image collection, thereby obtaining the shrimp larvae key point detection network model.
Step S1035 specifically includes:
a. and mixing the Gaussian heat map data and the different density data in the shrimp fry image set, randomly disordering, and dividing the training set and the test set according to the proportion of 8:2 to obtain the training set and the test set.
b. And performing data enhancement on the training set and the testing set by using a random affine exchange mode and a random inversion mode.
c. Inputting the training set with enhanced data into a key point detection network model for training; wherein: the training round is 30, an Adam optimizer is adopted, the initial learning rate is 0.0015, the learning rate adopts a Warmup and linear adjustment strategy, and the Warmup iteration number is 50.
And finally, obtaining a trained key point detection network model, namely the shrimp larvae key point detection network model.
Further, in the training phase, the loss function used for predicting the Gaussian heatmap is a mean square error loss function, also referred to as the L2 norm loss function, which computes the average of the squared differences between the true values and the predicted values, as follows:
L = (1/n) Σ_i (ŷ_i − y_i)²
where ŷ_i denotes the predicted value and y_i denotes the true value.
It is worth noting that the above loss function imposes a larger penalty on larger errors and a smaller penalty on smaller errors. Therefore, the network model in this embodiment converges faster but is more sensitive to outliers.
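For illustration, the following PyTorch sketch ties together the training configuration and loss described above (30 epochs, Adam, initial learning rate 0.0015, 50 warm-up iterations followed by linear adjustment, MSE loss on the predicted heatmaps). The dataloader interface, the model interface and the exact shape of the linear decay are assumptions, not the actual training script of this embodiment.

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import LambdaLR

def make_optimizer_and_scheduler(model, total_iters, warmup_iters=50, base_lr=0.0015):
    optimizer = Adam(model.parameters(), lr=base_lr)

    def lr_lambda(it):
        if it < warmup_iters:                       # linear warm-up
            return (it + 1) / warmup_iters
        # linear decay from 1 to 0 over the remaining iterations (assumed schedule)
        return max(0.0, (total_iters - it) / max(1, total_iters - warmup_iters))

    return optimizer, LambdaLR(optimizer, lr_lambda)

def train(model, loader, device, epochs=30):
    criterion = torch.nn.MSELoss()                  # mean of squared differences (L2 loss)
    optimizer, scheduler = make_optimizer_and_scheduler(model, epochs * len(loader))
    model.to(device).train()
    for _ in range(epochs):
        for images, target_heatmaps in loader:
            pred = model(images.to(device))
            loss = criterion(pred, target_heatmaps.to(device))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            scheduler.step()
```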
Further, in the training phase, the coordinate information of the key points in the JSON-format file data is saved in the following format:
[x_1, y_1, v_1, ..., x_k, y_k, v_k]
where x, y denote the coordinates of a key point and v is a visibility flag: v = 0 denotes an unlabeled point, v = 1 denotes a labeled but invisible key point (e.g., occluded), and v = 2 denotes a labeled and visible key point.
It is worth noting that, in actual prediction, it is not required to predict the visibility of each key point. This embodiment uses the Object Keypoint Similarity (OKS) index to describe how well a key point is predicted, as follows:
OKS = exp( −d_i² / (2(s·k_i)²) )
where d_i denotes the Euclidean distance between the labeled key point and the predicted key point, and s·k_i plays the role of the standard deviation; a perfect prediction gives OKS = 1, and when the difference between the predicted and true positions is too large, OKS approaches 0.
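For illustration, the per-key-point OKS score defined above can be computed as in the short sketch below, where the argument sk stands for the product s·k_i used as the standard deviation; the function signature is an assumption.

```python
import math

def oks(pred_xy, gt_xy, sk):
    """Object Keypoint Similarity for one key point pair; returns a value in (0, 1]."""
    d = math.dist(pred_xy, gt_xy)          # Euclidean distance between labeled and predicted point
    return math.exp(-(d ** 2) / (2.0 * sk ** 2))
```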
And S104, acquiring the shrimp larvae image to be counted.
The image of the young penaeus vannamei boone to be counted in this embodiment is obtained from the test set in step S1035.
And S105, inputting the shrimp larvae image to be counted into the shrimp larvae key point detection network model to obtain a key point detection result.
In the embodiment, the image of the penaeus vannamei boone fry to be counted is input into the shrimp fry key point detection network model to obtain a key point detection result.
In the process of obtaining the key point detection result, the second module of the fifth sub-network stage only up-samples all the lower-resolution features to the highest resolution and then fuses them; the fused feature map passes through the fifth convolution layer, which reduces the number of channels to 1, yielding the predicted Gaussian heatmap. Finally, the predicted Gaussian heatmap is processed by the post-processing module to obtain the key point detection result.
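For illustration, a possible form of this post-processing step is sketched below: local maxima of the predicted single-channel heatmap above a confidence threshold are taken as the detected key points. The 3×3 max-pooling non-maximum suppression and the threshold of 0.5 are assumptions about the post-processing module, which the text does not specify in detail.

```python
import torch
import torch.nn.functional as F

def heatmap_to_keypoints(heatmap, threshold=0.5):
    """heatmap: tensor of shape (1, 1, H, W) with values in [0, 1]; returns (x, y) key points."""
    pooled = F.max_pool2d(heatmap, kernel_size=3, stride=1, padding=1)
    peaks = (heatmap == pooled) & (heatmap >= threshold)   # local maxima above the threshold
    ys, xs = torch.nonzero(peaks[0, 0], as_tuple=True)
    return list(zip(xs.tolist(), ys.tolist()))
```

Under these assumptions, the shrimp larvae count of this embodiment is simply the number of key points returned.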
In this embodiment, step S103 is a training phase of the key point detection network model, and steps S104 to S105 are an inference phase of the key point detection network model.
And S106, counting the shrimp larvae according to the detection result of the key points.
Each Penaeus vannamei larva in this embodiment is represented by one key point. Therefore, the total number of detected key points (the key point detection result) is the shrimp larvae count for the image to be counted.
As shown in fig. 7, in this embodiment, in addition to counting shrimp larvae, the shrimp larvae can be visually labeled according to the detection result of the key points.
Those skilled in the art will appreciate that all or part of the steps in the method for implementing the above embodiments may be implemented by a program to instruct related hardware, and the corresponding program may be stored in a computer readable storage medium.
It should be noted that although the method operations of the above-described embodiments are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Example 2:
The method provided by this embodiment is substantially the same as the method of Embodiment 1; the main differences lie in the implementation object and step S103, which are specifically as follows:
In step S1031, this embodiment differs slightly from Embodiment 1 in that Penaeus monodon larvae are selected as the implementation object. The characteristics of the Penaeus monodon larvae must first be considered before key points are selected. Specifically, Penaeus monodon larvae are strip-shaped and more slender than the Penaeus vannamei larvae of Embodiment 1. Therefore, for this kind of more slender shrimp larvae, this embodiment represents each larva with three key points, placed respectively on the forehead angle (rostrum), the shrimp hepatopancreas and the shrimp tail, as shown in fig. 8 (a)-(b).
The shrimp larvae involved in the embodiment are shrimp larvae of penaeus monodon.
After the key points are selected, carrying out key point labeling on the shrimp larva image set by using an open source CVAT labeling tool to obtain a shrimp larva key point labeling image set; wherein, the shrimp larvae key points are marked on one of the images of the image set, as shown in fig. 9.
In the embodiment, the tag file data in the XML format stores coordinate information of three key points (a shrimp forehead corner key point, a shrimp hepatopancreas key point, and a shrimp tail key point) of all shrimp larvae in each shrimp larvae picture.
In step S1033, this embodiment differs slightly from Embodiment 1 in that, when converting to the JSON-format file data, the key point coordinate information retained for the shrimp larvae in each picture can be selected in one of the following ways:
Mode A: if the key point detection network model detects only the shrimp hepatopancreas key points, retain the coordinate information of the hepatopancreas key points of all shrimp larvae in each shrimp larvae picture.
Mode B: if the key point detection network model detects three key points (forehead angle, hepatopancreas and tail), retain the coordinate information of the three key points of all shrimp larvae in each picture together with the association information among the three key points of each larva.
Mode C: if the key point detection network model detects the forehead angle key points and the tail key points, retain the coordinate information of these two key points of all shrimp larvae in each picture together with the association information between the forehead angle key point and the tail key point of each larva.
Mode D: if the key point detection network model detects the forehead angle key points and the tail key points without distinguishing between the two types of key points, retain the coordinate information of the forehead angle and tail key points of all shrimp larvae in each picture.
In this embodiment, multiple experiments were carried out based on modes A-D, and the counting accuracy was found to be highest when detection was performed in mode D before counting. Therefore, key points are subsequently retained and detected in mode D.
In this embodiment, when the Object Keypoint Similarity (OKS) index is used to describe how well the key points are predicted, the similarity between the forehead angle key point and the tail key point of each Penaeus monodon larva lies in [0, 1].
Based on the above differences, steps S101-S105 are executed to obtain the key point detection result.
In step S106, this embodiment differs slightly from Embodiment 1 in that each Penaeus monodon larva is represented by two key points. Therefore, half of the total number of detected key points (the key point detection result) is the shrimp larvae count for the image to be counted, as follows:
N = S_d / 2
where S_d denotes the total number of detected key points and N denotes the shrimp larvae count for the image to be counted.
As shown in fig. 10, in addition to counting the young shrimps, the embodiment can also realize visual labeling of the young shrimps according to the detection result of the key points.
The remaining steps in this embodiment are the same as those in embodiment 1, and are not described again here.
Those skilled in the art will appreciate that all or part of the steps in the method for implementing the above embodiments may be implemented by a program to instruct associated hardware, and the corresponding program may be stored in a computer-readable storage medium.
It should be noted that although the method operations of the above-described embodiments are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, in order to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Example 3:
as shown in fig. 11, the present embodiment provides a shrimp larvae counting system based on keypoint detection, which includes a first obtaining unit 1101, a building unit 1102, a training unit 1103, a second obtaining unit 1104, a detecting unit 1105 and a counting unit 1106, and the specific functions of each unit are as follows:
a first acquisition unit 1101 for acquiring a shrimp larvae image set;
a constructing unit 1102, configured to construct a key point detection network model;
a training unit 1103, configured to input the shrimp larvae image set into the key point detection network model for training, to obtain a shrimp larvae key point detection network model;
a second acquiring unit 1104, configured to acquire a shrimp larva image to be counted;
the detecting unit 1105 is used for inputting the shrimp larvae image to be counted into the shrimp larvae key point detecting network model to obtain key point detecting results;
a counting unit 1106, configured to count the shrimp larvae according to the detection result of the key point;
the shrimp larvae counting system based on the key point detection in the embodiment further comprises a visualization unit, wherein the visualization unit is used for realizing the visual marking of the shrimp larvae according to the key point detection result.
Example 4:
as shown in fig. 12, the present embodiment provides a computer apparatus including a processor 1202, a memory, an input device 1203, a display device 1204, and a network interface 1205 connected by a system bus 1201. Wherein the processor 1202 is configured to provide computing and controlling capabilities, the memory includes a nonvolatile storage medium 1206 and an internal memory 1207, the nonvolatile storage medium 1206 stores an operating system, a computer program and a database, the internal memory 1207 provides an environment for the operating system and the computer program in the nonvolatile storage medium 1206 to run, and when the computer program is executed by the processor 1202, the shrimp fry counting method of the above embodiment 1 and/or 2 is implemented as follows:
acquiring a shrimp larva image set;
constructing a key point detection network model;
inputting the shrimp larvae image set into a key point detection network model for training to obtain a shrimp larvae key point detection network model;
acquiring a shrimp larva image to be counted;
inputting the shrimp larva image to be counted into a shrimp larva key point detection network model to obtain a key point detection result;
and counting the shrimp larvae according to the detection result of the key points.
Example 5:
the present embodiment provides a storage medium, which is a computer-readable storage medium, and stores a computer program, and when the computer program is executed by a processor, the method for counting young shrimps according to the foregoing embodiment 1 and/or 2 is implemented as follows:
acquiring a shrimp larva image set;
constructing a key point detection network model;
inputting the shrimp larvae image set into a key point detection network model for training to obtain a shrimp larvae key point detection network model;
acquiring a shrimp fry image to be counted;
inputting the shrimp larva image to be counted into a shrimp larva key point detection network model to obtain a key point detection result;
and counting the shrimp larvae according to the detection result of the key points.
It should be noted that the computer readable storage medium of the present embodiment may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present embodiment, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this embodiment, however, a computer readable signal medium may include a propagated data signal with a computer readable program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable storage medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable storage medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer-readable storage medium may be written with a computer program for performing the present embodiments in one or more programming languages, including an object-oriented programming language such as Java, Python or C++, and conventional procedural programming languages, such as C, or similar programming languages, or combinations thereof. The program may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
In conclusion, the key point detection network model constructed by the invention is suitable not only for accurately counting Penaeus vannamei larvae but also for accurately counting elongated, strip-shaped shrimp larvae; it improves the counting accuracy while extending the method to more types of shrimp larvae, and has wide application scenes.
The above description is only for the preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto, and any person skilled in the art can substitute or change the technical solution and the inventive concept of the present invention within the scope of the present invention.

Claims (10)

1. A shrimp fry counting method based on key point detection is characterized by comprising the following steps:
acquiring a shrimp fry image set;
constructing a key point detection network model;
inputting the shrimp larvae image set into a key point detection network model for training to obtain a shrimp larvae key point detection network model;
acquiring a shrimp larva image to be counted;
inputting the shrimp larva image to be counted into a shrimp larva key point detection network model to obtain a key point detection result;
and counting the shrimp larvae according to the detection result of the key points.
2. The shrimp fry counting method of claim 1, wherein the key point detection network model comprises a first subnet stage, a second subnet stage, a third subnet stage, a fourth subnet stage and a fifth subnet stage connected in sequence.
3. A shrimp larvae counting method according to claim 2, wherein the first subnet stage comprises two first convolution layers;
the second subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence, and the last first module is connected with the second module;
the third subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence, and the last first module is connected with the second module;
the fourth subnet stage comprises eight first modules and eight second modules, wherein the eight first modules are connected in sequence, and the first module at the last is connected with the second module;
the fifth subnet stage comprises two first modules and a second module, wherein the two first modules are connected in sequence, and the last first module is connected with the second module;
the first module comprises four bottleneck sub-modules, and the second module is a transition fusion module.
4. The shrimp fry counting method of claim 3 wherein the bottleneck submodule comprises a second convolutional layer, a third convolutional layer and a fourth convolutional layer;
the second convolution layer is followed by a Layer Normalization regularization layer and a ReLU activation layer;
the third convolution layer is followed by a Layer Normalization regularization layer and a ReLU activation layer;
the fourth convolution Layer is followed by a Layer Normalization regularization Layer;
the convolution kernel size of the first convolution layer and the third convolution layer is 3 x 3;
the convolution kernel size of the second convolution layer and the fourth convolution layer is 1 × 1.
5. The shrimp larvae counting method of claim 3 wherein said transition fusion module comprises one or more of an upsampling operation, a downsampling operation, and a convolution operation;
the up-sampling operation uses a linear interpolation algorithm, followed by a convolution layer with a 1×1 convolution kernel for smoothing;
the downsampling operation is convolved with a convolution layer with a convolution kernel size of 3 x 3;
the convolution operation uses a convolution layer with a convolution kernel size of 3 x 3 for convolution.
6. The shrimp larvae counting method according to claim 1, wherein the shrimp larvae image set is input into a key point detection network model for training to obtain the shrimp larvae key point detection network model, and the method specifically comprises the following steps:
carrying out key point labeling on the shrimp larvae image set to obtain a shrimp larvae key point labeling image set;
converting the shrimp larvae key point annotation image set into label file data;
converting the label file data into key point detection format data;
generating a Gaussian heatmap set according to the key point detection format data;
and training the key point detection network model by using the Gaussian heat map set and the shrimp larvae image set so as to obtain the shrimp larvae key point detection network model.
7. The shrimp larvae counting method according to claim 6, wherein the key point labeling is performed on the shrimp larvae image set, and specifically comprises;
marking key points on the middle part of each shrimp larva in the shrimp larva image set;
or labeling key points on the head and tail of each shrimp larva in the shrimp larva image set;
or labeling key points on the head, the middle and the tail of each young shrimp in the young shrimp image set.
8. A young shrimp counting system based on keypoint detection, the system comprising:
the first acquisition unit is used for acquiring a shrimp larva image set;
the construction unit is used for constructing a key point detection network model;
the training unit is used for inputting the shrimp larvae image set into the key point detection network model for training to obtain a shrimp larvae key point detection network model;
the second acquisition unit is used for acquiring the shrimp larvae image to be counted;
the detection unit is used for inputting the shrimp larva images to be counted into the shrimp larva key point detection network model to obtain key point detection results;
and the counting unit is used for counting the shrimp larvae according to the key point detection result.
9. A computer device comprising a processor and a memory for storing processor-executable programs, wherein the processor, when executing a program stored in the memory, implements the shrimp fry counting method of any one of claims 1-7.
10. A storage medium storing a program which, when executed by a processor, implements the shrimp fry counting method of any one of claims 1 to 7.
CN202210664948.1A 2022-06-14 2022-06-14 Shrimp larvae counting method, system, equipment and storage medium based on key point detection Pending CN115222666A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210664948.1A CN115222666A (en) 2022-06-14 2022-06-14 Shrimp larvae counting method, system, equipment and storage medium based on key point detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210664948.1A CN115222666A (en) 2022-06-14 2022-06-14 Shrimp larvae counting method, system, equipment and storage medium based on key point detection

Publications (1)

Publication Number Publication Date
CN115222666A (en) 2022-10-21

Family

ID=83607694

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210664948.1A Pending CN115222666A (en) 2022-06-14 2022-06-14 Shrimp larvae counting method, system, equipment and storage medium based on key point detection

Country Status (1)

Country Link
CN (1) CN115222666A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937169A (en) * 2022-12-23 2023-04-07 广东创新科技职业学院 Shrimp fry counting method and system based on high resolution and target detection


Similar Documents

Publication Publication Date Title
Yi et al. ASSD: Attentive single shot multibox detector
Cao et al. An improved faster R-CNN for small object detection
US11670071B2 (en) Fine-grained image recognition
Zhou et al. BOMSC-Net: Boundary optimization and multi-scale context awareness based building extraction from high-resolution remote sensing imagery
US11734851B2 (en) Face key point detection method and apparatus, storage medium, and electronic device
CN105612554B (en) Method for characterizing the image obtained by video-medical equipment
US10248854B2 (en) Hand motion identification method and apparatus
CN108399386A (en) Information extracting method in pie chart and device
CN108920580A (en) Image matching method, device, storage medium and terminal
CN110119148A (en) A kind of six-degree-of-freedom posture estimation method, device and computer readable storage medium
Fan et al. Pointly-supervised panoptic segmentation
CN112784750B (en) Fast video object segmentation method and device based on pixel and region feature matching
CN112634368A (en) Method and device for generating space and OR graph model of scene target and electronic equipment
US20230401706A1 (en) Method for detecting a rib with a medical image, device, and medium
Xu et al. AdaZoom: Towards scale-aware large scene object detection
CN114565035A (en) Tongue picture analysis method, terminal equipment and storage medium
CN115222666A (en) Shrimp larvae counting method, system, equipment and storage medium based on key point detection
CN115457364A (en) Target detection knowledge distillation method and device, terminal equipment and storage medium
Wang et al. SAR ship detection in complex background based on multi-feature fusion and non-local channel attention mechanism
CN113537026A (en) Primitive detection method, device, equipment and medium in building plan
Zhang et al. Multilevel feature context semantic fusion network for cloud and cloud shadow segmentation
CN116342888A (en) Method and device for training segmentation model based on sparse labeling
CN116433722A (en) Target tracking method, electronic device, storage medium, and program product
CN114494999B (en) Double-branch combined target intensive prediction method and system
CN112750124B (en) Model generation method, image segmentation method, model generation device, image segmentation device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination