CN111753775B - Fish growth assessment method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111753775B
CN111753775B (application CN202010608682.XA)
Authority
CN
China
Prior art keywords
fish
images
frames
center point
continuous
Prior art date
Legal status: Active
Application number
CN202010608682.XA
Other languages
Chinese (zh)
Other versions
CN111753775A (en)
Inventor
张为明
Current Assignee
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202010608682.XA priority Critical patent/CN111753775B/en
Publication of CN111753775A publication Critical patent/CN111753775A/en
Application granted granted Critical
Publication of CN111753775B publication Critical patent/CN111753775B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a fish growth assessment method, device, equipment and storage medium. The method comprises: acquiring N consecutive frames of fish images, wherein N is an integer greater than 1; performing the following processing on each two consecutive frames of fish images: extracting the features of the fish in the two frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish; obtaining the motion trajectory of each fish according to the identifiers of the fish in the N frames of fish images; and evaluating the growth trend of the fish according to its motion trajectory. The method is used to improve the accuracy of fish growth assessment.

Description

Fish growth assessment method, device, equipment and storage medium
Technical Field
The application relates to the field of aquaculture, in particular to a fish growth assessment method, a device, equipment and a storage medium.
Background
Analysis of the aquatic industry shows that world aquaculture is most developed in Asia, which accounts for nearly 90% of global aquaculture. China is one of the major aquaculture countries in Asia and among the earliest countries in the world to practice aquaculture.
In order to create green, healthy and safe aquaculture, the growth of fish needs to be observed. By observing their growth, problems can be dealt with in time, ensuring that the fish grow healthily. In the usual approach, an underwater camera continuously captures images of the fish shoal, suitable fish are detected by an algorithm and their body-size information is obtained, and the body-size information from all the images is then statistically analysed. This approach evaluates growth based on the fish information in a single image; the growth trend cannot be comprehensively estimated, the view is one-sided, and the evaluation accuracy is poor.
In addition, the prior art also includes methods that segment targets in images, track them with traditional algorithms, and estimate the growth of the fish from the tracking result. These methods ignore the continuity of the time sequence and match the same target across multi-frame images only by rules, so targets are easily missed, and the effect is particularly poor when the frame rate is low.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for evaluating the growth of fish, which are used for improving the accuracy of the growth evaluation of the fish.
In a first aspect, the application provides a method of assessing the growth of fish comprising:
acquiring N consecutive frames of fish images, wherein N is an integer greater than 1;
performing the following processing on each two consecutive frames of fish images: extracting the features of the fish in the two frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish;
obtaining the motion trajectory of each fish according to the identifiers of the fish in the N frames of fish images;
and evaluating the growth trend of the fish according to its motion trajectory.
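The four steps above can be sketched as a minimal tracking loop. This is an illustration only; the function names, the detection representation, and the callable interfaces are assumptions, not part of the claims:

```python
def assess_growth(frames, extract, match, evaluate):
    """Track fish across N consecutive frames and evaluate growth.

    frames: a list of N consecutive fish images (N > 1).
    extract/match/evaluate: assumed callables standing in for the claimed
    feature extraction, same-fish identification, and growth evaluation.
    """
    assert len(frames) > 1, "N must be an integer greater than 1"
    tracks = {}            # fish identifier -> per-frame detections (trajectory)
    next_id = 0
    prev = None            # detections of the previous frame, with identifiers
    for frame in frames:
        dets = extract(frame)                  # features/positions of each fish
        ids = []
        for det in dets:
            # identify the same fish in the two consecutive frames
            fid = match(prev, det) if prev else None
            if fid is None:                    # unseen fish: fresh identifier
                fid = next_id
                next_id += 1
            tracks.setdefault(fid, []).append(det)
            ids.append((fid, det))
        prev = ids
    # one motion trajectory per identifier; growth is evaluated from them
    return evaluate(tracks)
```

With a toy 1-D "detection" (a single coordinate) and nearest-position matching, two fish tracked over three frames yield two trajectories.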
Optionally, evaluating the growth trend of the fish according to its motion trajectory comprises:
identifying fish with abnormal motion according to their motion trajectories, and filtering out the trajectories of the fish with abnormal motion;
and evaluating the growth trend of the fish according to the trajectories remaining after filtering.
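A minimal sketch of this optional step, under stated assumptions: "abnormal motion" is taken to mean a per-frame center displacement above a hypothetical threshold, and the "growth trend" is taken as the mean change in body width over the kept trajectories (neither choice is specified by the source):

```python
def evaluate_growth(tracks, max_step=50.0):
    """Filter fish with abnormal motion, then estimate a growth trend.

    tracks maps a fish identifier to a list of (cx, cy, width) samples,
    one per frame. max_step is an assumed displacement threshold.
    """
    kept = {}
    for fid, tr in tracks.items():
        steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                 for (x1, y1, _), (x2, y2, _) in zip(tr, tr[1:])]
        if all(s <= max_step for s in steps):   # motion looks normal
            kept[fid] = tr
    # growth trend: mean width change from first to last frame, per kept fish
    deltas = [tr[-1][2] - tr[0][2] for tr in kept.values() if len(tr) > 1]
    trend = sum(deltas) / len(deltas) if deltas else 0.0
    return kept, trend
```

A fish that jumps 200 units between frames is discarded as abnormal, and the trend is computed from the remaining trajectory only.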
Optionally, extracting the features of the fish in the two consecutive frames of fish images and identifying the same fish across them comprises:
extracting the features of the fish in the two consecutive frames, acquiring the position information of fish with similar features in the two frames, and determining the offset of the center points of the fish with similar features according to the position information;
when the offset of the center point is within a preset range, judging that the fish with similar features are the same fish;
wherein the center point of a fish is the center point of its circumscribed rectangle.
Optionally, extracting the features of the fish in the two consecutive frames of fish images, acquiring the position information of each fish with similar features, and determining the offset of their center points according to the position information, comprises:
inputting the two consecutive frames of fish images into a detection tracking model;
extracting the features of the fish in the two frames through the detection tracking model, acquiring the position information of each fish with similar features, and determining the offset of their center points according to the position information;
and acquiring the features of the fish, the center point of the fish, and the offset of the center point of the same fish, as output by the detection tracking model.
Optionally, the training process of the detection tracking model includes:
acquiring a sample image set, wherein the sample image set comprises Q continuous sample images and preset marking data, S sample images form a group of sample images, S is smaller than or equal to Q, and the preset marking data comprises: the true characteristics of the fish, the true center point of the fish, and the offset of the true center point of the same fish;
the following training process is performed on each group of sample images in the sample image set respectively:
inputting the group of sample images into an initial detection tracking model, and respectively carrying out the following processing on each two continuous frames of sample images: obtaining the characteristics of the fish, the center point of the fish and the offset of the center point of the same fish, which are output by the initial detection tracking model;
and calculating the loss function according to the output of the initial detection tracking model and the preset marking data, back-propagating the gradient to the initial detection tracking model, and, after optimizing the model, acquiring the next group of sample images from the sample image set and repeating the training process until the calculation result of the loss function tends to be stable, whereupon the initial detection tracking model is taken as the final detection tracking model.
Optionally, acquiring the characteristics of the fish output by the detection tracking model includes:
respectively downsampling the two continuous frames of sample images, and extracting first characteristics of fish of each of the two downsampled continuous frames of sample images;
upsampling the downsampled two consecutive frames of sample images to extract respective second characteristics of fish of the upsampled two consecutive frames of sample images;
acquiring the heatmap features of either one of the two consecutive frames of sample images;
and obtaining the features of the fish according to the first features, the second features and the heatmap features.
Optionally, after extracting the features of the fish in the two consecutive frames of fish images, the method further includes:
and classifying the fish according to the characteristics of the fish, and distributing the same type of identification to the same type of fish.
Optionally, before inputting the set of sample images into the initial detection tracking model, further comprising:
adding a circumscribed rectangle to each fish in the sample image, and assigning the preset marking data to the circumscribed rectangle.
In a second aspect, the present application provides a fish growth assessment device comprising:
the first acquisition module is used for acquiring continuous N frames of fish images, wherein N is an integer greater than 1;
the extraction module is used for performing the following processing on each two consecutive frames of fish images: extracting the features of the fish in the two frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish;
the second acquisition module is used for acquiring the motion trail of the fish according to the identification of the fish in the N frames of fish images;
and the evaluation module is used for evaluating the growth trend of the fish according to the movement track of the fish.
In a third aspect, the present application provides an electronic device, comprising: the device comprises a processor, a communication assembly, a memory and a communication bus, wherein the processor, the communication assembly and the memory are communicated with each other through the communication bus; the memory is used for storing a computer program; the processor is used for executing the program stored in the memory to realize the fish growth evaluation method.
In a fourth aspect, the present application provides a computer readable storage medium storing a computer program which when executed by a processor implements the method of assessing growth of fish.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages. N consecutive frames of fish images are acquired, the feature extraction operation is performed on each two consecutive frames, and the same fish in the two frames is identified; this preserves the temporal continuity of the images and thus the accuracy of matching the same fish. The same identifier is assigned to the same fish, each fish is tracked according to its identifier across the N frames to obtain its motion trajectory, and the growth trend is evaluated from that trajectory. Compared with evaluation based on analysing single-frame images, continuous tracking yields relatively comprehensive fish information, so the evaluation result based on it is more accurate. In addition, this evaluation takes the continuity of the time sequence into account and tracks each fish under its own identifier, which avoids the problem that rule-based matching easily misses targets; the tracking effect is improved, and the result of evaluation based on the tracked motion trajectories is more accurate.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of a fish growth evaluation flow in an embodiment of the application;
FIG. 2 is a schematic diagram of a fish feature extraction operation according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a training process of a detection tracking model according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a fish growth evaluation apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides a fish growth evaluation method which can be applied to camera equipment that captures fish images, to intelligent terminals without camera functions, and to servers. The specific implementation of the method is shown in fig. 1:
step 101, acquiring continuous N frames of fish images, wherein N is an integer greater than 1.
Specifically, N consecutive frames of fish images are captured by a camera disposed at a fixed location; one or more cameras may be used. The user sets the shooting times of the camera according to actual needs so that it shoots at regular intervals; for the accuracy of the growth evaluation, the time interval between two adjacent shots may be set not to exceed a set value.
Step 102, performing the following processing on each two consecutive frames of fish images: extracting the features of the fish in the two frames, identifying the same fish across the two frames, and assigning the same identifier to the same fish.
In one embodiment, for N consecutive frames of fish images, the following processing is performed on each two consecutive frames of fish images:
extracting the features of the fish in the two consecutive frames, where the features of a fish include its width and height, its shape, its color, and so on; comparing the features extracted from the two frames to obtain the position information of each similar fish in the two frames; determining the offset of the center points of the fish with similar features according to the position information; and, when the offset of the center point is within a preset range, judging that the fish with similar features are the same fish. The center point of a fish is the center point of its circumscribed rectangle, two sides of which are parallel to a horizontal axis established with the camera as reference.
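The circumscribed-rectangle center and the offset test can be sketched directly (the box representation `(x1, y1, x2, y2)` and the Euclidean offset are assumptions for illustration):

```python
def center(box):
    """Center point of a fish's circumscribed (axis-aligned) rectangle."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def same_fish(box_prev, box_cur, max_offset):
    """Judge two detections to be the same fish when the offset of the
    rectangle's center point falls within the preset range max_offset."""
    (px, py), (cx, cy) = center(box_prev), center(box_cur)
    offset = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    return offset <= max_offset
```

A fish whose center moves by one pixel between frames passes the test with a preset range of 2; one that jumps across the image does not.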
In one embodiment, the detection tracking model is used to extract the features of the fish and to obtain the position information of the fish and the offset of their center points. The specific process is as follows: the two consecutive frames of fish images are input into the detection tracking model; the model extracts the features of the fish in the two frames, compares the features extracted from each frame, acquires the position information of fish with similar features, and determines the offset of their center points according to the position information; when the offset of the center point is within a preset range, the fish with similar features are judged to be the same fish. The detection tracking model outputs the features of the fish, the center point of the fish, and the offset of the center point of the same fish.
In one embodiment, to clearly describe the specific training process of the detection tracking model, a target detection algorithm, the CenterNet algorithm, is first introduced, and the tracking process is described in combination with it, as follows:
A frame of fish image is input into the CenterNet model. As shown in fig. 2, image features of the fish image are extracted for downsampling through a deep feature fusion network (Deep Layer Aggregation, DLA); 4x, 8x, 16x and 32x downsampling are taken as examples, i.e. the network extracts the 4x, 8x, 16x and 32x downsampled image features of the fish image. Based on the features obtained by downsampling, the downsampled fish image is repeatedly upsampled, the upsampled image features are extracted, and a feature fusion operation is performed to obtain the fused features at 4x downsampling, which are taken as the features of the fish in the image. The upsampling operation is formed by both deformable convolution and transposed convolution, to make the obtained image features more accurate.
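As an illustration only, the extract-downsample-upsample-fuse flow can be mimicked with numpy, using average pooling as a stand-in for the DLA backbone and nearest-neighbour repetition as a stand-in for the deformable/transposed convolutions (both substitutions are assumptions of this sketch, not the patent's network):

```python
import numpy as np

def downsample(img, k):
    """k-times downsampling by k x k average pooling (stand-in for DLA)."""
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(img, k):
    """k-times nearest-neighbour upsampling (stand-in for the deformable
    and transposed convolutions of the real network)."""
    return img.repeat(k, axis=0).repeat(k, axis=1)

def fuse_features(img):
    """Extract 4x/8x/16x/32x downsampled features, upsample each back to
    the 4x grid, and average them into one fused feature map."""
    maps = [upsample(downsample(img, k), k // 4) for k in (4, 8, 16, 32)]
    return sum(maps) / len(maps)
```

For a 64x64 input, each scale is brought back to the 16x16 (4x-downsampled) grid before fusion, matching the choice of the 4x level as the output resolution.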
The fused features at 4x downsampling are taken as the features of the fish because they combine the high-level semantic features and the low-level simple features of the fish image, and their feature resolution is the highest. Of course, this choice is not absolute; the user can determine the optimal fusion level according to the resolution and size of the image.
Further, the CenterNet model determines whether a target is a fish according to the fused features and, after determining that it is, outputs the center point and the features of the fish. When a frame of fish image is input to the CenterNet model, the input also includes preset marking data: the true features of the fish and the true center point of the fish. The output of the CenterNet model is compared against the true features and true center point, the loss function is calculated, the gradient is back-propagated to the model, the model is optimized, the next frame is input, and the training process is repeated until the calculation result of the loss function tends to be stable; the model is then taken as the final CenterNet model. Below, the width and height of the fish and pixel data are used as example features, but the features of the fish are not limited to these.
The calculation result of the loss function comprises: the classification loss, the center point offset loss, and the target width and height losses. Specifically:
Classification loss: judging whether the pixels in the acquired fish image correspond to fish according to the categories of the pixels. Taking the 4x-downsampled fish image as an example: after the features of the fish are extracted from the 4x-downsampled image, the fish in the image are determined, a Gaussian blur operation is performed on the region where each fish is located, and the CenterNet model predicts the category of every pixel. The extracted feature data contains a large amount of pixel data; for example, some pixels correspond to water, some to fish, some to debris, and so on. The categories predicted by the CenterNet model are compared against the true categories of the pixels in the fish image, the probability of success when a predicted pixel is a fish is determined, and the classification loss is calculated with a focal loss function, which for a positive location can be written as:

L = -(1 - y')^γ · log(y')

where γ is a hyper-parameter and y' represents the probability of success when the predicted pixel category is fish; a smaller L represents a smaller classification loss.
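A direct numeric sketch of that focal-loss expression for a single positive location (γ = 2 is an assumed default, as in the focal-loss literature):

```python
import math

def focal_loss(y_pred, gamma=2.0):
    """Focal loss for one positive location: -(1 - y')**gamma * log(y'),
    where y_pred is the predicted probability that the pixel is a fish."""
    return -((1.0 - y_pred) ** gamma) * math.log(y_pred)
```

The (1 - y')^γ factor down-weights confident predictions: a pixel predicted as fish with probability 0.9 contributes far less loss than one predicted at 0.5.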
Center point offset loss: taking the 4x-downsampled fish image as an example: because the CenterNet model outputs the center point on the 4x-downsampled image, mapping it back to the original image by 4x upsampling introduces a center point offset error, which is compensated according to the calculated offset loss. Specifically, the center point offset loss is calculated through a smooth L1 loss function:

smoothL1(x) = 0.5·x², if |x| < 1; |x| − 0.5, otherwise

where x = f(x) − y, f(x) is the true center point of the fish, y is the center point of the fish output by the CenterNet model, and x is the distance difference between the two.
Target width and height losses: again taking the 4x-downsampled fish image as an example: since the width output by the CenterNet model refers to the 4x-downsampled image, mapping it back to the original image by 4x upsampling introduces a target width loss, which is compensated according to the calculated loss. The target width loss is likewise calculated through the smooth L1 loss function (smoothL1(x) = 0.5·x² if |x| < 1, |x| − 0.5 otherwise), illustrated here for the width:

x = f(x) − y

where f(x) is the true width of the fish, y is the width of the fish output by the CenterNet model, and x is the difference between the two; the height loss is calculated in the same way, with the width replaced by the height.
The classification loss, the center point offset loss, and the target width and height losses are calculated as above; when the sum of these results tends to be stable, the CenterNet model training is complete.
The CenterNet algorithm was introduced in order to lead into the CenterTrack algorithm, which is implemented on the basis of CenterNet and is an improved version of it. The two differ as follows: the input of the CenterNet algorithm is a single image, and its output is the target width and height and the target center point; the input of the CenterTrack algorithm is two adjacent images together with the heatmap features of one of the frames, and its output is the target width and height, the offset of the target center point, and the center point of the same target. The CenterNet algorithm only detects targets in the current frame and is discontinuous in time; the CenterTrack algorithm detects over two consecutive frames, which solves the problem of temporal discontinuity. The application trains the detection tracking model using the improved CenterTrack algorithm as the base algorithm.
In one embodiment, the training process of the detection tracking model is as shown in fig. 3:
step 301, acquiring a sample image set, where the sample image set includes Q consecutive sample images and preset mark data, and S sample images form a group of sample images, S is smaller than or equal to Q, and the preset mark data includes: the true characteristics of the fish, the true center point of the fish, and the offset of the true center point of the same fish.
The true features of the fish include the width and height of the fish; this embodiment takes the width and height as an example, but the features are not limited to these and also include the outline of the fish, the texture of the fish, the shape of the fish, and so on.
In a specific embodiment, consecutive fish images are captured with a camera as sample images, and the fish in the sample images are framed with Labelme; specifically, a circumscribed rectangle is added to each fish in the sample image, and the preset marking data are attached to the circumscribed rectangle.
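A minimal sketch of one such annotation record and of the ground-truth center offset between two frames of the same fish (the field names and the box representation are assumptions, not the Labelme schema):

```python
def make_annotation(frame_idx, fish_id, box):
    """One annotation record for a fish's circumscribed rectangle:
    width, height and center point, in the spirit of the marking data
    described above (field names are assumed)."""
    x1, y1, x2, y2 = box
    return {"frame": frame_idx, "id": fish_id,
            "width": x2 - x1, "height": y2 - y1,
            "center": ((x1 + x2) / 2.0, (y1 + y2) / 2.0)}

def center_offset(prev_ann, cur_ann):
    """Offset of the true center point of the same fish between two frames."""
    (px, py), (cx, cy) = prev_ann["center"], cur_ann["center"]
    return (cx - px, cy - py)
```

Two annotations of the same fish in consecutive frames directly yield the "offset of the true center point of the same fish" used as a training label.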
Step 302, performing the following training procedure for each group of sample images in the sample image set, respectively: inputting a group of sample images into an initial detection tracking model, and respectively carrying out the following processing on each two continuous frames of sample images: and obtaining the characteristics of the fish, the center point of the fish and the offset of the center point of the same fish which are output by the initial detection tracking model.
The output features of the fish include the width and height of the fish; this embodiment takes the width and height as an example, but the features are not limited to these and also include the outline of the fish, the texture of the fish, the shape of the fish, and so on.
In a specific embodiment, a group of consecutive sample images is input into the initial detection tracking model, and the features of the fish are output by the model. The fused features are extracted in the same way as in the CenterNet model: the fused features of two consecutive fish images are extracted through DLA. Taking 4x, 8x, 16x and 32x downsampling as an example, the specific steps are: the two consecutive frames of sample images are each downsampled, and the first features of the fish are extracted from the two downsampled frames; the downsampled images are then upsampled, the second features of the fish are extracted from the two upsampled frames, and a feature fusion operation is performed to obtain the fused features.
In addition, the initial detection tracking model initializes either one of the two continuous frames of sample images to obtain a hotspot graph (heatmap) feature for that frame, and obtains the characteristics of the fish from the fusion features and the heatmap feature, where the heatmap feature is a single-channel feature generated with a Gaussian rendering function.
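The Gaussian rendering of the single-channel heatmap can be sketched as below, in the style of CenterNet-like detectors: each fish center is rendered as a 2D Gaussian peak. The grid size and sigma are illustrative choices, not values from the patent:

```python
import numpy as np

def render_heatmap(centers, h, w, sigma=2.0):
    """Render each (cx, cy) center as a Gaussian peak on an h x w grid."""
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for cx, cy in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        heat = np.maximum(heat, g)  # keep the strongest peak per pixel
    return heat

heat = render_heatmap([(8, 8), (20, 12)], h=32, w=32)
print(heat.max())  # 1.0, reached at each center
```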
Further, after the characteristics of the fish are obtained, the features extracted from the two continuous frames of sample images are compared. For fish with similar features, the position information in each of the two frames is obtained, and the offset of the center point is determined from that position information; when the offset of the center point is within a preset range, the fish with similar features are judged to be the same fish. Finally, the initial detection tracking model outputs the width and height of the fish, the center point of the fish, and the offset of the center point of the same fish.
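The "same fish" test just described can be sketched as a feature comparison followed by a center-offset check. The feature representation, tolerance, and offset threshold below are illustrative assumptions:

```python
def is_same_fish(fish_a, fish_b, feat_tol=0.1, max_offset=15.0):
    """fish_* = (feature_vector, (cx, cy)).

    Returns (same, offset): same is True when the features are similar
    and the center point offset falls within the preset range.
    """
    (fa, (xa, ya)), (fb, (xb, yb)) = fish_a, fish_b
    similar = all(abs(p - q) <= feat_tol for p, q in zip(fa, fb))
    offset = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    return similar and offset <= max_offset, offset

same, off = is_same_fish(([0.9, 0.4], (100, 80)), ([0.88, 0.42], (106, 88)))
print(same, round(off, 2))  # True 10.0
```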
Step 303: calculate the loss function from the output of the initial detection tracking model and the preset marking data, back-propagate the gradient into the initial detection tracking model to optimize it, obtain the next group of sample images from the sample image set, and repeat the training process until the calculated loss stabilizes; the resulting model is then taken as the final detection tracking model.
In a specific embodiment, the loss is computed and the gradient is repeatedly back-propagated into the initial detection tracking model to optimize it and determine the final detection tracking model, as follows: calculate the target classification loss to obtain a first result; calculate the center point offset loss to obtain a second result; calculate the target width and height loss to obtain a third result; and calculate the midpoint offset regression loss of the same target to obtain a fourth result. The first, second, third, and fourth results are summed, and when the sum stabilizes, training of the detection tracking model is deemed successful.
The target classification loss, the center point offset loss, and the target width and height loss are calculated in the same way as the corresponding loss functions of the CenterNet algorithm model. The midpoint offset regression loss of the same target is calculated with the smooth L1 loss function, whose standard form is:

smoothL1(x) = 0.5 x^2 if |x| < 1, and |x| - 0.5 otherwise,

where x = f(x) - y, f(x) is the true offset of the same fish, y is the offset of the same fish output by the detection tracking model, and x is thus the difference between the true offset of the same fish and the offset output by the CenterNet-based model.
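A minimal sketch of this four-part loss follows. Only the smooth L1 term is spelled out, matching the description above; the other three terms are passed in as placeholders standing in for the CenterNet classification, offset, and width/height losses:

```python
def smooth_l1(x):
    """Standard smooth L1 loss on a scalar residual x."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def total_loss(cls_loss, offset_loss, wh_loss, true_off, pred_off):
    # Fourth term: midpoint offset regression loss of the same target.
    track_loss = smooth_l1(true_off - pred_off)
    return cls_loss + offset_loss + wh_loss + track_loss

print(smooth_l1(0.5))   # 0.125  (quadratic region)
print(smooth_l1(3.0))   # 2.5    (linear region)
print(total_loss(0.2, 0.1, 0.3, 4.0, 3.5))  # 0.725
```

Training would stop once this summed loss stabilizes across groups of sample images.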
Specifically, during training the contrast of the hotspot (heatmap) images can be enhanced, for example by strengthening the differences in color shade, so that feature differences between sample images become clearer and the detection tracking model becomes more robust. The center points of the fish in the earlier of two continuous sample frames can be locally jittered by adding Gaussian noise to each center point, which improves center point localization and the matching of the same fish. In addition, a spurious peak can be randomly rendered into the heatmap with a certain probability to serve as a false positive sample, and a real peak can be randomly removed with a certain probability to serve as a false negative sample, improving the accuracy, discrimination, and sensitivity of the detection tracking model.
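The three augmentations just described can be sketched on the center points directly. The probabilities and noise scale below are illustrative, not values from the patent:

```python
import random

def augment_centers(centers, sigma=1.5, p_fp=0.1, p_fn=0.1, rng=None):
    """Jitter centers with Gaussian noise; randomly drop one (false
    negative) or add a spurious one (false positive)."""
    rng = rng or random.Random(0)
    out = []
    for cx, cy in centers:
        if rng.random() < p_fn:          # false negative: drop this center
            continue
        out.append((cx + rng.gauss(0, sigma),   # local jitter
                    cy + rng.gauss(0, sigma)))
    if rng.random() < p_fp:              # false positive: add a fake center
        out.append((rng.uniform(0, 100), rng.uniform(0, 100)))
    return out

aug = augment_centers([(10, 10), (40, 60)])
print(len(aug))  # centers may have been dropped or added
```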
In a specific embodiment, after the characteristics of the fish in the two continuous frames are extracted, the fish can also be classified according to those characteristics, and fish of the same type can be assigned the same type identification. When different types of fish are cultured together, this allows the growth trend of each type to be analyzed from the motion trajectories of that type, and an overall analysis of all the mixed-culture fish to be made from the per-type growth trends, providing effective reference data for mixed fish culture.
And 103, obtaining the motion trail of the fish according to the identification of the fish in the N frames of fish images.
Specifically, after the detection tracking model outputs the offset of the center point of the same fish, the same fish is associated across time by a greedy matching algorithm. Suppose the center point of a fish in the current frame is P and the output offset is d. The fish at position P is matched against fish within a range d in the previous frame relative to the current frame. If the match succeeds, the matched fish are given the same identification, and the motion trajectory of the fish is obtained from that identification; otherwise, a new identification is assigned to the new fish appearing in the image and a new motion trajectory is started.
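The greedy temporal association above can be sketched as follows: each current-frame detection looks for the nearest previous-frame detection within its predicted offset radius d; matched fish inherit the previous identification, unmatched fish get a fresh one. The data layout and ID scheme are illustrative:

```python
import itertools

def greedy_match(prev, curr, new_ids):
    """prev: {fish_id: (x, y)}; curr: [((x, y), d), ...].

    Returns {fish_id: (x, y)} for the current frame.
    """
    assigned, unused = {}, dict(prev)
    for (x, y), d in curr:
        best = min(
            unused,
            key=lambda i: (x - unused[i][0]) ** 2 + (y - unused[i][1]) ** 2,
            default=None,
        )
        if best is not None and ((x - unused[best][0]) ** 2 +
                                 (y - unused[best][1]) ** 2) ** 0.5 <= d:
            assigned[best] = (x, y)           # same fish: keep identification
            del unused[best]
        else:
            assigned[next(new_ids)] = (x, y)  # new fish: new identification
    return assigned

ids = itertools.count(100)
tracks = greedy_match({1: (10, 10), 2: (50, 50)},
                      [((12, 11), 5.0), ((90, 90), 5.0)], ids)
print(tracks)  # {1: (12, 11), 100: (90, 90)}
```

Here the fish near (12, 11) matches previous fish 1 within its radius, while the fish at (90, 90) matches nothing and receives the new identification 100.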
And 104, evaluating the growth trend of the fish according to the motion trail of the fish.
In a specific embodiment, fish with abnormal motion are identified from the motion trajectories, and the trajectories of those fish are filtered out; a fish that has not formed a motion trajectory, or whose trajectory is too short, is defined as abnormal. The growth trend of the fish is then estimated from the remaining trajectories. In the ideal case where there are no abnormal fish, the growth trend is estimated directly from all the motion trajectories. Aquaculture farmers can then adjust the feeding conditions and the growth environment of the fish according to the estimated growth trend.
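The filtering step can be sketched as below: trajectories that are missing or shorter than a minimum length mark their fish as abnormal and are removed before the growth trend is estimated. The per-track mean speed used here is one plausible stand-in for the trend input, not the patent's specific metric:

```python
def filter_tracks(tracks, min_len=3):
    """tracks: {fish_id: [(x, y), ...]} -> only tracks long enough to keep."""
    return {fid: t for fid, t in tracks.items() if t and len(t) >= min_len}

def mean_speed(track):
    # Average per-step displacement along one trajectory.
    dists = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(track, track[1:])]
    return sum(dists) / len(dists)

tracks = {1: [(0, 0), (3, 4), (6, 8)], 2: [(5, 5)], 3: []}
kept = filter_tracks(tracks)
print(sorted(kept))         # [1] - fish 2 and 3 are abnormal
print(mean_speed(kept[1]))  # 5.0
```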
According to the method provided by the embodiment of the application, N continuous frames of fish images are obtained, and feature extraction is performed on each pair of continuous frames to identify the same fish across the two frames. This preserves the temporal continuity of the images and thus the accuracy of matching the same fish. The same fish is assigned the same identification, each fish is tracked according to its identification across the N frames to obtain its motion trajectory, and the growth trend of the fish is estimated from that trajectory. Compared with estimating from a single-frame image, continuous tracking yields more comprehensive fish information, so the estimate based on that information is more accurate. Moreover, because this evaluation approach respects temporal continuity and tracks each fish under a consistent identification, it avoids the missed targets that rule-based matching is prone to, improving the tracking result and hence the accuracy of the evaluation based on the tracked motion trajectories.
The embodiment of the application also provides a fish growth assessment device. For the specific implementation of the device, reference may be made to the description of the method embodiments; repeated details are omitted. As shown in fig. 4, the device mainly comprises:
the first acquiring module 401 is configured to acquire consecutive N frames of fish images, where N is an integer greater than 1.
The extracting module 402 is configured to perform the following processing on each two consecutive frames of fish images: extracting the characteristics of fish in two continuous fish images, identifying the same fish in the two continuous fish images, and distributing the same identification for the same fish.
The second obtaining module 403 is configured to obtain a motion trail of the fish according to the identification of the fish in the N frames of fish images.
And the evaluation module 404 is used for evaluating the growth trend of the fish according to the movement track of the fish.
Specifically, the evaluation module 404 is specifically configured to obtain a fish with abnormal movement according to a movement track of the fish, and filter the movement track of the fish with abnormal movement; and evaluating the growth trend of the fish according to the motion trail of the fish after filtering.
Specifically, the extracting module 402 is configured to extract the characteristics of the fish in the two continuous frames of fish images, obtain the position information of each fish with similar characteristics in the two continuous frames of fish images, and determine the offset of the center point of the fish with similar characteristics according to the position information; when the offset of the center point of the fish is in a preset range, judging that the fish with similar characteristics is the same fish; wherein the center point of the fish is the center point of the circumscribed rectangle of the fish.
According to the device provided by the embodiment of the application, the first acquisition module 401 obtains N continuous frames of fish images, and the extraction module 402 performs feature extraction on each pair of continuous frames and identifies the same fish across the two frames, preserving the temporal continuity of the images and thus the accuracy of matching the same fish. The same fish is assigned the same identification; the second acquisition module 403 tracks each fish according to its identification across the N frames to obtain its motion trajectory; and finally the evaluation module 404 estimates the growth trend of the fish from that trajectory. Because this evaluation approach respects temporal continuity and tracks each fish under a consistent identification, it avoids the missed targets that rule-based matching is prone to, improving the tracking result and hence the accuracy of the evaluation based on the tracked motion trajectories.
Based on the same concept, the embodiment of the application also provides an electronic device, as shown in fig. 5, which mainly comprises: a processor 501, a communication component 502, a memory 503, and a communication bus 504, where the processor 501, the communication component 502, and the memory 503 communicate with each other via the communication bus 504. The memory 503 stores a program executable by the processor 501, and the processor 501 executes the program stored in the memory 503 to implement the following steps: acquire N continuous frames of fish images, where N is an integer greater than 1; perform the following processing on each two continuous frames of fish images: extract the characteristics of the fish in the two frames, identify the same fish across the two frames, and assign the same identification to the same fish; obtain the motion trajectory of each fish according to the identifications of the fish in the N frames of fish images; and evaluate the growth trend of the fish according to the motion trajectories.
The communication bus 504 mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 504 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in fig. 5, but this does not mean that there is only one bus or only one type of bus.
The communication component 502 is used for communication between the electronic device and other devices described above.
The memory 503 may include a random access memory (Random Access Memory, simply referred to as RAM) or may include a non-volatile memory (non-volatile memory), such as at least one magnetic disk memory. Optionally, the memory may also be at least one memory device located remotely from the aforementioned processor 501.
The processor 501 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field programmable gate array (Field-Programmable Gate Array, FPGA), or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components.
In a further embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform the fish growth assessment method described in the above embodiments.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, by a wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, microwave, etc.) means from one website, computer, server, or data center to another. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape, etc.), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the application to enable those skilled in the art to understand or practice the application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A method for assessing the growth of fish, comprising:
acquiring continuous N frames of fish images, wherein N is an integer greater than 1;
the following processing is respectively carried out on each two continuous frames of fish images: extracting the characteristics of fish in the two continuous frames of fish images, identifying the same fish in the two continuous frames of fish images, and distributing the same identification for the same fish;
according to the identification of the fish in the N frames of fish images, obtaining the motion trail of the fish;
and evaluating the growth trend of the fish according to the movement track of the fish.
2. The method for evaluating the growth of fish according to claim 1, wherein evaluating the growth trend of the fish based on the movement trace of the fish comprises:
according to the motion trail of the fish, obtaining the fish with abnormal motion, and filtering the motion trail of the fish with abnormal motion;
and evaluating the growth trend of the fish according to the motion trail of the fish after filtering.
3. The method of claim 2, wherein extracting features of fish in the two consecutive frames of fish images, identifying the same fish in the two consecutive frames of fish images, comprises:
extracting the characteristics of the fish in the two continuous frames of fish images, acquiring the position information of the fish with similar characteristics in the two continuous frames of fish images, and determining the offset of the center point of the fish with similar characteristics according to the position information;
when the offset of the center point of the fish is in a preset range, judging that the fish with similar characteristics is the same fish;
wherein the center point of the fish is the center point of the circumscribed rectangle of the fish.
4. A fish growth assessment method according to claim 3, wherein extracting the characteristics of the fish in the two consecutive frames of fish images, obtaining the position information of each of the fish with similar characteristics in the two consecutive frames of fish images, and determining the offset of the center point of the fish with similar characteristics according to the position information, comprises:
inputting the continuous two-frame fish images into a detection tracking model;
extracting the characteristics of the fish in the two continuous frames of fish images through the detection tracking model, acquiring the position information of each fish with similar characteristics in the two continuous frames of fish images, and determining the offset of the center point of the fish with similar characteristics according to the position information;
and acquiring the characteristics of the fish, the center point of the fish and the offset of the center point of the same fish, which are output by the detection tracking model.
5. The method of claim 4, wherein the training process of the detection tracking model comprises:
acquiring a sample image set, wherein the sample image set comprises Q continuous sample images and preset marking data, S sample images form a group of sample images, S is smaller than or equal to Q, and the preset marking data comprises: the true characteristics of the fish, the true center point of the fish, and the offset of the true center point of the same fish;
the following training process is performed on each group of sample images in the sample image set respectively:
inputting the group of sample images into an initial detection tracking model, and respectively carrying out the following processing on each two continuous frames of sample images: obtaining the characteristics of the fish, the center point of the fish and the offset of the center point of the same fish, which are output by the initial detection tracking model;
and calculating to obtain a calculation result of the loss function according to the output result of the initial detection tracking model and the preset mark data, reversely transmitting the gradient to the initial detection tracking model, acquiring the next group of sample images from the sample image set after optimizing the initial detection tracking model, and repeatedly executing the training process until the calculation result of the loss function tends to be stable, and taking the initial detection tracking model as the final detection tracking model.
6. The method of claim 5, wherein obtaining the characteristics of the fish output by the detection tracking model comprises:
respectively downsampling the two continuous frames of sample images, and extracting first characteristics of fish of each of the two downsampled continuous frames of sample images;
upsampling the downsampled two consecutive frames of sample images to extract respective second characteristics of fish of the upsampled two consecutive frames of sample images;
acquiring the hot spot map features of any one of the two continuous sample images;
and obtaining the characteristics of the fish according to the first characteristics, the second characteristics and the hotspot graph characteristics.
7. The method of assessing the growth of fish according to any one of claims 1-6, further comprising, after extracting the features of the fish in the two consecutive frames of fish images:
and classifying the fish according to the characteristics of the fish, and distributing the same type of identification to the same type of fish.
8. The method of claim 5, wherein before inputting the set of sample images into the initial detection tracking model, further comprising:
adding an external rectangle to the fish in the sample image, and distributing the preset marking data for the external rectangle.
9. A fish growth assessment device, comprising:
the first acquisition module is used for acquiring continuous N frames of fish images, wherein N is an integer greater than 1;
the extraction module is used for respectively carrying out the following processing on each two continuous frames of fish images: extracting the characteristics of fish in the two continuous frames of fish images, identifying the same fish in the two continuous frames of fish images, and distributing the same identification for the same fish;
the second acquisition module is used for acquiring the motion trail of the fish according to the identification of the fish in the N frames of fish images;
and the evaluation module is used for evaluating the growth trend of the fish according to the movement track of the fish.
10. An electronic device, comprising: the device comprises a processor, a communication assembly, a memory and a communication bus, wherein the processor, the communication assembly and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor for executing the program stored in the memory, implementing the fish growth assessment method according to any one of claims 1 to 8.
11. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the fish growth assessment method of any one of claims 1-8.
CN202010608682.XA 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium Active CN111753775B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010608682.XA CN111753775B (en) 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010608682.XA CN111753775B (en) 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111753775A CN111753775A (en) 2020-10-09
CN111753775B true CN111753775B (en) 2023-09-26

Family

ID=72678163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010608682.XA Active CN111753775B (en) 2020-06-29 2020-06-29 Fish growth assessment method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111753775B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232432B (en) * 2020-10-26 2023-04-11 西安交通大学 Security check X-ray image target detection and identification method based on improved central point detection
TWI801911B (en) * 2021-06-18 2023-05-11 國立臺灣海洋大學 Aquatic organism identification method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6160759A (en) * 1999-04-19 2000-12-12 Nestler; John Michael Method for determining probable response of aquatic species to selected components of water flow fields
CN106561532A (en) * 2016-11-08 2017-04-19 深圳技师学院 Method and device for monitoring activity of fish
CN108875647A (en) * 2018-06-22 2018-11-23 成都睿畜电子科技有限公司 A kind of motion track monitoring method and system based on livestock identity
CN109271694A (en) * 2018-09-06 2019-01-25 西安理工大学 Habitat recognition methods based on fish individual dynamic Simulation Techniques
TWI661770B (en) * 2018-05-31 2019-06-11 National Chin-Yi University Of Technology Intelligent deep learning agricultural and fishery training system
CN110476871A (en) * 2019-09-17 2019-11-22 浙江傲宋智能科技有限公司 A kind of cultured fishes growth monitoring system
CN110942045A (en) * 2019-12-05 2020-03-31 安徽信息工程学院 Intelligent fish tank feeding system based on machine vision
CN111325181A (en) * 2020-03-19 2020-06-23 北京海益同展信息科技有限公司 State monitoring method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10599922B2 (en) * 2018-01-25 2020-03-24 X Development Llc Fish biomass, shape, and size determination


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Application and prospects of computer vision technology in aquaculture; Xu Jianyu, Cui Shaorong, Miao Xiangwen, Liu Ying; Transactions of the Chinese Society of Agricultural Engineering (08); full text *

Also Published As

Publication number Publication date
CN111753775A (en) 2020-10-09


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant