CN113159075A - Insect identification method and device - Google Patents

Insect identification method and device Download PDF

Info

Publication number
CN113159075A
Authority
CN
China
Prior art keywords
insect
image
belongs
species
classifier
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110506217.XA
Other languages
Chinese (zh)
Inventor
汪建伟
邓科研
杨庆寅
李珏闻
郭冰洁
张真
刘星月
王立宇
贾莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Insect Police Technology Co ltd
Original Assignee
Beijing Insect Police Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Insect Police Technology Co ltd filed Critical Beijing Insect Police Technology Co ltd
Priority to CN202110506217.XA priority Critical patent/CN113159075A/en
Publication of CN113159075A publication Critical patent/CN113159075A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an insect identification method and device. The method comprises the following steps: an insect data set is prepared, in which the data labels are accurate to the species to which each insect belongs. Using entomological knowledge, the order and family labels of each insect image are derived from its species label. The insect image is input into a feature extractor, which adopts a convolutional neural network and produces features at several levels; the labels of the insect image are predicted from these multi-level features. During training, labels at the three levels of order, family and species are predicted, yielding three corresponding losses; the final loss used for back propagation is a weighted sum of the three loss values in a certain proportion. The method and device improve the prediction accuracy of the three-level labels (order, family and species) in the insect image classification task.

Description

Insect identification method and device
Technical Field
The invention relates to the field of artificial intelligence, in particular to an insect identification method and device.
Background
Pest disasters are a major threat to agriculture and forestry all over the world; severe pest disasters can cause great losses to agriculture and forestry and even destroy the local ecological balance. Some pest disasters occur because their early symptoms go undetected, so treatment begins only after the outbreak has already reached scale.
At present, most agricultural and forestry practitioners in China lack professional entomological knowledge and the ability to recognize species invasions and pest disasters. Even when they encounter insects that may damage agriculture or forestry, the insects are difficult to identify, so effective early warning of pest disasters is hard to achieve. Against this background, an accurate and efficient insect image recognition scheme is needed to cope with the current pest disaster problem.
Disclosure of Invention
The invention mainly aims to provide an insect identification method and device to solve the problem that insects cannot be effectively identified in the prior art.
In order to achieve the above object, according to one aspect of the present invention, there is provided an insect identification method including: collecting a target image of an insect to be identified; predicting the species to which the insect to be identified belongs in the target image through a target model, wherein the target model is obtained by training on insect images carrying insect labels, and the insect label comprises the order, the family and the species to which the insect in the insect image belongs; and deducing the family and the order to which the insect to be identified belongs from the predicted species.
Optionally, before predicting the species to which the insect to be identified belongs in the target image through the target model, the method further includes: acquiring an insect data set, wherein a plurality of insect images with insect labels are stored in the insect data set; and training the original model by using the insect data set to obtain a target model.
Optionally, the original model comprises a feature extractor, an order classifier, a family classifier, and a species classifier, wherein training the original model using the insect dataset comprises: extracting image features from the insect image using the feature extractor; inputting the image features extracted from the insect image into the order classifier, the family classifier and the species classifier, respectively; acquiring the order of the insect in the insect image predicted by the order classifier, the family predicted by the family classifier, and the species predicted by the species classifier; and optimizing parameters in the original model by comparing the order predicted by the order classifier with the order recorded in the insect label of the insect image, the family predicted by the family classifier with the family recorded in the insect label, and the species predicted by the species classifier with the species recorded in the insect label.
Optionally, optimizing the parameters in the original model by these comparisons includes: determining an order prediction loss by comparing the order predicted by the order classifier with the order recorded in the insect label of the insect image; determining a family prediction loss by comparing the family predicted by the family classifier with the family recorded in the insect label; determining a species prediction loss by comparing the species predicted by the species classifier with the species recorded in the insect label; carrying out a weighted summation of the order, family and species prediction losses to obtain the final loss; and using the final loss to calculate gradients and perform back propagation of parameter values in the original model.
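As an illustration of the structure this training uses, a minimal sketch assuming a PyTorch implementation follows; the ResNet-50 backbone standing in for the feature extractor, the feature width, and the class counts are illustrative assumptions, not values disclosed in the patent:

```python
import torch.nn as nn
import torchvision.models as models

class InsectClassifier(nn.Module):
    # Hypothetical three-head model: one shared feature extractor and three
    # independent classifiers for order, family and species.
    def __init__(self, num_orders=30, num_families=100, num_species=500):
        super().__init__()
        backbone = models.resnet50(weights=None)  # stand-in for the CNN feature extractor
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the fc layer
        feat_dim = 2048  # width of the ResNet-50 trunk output
        self.order_head = nn.Linear(feat_dim, num_orders)
        self.family_head = nn.Linear(feat_dim, num_families)
        self.species_head = nn.Linear(feat_dim, num_species)

    def forward(self, x):
        f = self.features(x).flatten(1)  # the same features feed all three classifiers
        return self.order_head(f), self.family_head(f), self.species_head(f)
```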
Optionally, prior to training the original model using the insect data set, the method further comprises: scaling the insect images in the insect data set to a preset size; and processing the pixel values in the scaled insect image according to the following formula to obtain an insect image with updated pixel values:

f(x) = (x − μ) / σ

where f(x) represents the updated pixel value, x is the original pixel value of the image, μ is the mean of the image pixels, and σ is the standard deviation of the image pixels.
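A minimal sketch of this preprocessing in Python, assuming Pillow and NumPy, per-image statistics, and the 448 × 448 size given in the embodiment below:

```python
import numpy as np
from PIL import Image

def preprocess(path, size=(448, 448)):
    # Scale to the preset size, then standardize: f(x) = (x - mu) / sigma.
    img = np.asarray(Image.open(path).convert("RGB").resize(size), dtype=np.float32)
    mu, sigma = img.mean(), img.std()
    return (img - mu) / (sigma + 1e-8)  # epsilon guards against uniform images
```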
In order to achieve the above object, according to an aspect of the present invention, there is also provided an insect identification device, including: an acquisition unit for acquiring a target image of the insect to be identified; a prediction unit for predicting the species to which the insect in the target image belongs through a target model, wherein the target model is obtained by training on insect images carrying insect labels, and the insect label comprises the order, the family and the species to which the insect in the insect image belongs; and an identification unit for deducing the family and the order to which the insect in the target image belongs from the predicted species.
According to another aspect of the present invention, there is also provided an electronic device including: at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present invention, there is also provided a non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the above-described method.
By applying the technical scheme of the invention, the corresponding order and family labels can be deduced from the species label of an insect image; the insect image is input into a feature extractor to extract features at different levels; during training, losses are calculated from the predictions of the labels at the three levels of order, family and species, the weighted sum of the three losses serves as the final loss, and the model is optimized by a back propagation algorithm; at prediction time, only the species output by the model is used, and the order and family to which the image belongs are inferred from that species output. The scheme improves the accuracy of insect image recognition and reduces the severity of model errors: even when the predicted species is wrong, the accuracy of the predicted order and family is higher than with methods that do not use this technique.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the invention without limiting it. In the drawings:
FIG. 1 shows a flow diagram of an insect image recognition method according to an embodiment of the invention;
FIG. 2 shows a schematic view of an insect image according to an embodiment of the invention;
FIG. 3 shows a schematic diagram of an insect image recognition model according to an embodiment of the invention;
FIG. 4 shows a schematic diagram of insect image recognition model training according to an embodiment of the invention;
FIG. 5 illustrates a schematic diagram of insect image recognition model prediction according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of an alternative insect identification device according to an embodiment of the present application; and
Fig. 7 is a block diagram of a terminal according to an embodiment of the present application.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances for describing embodiments of the invention herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, and it should be understood that when the terms "comprises" and/or "comprising" are used in this specification, they specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof, unless the context clearly indicates otherwise.
The insect image classification task may be viewed as a subtask under the image classification task. Compared with the traditional image classification task, the subtask has the following two characteristics: 1) many different kinds of insects have similar appearances; 2) according to biological classification, insects naturally have multiple levels of tags, and each insect has its unique corresponding tag of three levels of order, family and species.
For the first feature of the insect image classification task, the insect image classification task can be regarded as a branch of image classification, namely, a fine-grained image classification task. According to its second feature, it can be considered as a multi-level label classification task.
Deep learning is the main approach to image classification tasks, and its performance exceeds that of other machine learning image classification methods. In deep-learning-based image processing, image features can be extracted with a convolutional neural network. Tasks such as insect image classification, where the differences between categories are small, are called fine-grained image classification tasks in the image processing field; fine-grained methods are therefore better suited to insect image classification than ordinary image classification methods. Current fine-grained image classification methods usually enhance representation capability by extracting multi-level features from the image, improving classification performance. The PMG model is one of the best fine-grained classification methods at present. However, fine-grained image classification methods do not consider samples with multi-level labels, so prior entomological knowledge is not fully utilized. In the field of insect image recognition, how to improve model performance using this prior knowledge is a problem to be solved.
In view of the above-mentioned problems, according to an aspect of embodiments of the present application, there is provided an embodiment of a method for identifying insects. As shown in fig. 1:
Step S101, preparing an insect data set, where the labels of the insect images in the data set are accurate to the species to which the insect belongs.
The labels of the insect images are accurate to the species to which the insects belong, and the species label of each image is expanded into the three levels of order, family and species using entomological knowledge, as sketched below.
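The sketch assumes a hand-built taxonomy lookup in Python; the two example species and their taxa are well-known forestry pests used for illustration, not entries from the patent's data set:

```python
# Species -> (order, family), filled in from entomological knowledge.
SPECIES_TAXONOMY = {
    "Lymantria dispar": ("Lepidoptera", "Erebidae"),
    "Monochamus alternatus": ("Coleoptera", "Cerambycidae"),
}

def expand_label(species):
    # Expand a species-level label into the three-level (order, family, species) label.
    order, family = SPECIES_TAXONOMY[species]
    return {"order": order, "family": family, "species": species}
```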
Step S102, inputting the labeled insect data into the feature extractor for feature extraction, extracting features at multiple different levels.
Optionally, before inputting the labeled insect data into the feature extractor, all the images are scaled to the same size, and then pixels of the scaled insect images are normalized according to the mean and variance thereof, so as to obtain the scaled and normalized insect images.
The labels of the insect image are predicted from the features extracted by the feature extractor using classifiers. Three classifiers are set up: an order classifier, a family classifier and a species classifier. The three classifiers take the same features as input and independently predict the order, the family and the species to which the insect image belongs.
Step S103, predicting the labels at the three levels of order, family and species of the insect from the extracted multi-level features; an example of a training sample is shown in FIG. 2.
Step S104, calculating losses from the three prediction results (order, family and species) during training and optimizing the model with a back propagation algorithm.
The losses are calculated from the order, family and species predictions, the weighted sum of the three loss values in a certain proportion gives the final loss, and the gradient calculated from the final loss is used for back propagation.
Step S105, collecting a target image of the insect to be identified when model prediction is used.
Step S106, predicting the species to which the insect to be identified belongs in the target image through the target model, where the target model is obtained by training on insect images carrying insect labels, and the insect label comprises the order, the family and the species to which the insect in the insect image belongs.
Step S107, deducing the family and the order to which the insect to be identified belongs from the predicted species. That is, only the species label output by the model is used, and the order and family labels of the image are inferred from it.
When the above steps are performed, steps S101 to S104 are optional.
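A minimal sketch of this prediction path, reusing the hypothetical model and SPECIES_TAXONOMY table from the earlier sketches; all names and signatures here are assumptions:

```python
import torch

def predict(model, image_tensor, species_names):
    # Use only the species head; look the order and family up from the taxonomy.
    model.eval()
    with torch.no_grad():
        _, _, species_logits = model(image_tensor.unsqueeze(0))
    species = species_names[species_logits.argmax(dim=1).item()]
    order, family = SPECIES_TAXONOMY[species]  # hypothetical table from the earlier sketch
    return order, family, species
```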
By applying the technical scheme of the invention, the corresponding order and family labels can be deduced from the species label of an insect image; the insect image is input into a feature extractor to extract features at different levels; during training, losses are calculated from the predictions of the labels at the three levels of order, family and species, the weighted sum of the three losses serves as the final loss, and the model is optimized by a back propagation algorithm; at prediction time, only the species output by the model is used, and the order and family to which the image belongs are inferred from that species output. The scheme improves the accuracy of insect image recognition and reduces the severity of model errors: even when the predicted species is wrong, the accuracy of the predicted order and family is higher than with methods that do not use this technique.
As an alternative example, the following detailed description is provided to further describe the technical solution of the present application in conjunction with the following specific embodiments:
fig. 1 shows an insect image recognition method according to an embodiment of the present invention, including:
The insect image is scaled to 448 × 448 pixels and standardized according to the mean and standard deviation of its pixels, as shown in the following formula:

f(x) = (x − μ) / σ

where x is the image pixel value, μ is the image pixel mean, and σ is the image pixel standard deviation.
The standardized image is input into the feature extractor to extract features at multiple levels, and the extracted multi-level features are input into the classifiers to obtain the classification results.
In this embodiment, the feature extractor may be any kind of convolutional neural network. Because the difference between the insect image classification task classes is small, a fine-grained classification network is preferably adopted as the feature extractor.
Fig. 3 shows the model structure when a PMG model, one of the most advanced fine-grained classification models at present, is employed as the feature extractor. As shown in FIG. 3, the PMG network uses ResNeXt as the base network; the internal network structure of ResNeXt is shown in Table 1 below, where each convolutional layer is followed by batch normalization and ReLU activation:
TABLE 1
(Table 1 appears only as an image in the original publication; it lists the internal stages of ResNeXt, including the conv3, conv4 and conv5 blocks referenced below.)
To extract multi-level features, the feature outputs of the conv3, conv4 and conv5 blocks are extracted separately and further processed by pmg_conv blocks, where each pmg_conv block comprises a 1 × 1 convolutional layer and a 3 × 3 convolutional layer, each convolutional layer followed by batch normalization and a ReLU activation function.
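A minimal sketch of one such pmg_conv block, assuming PyTorch; the channel widths are left as parameters since the patent's table is not reproduced:

```python
import torch.nn as nn

def pmg_conv_block(in_ch, mid_ch, out_ch):
    # 1x1 convolution then 3x3 convolution, each with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
        nn.BatchNorm2d(mid_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )
```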
The process of feature extraction using the PMG network is shown in FIG. 3. The PMG model is used in combination with the multi-level classification loss designed by the application; the loss function uses cross-entropy, which achieved the best effect among all attempted variants. The multi-level classification loss proposed by the application is calculated as follows:
L(x) = Σ_{i=1}^{3} α_i · CrossEntropy(P_i(x), y_i)

where x is an insect image sample; y_1, y_2, y_3 are the order, family and species labels corresponding to the sample; P_i(x) are the three model predictions corresponding to order, family and species; and α_i are the corresponding loss weights, whose values are taken empirically. The gradient of the model parameters is calculated from this loss, and the model parameters are updated by gradient descent with the following update formula:
θ ← θ − α · ∇_θ J(θ)

where θ denotes the model parameters, J(θ) is the loss function with respect to θ, and α is the learning rate, whose value is taken empirically.
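The two formulas above can be read as the following sketch, assuming PyTorch; the weights α_i and the learning rate shown are placeholders chosen empirically, not values disclosed in the patent:

```python
import torch
import torch.nn.functional as F

def multilevel_loss(order_logits, family_logits, species_logits,
                    y_order, y_family, y_species, alphas=(1.0, 1.0, 2.0)):
    # Weighted sum of the three cross-entropy losses: L = sum_i alpha_i * CE(P_i(x), y_i)
    losses = (F.cross_entropy(order_logits, y_order),
              F.cross_entropy(family_logits, y_family),
              F.cross_entropy(species_logits, y_species))
    return sum(a * l for a, l in zip(alphas, losses))

# One gradient-descent step, theta <- theta - alpha * grad J(theta):
#   optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
#   loss = multilevel_loss(*model(images), y_order, y_family, y_species)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```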
FIG. 4 shows a training process of a batch of data using a PMG model as a feature extractor according to the present embodiment, including:
To make the model learn features at different granularities, the original input image is divided into 64, 16 and 4 equally sized blocks, respectively, and each division is then randomly recombined. Random recombination removes the associations between different image blocks, so the model focuses on extracting features inside each block rather than from the whole image. Together with the original input image, the recombined images yield 4 input images.
Four gradient descent steps are performed within one batch. For each of the four steps, a different input image is used as the model input, and the loss is calculated each time using a different set of classifiers; the correspondence between input images and classifiers is shown in FIG. 4.
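A minimal sketch of the jigsaw recombination these steps rely on, assuming PyTorch tensors; calling it with n = 8, 4 and 2 yields the 64-, 16- and 4-block inputs described above:

```python
import random
import torch

def jigsaw(images, n):
    # Split a (B, C, H, W) batch into n x n equal patches, shuffle the patches,
    # and reassemble; H and W must be divisible by n.
    _, _, h, w = images.shape
    ph, pw = h // n, w // n
    patches = [images[:, :, i*ph:(i+1)*ph, j*pw:(j+1)*pw]
               for i in range(n) for j in range(n)]
    random.shuffle(patches)  # destroys associations between different blocks
    rows = [torch.cat(patches[r*n:(r+1)*n], dim=3) for r in range(n)]
    return torch.cat(rows, dim=2)
```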
Fig. 5 shows the prediction method when the PMG model is used as the feature extractor in this embodiment: prediction is performed using only the species classifier, whose input is the concatenation of the multi-level features.
Experimental tests show that the insect image identification method provided by the application surpasses a plain PMG method on all three classification metrics on the test set. The reason is that during training the model learns not only how to classify the species but also how to judge the family and order of the insect. During testing, for the same species classification accuracy, the model using the multi-level classification loss achieves higher accuracy at the family and order levels than the model without it. The model using the multi-level classification loss also converges faster, which improves training efficiency.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present application.
According to another aspect of the embodiments of the present application, there is also provided an insect identification apparatus for implementing the above insect identification method. FIG. 6 is a schematic diagram of an alternative insect identification apparatus according to an embodiment of the present application, which, as shown in FIG. 6, may include:
the acquisition unit 61 is used for acquiring a target image of the insect to be identified;
the prediction unit 63 is configured to predict a species to which the insect belongs in the target image through a target model, where the target model is obtained by training an insect image carrying an insect tag, and the insect tag includes a purpose to which the insect belongs, a family to which the insect belongs, and a species to which the insect belongs in the insect image;
an identifying unit 65, configured to infer a family to which the insect belongs in the target image and an order to which the insect belongs in the target image, using the predicted species to which the insect belongs in the target image.
Optionally, the apparatus of the present application may further comprise:
an acquisition unit, configured to acquire an insect data set before predicting the species to which the insect in the target image belongs through the target model, wherein a plurality of insect images having insect tags are stored in the insect data set;
and the training unit is used for training the original model by using the insect data set to obtain a target model.
The original model comprises a feature extractor, an order classifier, a family classifier and a species classifier, wherein the training unit is further configured to: extracting image features from the insect image using the feature extractor; inputting image features extracted from the insect image into the order classifier, the family classifier and the species classifier, respectively; acquiring the order to which the insects belong in the insect image predicted by the order classifier, the family to which the insects belong in the insect image predicted by the family classifier, and the species to which the insects belong in the insect image predicted by the species classifier; optimizing parameters in the original model by comparing the order to which the insect predicted by the order classifier belongs with the order to which the insect recorded in the insect label of the insect image belongs, the family to which the insect predicted by the family classifier belongs with the family to which the insect recorded in the insect label of the insect image belongs, the species to which the insect predicted by the species classifier belongs, and the species to which the insect recorded in the insect label of the insect image belongs.
It should be noted here that the above modules correspond to the examples and application scenarios implemented by the corresponding method steps, but are not limited to the disclosure of the above embodiments. The modules, as a part of the apparatus, may run in a corresponding hardware environment and may be implemented by software or by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiments of the present application, there is also provided a server or a terminal for implementing the above insect identification method.
Fig. 7 is a block diagram of a terminal according to an embodiment of the present application. As shown in fig. 7, the terminal may include one or more processors 201 (only one is shown), a memory 203, and a transmission device 205; the terminal may further comprise an input-output device 207.
The memory 203 may be configured to store software programs and modules, such as the program instructions/modules corresponding to the insect identification method and apparatus in the embodiments of the present application. The processor 201 executes various functional applications and data processing by running the software programs and modules stored in the memory 203, thereby implementing the insect identification method described above. The memory 203 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 203 may further include memory located remotely from the processor 201, which may be connected to the terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 205 is used for receiving or sending data via a network, and can also be used for data transmission between the processor and the memory. Examples of the network may include wired and wireless networks. In one example, the transmission device 205 includes a network adapter (NIC) that can be connected to a router via a network cable and other network devices to communicate with the internet or a local area network. In another example, the transmission device 205 is a radio frequency (RF) module used for communicating with the internet wirelessly.
Wherein the memory 203 is specifically used for storing application programs.
The processor 201 may call the application stored in the memory 203 via the transmission means 205 to perform the following steps:
collecting a target image of an insect to be identified;
predicting the species to which the insect to be identified belongs in the target image through a target model, wherein the target model is obtained by training an insect image carrying an insect label, and the insect label comprises the order, the family and the species to which the insect belongs in the insect image;
and deducing the family to which the insect to be identified belongs and the order to which the insect to be identified belongs by using the predicted species to which the insect to be identified belongs.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
It can be understood by those skilled in the art that the structure shown in fig. 7 is only an illustration, and the terminal may be a terminal device such as a smart phone (e.g., an Android phone, an iOS phone, etc.), a tablet computer, a palm computer, a Mobile Internet Device (MID), a PAD, etc. FIG. 7 does not limit the structure of the electronic device; for example, the terminal may also include more or fewer components (e.g., network interfaces, display devices, etc.) than shown in FIG. 7, or have a different configuration than shown in FIG. 7.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing hardware associated with the terminal device, where the program may be stored in a computer-readable storage medium, and the storage medium may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
Embodiments of the present application also provide a storage medium. Optionally, in the present embodiment, the storage medium may be used to store program code for performing the above insect identification method.
Optionally, in this embodiment, the storage medium may be located on at least one of a plurality of network devices in a network shown in the above embodiment.
Optionally, in this embodiment, the storage medium is configured to store program code for performing the following steps:
collecting a target image of an insect to be identified;
predicting the species to which the insect to be identified belongs in the target image through a target model, wherein the target model is obtained by training an insect image carrying an insect label, and the insect label comprises the order, the family and the species to which the insect belongs in the insect image;
and deducing the family to which the insect to be identified belongs and the order to which the insect to be identified belongs by using the predicted species to which the insect to be identified belongs.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments, and this embodiment is not described herein again.
Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The integrated unit in the above embodiments, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in the above computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including instructions for causing one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application.
In the above embodiments of the present application, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed client may be implemented in other manners. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one type of division of logical functions, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, units or modules, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.

Claims (10)

1. A method of identifying an insect, comprising:
collecting a target image of an insect to be identified;
predicting the species to which the insect to be identified belongs in the target image through a target model, wherein the target model is obtained by training an insect image carrying an insect label, and the insect label comprises the order, the family and the species to which the insect belongs in the insect image;
and deducing the family to which the insect to be identified belongs and the order to which the insect to be identified belongs by using the predicted species to which the insect to be identified belongs.
2. The method of claim 1, wherein prior to predicting, by a target model, a species in the target image to which the insect to be identified belongs, the method further comprises:
obtaining an insect dataset, wherein a plurality of said insect images having insect tags are stored in said insect dataset;
and training an original model by using the insect data set to obtain the target model.
3. The method of claim 2, wherein the raw model comprises a feature extractor, an order classifier, a family classifier, and a species classifier, wherein training the raw model using the insect dataset comprises:
extracting image features from the insect image using the feature extractor;
inputting image features extracted from the insect image into the order classifier, the family classifier and the species classifier, respectively;
acquiring the order to which the insects belong in the insect image predicted by the order classifier, the family to which the insects belong in the insect image predicted by the family classifier, and the species to which the insects belong in the insect image predicted by the species classifier;
optimizing parameters in the original model by comparing the order to which the insect predicted by the order classifier belongs with the order to which the insect recorded in the insect label of the insect image belongs, the family to which the insect predicted by the family classifier belongs with the family to which the insect recorded in the insect label of the insect image belongs, the species to which the insect predicted by the species classifier belongs, and the species to which the insect recorded in the insect label of the insect image belongs.
4. The method of claim 3, wherein optimizing the parameters in the original model by comparing the order to which the insect predicted by the order classifier belongs to the order to which the insect recorded in the insect label of the insect image belongs, the family to which the insect predicted by the family classifier belongs to the family to which the insect recorded in the insect label of the insect image belongs, the species to which the insect predicted by the species classifier belongs to the species to which the insect recorded in the insect label of the insect image belongs comprises:
determining an order prediction loss by comparing the order to which the insect predicted by the order classifier belongs with the order to which the insect recorded in the insect tag of the insect image belongs, determining a family prediction loss by comparing the family to which the insect predicted by the family classifier belongs with the family to which the insect recorded in the insect tag of the insect image belongs, and determining a species prediction loss by comparing the species to which the insect predicted by the species classifier belongs with the species to which the insect recorded in the insect tag of the insect image belongs;
performing weighted summation on the order prediction loss, the family prediction loss and the species prediction loss to obtain a final loss;
the final loss is used to calculate gradients and back-propagation of parameter values in the original model.
5. The method of claim 2, wherein prior to training an original model using the insect data set, the method further comprises:
scaling the insect image in the insect data set according to a preset size; and then, carrying out standardization treatment on pixels of the scaled insect image according to the mean value and the variance of the pixels to obtain the scaled and standardized insect image.
6. An insect identification device, comprising:
the acquisition unit is used for acquiring a target image of the insect to be identified;
the prediction unit is used for predicting the species to which the insects belong in the target image through a target model, wherein the target model is obtained by training an insect image carrying an insect label, and the insect label comprises the order, the family and the species to which the insects belong in the insect image;
and the identification unit is used for deducing the family to which the insect belongs in the target image and the order to which the insect belongs in the target image by using the predicted species to which the insect belongs in the target image.
7. The apparatus of claim 6, further comprising:
an acquisition unit, configured to acquire an insect data set before predicting the species to which the insect in the target image belongs through a target model, wherein a plurality of insect images having insect tags are stored in the insect data set;
and the training unit is used for training the original model by using the insect data set to obtain a target model.
8. The apparatus of claim 7, wherein the raw model comprises a feature extractor, an order classifier, a family classifier, and a species classifier, wherein the training unit is further configured to:
extracting image features from the insect image using the feature extractor;
inputting image features extracted from the insect image into the order classifier, the family classifier and the species classifier, respectively;
acquiring the order to which the insects belong in the insect image predicted by the order classifier, the family to which the insects belong in the insect image predicted by the family classifier, and the species to which the insects belong in the insect image predicted by the species classifier;
optimizing parameters in the original model by comparing the order to which the insect predicted by the order classifier belongs with the order to which the insect recorded in the insect label of the insect image belongs, the family to which the insect predicted by the family classifier belongs with the family to which the insect recorded in the insect label of the insect image belongs, the species to which the insect predicted by the species classifier belongs, and the species to which the insect recorded in the insect label of the insect image belongs.
9. An electronic device, comprising:
at least one processor, and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202110506217.XA 2021-05-10 2021-05-10 Insect identification method and device Pending CN113159075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110506217.XA CN113159075A (en) 2021-05-10 2021-05-10 Insect identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110506217.XA CN113159075A (en) 2021-05-10 2021-05-10 Insect identification method and device

Publications (1)

Publication Number Publication Date
CN113159075A true CN113159075A (en) 2021-07-23

Family

ID=76874162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110506217.XA Pending CN113159075A (en) 2021-05-10 2021-05-10 Insect identification method and device

Country Status (1)

Country Link
CN (1) CN113159075A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871885A (en) * 2019-01-28 2019-06-11 南京林业大学 A kind of plants identification method based on deep learning and Plant Taxonomy
KR20200095254A (en) * 2019-01-31 2020-08-10 (주)엔에스데블 Medical image tagging and categorization system and method using Multi-label classification
CN111832642A (en) * 2020-07-07 2020-10-27 杭州电子科技大学 Image identification method based on VGG16 in insect taxonomy
CN111931581A (en) * 2020-07-10 2020-11-13 威海精讯畅通电子科技有限公司 Agricultural pest identification method based on convolutional neural network, terminal and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Cao Xiangying et al., "Plant image recognition based on a family-priority strategy" (基于科优先策略的植物图像识别), Computer Applications (计算机应用), no. 11
Chen Xiaolin et al., "Automatic identification technology for insect images" (昆虫图像自动鉴别技术), Entomological Knowledge (昆虫知识), no. 02

Similar Documents

Publication Publication Date Title
CN110046631B (en) System and method for automatically inferring changes in spatiotemporal images
CN108427708B (en) Data processing method, data processing apparatus, storage medium, and electronic apparatus
CN110147722A (en) A kind of method for processing video frequency, video process apparatus and terminal device
CN111898739B (en) Data screening model construction method, data screening method, device, computer equipment and storage medium based on meta learning
JP7076681B2 (en) Image processing methods and equipment, and training methods for neural network models
CN108564102A (en) Image clustering evaluation of result method and apparatus
EP3924876A1 (en) Automated unsupervised localization of context sensitive events in crops and computing extent thereof
CN112581438B (en) Slice image recognition method and device, storage medium and electronic equipment
CN111931809A (en) Data processing method and device, storage medium and electronic equipment
CN111104954A (en) Object classification method and device
CN111160096A (en) Method, device and system for identifying poultry egg abnormality, storage medium and electronic device
CN114419363A (en) Target classification model training method and device based on label-free sample data
CN112966758B (en) Crop disease, insect and weed identification method, device and system and storage medium
CN112016617B (en) Fine granularity classification method, apparatus and computer readable storage medium
CN113939831A (en) Understanding deep learning models
CN109784403B (en) Method for identifying risk equipment and related equipment
Parez et al. Towards Sustainable Agricultural Systems: A Lightweight Deep Learning Model for Plant Disease Detection.
CN114299304A (en) Image processing method and related equipment
Raja Kumar et al. Novel segmentation and classification algorithm for detection of tomato leaf disease
CN117253192A (en) Intelligent system and method for silkworm breeding
CN112132231A (en) Object identification method and device, storage medium and electronic equipment
CN111414922B (en) Feature extraction method, image processing method, model training method and device
CN108764289B (en) Method and system for classifying UI (user interface) abnormal pictures based on convolutional neural network
CN113159075A (en) Insect identification method and device
CN111104952A (en) Method, system and device for identifying food types and refrigerator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination