CN113449674B - Pig face identification method and system


Info

Publication number
CN113449674B
CN113449674B (application CN202110783674.3A)
Authority
CN
China
Prior art keywords
pig face
pig
trained
position information
information
Prior art date
Legal status
Active
Application number
CN202110783674.3A
Other languages
Chinese (zh)
Other versions
CN113449674A (en)
Inventor
蔡艳婧
高小虎
曹春梅
陆健
周晓珏
谢书俊
Current Assignee
Jiangsu Vocational College of Business
Original Assignee
Jiangsu Vocational College of Business
Priority date
Filing date
Publication date
Application filed by Jiangsu Vocational College of Business
Priority to CN202110783674.3A
Publication of CN113449674A
Application granted
Publication of CN113449674B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

An embodiment of the invention provides a pig face identification method and system. A pig face image is obtained; a first pig face feature is predicted from the pig face image by a pre-trained pig face prediction model; a second pig face feature of the pig face image is extracted by a convolutional neural network; the first pig face feature and the second pig face feature are fused to obtain pig face feature information; and the identity information of the pig is identified from the pig face feature information by a pre-trained pig face identification model. Identifying the identity information of the pig from this pig face feature information is highly accurate.

Description

Pig face identification method and system
Technical Field
The invention relates to the technical field of computers, in particular to a pig face identification method and system.
Background
With the development of science and technology, artificial intelligence is being applied in fields such as medicine, the military, and agriculture. The rural economy depends largely on the livestock breeding industry, and improving the management of that industry is an effective way to invigorate the rural economy. For pig farming in particular, tracking each pig throughout its life is of great significance for food safety and pig-raising management.
However, because pigs grow quickly and their faces change rapidly, it is difficult to identify and track them over their whole lives. At present, pig face identification relies mainly on neural networks. Traditional neural network models adapt poorly: once the pig face changes, their identification accuracy drops.
Disclosure of Invention
The invention aims to provide a pig face identification method and a pig face identification system, which are used for solving the problems in the prior art.
In a first aspect, an embodiment of the present invention provides a pig face identification method, where the method includes:
obtaining a pig face image;
predicting a first pig face characteristic based on the pig face image through the pre-trained pig face prediction model;
extracting a second pig face characteristic of the pig face image through a convolutional neural network;
fusing the first pig face characteristic and the second pig face characteristic to obtain pig face characteristic information;
and identifying the identity information of the pig based on the pig face characteristic information through a pre-trained pig face identification model.
Optionally, the training method of the pig face prediction model includes:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a first recurrent neural network with the output of the trained first spectral clustering model;
training a second recurrent neural network with the output of the trained second spectral clustering model;
taking the sum of a first loss function of the trained first recurrent neural network and a second loss function of the trained second recurrent neural network as a third loss function;
training a long short-term memory (LSTM) network based on the output of the trained second recurrent neural network and the third loss function;
forming the trained pig face prediction model from the trained second spectral clustering model, the trained second recurrent neural network, and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model serves as the input of the trained second recurrent neural network, and the output of the trained second recurrent neural network serves as the input of the trained LSTM network.
Optionally, the training method of the pig face prediction model includes:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a long short-term memory (LSTM) network with the output of the trained first spectral clustering model;
forming the trained pig face prediction model from the trained second spectral clustering model and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model serves as the input of the trained LSTM network.
Optionally, the first pig face feature and the second pig face feature are fused to obtain pig face feature information, including:
mapping the first pig face features to a space where the second pig face features are located to obtain first mapping features;
and obtaining average characteristics of the first mapping characteristics and the second pig face characteristics, and taking the average characteristics as the pig face characteristic information.
Optionally, the pig face recognition model is a convolutional neural network.
In a second aspect, an embodiment of the present invention further provides a pig face identification system, where the system includes:
the acquisition image module is used for acquiring a pig face image;
the prediction module is used for predicting a first pig face characteristic based on the pig face image through a pre-trained pig face prediction model;
the extraction module is used for extracting a second pig face feature of the pig face image through a convolutional neural network;
the fusion module is used for fusing the first pig face characteristic and the second pig face characteristic to obtain pig face characteristic information;
and the identification module is used for identifying the identity information of the pig based on the pig face characteristic information through a pre-trained pig face identification model.
Optionally, the training method of the pig face prediction model includes:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a first recurrent neural network with the output of the trained first spectral clustering model;
training a second recurrent neural network with the output of the trained second spectral clustering model;
taking the sum of a first loss function of the trained first recurrent neural network and a second loss function of the trained second recurrent neural network as a third loss function;
training a long short-term memory (LSTM) network based on the output of the trained second recurrent neural network and the third loss function;
forming the trained pig face prediction model from the trained second spectral clustering model, the trained second recurrent neural network, and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model serves as the input of the trained second recurrent neural network, and the output of the trained second recurrent neural network serves as the input of the trained LSTM network.
Optionally, the training method of the pig face prediction model includes:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a long short-term memory (LSTM) network with the output of the trained first spectral clustering model;
forming the trained pig face prediction model from the trained second spectral clustering model and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model serves as the input of the trained LSTM network.
Optionally, the first pig face feature and the second pig face feature are fused to obtain pig face feature information, including:
mapping the first pig face features to a space where the second pig face features are located to obtain first mapping features;
and obtaining average characteristics of the first mapping characteristics and the second pig face characteristics, and taking the average characteristics as the pig face characteristic information.
Optionally, the pig face recognition model is a convolutional neural network.
Compared with the prior art, the embodiment of the invention achieves the following beneficial effects:
the embodiment of the invention provides a pig face identification method and a pig face identification system, wherein the pig face identification method comprises the following steps: obtaining a pig face image; predicting a first pig face characteristic based on the pig face image through a pre-trained pig face prediction model; extracting a second pig face characteristic of the pig face image through a convolutional neural network; fusing the first pig face characteristic and the second pig face characteristic to obtain pig face characteristic information; and identifying the identity information of the pig based on the pig face characteristic information through a pre-trained pig face identification model.
With this scheme: because a pig's growth period is short and its face changes quickly, directly extracting and matching pig face features in the traditional way identifies the pig inaccurately; the change of the pig face during growth, however, is traceable. The method therefore first predicts a pig face feature (the first pig face feature) and then fuses it with the actually extracted pig face feature (the second pig face feature), so the resulting pig face feature information represents more of the real characteristics of the pig face, and identifying the pig from this feature information is therefore highly accurate. In addition, a convolutional neural network (CNN) is used to extract the second pig face feature from the pig face image, so that the useful pig face features in the image can be extracted to the greatest extent.
Drawings
Fig. 1 is a flowchart of a pig face identification method according to an embodiment of the present invention.
Fig. 2 is a schematic block structure diagram of an electronic device according to an embodiment of the present invention.
The mark in the figure is: a bus 500; a receiver 501; a processor 502; a transmitter 503; a memory 504; a bus interface 505.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings.
Examples
The embodiment of the invention provides a pig face identification method, as shown in Fig. 1, comprising the following steps:
S101: a pig face image is obtained. The pig face image is captured by a camera arranged at the feeding opening of the pigsty and contains the pig's face.
S102: predicting a first pig face feature based on the pig face image through a pre-trained pig face prediction model.
S103: extracting a second pig face feature of the pig face image through a convolutional neural network.
S104: fusing the first pig face feature and the second pig face feature to obtain pig face feature information.
S105: identifying the identity information of the pig based on the pig face feature information through a pre-trained pig face identification model.
With this scheme: because a pig's growth period is short and its face changes quickly, directly extracting and matching pig face features in the traditional way identifies the pig inaccurately; the change of the pig face during growth, however, is traceable. The method therefore first predicts a pig face feature (the first pig face feature) and then fuses it with the actually extracted pig face feature (the second pig face feature), so the resulting pig face feature information represents more of the real characteristics of the pig face, and identifying the pig from this feature information is therefore highly accurate. In addition, a convolutional neural network (CNN) is used to extract the second pig face feature from the pig face image, so that the useful pig face features in the image can be extracted to the greatest extent.
In embodiments of the invention, the identity information of the pig is unique, for example the pig number.
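To make the flow of steps S101 to S105 above concrete, the following Python sketch wires the pipeline together. The function names, the callable interface assumed for the three models, and the element-wise averaging used as a stand-in for the fusion step are illustrative assumptions, not part of the original disclosure.

import numpy as np

def identify_pig(face_image, prediction_model, cnn_extractor, recognition_model):
    """Minimal sketch of the S101-S105 pipeline (models are assumed callable)."""
    first_feature = prediction_model(face_image)           # S102: predicted first pig face feature
    second_feature = cnn_extractor(face_image)             # S103: CNN-extracted second pig face feature
    fused = fuse_features(first_feature, second_feature)   # S104: fuse into pig face feature information
    return recognition_model(fused)                        # S105: identity information, e.g. a pig ID

def fuse_features(first_feature, second_feature):
    # Placeholder fusion: the patent maps the predicted feature into the space of the
    # extracted feature and averages matched points; an element-wise mean stands in here.
    return (np.asarray(first_feature) + np.asarray(second_feature)) / 2.0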
Optionally, the training method of the pig face prediction model includes:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a first recurrent neural network (RNN) with the output of the trained first spectral clustering model;
training a second recurrent neural network (RNN) with the output of the trained second spectral clustering model;
taking the sum of a first loss function of the trained first recurrent neural network and a second loss function of the trained second recurrent neural network as a third loss function; specifically, the first loss function is the softmax loss and the second loss function is the center loss (see the sketch following this training procedure);
training a long short-term memory (LSTM) network based on the output of the trained second recurrent neural network and the third loss function;
forming the trained pig face prediction model from the trained second spectral clustering model, the trained second recurrent neural network, and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model serves as the input of the trained second recurrent neural network, and the output of the trained second recurrent neural network serves as the input of the trained LSTM network.
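A minimal PyTorch sketch of the third loss described above, assuming the first loss is the standard cross-entropy (softmax) loss on the first recurrent network's logits and the second loss is a center loss on the second recurrent network's features; the feature dimension, the number of pig identities, and the unweighted sum are assumptions of this illustration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CenterLoss(nn.Module):
    """Center loss: pulls each feature vector toward the learned center of its class."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Squared distance of each feature to the center of its own class, averaged over the batch.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

num_pigs, feat_dim = 10, 64          # illustrative sizes only
center_loss = CenterLoss(num_pigs, feat_dim)

def third_loss(logits_rnn1, features_rnn2, labels):
    first = F.cross_entropy(logits_rnn1, labels)   # first loss: softmax loss of the first recurrent network
    second = center_loss(features_rnn2, labels)    # second loss: center loss of the second recurrent network
    return first + second                          # third loss: their sum, used to train the LSTM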
Optionally, predicting the first pig face feature based on the pig face image through the pre-trained pig face prediction model specifically comprises: inputting the pig face image into the trained second spectral clustering model; taking the output of the trained second spectral clustering model as the input of the trained second recurrent neural network; taking the output of the trained second recurrent neural network as the input of the trained LSTM network; and taking the output of the LSTM network as the first pig face feature.
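The chained inference just described can be sketched in PyTorch as follows; the sequence shape produced by the spectral clustering stage, the hidden sizes, and the use of the last time step as the feature are assumptions made only for this illustration.

import torch.nn as nn

class PigFacePredictionModel(nn.Module):
    """Sketch of the chained prediction model: spectral clustering output -> second
    recurrent network -> LSTM; the LSTM output is taken as the first pig face feature."""
    def __init__(self, spectral_model, in_dim=64, hidden=128):
        super().__init__()
        self.spectral_model = spectral_model                    # assumed callable: image -> (batch, seq_len, in_dim)
        self.rnn = nn.RNN(in_dim, hidden, batch_first=True)     # trained second recurrent neural network
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)   # trained long short-term memory network

    def forward(self, face_image):
        seq = self.spectral_model(face_image)   # output of the trained second spectral clustering model
        rnn_out, _ = self.rnn(seq)              # fed into the second recurrent network
        lstm_out, _ = self.lstm(rnn_out)        # fed into the LSTM
        return lstm_out[:, -1, :]               # last time step used as the first pig face feature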
Optionally, the training method of the pig face prediction model includes:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a long short-term memory (LSTM) network with the output of the trained first spectral clustering model;
forming the trained pig face prediction model from the trained second spectral clustering model and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model is used as the input of the trained LSTM network.
Optionally, predicting the first pig face feature based on the pig face image through the pre-trained pig face prediction model specifically comprises: inputting the pig face image into the trained second spectral clustering model, taking the output of the trained second spectral clustering model as the input of the trained LSTM network, and taking the output of the LSTM network as the first pig face feature.
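For comparison, the simpler variant above (with no intermediate recurrent network) could look like the following; the same shape assumptions apply and the class name is hypothetical.

import torch.nn as nn

class SimplePigFacePredictionModel(nn.Module):
    """Variant prediction model: spectral clustering output feeds the LSTM directly."""
    def __init__(self, spectral_model, in_dim=64, hidden=128):
        super().__init__()
        self.spectral_model = spectral_model                  # assumed callable: image -> (batch, seq_len, in_dim)
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)

    def forward(self, face_image):
        seq = self.spectral_model(face_image)
        lstm_out, _ = self.lstm(seq)
        return lstm_out[:, -1, :]                             # first pig face feature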
Because the growth of the pig face is continuous rather than abrupt, the first pig face feature obtained by this scheme integrates the pig face features from the pig's earlier growth period while giving particular weight to the more recent pig face information (the second historical pig face image set). This improves how accurately the predicted first pig face feature represents the pig's real features, which in turn improves the accuracy of pig face identification.
Fusing the first pig face feature with the second pig face feature to obtain the pig face feature information includes:
mapping the first pig face features to a space where the second pig face features are located to obtain first mapping features;
and obtaining average characteristics of the first mapping characteristics and the second pig face characteristics, and taking the average characteristics as the pig face characteristic information.
Optionally, the first pig face feature consists of predicted feature points of the pig face and the second pig face feature consists of detected feature points of the pig face; a feature point may be a point with obvious characteristics, such as an eye, the nose, or a corner of the mouth of the pig. Both the first pig face feature and the second pig face feature may include a plurality of feature points.
Optionally, mapping the first pig face feature to the space of the second pig face feature to obtain the first mapping feature specifically includes:
mapping the first position information (i, j) of each feature point of the first pig face feature to the space of the second pig face feature to obtain first mapping information (i', j'), calculated according to the following formula:
[Formula rendered as an image in the original document: it computes the first mapping information (i', j') from the first position information (i, j) using the constant a and the angle θ.]
wherein a is a constant parameter whose value is an integer between 1 and 2^128, and θ is the angle between the first pig face feature and the second pig face feature. Specifically, the angle between the first pig face feature and the second pig face feature is obtained as follows:
the first pig face feature is reduced in dimensionality by principal component analysis (PCA) to obtain a first feature vector, and the second pig face feature is reduced in dimensionality by PCA to obtain a second feature vector; the included angle θ between the first feature vector and the second feature vector is then obtained.
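As a sketch of how the angle θ could be obtained, the snippet below reduces each set of feature points to its leading PCA direction and measures the angle between the two directions; reducing to a single component is an assumption of this illustration, since the target dimensionality is not stated in the original text.

import numpy as np
from sklearn.decomposition import PCA

def angle_between_features(first_points, second_points):
    """first_points, second_points: (n_points, 2) arrays of feature point coordinates.
    Returns the angle theta (in radians) between their leading principal directions."""
    v1 = PCA(n_components=1).fit(first_points).components_[0]    # first feature vector
    v2 = PCA(n_components=1).fit(second_points).components_[0]   # second feature vector
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))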
After all feature points of the first pig face feature have been mapped into the space of the second pig face feature, the first mapping information (i', j') of all the mapped points together forms the first mapping feature.
The average feature of the first mapping feature and the second pig face feature is obtained and taken as the pig face feature information in the following way:
all first mapping information (i', j') in the first mapping feature is matched against all second position information (x, y) in the second pig face feature (the positions of the feature points of the second pig face feature), as follows:
a matching index of the first position information (i, j) and the second position information (x, y) is obtained; if the matching index is greater than 1, the first position information (i, j) and the second position information (x, y) are considered successfully matched.
The matching index of the first position information (i, j) and the second position information (x, y) is obtained as follows:
a first distance d1 from the first position information (i, j) to the second position information (x, y) is obtained, and a second distance d2 from the first mapping information (i', j'), obtained by mapping the first position information (i, j), to the second position information (x, y) is obtained.
The matching index is obtained by the following calculation method:
[Formula rendered as an image in the original document: it computes the matching index match from the first distance d1 and the second distance d2.]
The first distance d1 and the second distance d2 may be Euclidean distances, and match denotes the matching index.
After the first position information (i, j) and the second position information (x, y) are successfully matched, the midpoint position information of the matched first position information (i, j) and second position information (x, y) is obtained:
x0 = (i + x) / 2, y0 = (j + y) / 2
where (x0, y0) is the midpoint position information.
The midpoints of all successfully matched pairs of first position information (i, j) and second position information (x, y) together form the average feature.
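A sketch of the matching-and-averaging step follows. Because the matching-index formula appears only as an image in the original document, the ratio match = d1 / d2 is assumed here (so match > 1 means the mapped point lies closer to the candidate than the original point did); the Euclidean distance and the greedy first-match pairing are likewise assumptions of this illustration.

import numpy as np

def fuse_keypoints(first_points, mapped_points, second_points):
    """first_points: (n, 2) first position information (i, j);
    mapped_points: (n, 2) first mapping information (i', j');
    second_points: (m, 2) second position information (x, y).
    Returns the midpoints of successfully matched pairs as the average feature."""
    midpoints = []
    for (i, j), (ip, jp) in zip(first_points, mapped_points):
        for (x, y) in second_points:
            d1 = np.hypot(i - x, j - y)                 # distance from (i, j) to (x, y)
            d2 = np.hypot(ip - x, jp - y)               # distance from (i', j') to (x, y)
            match = d1 / d2 if d2 > 0 else np.inf       # assumed form of the matching index
            if match > 1:                               # successful match per the > 1 threshold
                midpoints.append(((i + x) / 2.0, (j + y) / 2.0))
                break
    return np.asarray(midpoints)                        # the average feature (pig face feature information)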
Optionally, the pig face recognition model is a Convolutional Neural Network (CNN).
An embodiment of the present application further provides an execution body for performing the above steps; the execution body may be a pig face recognition system. The pig face recognition system is configured in an electronic device having data processing capability, and the system comprises:
the acquisition image module is used for acquiring a pig face image;
the prediction module is used for predicting a first pig face characteristic based on the pig face image through a pre-trained pig face prediction model;
the extraction module is used for extracting a second pig face feature of the pig face image through a convolutional neural network;
the fusion module is used for fusing the first pig face characteristic and the second pig face characteristic to obtain pig face characteristic information;
and the identification module is used for identifying the identity information of the pig based on the pig face characteristic information through a pre-trained pig face identification model.
With regard to the system in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present invention further provides an electronic device, as shown in fig. 2, including a memory 504, a processor 502, and a computer program stored on the memory 504 and executable on the processor 502, where the processor 502 implements the steps of any one of the pig face identification methods described above when executing the program.
Where in fig. 2 a bus architecture (represented by bus 500) is shown, bus 500 may include any number of interconnected buses and bridges, and bus 500 links together various circuits including one or more processors, represented by processor 502, and memory, represented by memory 504. The bus 500 may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface 505 provides an interface between the bus 500 and the receiver 501 and transmitter 503. The receiver 501 and the transmitter 503 may be the same element, i.e. a transceiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 502 is responsible for managing the bus 500 and general processing, and the memory 504 may be used for storing data used by the processor 502 in performing operations.
In the embodiment of the invention, the pig face recognition system is installed in the robot, and the pig face recognition system can be stored in a memory in the form of a software functional module and can be processed and run by a processor.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a system will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the devices in an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functionality of some or all of the components in an apparatus according to an embodiment of the invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (4)

1. A pig face identification method, the method comprising:
obtaining a pig face image;
predicting a first pig face characteristic based on the pig face image through a pre-trained pig face prediction model;
extracting a second pig face feature of the pig face image through a convolutional neural network;
fusing the first pig face characteristic and the second pig face characteristic to obtain pig face characteristic information;
identifying the identity information of the pig based on the pig face characteristic information through a pre-trained pig face identification model;
the training method of the pig face prediction model comprises the following steps:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a first recurrent neural network by using the output of the trained first spectral clustering model;
training a second recurrent neural network by using the output of the trained second spectral clustering model;
taking the sum of a first loss function of the trained first recurrent neural network and a second loss function of the trained second recurrent neural network as a third loss function;
training a long short-term memory (LSTM) network based on the output of the trained second recurrent neural network and the third loss function;
forming the trained pig face prediction model by using the trained second spectral clustering model, the trained second recurrent neural network and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model is used as the input of the trained second recurrent neural network, and the output of the trained second recurrent neural network is used as the input of the trained LSTM network.
2. The method of claim 1, wherein the fusing the first pig face features and the second pig face features to obtain pig face feature information comprises:
mapping first position information (i, j) of feature points of the first pig face features to a space where the second pig face features are located to obtain first mapping information;
after all feature points of the first pig face feature are mapped and transformed into the space where the second pig face feature is located, the first mapping information (i', j') corresponding to all the mapped feature points forms a first mapping feature;
obtaining a matching index of the first position information (i, j) and the second position information (x, y), and if the matching index is greater than 1, determining that the first position information (i, j) and the second position information (x, y) are successfully matched;
the specific way to obtain the matching index of the first position information (i, j) and the second position information (x, y) is as follows:
obtaining a first distance d1 from the first position information (i, j) to the second position information (x, y), and obtaining a second distance d2 from the first mapping information (i ', j') obtained by mapping the first position information (i, j) to the second position information (x, y);
the matching index is obtained by the following calculation method:
[Formula rendered as an image in the original document: it computes the matching index match from the first distance d1 and the second distance d2.]
the first distance d1 and the second distance d2 are Euclidean distances, and match refers to the matching index;
after the first position information (i, j) and the second position information (x, y) are successfully matched, obtaining the midpoint position information of the successfully matched first position information (i, j) and second position information (x, y):
x0 = (i + x) / 2, y0 = (j + y) / 2
wherein, (x0, y0) is midpoint position information;
forming an average characteristic by using midpoint information of the first position information (i, j) and the second position information (x, y) which are successfully matched;
and taking the average characteristic as the pig face characteristic information.
3. A pig face identification system, the system comprising:
the acquisition image module is used for acquiring a pig face image;
the prediction module is used for predicting a first pig face characteristic based on the pig face image through a pre-trained pig face prediction model;
the extraction module is used for extracting a second pig face feature of the pig face image through a convolutional neural network;
the fusion module is used for fusing the first pig face characteristic and the second pig face characteristic to obtain pig face characteristic information;
the identification module is used for identifying the identity information of the pig based on the pig face characteristic information through a pre-trained pig face identification model;
the training method of the pig face prediction model comprises the following steps:
training a first spectral clustering model for a first historical pig face image set of a pig;
training a second spectral clustering model for a second historical pig face image set of the pig, wherein the pig face images in the first historical pig face image set were captured earlier than those in the second historical pig face image set;
training a first recurrent neural network by using the output of the trained first spectral clustering model;
training a second recurrent neural network by using the output of the trained second spectral clustering model;
taking the sum of a first loss function of the trained first recurrent neural network and a second loss function of the trained second recurrent neural network as a third loss function;
training a long short-term memory (LSTM) network based on the output of the trained second recurrent neural network and the third loss function;
forming the trained pig face prediction model by using the trained second spectral clustering model, the trained second recurrent neural network and the trained LSTM network;
in the trained pig face prediction model, the output of the trained second spectral clustering model is used as the input of the trained second recurrent neural network, and the output of the trained second recurrent neural network is used as the input of the trained LSTM network.
4. The system of claim 3, wherein the fusing the first pig face features and the second pig face features to obtain pig face feature information comprises:
mapping first position information (i, j) of feature points of the first pig face features to a space where the second pig face features are located to obtain first mapping information;
after all feature points of the first pig face feature are mapped and transformed into the space where the second pig face feature is located, the first mapping information (i', j') corresponding to all the mapped feature points forms a first mapping feature;
obtaining a matching index of the first position information (i, j) and the second position information (x, y), and if the matching index is greater than 1, determining that the first position information (i, j) and the second position information (x, y) are successfully matched;
the specific way to obtain the matching index of the first position information (i, j) and the second position information (x, y) is as follows:
obtaining a first distance d1 from the first position information (i, j) to the second position information (x, y), and obtaining a second distance d2 from the first mapping information (i ', j') obtained by mapping the first position information (i, j) to the second position information (x, y);
the matching index is obtained by the following calculation method:
[Formula rendered as an image in the original document: it computes the matching index match from the first distance d1 and the second distance d2.]
the first distance d1 and the second distance d2 are Euclidean distances, and match refers to the matching index;
after the first position information (i, j) and the second position information (x, y) are successfully matched, obtaining the midpoint position information of the successfully matched first position information (i, j) and second position information (x, y):
x0 = (i + x) / 2, y0 = (j + y) / 2
wherein, (x0, y0) is midpoint position information;
forming an average characteristic by using midpoint information of the first position information (i, j) and the second position information (x, y) which are successfully matched;
and taking the average characteristic as the pig face characteristic information.
CN202110783674.3A 2021-07-12 2021-07-12 Pig face identification method and system Active CN113449674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110783674.3A CN113449674B (en) 2021-07-12 2021-07-12 Pig face identification method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110783674.3A CN113449674B (en) 2021-07-12 2021-07-12 Pig face identification method and system

Publications (2)

Publication Number Publication Date
CN113449674A CN113449674A (en) 2021-09-28
CN113449674B (en) 2022-09-30

Family

ID=77815872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110783674.3A Active CN113449674B (en) 2021-07-12 2021-07-12 Pig face identification method and system

Country Status (1)

Country Link
CN (1) CN113449674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114926858B (en) * 2022-05-10 2024-06-28 吉林大学 Feature point information-based deep learning pig face recognition method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016110005A1 (en) * 2015-01-07 2016-07-14 深圳市唯特视科技有限公司 Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
CN109522812A (en) * 2018-10-23 2019-03-26 青岛小鸟看看科技有限公司 Face identification method and device, electronic equipment
CN110443228A (en) * 2019-08-20 2019-11-12 图谱未来(南京)人工智能研究院有限公司 A kind of method for pedestrian matching, device, electronic equipment and storage medium
CA3057010A1 (en) * 2018-09-28 2020-03-28 Element Ai Inc. Method and system for proactively increasing customer satisfaction
CN111639629A (en) * 2020-06-15 2020-09-08 安徽工大信息技术有限公司 Pig weight measuring method and device based on image processing and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664878A (en) * 2018-03-14 2018-10-16 广州影子控股股份有限公司 Pig personal identification method based on convolutional neural networks
CN110276416B (en) * 2019-07-02 2023-04-28 广东省智能机器人研究院 Rolling bearing fault prediction method
CN110728179A (en) * 2019-09-04 2020-01-24 天津大学 Pig face identification method adopting multi-path convolutional neural network
CN111666838B (en) * 2020-05-22 2023-04-18 吉林大学 Improved residual error network pig face identification method

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016110005A1 (en) * 2015-01-07 2016-07-14 深圳市唯特视科技有限公司 Gray level and depth information based multi-layer fusion multi-modal face recognition device and method
CA3057010A1 (en) * 2018-09-28 2020-03-28 Element Ai Inc. Method and system for proactively increasing customer satisfaction
CN109522812A (en) * 2018-10-23 2019-03-26 青岛小鸟看看科技有限公司 Face identification method and device, electronic equipment
CN110443228A (en) * 2019-08-20 2019-11-12 图谱未来(南京)人工智能研究院有限公司 A kind of method for pedestrian matching, device, electronic equipment and storage medium
CN111639629A (en) * 2020-06-15 2020-09-08 安徽工大信息技术有限公司 Pig weight measuring method and device based on image processing and storage medium

Also Published As

Publication number Publication date
CN113449674A (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN111310775B (en) Data training method, device, terminal equipment and computer readable storage medium
CN107229947B (en) Animal identification-based financial insurance method and system
CN110135231B (en) Animal face recognition method and device, computer equipment and storage medium
CN110349147B (en) Model training method, fundus macular region lesion recognition method, device and equipment
CN109902705A (en) A kind of object detection model to disturbance rejection generation method and device
CN110689043A (en) Vehicle fine granularity identification method and device based on multiple attention mechanism
US20220375106A1 (en) Multi-target tracking method, device and computer-readable storage medium
CN111738403B (en) Neural network optimization method and related equipment
CN113298152B (en) Model training method, device, terminal equipment and computer readable storage medium
CN113449674B (en) Pig face identification method and system
Xue et al. Open set sheep face recognition based on Euclidean space metric
CN111488853A (en) Big data face recognition method and system for financial institution security system and robot
CN111126268A (en) Key point detection model training method and device, electronic equipment and storage medium
Masood et al. MaizeNet: A deep learning approach for effective recognition of maize plant leaf diseases
CN112149602A (en) Action counting method and device, electronic equipment and storage medium
CN114495241A (en) Image identification method and device, electronic equipment and storage medium
CN114882324A (en) Target detection model training method, device and computer readable storage medium
CN114511922A (en) Physical training posture recognition method, device, equipment and storage medium
CN111091099A (en) Scene recognition model construction method, scene recognition method and device
Chen et al. Recognition method of dairy cow feeding behavior based on convolutional neural network
CN114359781B (en) Intelligent recognition system for cloud-side collaborative autonomous learning
CN115795355A (en) Classification model training method, device and equipment
US11798265B2 (en) Teaching data correction method for training image, teaching data correction device and program
CN111401112B (en) Face recognition method and device
CN110609561A (en) Pedestrian tracking method and device, computer readable storage medium and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant