CN117934463A - Beef cattle carcass quality grading method based on optical testing

Beef cattle carcass quality grading method based on optical testing

Info

Publication number: CN117934463A
Application number: CN202410324603.0A
Authority: CN (China)
Prior art keywords: carcass, carcass surface, feature map, surface state, enhanced
Legal status: Withdrawn
Other languages: Chinese (zh)
Inventors: 祝贺, 邢艳霞
Current Assignee: Shandong Agriculture and Engineering University
Original Assignee: Shandong Agriculture and Engineering University
Application filed by Shandong Agriculture and Engineering University
Priority date / Filing date: 2024-03-21
Publication date: 2024-04-26

Links

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06T 7/40: Image analysis; analysis of texture
    • G06T 7/90: Image analysis; determination of colour characteristics
    • G06N 3/0464: Neural network architectures; convolutional networks [CNN, ConvNet]
    • G06N 3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
    • G06N 3/092: Neural network learning methods; reinforcement learning
    • G06V 10/40: Extraction of image or video features
    • G06V 10/764: Image or video recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06T 2207/20081: Indexing scheme for image analysis; training/learning
    • G06T 2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
    • Y02P 90/30: Climate change mitigation in the production or processing of goods; computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a beef cattle carcass quality grading method based on optical testing, relating to the technical field of image processing. The method scans a beef cattle carcass with an optical sensor to obtain information such as its surface shape, color, and texture, and then grades the carcass to be evaluated using image processing and analysis techniques. In this way, carcass surface information can be obtained rapidly and accurately, and the relevant characteristic parameters can be extracted objectively and scientifically, so that carcasses are graded automatically. This avoids the errors and fatigue of manual visual inspection and improves the reliability, comparability, efficiency, and consistency of the grading.

Description

Beef cattle carcass quality grading method based on optical testing
Technical Field
The present application relates to the field of image processing, and more particularly to a beef cattle carcass quality grading method based on optical testing.
Background
Quality grading of beef cattle carcasses is a method of comprehensively evaluating a carcass according to characteristics such as its appearance, muscle, fat, and bone, so as to reflect its meat quality and economic value. At present, beef cattle carcass quality grading depends mainly on human vision and experience-based judgment, and therefore suffers from strong subjectivity, low accuracy, and low efficiency.
To improve the objectivity, accuracy, and efficiency of beef cattle carcass quality grading, an optimized grading method is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. Embodiments of the application provide a beef cattle carcass quality grading method based on optical testing: a beef cattle carcass is scanned with an optical sensor to obtain information such as its surface shape, color, and texture, and the carcass to be evaluated is then quality-graded using image processing and analysis techniques. In this way, carcass surface information can be obtained rapidly and accurately, and the relevant characteristic parameters can be extracted objectively and scientifically, so that carcasses are graded automatically. This avoids the errors and fatigue of manual visual inspection and improves the reliability, comparability, efficiency, and consistency of the grading.
According to one aspect of the present application, there is provided a beef cattle carcass quality rating method based on optical testing, comprising:
acquiring a carcass surface state image of the beef cattle carcass to be detected through a camera;
extracting carcass surface state features from the carcass surface state image; and
generating a quality rating result based on the carcass surface state features.
Compared with the prior art, the beef cattle carcass quality grading method based on optical testing provided by the application scans the beef cattle carcass with an optical sensor to acquire information such as surface shape, color, and texture, and then quality-grades the carcass to be evaluated using image processing and analysis techniques. Carcass surface information can thus be obtained rapidly and accurately, and the relevant characteristic parameters extracted objectively and scientifically, so that carcasses are graded automatically, avoiding the errors and fatigue of manual visual inspection and improving the reliability, comparability, efficiency, and consistency of the grading.
Drawings
The above and other objects, features, and advantages of the present application will become more apparent from the following detailed description of its embodiments with reference to the accompanying drawings. The drawings are included to provide a further understanding of the embodiments and are incorporated in and constitute a part of this specification; they illustrate the application and, together with the embodiments, serve to explain it, and do not limit the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a flowchart of a beef cattle carcass quality grading method based on optical testing according to an embodiment of the application;
FIG. 2 is a system architecture diagram of the beef cattle carcass quality grading method based on optical testing according to an embodiment of the application;
FIG. 3 is a flowchart of substep S2 of the beef cattle carcass quality grading method based on optical testing according to an embodiment of the application;
FIG. 4 is a flowchart of substep S24 of the beef cattle carcass quality grading method based on optical testing according to an embodiment of the application;
FIG. 5 is a flowchart of substep S3 of the beef cattle carcass quality grading method based on optical testing according to an embodiment of the application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and claims, the terms "a," "an," and "the" do not denote the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
Flowcharts are used in the present application to describe the operations performed by systems according to its embodiments. It should be understood that these operations are not necessarily performed precisely in the order shown; the steps may instead be processed in reverse order or in parallel as desired, and other operations may be added to or removed from these processes.
At present, beef cattle carcass quality grading depends mainly on human vision and experience-based judgment, and therefore suffers from strong subjectivity, low accuracy, and low efficiency. To improve the objectivity, accuracy, and efficiency of beef cattle carcass quality grading, an optimized grading method is desired.
In the technical scheme of the present application, a beef cattle carcass quality grading method based on optical testing is provided. FIG. 1 is a flowchart of the method according to an embodiment of the application, and FIG. 2 is its system architecture diagram. As shown in FIG. 1 and FIG. 2, the method comprises the following steps: S1, acquiring a carcass surface state image of the beef cattle carcass to be detected through a camera; S2, extracting carcass surface state features from the carcass surface state image; and S3, generating a quality rating result based on the carcass surface state features.
In particular, S1 acquires a carcass surface state image of the beef cattle carcass to be detected through a camera. Acquiring this image is the first step of the method; its purpose is to provide input data for the subsequent image processing and feature extraction. Capturing the carcass surface state image with a camera converts the appearance of the carcass into digital image data, which is convenient for subsequent computer vision analysis. These images contain information on the shape, color, and texture of the carcass, all of which is important for assessing carcass quality.
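For illustration, a minimal acquisition sketch in Python with OpenCV follows; the device index and resolution are hypothetical assumptions for the example, not values specified by the application.

    import cv2

    def acquire_carcass_image(device_index: int = 0):
        """Capture one carcass surface state image from a fixed camera.
        Device index and resolution are illustrative assumptions."""
        cap = cv2.VideoCapture(device_index)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
        ok, frame = cap.read()
        cap.release()
        if not ok:
            raise RuntimeError("camera capture failed")
        return frame  # BGR image, shape (H, W, 3)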
In particular, S2 extracts the carcass surface state features from the carcass surface state image. In one specific example of the present application, as shown in FIG. 3, S2 includes: S21, extracting a carcass surface shape feature map from the carcass surface state image; S22, extracting a carcass surface color feature map from the carcass surface state image; S23, extracting a carcass surface texture feature map from the carcass surface state image; and S24, determining the carcass surface state features based on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map.
Specifically, S21 extracts a carcass surface shape feature map from the carcass surface state image. That is, in the technical scheme of the application, after the carcass surface state image of the beef cattle carcass to be detected is obtained, it is passed through a shape feature extractor based on a first convolutional neural network model to obtain the carcass surface shape feature map. The purpose of this step is to extract carcass shape information and provide shape-related features for the subsequent rating. Shape characteristics are important for assessing beef carcass quality, since carcasses of different shapes may differ in meat quality, fat distribution, and the like. Using a convolutional neural network model, representations of different shape features can be learned from carcass surface state images. More specifically, each layer of the shape feature extractor performs the following operations on its input data in its forward pass: convolving the input data to obtain a convolution feature map; pooling the convolution feature map over local feature matrices to obtain a pooled feature map; and applying a nonlinear activation to the pooled feature map to obtain an activation feature map. The output of the last layer of the shape feature extractor is the carcass surface shape feature map, and the input of its first layer is the carcass surface state image.
A convolutional neural network (Convolutional Neural Network, CNN) is a type of deep learning model particularly suited to data with a grid structure, such as images and video. A typical convolutional neural network model is structured as follows:
    • Input layer: accepts the input data, typically images, audio, or text.
    • Convolutional layer: one of the core components of a CNN. It extracts local features from the input data by applying a series of filters (also called convolution kernels); the convolution operation multiplies the filter and the input data element by element and sums the products to generate a feature map.
    • Activation function: after a convolutional layer, a nonlinear activation function such as ReLU is typically applied to introduce nonlinearity.
    • Pooling layer: reduces the spatial size of the feature map while preserving the most important features; common pooling operations are max pooling and average pooling.
    • Fully connected layer: connects the output of the pooling layers to one or more fully connected layers that map the features to the final output categories or regression values; each neuron in a fully connected layer is connected to all neurons of the previous layer.
    • Output layer: uses an activation function appropriate to the task, such as softmax for multi-class classification or a linear activation for regression.
    • Loss function: chosen according to the task, such as cross-entropy loss for classification or mean-squared-error loss for regression.
    • Backpropagation and optimization: the gradients of the model parameters with respect to the loss are computed by the backpropagation algorithm, and a gradient descent optimizer updates the parameters to minimize the loss.
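A minimal PyTorch sketch of such an extractor follows, mirroring the per-layer convolution, pooling, and nonlinear activation pass described above; the channel counts and depth are illustrative assumptions, and the same structure can serve the shape, color, and texture extractors discussed below.

    import torch
    import torch.nn as nn

    class ConvFeatureExtractor(nn.Module):
        """Stack of (convolution -> pooling -> nonlinear activation) layers.
        Channel counts and depth are illustrative assumptions."""

        def __init__(self, in_channels: int = 3, channels=(32, 64, 128)):
            super().__init__()
            layers, c_in = [], in_channels
            for c_out in channels:
                layers += [
                    nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),  # convolution
                    nn.MaxPool2d(kernel_size=2),                       # pooling
                    nn.ReLU(inplace=True),                             # nonlinear activation
                ]
                c_in = c_out
            self.body = nn.Sequential(*layers)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (B, C, H, W) batch of carcass surface state images (or feature maps)
            return self.body(x)  # carcass surface feature map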
Specifically, S22 extracts a carcass surface color feature map. In the same way, in the technical scheme of the application, the carcass surface shape feature map is passed through a color feature extractor based on a second convolutional neural network model to obtain the carcass surface color feature map. The purpose is to extract the color information of the carcass and provide color-related features for evaluating carcass quality. The color profile of the carcass is very important for assessing beef carcass quality, since color may reflect meat quality, fat content, vascularity, and similar information. Using the color feature extractor of the second convolutional neural network model, representations of different color features can be learned from the carcass surface shape feature map. More specifically, each layer of the color feature extractor performs the following operations on its input data in its forward pass: convolving the input data to obtain a convolution feature map; pooling the convolution feature map over local feature matrices to obtain a pooled feature map; and applying a nonlinear activation to obtain an activation feature map. The output of the last layer of the color feature extractor is the carcass surface color feature map, and the input of its first layer is the carcass surface shape feature map.
Specifically, S23 extracts a carcass surface texture feature map. That is, the carcass surface color feature map is passed through a texture feature extractor based on a third convolutional neural network model to obtain the carcass surface texture feature map. The purpose is to extract the texture information of the carcass and provide texture-related features for evaluating carcass quality. Texture features are likewise important for assessing beef carcass quality, since texture may reflect meat quality, uniformity, and fiber structure. Using the texture feature extractor of the third convolutional neural network model, representations of different textures can be learned from the carcass surface color feature map. Each layer of the texture feature extractor performs the following operations on its input data in its forward pass: convolving the input data to obtain a convolution feature map; pooling the convolution feature map over local feature matrices to obtain a pooled feature map; and applying a nonlinear activation to obtain an activation feature map. The output of the last layer of the texture feature extractor is the carcass surface texture feature map, and the input of its first layer is the carcass surface color feature map.
Specifically, S24 determines the carcass surface state features based on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map. In one specific example of the present application, as shown in FIG. 4, S24 includes: S241, performing feature map enhancement on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map to obtain an enhanced carcass surface shape feature map, an enhanced carcass surface color feature map, and an enhanced carcass surface texture feature map; and S242, fusing the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map to obtain the carcass surface state features.
More specifically, S241 performs feature map enhancement on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map to obtain the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map. That is, in the technical scheme of the present application, the enhancement is performed using a feature map enhancer based on a re-parameterization layer. The purpose is to strengthen the expressive power of these feature maps and increase their contribution to carcass quality evaluation. Specifically, by introducing the re-parameterization trick, the feature map enhancer based on the re-parameterization layer performs data augmentation in the semantic feature space. In one specific example, the enhancer processes the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map according to the following formula to obtain the enhanced feature maps, wherein the formula is:
$\tilde{F} = \mu + \sqrt{\sigma^2} \otimes \varepsilon$

wherein $\mu$ represents the mean value of the feature map, $\sigma^2$ represents the variance of the feature map, $\varepsilon$ is randomly sampled from a Gaussian distribution, $\tilde{F}$ represents the feature values at the respective positions of the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map, and $\otimes$ represents position-wise multiplication.
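A minimal sketch of such a re-parameterization-based enhancer in PyTorch follows; computing the mean and variance over each whole channel map is an assumption made for this example.

    import torch

    def reparameterize_enhance(fmap: torch.Tensor) -> torch.Tensor:
        """Feature map enhancement via the re-parameterization trick:
        F_tilde = mu + sqrt(var) * eps, with eps ~ N(0, 1) sampled per position.
        fmap: (B, C, H, W); per-channel-map statistics are an illustrative choice."""
        mu = fmap.mean(dim=(-2, -1), keepdim=True)   # feature map mean
        var = fmap.var(dim=(-2, -1), keepdim=True)   # feature map variance
        eps = torch.randn_like(fmap)                 # Gaussian sample
        return mu + var.sqrt() * eps                 # position-wise multiplication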
More specifically, S242 fuses the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map to obtain the carcass surface state features. Further, a feature sparse conversion fusion module is used to fuse the three enhanced feature maps into a carcass surface state multi-scale feature map. The purpose of this fusion is to combine the information of the individual feature maps into a more comprehensive and richer multi-scale representation of the carcass surface state. In beef cattle carcass quality assessment, the shape, color, and texture of the carcass surface are all important indicators, yet their feature maps typically have different scales and resolutions and carry information at different levels. To exploit this information jointly, the feature sparse conversion fusion module combines the enhanced shape, color, and texture feature maps through fusion strategies such as weighted summation and concatenation, so that the information in the different feature maps supplements and reinforces one another, yielding a more comprehensive and representative feature representation. In a specific example of the present application, the feature sparse conversion fusion module fuses the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map according to the following formula to obtain the carcass surface state multi-scale feature map as the carcass surface state features, wherein the formula is:
$F_{ms} = \mathrm{Reshape}\left(W_X X + W_Y Y + W_Z Z\right)$

wherein $W_X$, $W_Y$, and $W_Z$ are the conversion matrices of the high-level features $X$, $Y$, and $Z$, respectively; $X$, $Y$, and $Z$ are the feature vectors obtained by expanding the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map, respectively; and $\mathrm{Reshape}(\cdot)$ is the dimension reconstruction function. Finally, the carcass surface state multi-scale feature map is passed through a classifier-based quality rater to obtain a quality rating result, which represents the quality rating label of the beef cattle carcass to be detected.
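A minimal PyTorch sketch of such a fusion module is given below; the linear conversion matrices, the summation fusion, and all dimensions are assumptions made for illustration.

    import torch
    import torch.nn as nn

    class SparseConversionFusion(nn.Module):
        """Flatten each enhanced feature map into a vector, apply a learned
        conversion matrix (W_X, W_Y, W_Z), sum the results, and reshape back
        into a multi-scale feature map. Dimensions are illustrative."""

        def __init__(self, dim: int, out_shape: tuple):
            super().__init__()
            self.w_x = nn.Linear(dim, dim, bias=False)  # conversion matrix W_X
            self.w_y = nn.Linear(dim, dim, bias=False)  # conversion matrix W_Y
            self.w_z = nn.Linear(dim, dim, bias=False)  # conversion matrix W_Z
            self.out_shape = out_shape                  # e.g. (C, H, W) with C*H*W == dim

        def forward(self, f_shape, f_color, f_texture):
            x = f_shape.flatten(1)    # X: expanded enhanced shape feature map
            y = f_color.flatten(1)    # Y: expanded enhanced color feature map
            z = f_texture.flatten(1)  # Z: expanded enhanced texture feature map
            fused = self.w_x(x) + self.w_y(y) + self.w_z(z)
            return fused.view(-1, *self.out_shape)  # dimension reconstruction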
It should be noted that, in other specific examples of the present application, the carcass surface state features may also be determined from the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map in other ways, for example: preprocessing the three feature maps to improve the effect of feature extraction; extracting the corresponding features from each of the preprocessed feature maps; and fusing the features extracted from the different feature maps to obtain the carcass surface state features.
It is worth mentioning that in other specific examples of the application, the carcass surface state features may also be extracted from the carcass surface state image by other means. For example, first preprocess the carcass surface state image to improve the effect of feature extraction; the preprocessing step may include image denoising, image enhancement, and image smoothing to reduce noise and interference in the image and highlight features of the carcass surface. Then extract the carcass surface state features from the preprocessed image using computer vision or image processing techniques. Common feature extraction methods include: texture features, extracted by texture analysis, which describe the roughness and granularity of the surface; shape features, extracted by shape analysis, which describe bulges, recesses, curvature, and other shape information of the carcass; and color features, extracted by color analysis, which reflect the color distribution and color variation of different regions. Finally, convert the extracted carcass surface state features into a representation suitable for subsequent analysis and application.
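A sketch of this classical alternative follows; the specific operator choices, such as the Laplacian-variance texture measure, the contour count, and the hue histogram, are illustrative assumptions rather than the application's prescription.

    import cv2
    import numpy as np

    def classical_surface_features(img: np.ndarray) -> np.ndarray:
        """Hand-crafted texture, shape, and color descriptors of the carcass
        surface; the operators used here are illustrative choices."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        denoised = cv2.GaussianBlur(gray, (5, 5), 0)         # preprocessing
        texture = cv2.Laplacian(denoised, cv2.CV_64F).var()  # roughness proxy
        edges = cv2.Canny(denoised, 50, 150)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        shape_stat = float(len(contours))                    # crude shape statistic
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])  # hue distribution
        color = (hist / max(hist.sum(), 1)).flatten()
        return np.concatenate([[texture, shape_stat], color])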
In particular, S3 generates the quality rating result based on the carcass surface state features. In one specific example of the present application, as shown in FIG. 5, S3 includes: S31, optimizing each feature matrix of the carcass surface state multi-scale feature map to obtain an optimized carcass surface state multi-scale feature map; and S32, determining the quality rating result based on the optimized carcass surface state multi-scale feature map.
Specifically, S31 optimizes each feature matrix of the carcass surface state multi-scale feature map to obtain the optimized carcass surface state multi-scale feature map. In the above technical solution, the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map express, respectively, the enhanced shape, color, and texture semantics of the carcass surface state image of the beef cattle carcass to be detected. When the feature sparse conversion fusion module fuses them, differences between the image-semantic feature distributions of their respective feature matrices directly affect the distributional integrity across the feature matrices of the resulting carcass surface state multi-scale feature map, especially in view of the differing channel distributions of the feature extraction performed by the first, second, and third convolutional neural network models. This in turn affects the accuracy of the classification result regressed from the feature values of the individual feature matrices of the multi-scale feature map. Therefore, before the carcass surface state multi-scale feature map is classified and regressed by the classifier, the present application optimizes each of its feature matrices.
In a specific example of the present application, optimizing each feature matrix of the carcass surface state multiscale feature map to obtain an optimized carcass surface state multiscale feature map includes: calculating optimization coefficients of each feature matrix of the carcass surface state multi-scale feature map; and carrying out weighted optimization on the corresponding feature matrix of the carcass surface state multiscale feature map by using the optimization coefficient to obtain the optimized carcass surface state multiscale feature map.
Specifically, the optimization coefficient of each feature matrix of the carcass surface state multi-scale feature map is calculated according to a coefficient calculation formula defined over the following quantities:

wherein $m_{i,j}$ is the feature value at the $(i,j)$-th position of each feature matrix of the carcass surface state multi-scale feature map; $p(\cdot)$ is a probability function of the feature values, i.e. a function mapping the feature values into the $[0,1]$ interval; $W \times H$ is the scale of each feature matrix, i.e. width times height; $\hat{p}$ is the class probability value obtained by passing the carcass surface state multi-scale feature map through the classifier; $\alpha$ is a weight hyperparameter; and $w$ is the optimization coefficient.
That is, for the feature scene corresponding to each feature matrix of the carcass surface state multi-scale feature map, a probability-distribution foreground constraint and a relative probability-mapping response assumption are used to establish the class-probability reasoning logic of scene saturation, endowing the feature set of each feature matrix with a scene-concept ontology cognition. In other words, during classification the overall distribution is internally aligned with scene-based class-probability logical reasoning, which improves the understanding of class cognition across the feature-matrix scene distribution of the carcass surface state multi-scale feature map. The optimization coefficient $w$ is then used to perform weighted optimization of the corresponding feature matrix of the carcass surface state multi-scale feature map, which improves the accuracy of the classification result obtained by passing the optimized multi-scale feature map through the classifier.
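A sketch of the weighting step follows. Because the exact functional form of the coefficient formula is not reproduced above, the combination used here (the mean of the probability-mapped feature values, multiplied by the class probability and the weight hyperparameter) is an assumed form for illustration only.

    import torch

    def weight_feature_matrices(fmap: torch.Tensor,
                                class_prob: torch.Tensor,
                                alpha: float = 1.0) -> torch.Tensor:
        """Weighted optimization of each W x H feature matrix.
        fmap: (C, W, H) multi-scale feature map; class_prob: scalar
        classifier probability. The combination below is an assumption."""
        p = torch.sigmoid(fmap)                     # map feature values into [0, 1]
        scale = fmap.shape[-2] * fmap.shape[-1]     # W x H
        w = alpha * p.sum(dim=(-2, -1)) / scale * class_prob  # coefficient per matrix
        return fmap * w.view(-1, 1, 1)              # weighted feature matrices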
Specifically, S32 determines the quality rating result based on the optimized carcass surface state multi-scale feature map. That is, in one specific example of the present application, the multi-scale feature map is passed through a classifier-based quality rater so that the beef cattle carcass to be detected is quality-assessed according to the information in the feature map, with the assessment expressed as a quality rating label. The quality rater is a classifier that determines the quality level of the carcass by learning the relationship between the features extracted from the feature map and the different quality levels. It can be a conventional machine learning classifier such as a support vector machine (SVM) or a random forest, or a deep learning model such as a convolutional neural network (CNN). By feeding the carcass surface state multi-scale feature map into the quality rater, the shape, color, texture, and other information it contains can all be used in the quality evaluation; the rater analyzes the feature map and outputs a result indicating the quality level. The quality rating result may consist of discrete quality labels, such as "A", "B", and "C", or of continuous rating values. The result represents the quality level of the beef cattle carcass to be detected and helps farmers, slaughterhouses, and market participants quickly understand carcass quality and make corresponding decisions.
Accordingly, in one possible implementation, the optimized carcass surface state multi-scale feature map may be passed through a classifier-based quality rater to obtain the quality rating result, where the result represents the quality rating label of the beef cattle carcass to be detected. For example: collect annotated carcass surface state multi-scale feature maps and the corresponding quality rating labels, ensuring the dataset contains sufficient samples and annotations. Extract features from the feature map at each scale using a multi-scale feature extraction method; these features may include shape, color, texture, and other information. Fuse the features extracted at the different scales into a comprehensive feature representation, whether by simple concatenation, weighted averaging, or a multi-modal fusion method. Divide the dataset into a training set, used to train the classifier, and a test set, used to evaluate its performance. Normalize the features so that they share the same scale and distribution; common methods include mean normalization and variance normalization. Select a suitable classifier and train it on the training set, matching the features with their quality rating labels, and then evaluate the trained classifier on the test set. Finally, use the trained classifier to predict the carcass surface state multi-scale feature map of the beef cattle carcass to be detected and generate the corresponding quality rating result, which may consist of discrete quality level labels such as high, medium, and low.
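For illustration, a compact training and prediction sketch with scikit-learn follows; the random forest choice, the split ratio, and all hyperparameters are assumptions for the example, not values specified by the application.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler

    def train_quality_rater(features: np.ndarray, labels: np.ndarray):
        """features: (N, D) fused multi-scale feature vectors;
        labels: quality grades such as "A"/"B"/"C"."""
        X_train, X_test, y_train, y_test = train_test_split(
            features, labels, test_size=0.2, random_state=0)
        scaler = StandardScaler().fit(X_train)              # feature normalization
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(scaler.transform(X_train), y_train)
        print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
        return scaler, clf

    def rate_carcass(scaler, clf, feature_vector: np.ndarray) -> str:
        """Predict the quality rating label for one carcass."""
        return clf.predict(scaler.transform(feature_vector.reshape(1, -1)))[0]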
It should be noted that, in other specific examples of the present application, the quality rating result may also be generated from the carcass surface state features in other ways, for example: collect annotated carcass surface state features and the corresponding quality rating data; preprocess the collected features for subsequent analysis and modeling, with steps such as normalization, feature scaling, and feature balancing to ensure consistent scales and distributions; process the collected rating data for model training and evaluation, for instance by converting the rating results into numerical form or balancing the labels; if the feature dimensionality is high, apply a feature selection method such as correlation analysis, information gain, or L1 regularization to keep the most relevant features, reduce dimensionality, and improve the model; train a quality rating model, such as linear regression, a decision tree, a support vector machine, or a neural network, on the collected features and ratings; evaluate the trained model on an evaluation dataset using indexes such as mean squared error; and finally use the trained model to predict new carcass surface state features and generate the corresponding quality rating result, which may be a continuous value (e.g. a score) or a discrete value (e.g. a high, medium, or low quality rating).
In summary, the beef cattle carcass quality grading method based on optical testing according to the embodiments of the application has been explained: the beef cattle carcass is scanned with an optical sensor to obtain its surface shape, color, texture, and related information, and the carcass to be evaluated is then quality-graded using image processing and analysis techniques. Carcass surface information can thus be obtained rapidly and accurately, and the relevant characteristic parameters extracted objectively and scientifically, so that carcasses are graded automatically, avoiding the errors and fatigue of manual visual inspection and improving the reliability, comparability, efficiency, and consistency of the grading.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A beef cattle carcass quality grading method based on optical testing, characterized by comprising the following steps:
acquiring a carcass surface state image of a beef cattle carcass to be detected through a camera;
extracting carcass surface state features from the carcass surface state image; and
generating a quality rating result based on the carcass surface state features.
2. The method of claim 1, wherein extracting carcass surface state features from the carcass surface state image comprises:
extracting a carcass surface shape feature map from the carcass surface state image;
extracting a carcass surface color feature map from the carcass surface state image;
extracting a carcass surface texture feature map from the carcass surface state image; and
determining the carcass surface state features based on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map.
3. The method of claim 2, wherein extracting a carcass surface shape feature map from the carcass surface state image comprises:
passing the carcass surface state image through a shape feature extractor based on a first convolutional neural network model to obtain the carcass surface shape feature map.
4. The method of claim 3, wherein extracting a carcass surface color feature map from the carcass surface state image comprises:
passing the carcass surface shape feature map through a color feature extractor based on a second convolutional neural network model to obtain the carcass surface color feature map.
5. The method of claim 4, wherein extracting a carcass surface texture feature map from the carcass surface state image comprises:
passing the carcass surface color feature map through a texture feature extractor based on a third convolutional neural network model to obtain the carcass surface texture feature map.
6. The method of claim 5, wherein determining the carcass surface state features based on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map comprises:
performing feature map enhancement on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map to obtain an enhanced carcass surface shape feature map, an enhanced carcass surface color feature map, and an enhanced carcass surface texture feature map; and
fusing the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map to obtain the carcass surface state features.
7. The method of claim 6, wherein performing feature map enhancement on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map to obtain the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map comprises: using a feature map enhancer based on a re-parameterization layer to perform feature map enhancement on the carcass surface shape feature map, the carcass surface color feature map, and the carcass surface texture feature map according to the following formula, wherein the formula is:
$\tilde{F} = \mu + \sqrt{\sigma^2} \otimes \varepsilon$

wherein $\mu$ represents the mean value of the feature map, $\sigma^2$ represents the variance of the feature map, $\varepsilon$ is randomly sampled from a Gaussian distribution, $\tilde{F}$ represents the feature values at the respective positions of the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map, and $\otimes$ represents position-wise multiplication.
8. The method of claim 7, wherein fusing the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map to obtain the carcass surface state features comprises: using a feature sparse conversion fusion module to fuse the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map according to the following formula to obtain a carcass surface state multi-scale feature map as the carcass surface state features, wherein the formula is:
$F_{ms} = \mathrm{Reshape}\left(W_X X + W_Y Y + W_Z Z\right)$

wherein $W_X$, $W_Y$, and $W_Z$ are the conversion matrices of the high-level features $X$, $Y$, and $Z$, respectively; $X$, $Y$, and $Z$ are the feature vectors obtained by expanding the enhanced carcass surface shape feature map, the enhanced carcass surface color feature map, and the enhanced carcass surface texture feature map, respectively; and $\mathrm{Reshape}(\cdot)$ is the dimension reconstruction function.
9. The method of claim 8, wherein generating a quality rating result based on the carcass surface state features comprises:
optimizing each feature matrix of the carcass surface state multi-scale feature map to obtain an optimized carcass surface state multi-scale feature map; and
determining the quality rating result based on the optimized carcass surface state multi-scale feature map.
10. The method of claim 9, wherein determining the quality rating result based on the optimized carcass surface state multi-scale feature map comprises:
passing the optimized carcass surface state multi-scale feature map through a classifier-based quality rater to obtain the quality rating result, wherein the quality rating result is used to represent a quality rating label of the beef cattle carcass to be detected.
CN202410324603.0A · Priority date 2024-03-21 · Filing date 2024-03-21 · Beef cattle carcass quality grading method based on optical testing · Status: Withdrawn · Publication: CN117934463A (en)

Priority Applications (1)

Application CN202410324603.0A · Priority date 2024-03-21 · Filing date 2024-03-21 · Title: Beef cattle carcass quality grading method based on optical testing

Publications (1)

Publication Number Publication Date
CN117934463A · 2024-04-26

Family

ID=90754166

Family Applications (1)

Application CN202410324603.0A (publication CN117934463A (en)) · Priority date 2024-03-21 · Filing date 2024-03-21 · Title: Beef cattle carcass quality grading method based on optical testing

Country Status (1)

Country Link
CN (1) CN117934463A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
CN118015785A (en) * · Priority date 2024-04-07 · Publication date 2024-05-10 · Assignee: 吉林大学 (Jilin University) · Title: Remote monitoring nursing system and method thereof



Legal Events

PB01: Publication
WW01: Invention patent application withdrawn after publication (application publication date: 2024-04-26)