CN114375465A - Picture classification method and device, storage medium and electronic equipment - Google Patents

Picture classification method and device, storage medium and electronic equipment

Info

Publication number: CN114375465A
Application number: CN201980100392.XA
Authority: CN (China)
Prior art keywords: feature matrix, picture, classified, preset, features
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 郭子亮
Current assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd; Shenzhen Huantai Technology Co Ltd
Original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd; Shenzhen Huantai Technology Co Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd and Shenzhen Huantai Technology Co Ltd

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition


Abstract

A picture classification method, a picture classification device, a storage medium and an electronic device are provided. The method comprises the following steps: acquiring a picture to be classified and a text corresponding to the picture to be classified (101); determining a first feature matrix of the picture to be classified and a second feature matrix of the text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified (102); splicing the first feature matrix and the second feature matrix to obtain a third feature matrix (103); and classifying the picture to be classified according to a preset picture classification model and the third feature matrix, so as to determine the category of the picture to be classified (104).

Description

Picture classification method and device, storage medium and electronic equipment
Technical Field
The present application belongs to the field of electronic technologies, and in particular, to a method and an apparatus for classifying pictures, a storage medium, and an electronic device.
Background
With the development of the information industry and the Internet, the representation form of media information has gradually changed from the traditional form dominated by text to a form dominated by pictures and text, or even by pictures alone. Furthermore, as the volume and traffic of media information increase, higher requirements are placed on the quality of the media information.
Taking media information containing pictures as an example, the pictures in such media information are generally classified to determine their categories, so as to determine whether a picture belongs to the pictures to be filtered. When a picture belongs to the pictures to be filtered, the media information containing the picture needs to be filtered out so as to improve the quality of the media information.
Disclosure of Invention
The embodiment of the application provides a picture classification method, a picture classification device, a storage medium and electronic equipment, which can improve the accuracy of picture classification.
In a first aspect, an embodiment of the present application provides a method for classifying pictures, including:
acquiring a picture to be classified and a text corresponding to the picture to be classified;
determining a first feature matrix of the picture to be classified and a second feature matrix of a text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
classifying the pictures to be classified according to a preset picture classification model and the third feature matrix so as to determine the classes of the pictures to be classified.
In a second aspect, an embodiment of the present application provides an image classification device, including:
the acquisition module is used for acquiring a picture to be classified and a text corresponding to the picture to be classified;
the determining module is used for determining a first feature matrix of the picture to be classified and a second feature matrix of the text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
the splicing module is used for splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
and the classification module is used for classifying the pictures to be classified according to a preset picture classification model and the third feature matrix so as to determine the classes of the pictures to be classified.
In a third aspect, an embodiment of the present application provides a storage medium, on which a computer program is stored, where when the computer program is executed on a computer, the computer is caused to execute the picture classification method provided in this embodiment.
In a fourth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor is configured to, by calling the computer program stored in the memory, execute:
acquiring a picture to be classified and a text corresponding to the picture to be classified;
determining a first feature matrix of the picture to be classified and a second feature matrix of a text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
classifying the pictures to be classified according to a preset picture classification model and the third feature matrix so as to determine the classes of the pictures to be classified.
Drawings
The technical solutions and advantages of the present application will become apparent from the following detailed description of specific embodiments of the present application when taken in conjunction with the accompanying drawings.
Fig. 1 is a schematic flowchart of a first image classification method according to an embodiment of the present application.
Fig. 2 is a second flowchart of the image classification method according to the embodiment of the present application.
Fig. 3 is a schematic structural diagram of an image classification device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a first electronic device according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a second electronic device according to an embodiment of the present application.
Detailed Description
Referring to the drawings, wherein like reference numbers refer to like elements, the principles of the present application are illustrated as being implemented in a suitable computing environment. The following description is based on illustrated embodiments of the application and should not be taken as limiting the application with respect to other embodiments that are not detailed herein.
In the related art, when pictures need to be classified, they are generally classified only according to their picture content, and the accuracy of such a classification manner is relatively low.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first picture classification method according to an embodiment of the present disclosure. The flow of the picture classification method may include the following steps:
101. And acquiring the picture to be classified and a text corresponding to the picture to be classified.
For example, some browser applications installed in electronic devices typically push media information to the user, and some of this media information may contain pictures. In order to avoid pushing media information containing pictures that should be filtered to the user, before pushing media information containing pictures, the electronic device may obtain the pictures in that media information; these pictures are the pictures to be classified. Meanwhile, the electronic device may obtain the title of each picture, where the title of the picture is the text corresponding to the picture to be classified.
102. Determining a first feature matrix of the picture to be classified and a second feature matrix of the text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified.
For example, the electronic device may perform feature extraction on the picture to be classified to extract a plurality of features of the picture to be classified. The multiple features of the picture to be classified form a first feature matrix, that is, each element in the first feature matrix represents a feature of the picture to be classified. The electronic device can perform feature extraction on the text corresponding to the picture to be classified so as to extract a plurality of features of the text corresponding to the picture to be classified. The multiple features of the text corresponding to the picture to be classified form a second feature matrix, that is, each element in the second feature matrix represents one feature of the text corresponding to the picture to be classified.
103. And splicing the first feature matrix and the second feature matrix to obtain a third feature matrix.
For example, after obtaining the first feature matrix and the second feature matrix, the electronic device may splice the first feature matrix and the second feature matrix, so as to obtain a third feature matrix.
For example, after obtaining the first feature matrix and the second feature matrix, the electronic device splices a plurality of features in the second feature matrix after the features in the last column in the first feature matrix, so as to obtain a third feature matrix.
For another example, after obtaining the first feature matrix and the second feature matrix, the electronic device may stitch a plurality of features in the second feature matrix before the features of the first column in the first feature matrix, so that a third feature matrix may be obtained.
For example, assume that the first feature matrix is [A1 A2 A3 A4] and the second feature matrix is [B1 B2 B3]. The third feature matrix may then be [A1 A2 A3 A4 B1 B2 B3], or it may be [B1 B2 B3 A1 A2 A3 A4].
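As a non-limiting illustration only, the splicing of the two feature matrices can be sketched in Python (using NumPy); the element names A1..A4 and B1..B3 follow the example above, and nothing here forms part of the claimed embodiment:

```python
import numpy as np

# Illustrative first (picture) and second (text) feature matrices from the example above.
first_feature_matrix = np.array([["A1", "A2", "A3", "A4"]], dtype=object)
second_feature_matrix = np.array([["B1", "B2", "B3"]], dtype=object)

# Splice the text features after the last column of the picture features ...
third_feature_matrix = np.concatenate([first_feature_matrix, second_feature_matrix], axis=1)
# ... or, for the alternative described above, before the first column of the picture features.
third_feature_matrix_alt = np.concatenate([second_feature_matrix, first_feature_matrix], axis=1)

print(third_feature_matrix)      # [['A1' 'A2' 'A3' 'A4' 'B1' 'B2' 'B3']]
print(third_feature_matrix_alt)  # [['B1' 'B2' 'B3' 'A1' 'A2' 'A3' 'A4']]
```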
104. And classifying the pictures to be classified according to a preset picture classification model and a third characteristic matrix so as to determine the classes of the pictures to be classified.
For example, after the third feature matrix is obtained, the electronic device may classify the picture to be classified according to the preset picture classification model and the third feature matrix, so as to determine the category of the picture to be classified.
It can be understood that, in this embodiment, the features of the picture and the text corresponding to the picture, for example, the features of the title of the picture, may be combined to classify the picture, so as to improve the accuracy of classifying the picture.
Referring to fig. 2, fig. 2 is a second flowchart illustrating a picture classification method according to an embodiment of the present disclosure. The picture classification method can comprise the following steps:
201. the electronic equipment acquires the pictures to be classified and texts corresponding to the pictures to be classified.
For example, some browser applications installed in electronic devices typically push media information to the user, and some of this media information may contain pictures. In order to avoid pushing media information containing pictures that should be filtered to the user, before pushing media information containing pictures, the electronic device may obtain the pictures in that media information; these pictures are the pictures to be classified. Meanwhile, the electronic device may obtain the title of each picture, where the title of the picture is the text corresponding to the picture to be classified.
202. The electronic equipment determines a first feature matrix of the picture to be classified and a second feature matrix of the text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified.
It can be understood that a picture classification model can be preset in the electronic device, and the preset picture classification model is a trained model used for classifying pictures. After the picture to be classified and the text corresponding to the picture to be classified are obtained, the electronic equipment can classify the picture to be classified by using the preset picture classification model.
For example, the preset image classification model may include a preset image feature extraction submodel and a text feature extraction submodel, after the image to be classified and the text corresponding to the image to be classified are acquired, the electronic device may input the image to be classified and the text corresponding to the image to be classified into the preset image classification model, and obtain a first feature matrix of the image to be classified through the preset image feature extraction submodel; and extracting the sub-model through the preset text characteristics to obtain a second characteristic matrix of the text corresponding to the picture to be classified. The first feature matrix comprises a plurality of features of the pictures to be classified, and the second feature matrix comprises a plurality of features of texts corresponding to the pictures to be classified.
In some embodiments, the electronic device can preset a residual attention network model. After the picture to be classified is acquired, the electronic device can input the picture to be classified into the residual attention network model so as to perform feature extraction on the picture to be classified, and therefore a plurality of features of the picture to be classified are extracted. The plurality of characteristics of the picture to be classified form a first characteristic matrix. The electronic device can preset a bert model. After the text corresponding to the picture to be classified is acquired, the electronic device can input the text corresponding to the picture to be classified into the bert model so as to extract the features of the text corresponding to the picture to be classified, so that a plurality of features of the text corresponding to the picture to be classified are extracted. And forming a second feature matrix by a plurality of features of the text corresponding to the picture to be classified.
For example, the first feature matrix may be a 1 × M matrix, where "1" represents the number of pictures to be classified (that is, 1) and "M" represents the number of features of the picture to be classified. The second feature matrix may be a 1 × N matrix, where "1" represents the number of texts corresponding to the picture to be classified (that is, 1) and "N" represents the number of features of the text corresponding to the picture to be classified. M may be greater than N.
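As a rough sketch of how the two feature extraction sub-models might be wired together, the following Python example uses a torchvision ResNet-50 as a stand-in for the residual attention network (torchvision does not provide one) and the Hugging Face bert-base-chinese checkpoint for the text branch; the resulting widths M = 2048 and N = 768 come from these assumed backbones, not from the embodiment:

```python
import torch
from torchvision.models import resnet50
from transformers import BertModel, BertTokenizer

# Stand-in for the preset picture feature extraction sub-model (residual attention network).
picture_backbone = resnet50(weights=None)
picture_backbone.fc = torch.nn.Identity()   # expose the pooled feature vector instead of class scores
picture_backbone.eval()

# Preset text feature extraction sub-model (bert).
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
text_backbone = BertModel.from_pretrained("bert-base-chinese")
text_backbone.eval()

picture_to_be_classified = torch.rand(1, 3, 224, 224)   # one picture, as a tensor
corresponding_text = "待分类图片的标题"                    # the title of the picture

with torch.no_grad():
    first_feature_matrix = picture_backbone(picture_to_be_classified)   # shape (1, M), here M = 2048
    tokens = tokenizer(corresponding_text, return_tensors="pt")
    second_feature_matrix = text_backbone(**tokens).pooler_output       # shape (1, N), here N = 768

print(first_feature_matrix.shape, second_feature_matrix.shape)
```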
203. And the electronic equipment splices the first feature matrix and the second feature matrix to obtain a third feature matrix.
For example, the preset picture classification model may include a third full connection layer. After the first feature matrix and the second feature matrix are obtained, the electronic device may splice the first feature matrix and the second feature matrix at a third full connection layer of a preset image classification model, so as to obtain a third feature matrix.
For example, after obtaining the first feature matrix and the second feature matrix, the electronic device may stitch a plurality of features in the second feature matrix behind features in a last column in the first feature matrix at a third full connection layer of a preset picture classification model, so as to obtain a third feature matrix.
For another example, after obtaining the first feature matrix and the second feature matrix, the electronic device may stitch a plurality of features in the second feature matrix to be in front of the features in the first column in the first feature matrix at a third full-link layer of the preset picture classification model, so as to obtain a third feature matrix.
It should be noted that, in the embodiment of the present application, how to splice the first feature matrix and the second feature matrix depends on a training process of a preset image classification model. That is, in the process 103, the manner of splicing the first feature matrix and the second feature matrix is consistent with the manner of splicing the first feature matrix and the second feature matrix in the training process of the preset image classification model.
204. The electronic equipment determines the probability that the picture to be classified belongs to each of a plurality of preset classes according to a preset picture classification model and a third feature matrix to obtain a plurality of probabilities.
205. The electronic device determines a maximum probability from the plurality of probabilities.
206. And the electronic equipment determines the preset category corresponding to the maximum probability as the category of the picture to be classified.
For example, the preset picture classification model may include a softmax layer. After the third feature matrix is obtained, the electronic device may calculate, at the softmax layer of the preset picture classification model and according to the third feature matrix, the probability that the picture to be classified belongs to each of a plurality of preset categories, and determine the preset category corresponding to the maximum probability as the category of the picture to be classified. The plurality of preset categories may be set according to actual situations, and are not limited herein.
In some embodiments, the electronic device can preset a classification model. After obtaining the third feature matrix, the electronic device may input the third feature matrix into the preset classification model to determine the probability that the picture to be classified belongs to each of the multiple preset classes, obtain multiple probabilities, determine a maximum probability from the multiple probabilities, and determine the preset class corresponding to the maximum probability as the class of the picture to be classified.
For example, assume that there are 6 preset categories, namely preset categories C1, C2, C3, C4, C5 and C6. The electronic device can calculate, according to the third feature matrix, the probability that the picture to be classified belongs to each of the 6 preset categories. For example, the calculated probabilities are: 90% (corresponding to the preset category C1), 1% (corresponding to the preset category C2), 3% (corresponding to the preset category C3), 3% (corresponding to the preset category C4), 1% (corresponding to the preset category C5), and 2% (corresponding to the preset category C6). The maximum probability is 90%, which corresponds to the preset category C1. Then, the electronic device may determine that the category of the picture to be classified is C1.
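A minimal sketch of the probability computation and category selection described above, assuming illustrative pre-softmax scores for the six preset categories (the numbers are made up for demonstration):

```python
import numpy as np

preset_categories = ["C1", "C2", "C3", "C4", "C5", "C6"]

# Illustrative scores produced for the picture to be classified before the softmax layer.
logits = np.array([4.5, 0.0, 1.1, 1.1, 0.0, 0.7])

# softmax: probability that the picture to be classified belongs to each preset category.
probabilities = np.exp(logits) / np.exp(logits).sum()

# Determine the maximum probability and take its preset category as the picture's category.
index_of_max = int(np.argmax(probabilities))
print(preset_categories[index_of_max], round(float(probabilities[index_of_max]), 2))
```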
In some embodiments, before the flow 203, the method may further include:
the electronic equipment judges whether the number of the first features in the first feature matrix is less than or equal to the number of the second features in the second feature matrix;
if the number of the first features is smaller than or equal to the number of the second features, the electronic equipment performs dimensionality reduction on the second feature matrix to enable the number of the second features to be smaller than the number of the first features, and a second feature matrix after dimensionality reduction is obtained;
the process 203 may include:
and the electronic equipment splices the first feature matrix and the second feature matrix after dimensionality reduction to obtain a third feature matrix.
For example, the preset image classification model may include a first full connection layer, and when the number of the first features in the first feature matrix is less than or equal to the number of the second features in the second feature matrix, the electronic device may perform a dimension reduction operation on the second feature matrix at the first full connection layer, so that the number of the second features is less than the number of the first features, and the dimension-reduced second feature matrix is obtained. After the second feature matrix after dimension reduction is obtained, the electronic device may splice the first feature matrix and the second feature matrix after dimension reduction at a third full connection layer of a preset image classification model to obtain a third feature matrix.
For another example, the preset picture classification model may include a first fully connected layer and a second fully connected layer. If the number of columns of the first feature matrix is greater than 512, the electronic device may receive the first feature matrix at the first fully connected layer and then perform a dimension reduction operation on it to reduce its number of columns to 512. If the number of columns of the second feature matrix is greater than 256, the electronic device may receive the second feature matrix at the second fully connected layer and then perform a dimension reduction operation on it to reduce its number of columns to 256. After obtaining the first feature matrix after dimension reduction and the second feature matrix after dimension reduction, the electronic device may splice them at a third fully connected layer of the preset picture classification model to obtain a third feature matrix.
In some embodiments, the electronic device may preset a dimension reduction module, and in order to facilitate the picture information to participate in the prediction, the electronic device may determine whether the number of first features (the number of picture features) of the first feature matrix is less than or equal to the number of second features (the number of features of the text corresponding to the picture) of the second feature matrix. If the number of the picture features is less than or equal to the number of the features of the text corresponding to the picture, the electronic device can perform dimension reduction operation on the features of the text corresponding to the picture in the dimension reduction module, so that the number of the features of the text corresponding to the picture is less than the number of the features of the picture, and a second feature matrix after dimension reduction is obtained. Subsequently, the electronic device may splice the first feature matrix and the reduced second feature matrix, thereby obtaining a third feature matrix.
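A sketch of the dimension reduction performed by the first and second fully connected layers before splicing, written with PyTorch; the 2048/768 input widths are assumptions carried over from the earlier sketch, and the 512/256 targets follow the example above:

```python
import torch
import torch.nn as nn

first_fully_connected = nn.Linear(2048, 512)   # reduces the picture feature matrix to 512 columns
second_fully_connected = nn.Linear(768, 256)   # reduces the text feature matrix to 256 columns

first_feature_matrix = torch.rand(1, 2048)     # 1 x M picture features (assumed M = 2048)
second_feature_matrix = torch.rand(1, 768)     # 1 x N text features (assumed N = 768)

reduced_first = first_fully_connected(first_feature_matrix)      # shape (1, 512)
reduced_second = second_fully_connected(second_feature_matrix)   # shape (1, 256)

# The third fully connected layer then operates on the spliced matrix.
third_feature_matrix = torch.cat([reduced_first, reduced_second], dim=1)   # shape (1, 768)
print(third_feature_matrix.shape)
```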
In some embodiments, the process 201 may include:
the electronic equipment acquires a picture to be classified;
the electronic equipment judges whether a text corresponding to the picture to be classified exists or not;
if the text corresponding to the picture to be classified does not exist, the electronic device determines a second feature matrix of the text corresponding to the picture to be classified, which may include:
the electronic equipment acquires a zero setting matrix, wherein the value of each element in the zero setting matrix is 0;
the process 203 may include:
and the electronic equipment splices the first characteristic matrix and the zero setting matrix to obtain a third characteristic matrix.
In some embodiments, there may be situations where there is no text corresponding to the picture to be classified. For example, a picture may not have a corresponding title. Then, the electronic device cannot determine the second feature matrix of the text corresponding to the picture to be classified. Therefore, in the embodiment of the present application, a zero matrix is also preset, and the value of each element in the zero matrix is 0. When the text corresponding to the picture to be classified does not exist, the electronic equipment can directly acquire the zero setting matrix. Subsequently, the electronic device may splice the first feature matrix and the zero matrix to obtain a third feature matrix, so as to classify the picture to be classified according to a preset picture classification model and the third feature matrix to determine the category of the picture to be classified.
In some embodiments, when there is no text corresponding to the picture to be classified, the preset text feature extraction sub-model outputs a zero matrix. Then, the electronic device can splice the first feature matrix and the zero setting matrix at a third full connection layer of the preset image classification model to obtain a third feature matrix.
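A sketch of the zero-setting matrix fallback when the picture to be classified has no corresponding text; the width of the zero matrix matching the (possibly reduced) text feature branch is an assumption:

```python
import numpy as np

def build_third_feature_matrix(first_feature_matrix, second_feature_matrix=None, text_width=256):
    """Splice the picture features with the text features, or with a zero-setting
    matrix (every element 0) when no text corresponds to the picture to be classified."""
    if second_feature_matrix is None:
        second_feature_matrix = np.zeros((first_feature_matrix.shape[0], text_width))
    return np.concatenate([first_feature_matrix, second_feature_matrix], axis=1)

# Picture without a title: the text branch is replaced by the zero-setting matrix.
third_feature_matrix = build_third_feature_matrix(np.random.rand(1, 512))
print(third_feature_matrix.shape)   # (1, 768)
```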
In some embodiments, the to-be-classified picture is one of the pictures in the to-be-pushed information, and after the process 206, the method may further include:
the method comprises the steps that the electronic equipment obtains a preset filtering category set, wherein the preset filtering category set comprises a plurality of filtering categories;
the electronic equipment judges whether the category of the picture to be classified is matched with one filtering category in a preset filtering category set or not;
and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, the electronic equipment does not push the information to be pushed.
For example, the electronic device may preset a filtering category set, which may include a plurality of filtering categories. That is, when a category of a picture belongs to one of the set of filtering categories, the electronic device may determine that the picture is a filterable picture. Then, when the to-be-pushed information that needs to be pushed to the user includes the picture, the electronic device may not push the to-be-pushed information. Or, when the picture needs to be pushed to the user, the electronic device may not push the picture to the user.
In this embodiment of the application, after determining the category of one of the pictures in the to-be-pushed information through the processes 201 to 206, the electronic device may obtain a preset filtering category set. Then the electronic equipment can judge whether the category of the picture is matched with one filtering category in a preset filtering category set or not; and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, the electronic equipment does not push the information to be pushed.
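A minimal sketch of the filtering decision described above; the category names and the contents of the preset filtering category set are illustrative only:

```python
def should_push(picture_category, preset_filtering_categories):
    """Return False when the category of the picture matches one filtering category,
    meaning the information to be pushed containing this picture is not pushed."""
    return picture_category not in preset_filtering_categories

preset_filtering_categories = {"C1", "C4"}              # illustrative preset filtering category set
print(should_push("C1", preset_filtering_categories))   # False -> do not push the information
print(should_push("C3", preset_filtering_categories))   # True  -> the information may be pushed
```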
In some embodiments, before the flow 201, the following may be further included:
the method comprises the steps that the electronic equipment obtains a plurality of sample pictures, texts corresponding to the sample pictures and a plurality of preset categories;
the electronic equipment trains a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures and the preset categories;
and the electronic equipment determines the trained model as a preset picture classification model.
For example, the electronic device may preset a model. Then, the electronic device may obtain a plurality of sample pictures, texts corresponding to the respective sample pictures, and a plurality of preset categories (picture categories). Then, the electronic device can input the plurality of sample pictures, the text corresponding to each sample picture and the plurality of preset categories into the preset model so as to train the preset model. The trained model can be used as a preset image classification model.
In some embodiments, the training of the preset model by the electronic device according to the plurality of sample pictures, the text corresponding to each sample picture, and the plurality of preset categories may include:
the electronic equipment conducts iterative training on a preset model according to the sample pictures, the texts corresponding to the sample pictures, the preset categories and the preset loss function until the loss value of the loss function is minimum.
For example, the electronic device may preset a loss function. Then, the electronic device can perform iterative training on the preset model according to the plurality of sample pictures, the text corresponding to each sample picture, the plurality of preset categories and the preset loss function until the loss value of the loss function is minimum. The electronics can determine that the model converges when the loss value of the loss function is minimal.
For example, the electronic device may input a plurality of sample pictures, a text corresponding to each sample picture, a category of each sample picture, and a plurality of preset categories into the preset model to obtain the prediction matrix. The prediction matrix is an M × N matrix, where M represents the number of sample pictures or the number of texts corresponding to the sample pictures. It can be known that the number of sample pictures is equal to the number of texts corresponding to the sample pictures. N represents the number of preset categories. The ith row and the jth column of the prediction matrix represent the probability that the category of the ith sample picture predicted by the model is the jth category in the preset categories.
The electronic device also maintains a real matrix, which is an M × N matrix, where M represents the number of sample pictures or the number of texts corresponding to the sample pictures (the number of sample pictures is equal to the number of texts corresponding to the sample pictures) and N represents the number of preset categories. The element in the ith row and the jth column of the real matrix represents the true probability that the category of the ith sample picture is the jth category among the plurality of preset categories. For example, assuming that the category of the 1st sample picture is the 2nd category of the plurality of preset categories, the element in row 0, column 1 of the real matrix is 1, and the other elements in row 0 of the real matrix are all 0.
The electronic equipment can preset a loss function, and after the prediction matrix is obtained, the electronic equipment can input the prediction matrix and the real matrix into the preset loss function for calculation to obtain a loss value. Each time a loss value is obtained, the loss value can be transmitted back to each layer of the preset model so as to adjust the parameters of the preset model until the preset model converges. The trained model can be used as a preset image classification model.
For example, the predetermined model may include a predetermined residual attention network submodel, a predetermined bert submodel, a first fully-connected layer, a second fully-connected layer, a third fully-connected layer, and a softmax layer. Then, the electronic device may sequentially return the loss value to the softmax layer, the third full-link layer, the first full-link layer and the second full-link layer, the preset residual attention network submodel, and the preset bert model to adjust the parameters of the preset model. The trained model can be used as a preset image classification model. The trained residual attention network submodel can be used as a preset image feature extraction submodel, and the trained bert submodel can be used as a preset text feature extraction submodel.
For another example, the predetermined model may include a predetermined residual attention network submodel, a predetermined bert submodel, a first fully-connected layer, a second fully-connected layer, a third fully-connected layer, and a softmax layer. Then, the electronic device may sequentially return the loss value to the softmax layer, the third fully-connected layer, the first fully-connected layer, and the second fully-connected layer to adjust the parameters of the preset model. The trained model can be used as a preset image classification model. And the preset residual attention network submodel is used as a preset picture feature extraction submodel, and the preset bert submodel is used as a text feature extraction submodel.
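A sketch of the loss computation and parameter adjustment described above, using PyTorch. The patent does not name a specific loss function; cross-entropy is used here purely as an assumption, and the single linear classifier head stands in for the trainable layers of the preset model:

```python
import torch
import torch.nn as nn

num_samples, num_categories, feature_width = 4, 6, 768

# Hypothetical trainable head standing in for the fully connected and softmax layers.
classifier = nn.Linear(feature_width, num_categories)
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_function = nn.CrossEntropyLoss()   # assumed loss; compares the predictions against the one-hot real matrix

spliced_features = torch.rand(num_samples, feature_width)   # spliced sample picture / text features
true_categories = torch.tensor([1, 0, 3, 5])                # category index of each sample picture

for _ in range(100):                                        # iterative training
    optimizer.zero_grad()
    prediction_matrix = classifier(spliced_features)        # M x N prediction matrix (logits)
    loss_value = loss_function(prediction_matrix, true_categories)
    loss_value.backward()                                   # loss value returned to the trainable layers
    optimizer.step()
```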
In some embodiments, after obtaining the plurality of sample pictures, the text corresponding to each sample picture, and the plurality of preset categories, the electronic device may input the plurality of sample pictures into a preset residual attention network model to extract a plurality of features of each sample picture, so as to obtain a sample picture matrix. The sample picture matrix is an M × N matrix, M represents the number of sample pictures, and N represents the number of features of each sample picture. Wherein, the number of the features of each sample picture is the same. The ith row and jth column of the sample picture matrix represent the jth feature of the ith sample picture. The electronic equipment can input texts corresponding to the sample pictures into a preset bert model so as to extract a plurality of characteristics of the texts corresponding to the sample pictures and obtain a text matrix. The text matrix is an M × P matrix, M represents the number of texts corresponding to a plurality of sample pictures, and P represents the number of features of the texts corresponding to each sample picture. And the quantity of the features of the text corresponding to each sample picture is the same. The ith row and the jth column of the text matrix represent the jth feature of the text corresponding to the ith sample picture. Wherein P is less than N. Then, the electronic device can splice the sample picture matrix and the text matrix to obtain a spliced matrix, then input the spliced matrix and the preset classes into a preset model, and perform supervised training on the spliced matrix by using the preset classes to obtain a trained model. The trained model can be used as a preset classification model.
In some embodiments, after a preset model is trained by inputting a batch of sample pictures and texts corresponding to the sample pictures, a trained model can be obtained, and the electronic device can obtain a batch of verification data from the verification set and input the verification data into the trained model to verify the accuracy of the trained model. The verification set comprises a plurality of batches of verification data, and each batch of verification data comprises a plurality of pictures, texts corresponding to the pictures and the categories of the pictures. When the accuracy obtained this time is greater than the accuracy obtained last time, the electronic device may store the trained model this time. When the accuracy obtained this time is smaller than the accuracy obtained last time, the electronic device may not store the trained model this time. When the accuracy of the trained model obtained multiple times does not increase, for example, when the accuracy of the trained model obtained multiple times is: 87%, 86.9%, 86.7%, and 86.8%, the electronic device may confirm that the model training is finished, and the model finally obtained by the electronic device is the model with the accuracy rate of 87%. The model can be used as a preset picture classification model.
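A sketch of the checkpoint-keeping and stopping rule described above; train_one_batch and evaluate are hypothetical callbacks, and stopping after three consecutive checks without improvement is an assumption mirroring the 87% / 86.9% / 86.7% / 86.8% example:

```python
def train_with_validation(train_batches, validation_batches, train_one_batch, evaluate, patience=3):
    """Keep the trained model only when the verification accuracy improves, and stop once the
    accuracy has not increased for `patience` consecutive checks (illustrative logic only)."""
    best_accuracy, best_model, checks_without_gain = 0.0, None, 0
    for batch in train_batches:
        model = train_one_batch(batch)                    # hypothetical: train on one batch of samples
        accuracy = evaluate(model, validation_batches)    # hypothetical: accuracy on the verification set
        if accuracy > best_accuracy:
            best_accuracy, best_model, checks_without_gain = accuracy, model, 0
        else:
            checks_without_gain += 1
            if checks_without_gain >= patience:
                break
    return best_model   # used as the preset picture classification model
```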
Referring to fig. 3, fig. 3 is a schematic structural diagram of a picture classification device according to an embodiment of the present application. The picture classification apparatus may include: an acquisition module 301, a determination module 302, a concatenation module 303, and a classification module 304.
The obtaining module 301 is configured to obtain a picture to be classified and a text corresponding to the picture to be classified.
A determining module 302, configured to determine a first feature matrix of the picture to be classified and a second feature matrix of the text corresponding to the picture to be classified, where the first feature matrix includes multiple features of the picture to be classified, and the second feature matrix includes multiple features of the text corresponding to the picture to be classified.
And the splicing module 303 is configured to splice the first feature matrix and the second feature matrix to obtain a third feature matrix.
The classification module 304 is configured to classify the picture to be classified according to a preset picture classification model and the third feature matrix, so as to determine the category of the picture to be classified.
In some embodiments, the classification module 304 may be configured to: determining the probability that the picture to be classified belongs to each of a plurality of preset classes according to a preset picture classification model and the third feature matrix to obtain a plurality of probabilities; determining a maximum probability from the plurality of probabilities; and determining the preset category corresponding to the maximum probability as the category of the picture to be classified.
In some embodiments, the picture to be classified is one of pictures in the information to be pushed, and the classification module 304 may be configured to: acquiring a preset filtering category set, wherein the preset filtering category set comprises a plurality of filtering categories; judging whether the category of the picture to be classified is matched with one filtering category in the preset filtering category set; and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, not pushing the information to be pushed.
In some embodiments, the splicing module 303 may be configured to: and splicing the plurality of features in the second feature matrix into the features of the last column in the first feature matrix to obtain a third feature matrix.
In some embodiments, the splicing module 303 may be configured to: judging whether the number of first features in the first feature matrix is less than or equal to the number of second features in the second feature matrix; if the number of the first features is smaller than or equal to the number of the second features, performing dimension reduction operation on the second feature matrix to enable the number of the second features to be smaller than the number of the first features, and obtaining a second feature matrix after dimension reduction; and splicing the first feature matrix and the second feature matrix after dimensionality reduction to obtain a third feature matrix.
In some embodiments, the obtaining module 301 may be configured to: acquiring a picture to be classified; judging whether a text corresponding to the picture to be classified exists or not;
if there is no text corresponding to the picture to be classified, the determining module 302 may be configured to: acquiring a zero setting matrix, wherein the value of each element in the zero setting matrix is 0;
the splicing module 303 may be configured to: and splicing the first feature matrix and the zero setting matrix to obtain a third feature matrix.
In some embodiments, the obtaining module 301 may be configured to: acquiring a plurality of sample pictures, texts corresponding to the sample pictures and a plurality of preset categories; training a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures and the plurality of preset categories; and determining the trained model as the preset image classification model.
In some embodiments, the obtaining module 301 may be configured to: and performing iterative training on a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures, the plurality of preset categories and a preset loss function until the loss value of the loss function is minimum.
The embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed on a computer, the computer is caused to execute the flow in the image classification method provided in this embodiment.
The embodiment of the present application further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor is configured to execute the flow in the image classification method provided in this embodiment by calling the computer program stored in the memory.
For example, the electronic device may be a mobile terminal such as a tablet computer or a smart phone. Referring to fig. 4, fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The electronic device 400 may include components such as a memory 401, a processor 402, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 4 does not constitute a limitation of the electronic device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The memory 401 may be used to store applications and data. The memory 401 stores applications containing executable code. The application programs may constitute various functional modules. The processor 402 executes various functional applications and data processing by running an application program stored in the memory 401.
The processor 402 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 401 and calling data stored in the memory 401, thereby integrally monitoring the electronic device.
In this embodiment, the processor 402 in the electronic device loads the executable code corresponding to the process of one or more application programs into the memory 401 according to the following instructions, and the processor 402 runs the application program stored in the memory 401, so as to implement the following processes:
acquiring a picture to be classified and a text corresponding to the picture to be classified;
determining a first feature matrix of the picture to be classified and a second feature matrix of a text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
classifying the pictures to be classified according to a preset picture classification model and the third feature matrix so as to determine the classes of the pictures to be classified.
Referring to fig. 5, fig. 5 is a schematic view of a second structure of an electronic device according to an embodiment of the present disclosure.
The electronic device 400 may include components such as a memory 401, a processor 402, an input unit 403, an output unit 404, a display screen 405, and the like.
The memory 401 may be used to store applications and data. The memory 401 stores applications containing executable code. The application programs may constitute various functional modules. The processor 402 executes various functional applications and data processing by running the application programs stored in the memory 401.
The processor 402 is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, and performs various functions of the electronic device and processes data by running or executing an application program stored in the memory 401 and calling data stored in the memory 401, thereby performing overall monitoring of the electronic device.
The input unit 403 may be used to receive input numbers, character information, or user characteristic information, such as a fingerprint, and generate a keyboard, mouse, joystick, optical, or trackball signal input related to user setting and function control.
The output unit 404 may be used to display information input by or provided to a user and various graphical user interfaces of the electronic device, which may be made up of graphics, text, icons, video, and any combination thereof. The output unit may include a display panel.
The display screen 405 may be used to display text, pictures, etc.
In this embodiment, the processor 402 in the electronic device loads the executable code corresponding to the processes of one or more application programs into the memory 401 according to the following instructions, and the processor 402 runs the application programs stored in the memory 401, so as to implement the following processes:
acquiring a picture to be classified and a text corresponding to the picture to be classified;
determining a first feature matrix of the picture to be classified and a second feature matrix of a text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
classifying the pictures to be classified according to a preset picture classification model and the third feature matrix so as to determine the classes of the pictures to be classified.
In some embodiments, the processor 402, when executing the classifying of the picture to be classified according to the preset picture classification model and the third feature matrix to determine the category of the picture to be classified, may execute: determining the probability that the picture to be classified belongs to each of a plurality of preset classes according to a preset picture classification model and the third feature matrix to obtain a plurality of probabilities; determining a maximum probability from the plurality of probabilities; and determining the preset category corresponding to the maximum probability as the category of the picture to be classified.
In some embodiments, the picture to be classified is one of the pictures in the information to be pushed, and after the processor 402 determines the preset category corresponding to the maximum probability as the category of the picture to be classified, the processor may further perform: acquiring a preset filtering category set, wherein the preset filtering category set comprises a plurality of filtering categories; judging whether the category of the picture to be classified is matched with one filtering category in the preset filtering category set; and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, not pushing the information to be pushed.
In some embodiments, when the processor 402 performs the splicing of the first feature matrix and the second feature matrix to obtain a third feature matrix, the processor may perform the splicing of a plurality of features in the second feature matrix to a last column of features in the first feature matrix to obtain the third feature matrix.
In some embodiments, before the processor 402 performs the splicing of the first feature matrix and the second feature matrix to obtain a third feature matrix, the following may be further performed: judging whether the number of first features in the first feature matrix is less than or equal to the number of second features in the second feature matrix; if the number of the first features is smaller than or equal to the number of the second features, performing dimension reduction operation on the second feature matrix to enable the number of the second features to be smaller than the number of the first features, and obtaining a second feature matrix after dimension reduction; when the processor 402 performs the splicing of the first feature matrix and the second feature matrix to obtain a third feature matrix, the following steps may be performed: and splicing the first feature matrix and the second feature matrix after dimensionality reduction to obtain a third feature matrix.
In some embodiments, when the processor 402 executes the acquiring of the picture to be classified and the text corresponding to the picture to be classified, it may execute: acquiring a picture to be classified; judging whether a text corresponding to the picture to be classified exists or not; if the text corresponding to the picture to be classified does not exist, the processor 402 may execute the following steps when the second feature matrix of the text corresponding to the picture to be classified is determined: acquiring a zero setting matrix, wherein the value of each element in the zero setting matrix is 0; when the processor 402 performs the splicing of the first feature matrix and the second feature matrix to obtain a third feature matrix, the following steps may be performed: and splicing the first feature matrix and the zero setting matrix to obtain a third feature matrix.
In some embodiments, before the processor 402 performs the acquiring of the picture to be classified and the text corresponding to the picture to be classified, the following may be further performed: acquiring a plurality of sample pictures, texts corresponding to the sample pictures and a plurality of preset categories; training a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures and the plurality of preset categories; and determining the trained model as the preset image classification model.
In some embodiments, when the processor 402 performs the training of the preset model according to the plurality of sample pictures, the text corresponding to each sample picture, and the plurality of preset categories, the method may further include: and performing iterative training on a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures, the plurality of preset categories and a preset loss function until the loss value of the loss function is minimum.
In the above embodiments, the descriptions of the embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description of the image classification method, and are not described herein again.
The image classification device provided in the embodiment of the present application and the image classification method in the above embodiment belong to the same concept, and any one of the methods provided in the embodiments of the image classification method may be operated on the image classification device, and a specific implementation process thereof is described in detail in the embodiment of the image classification method, and is not described herein again.
It should be noted that, for the image classification method described in the embodiment of the present application, it can be understood by those skilled in the art that all or part of the process for implementing the image classification method described in the embodiment of the present application can be completed by controlling the related hardware through a computer program, where the computer program can be stored in a computer-readable storage medium, such as a memory, and executed by at least one processor, and during the execution process, the process of the embodiment of the image classification method can be included. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a Random Access Memory (RAM), or the like.
For the image classification device in the embodiment of the present application, each functional module may be integrated into one processing chip, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium, such as a read-only memory, a magnetic or optical disk, or the like.
The above detailed description is provided for a picture classification method, an apparatus, a storage medium, and an electronic device provided in the embodiments of the present application, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiments is only used to help understanding the method and the core idea of the present application; meanwhile, for those skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (20)

  1. A picture classification method comprises the following steps:
    acquiring a picture to be classified and a text corresponding to the picture to be classified;
    determining a first feature matrix of the picture to be classified and a second feature matrix of a text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
    splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
    classifying the picture to be classified according to a preset picture classification model and the third feature matrix, so as to determine the category of the picture to be classified.
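Read as an engineering recipe, claim 1 corresponds to the short pipeline sketched below; the two feature extractors and the predict()-style classifier interface are assumptions made for illustration and are not named in the claim.

```python
import numpy as np

def classify_picture(picture, text, extract_picture_features, extract_text_features, model):
    # First feature matrix: features of the picture to be classified (extractor is hypothetical).
    first = np.ravel(extract_picture_features(picture))
    # Second feature matrix: features of the corresponding text (extractor is hypothetical).
    second = np.ravel(extract_text_features(text))
    # Third feature matrix: splice the picture features and the text features together.
    third = np.concatenate([first, second])
    # Classify with the preset picture classification model (scikit-learn-style predict assumed).
    return model.predict(third.reshape(1, -1))[0]
```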
  2. The picture classification method according to claim 1, wherein the classifying the picture to be classified according to a preset picture classification model and the third feature matrix to determine the category of the picture to be classified comprises:
    determining the probability that the picture to be classified belongs to each of a plurality of preset categories according to a preset picture classification model and the third feature matrix to obtain a plurality of probabilities;
    determining a maximum probability from the plurality of probabilities;
    and determining the preset category corresponding to the maximum probability as the category of the picture to be classified.
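The selection step of claim 2 can be sketched as follows, assuming the preset model yields one score per preset category and that a softmax converts the scores to probabilities; the softmax itself is an assumption, since the claim only requires per-category probabilities and picking the maximum.

```python
import numpy as np

def pick_category(scores, preset_categories):
    # Convert one score per preset category into probabilities (softmax assumed).
    shifted = np.exp(scores - np.max(scores))
    probabilities = shifted / shifted.sum()
    # The preset category with the maximum probability becomes the picture's category.
    return preset_categories[int(np.argmax(probabilities))]

# Example (hypothetical categories and scores):
# pick_category(np.array([1.2, 0.3, 2.5]), ["landscape", "portrait", "advertisement"])
```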
  3. The picture classification method according to claim 2, wherein the picture to be classified is one of the pictures in the information to be pushed, and after the preset category corresponding to the maximum probability is determined as the category of the picture to be classified, the method further comprises:
    acquiring a preset filtering category set, wherein the preset filtering category set comprises a plurality of filtering categories;
    judging whether the category of the picture to be classified is matched with one filtering category in the preset filtering category set;
    and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, not pushing the information to be pushed.
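The filtering of claim 3 amounts to a set-membership test performed before pushing; in the sketch below the filtering categories are plain strings and push is a hypothetical delivery callback.

```python
def maybe_push(information, picture_category, preset_filter_categories, push):
    # If the picture's category matches any filtering category, withhold the information.
    if picture_category in set(preset_filter_categories):
        return False
    push(information)    # hypothetical delivery function
    return True
```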
  4. The picture classification method according to claim 1, wherein the splicing the first feature matrix and the second feature matrix to obtain a third feature matrix comprises:
    splicing the plurality of features in the second feature matrix after the features of the last column in the first feature matrix to obtain a third feature matrix.
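Under the reading that the text features are appended as extra columns after the last column of the picture features, the splicing of claim 4 is a column-wise concatenation; the matrix shapes below are illustrative assumptions.

```python
import numpy as np

# Assumed shapes: one row per picture or text, one column per feature.
first_feature_matrix = np.random.rand(1, 512)    # picture features (512 features assumed)
second_feature_matrix = np.random.rand(1, 128)   # text features (128 features assumed)

# Splice the text features after the last column of the picture features.
third_feature_matrix = np.concatenate([first_feature_matrix, second_feature_matrix], axis=1)
print(third_feature_matrix.shape)                # (1, 640)
```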
  5. The picture classification method according to claim 1, wherein before the splicing the first feature matrix and the second feature matrix to obtain a third feature matrix, the method further comprises:
    judging whether the number of first features in the first feature matrix is less than or equal to the number of second features in the second feature matrix;
    if the number of the first features is less than or equal to the number of the second features, performing a dimension reduction operation on the second feature matrix so that the number of the second features is smaller than the number of the first features, to obtain a dimension-reduced second feature matrix;
    the splicing the first feature matrix and the second feature matrix to obtain a third feature matrix includes:
    splicing the first feature matrix and the dimension-reduced second feature matrix to obtain a third feature matrix.
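Claim 5 leaves the dimension reduction technique unspecified; the sketch below uses a random linear projection purely as a stand-in, shrinking the text feature matrix until it has fewer features (columns) than the picture feature matrix before splicing.

```python
import numpy as np

def splice_with_reduction(first, second, seed=0):
    """Reduce the second (text) feature matrix if it has at least as many
    features as the first (picture) feature matrix, then splice the two."""
    n_first, n_second = first.shape[1], second.shape[1]
    if n_first <= n_second:
        rng = np.random.default_rng(seed)
        projection = rng.standard_normal((n_second, n_first - 1))
        second = second @ projection                 # dimension-reduced second feature matrix
    return np.concatenate([first, second], axis=1)   # third feature matrix
```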
  6. The picture classification method according to claim 1, wherein the obtaining of the picture to be classified and the text corresponding to the picture to be classified includes:
    acquiring a picture to be classified;
    judging whether a text corresponding to the picture to be classified exists or not;
    if the text corresponding to the picture to be classified does not exist, the determining the second feature matrix of the text corresponding to the picture to be classified comprises the following steps:
    acquiring a zero matrix, wherein the value of each element in the zero matrix is 0;
    the splicing the first feature matrix and the second feature matrix to obtain a third feature matrix includes:
    and splicing the first feature matrix and the zero matrix to obtain a third feature matrix.
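The fallback of claim 6, for a picture that has no accompanying text, can be sketched as substituting an all-zero matrix of the expected text-feature shape; the 128-feature width is an assumption.

```python
import numpy as np

def text_features_or_zeros(text, extract_text_features, text_feature_dim=128):
    # With no corresponding text, use a zero matrix in place of the second feature matrix.
    if not text:
        return np.zeros((1, text_feature_dim))
    return extract_text_features(text)    # hypothetical text feature extractor
```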
  7. The picture classification method according to claim 1, wherein before the obtaining of the picture to be classified and the text corresponding to the picture to be classified, the method further comprises:
    acquiring a plurality of sample pictures, texts corresponding to the sample pictures and a plurality of preset categories;
    training a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures and the plurality of preset categories;
    and determining the trained model as the preset picture classification model.
  8. The picture classification method according to claim 7, wherein the training of the preset model according to the plurality of sample pictures, the text corresponding to each sample picture, and the plurality of preset categories includes:
    performing iterative training on the preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures, the plurality of preset categories, and a preset loss function until the loss value of the loss function reaches a minimum.
  9. A picture classification apparatus, comprising:
    the acquisition module is used for acquiring a picture to be classified and a text corresponding to the picture to be classified;
    the determining module is used for determining a first feature matrix of the picture to be classified and a second feature matrix of the text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
    the splicing module is used for splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
    and the classification module is used for classifying the pictures to be classified according to a preset picture classification model and the third feature matrix so as to determine the classes of the pictures to be classified.
  10. The picture classification apparatus according to claim 9, wherein the classification module is configured to:
    determining the probability that the picture to be classified belongs to each of a plurality of preset categories according to a preset picture classification model and the third feature matrix to obtain a plurality of probabilities;
    determining a maximum probability from the plurality of probabilities;
    and determining the preset category corresponding to the maximum probability as the category of the picture to be classified.
  11. The picture classification apparatus according to claim 10, wherein the picture to be classified is one of the pictures in the information to be pushed, and the classification module is configured to:
    acquiring a preset filtering category set, wherein the preset filtering category set comprises a plurality of filtering categories;
    judging whether the category of the picture to be classified is matched with one filtering category in the preset filtering category set;
    and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, not pushing the information to be pushed.
  12. A storage medium having stored therein a computer program which, when run on a computer, causes the computer to execute the picture classification method according to any one of claims 1 to 8.
  13. An electronic device, comprising a processor and a memory, wherein the memory stores a computer program, and the processor is configured to perform, by calling the computer program stored in the memory:
    acquiring a picture to be classified and a text corresponding to the picture to be classified;
    determining a first feature matrix of the picture to be classified and a second feature matrix of a text corresponding to the picture to be classified, wherein the first feature matrix comprises a plurality of features of the picture to be classified, and the second feature matrix comprises a plurality of features of the text corresponding to the picture to be classified;
    splicing the first feature matrix and the second feature matrix to obtain a third feature matrix;
    classifying the picture to be classified according to a preset picture classification model and the third feature matrix, so as to determine the category of the picture to be classified.
  14. The electronic device of claim 13, wherein the processor is configured to perform:
    determining the probability that the picture to be classified belongs to each of a plurality of preset categories according to a preset picture classification model and the third feature matrix to obtain a plurality of probabilities;
    determining a maximum probability from the plurality of probabilities;
    and determining the preset category corresponding to the maximum probability as the category of the picture to be classified.
  15. The electronic device of claim 14, wherein the picture to be classified is one of the pictures in the information to be pushed, and the processor is configured to perform:
    acquiring a preset filtering category set, wherein the preset filtering category set comprises a plurality of filtering categories;
    judging whether the category of the picture to be classified is matched with one filtering category in the preset filtering category set;
    and if the category of the picture to be classified is matched with one filtering category in the preset filtering category set, not pushing the information to be pushed.
  16. The electronic device of claim 13, wherein the processor is configured to perform:
    splicing the plurality of features in the second feature matrix after the features of the last column in the first feature matrix to obtain a third feature matrix.
  17. The electronic device of claim 13, wherein the processor is configured to perform:
    judging whether the number of first features in the first feature matrix is less than or equal to the number of second features in the second feature matrix;
    if the number of the first features is less than or equal to the number of the second features, performing a dimension reduction operation on the second feature matrix so that the number of the second features is smaller than the number of the first features, to obtain a dimension-reduced second feature matrix;
    and splicing the first feature matrix and the dimension-reduced second feature matrix to obtain a third feature matrix.
  18. The electronic device of claim 13, wherein the processor is configured to perform:
    acquiring a picture to be classified;
    judging whether a text corresponding to the picture to be classified exists or not;
    if the text corresponding to the picture to be classified does not exist, acquiring a zero matrix, wherein the value of each element in the zero matrix is 0;
    and splicing the first feature matrix and the zero matrix to obtain a third feature matrix.
  19. The electronic device of claim 13, wherein the processor is configured to perform:
    acquiring a plurality of sample pictures, texts corresponding to the sample pictures and a plurality of preset categories;
    training a preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures and the plurality of preset categories;
    and determining the trained model as the preset picture classification model.
  20. The electronic device of claim 19, wherein the processor is configured to perform:
    performing iterative training on the preset model according to the plurality of sample pictures, the texts corresponding to the sample pictures, the plurality of preset categories, and a preset loss function until the loss value of the loss function reaches a minimum.
CN201980100392.XA 2019-11-05 2019-11-05 Picture classification method and device, storage medium and electronic equipment Pending CN114375465A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/115786 WO2021087770A1 (en) 2019-11-05 2019-11-05 Picture classification method and apparatus, and storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN114375465A true CN114375465A (en) 2022-04-19

Family

ID=75849400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980100392.XA Pending CN114375465A (en) 2019-11-05 2019-11-05 Picture classification method and device, storage medium and electronic equipment

Country Status (2)

Country Link
CN (1) CN114375465A (en)
WO (1) WO2021087770A1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103399870A (en) * 2013-07-08 2013-11-20 华中科技大学 Visual word bag feature weighting method and system based on classification drive
CN103473327A (en) * 2013-09-13 2013-12-25 广东图图搜网络科技有限公司 Image retrieval method and image retrieval system
CN103699523B (en) * 2013-12-16 2016-06-29 深圳先进技术研究院 Product classification method and apparatus

Also Published As

Publication number Publication date
WO2021087770A1 (en) 2021-05-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination