CN113486947A - Garment commodity gender classification method and device based on size data - Google Patents

Garment commodity gender classification method and device based on size data

Info

Publication number
CN113486947A
Authority
CN
China
Prior art keywords
size
gender
data
sub
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110749017.7A
Other languages
Chinese (zh)
Inventor
陈畅新
黄于晏
钟艺豪
李百川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Youmi Technology Co ltd
Original Assignee
Youmi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Youmi Technology Co ltd filed Critical Youmi Technology Co ltd
Priority to CN202110749017.7A priority Critical patent/CN113486947A/en
Publication of CN113486947A publication Critical patent/CN113486947A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/279 Recognition of textual entities
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods


Abstract

The invention discloses a method and device for gender classification of clothing products based on size data. The method comprises the following steps: acquiring size-related data of a target clothing product; extracting and combining data of a plurality of size dimensions in the size-related data according to a preset combination rule to form a plurality of size combination data; inputting the plurality of size combination data into a plurality of corresponding sub-size gender recognition network models within a size gender recognition network model to obtain a plurality of sub-size recognition results; and determining, according to the plurality of sub-size recognition results, a size recognition result corresponding to the size-related data, where the size recognition result indicates a gender classification of the size-related data. The method thus exploits the gender information implicit in size information to improve the accuracy of gender recognition, and broadens the range and variety of information usable for gender recognition, improving its efficiency.

Description

Garment commodity gender classification method and device based on size data
Technical Field
The invention relates to the technical field of neural networks, and in particular to a method and device for gender classification of clothing products based on size data.
Background
With the rise of internet e-commerce for clothing, the form and characteristics of clothing product data have changed. Early internet e-commerce emphasized presenting clothing products directly on detail pages, so image and text information was relatively standardized for easy viewing by users. Newly emerged content-driven e-commerce instead embeds products into content and scenes, which introduces interfering information: product copy is now placed in varied scenes or directly on single-product pages without passing through a platform, and the production quality of such pages is uneven. Identifying the gender category of clothing products from single-product pages alone is therefore a significant challenge.
In the prior art, gender identification of clothing products does not consider the gender information implied by the size information merchants provide, leaving a blind spot that needs to be addressed.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a method and apparatus for gender classification of clothing products based on size data, which perform gender recognition by combining information of multiple dimensions in the size data of clothing products. This helps improve the accuracy of gender recognition by using the gender information implicit in size information, and broadens the range and variety of information usable for gender recognition, thereby improving its efficiency.
To solve this technical problem, a first aspect of the invention discloses a method for gender classification of clothing products based on size data, comprising the following steps:
acquiring size related data of a target clothing commodity;
extracting and combining the data of a plurality of size dimensions in the size-related data according to a preset combination rule to form a plurality of size combination data;
inputting the plurality of size combination data into a plurality of corresponding sub-size and sex recognition network models in a size and sex recognition network model to obtain a plurality of sub-size recognition results;
determining a size identification result corresponding to the size-related data according to the plurality of sub-size identification results; the size recognition result is used to indicate a gender classification of the size-related data.
As an alternative embodiment, in the first aspect of the present invention, the size-related data includes height size data and/or weight size data; and/or, the plurality of size dimensions includes at least two of a maximum height dimension, a minimum height dimension, a maximum weight dimension, and a minimum weight dimension.
As an alternative embodiment, in the first aspect of the present invention, the sub-size recognition result includes confidence scores of the corresponding size combination data under a plurality of gender categories; the size recognition result comprises confidence scores of the size-related data under a plurality of gender categories; determining the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results, including:
determining the size sub-model weight of the sub-size gender identification network model corresponding to each sub-size identification result; the size sub-model weight is related to the training prediction accuracy of the corresponding sub-size gender identification network model;
and determining the size identification result corresponding to the size related data according to the plurality of sub-size identification results and the corresponding size sub-model weight.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the plurality of sub-size recognition results and the corresponding size sub-model weights, a size recognition result corresponding to the size-related data includes:
for each sub-size recognition result, multiplying the confidence scores of the corresponding size combination data under a plurality of gender categories with the corresponding size sub-model weight to obtain the weighted confidence scores of the corresponding size combination data under a plurality of gender categories;
calculating a total weighted confidence score corresponding to each gender category according to the weighted confidence scores of the size combination data corresponding to all the sub-size recognition results under a plurality of gender categories;
sorting all the gender categories according to the total weighted confidence score from high to low to obtain a gender category sequence;
and determining the gender categories with the preset number in the front of the gender category sequence and the corresponding total weighted confidence scores as the size identification results corresponding to the size-related data.
As an optional implementation manner, in the first aspect of the present invention, the size gender identification network model is obtained by training based on the following steps:
acquiring size training data of clothing commodities;
extracting and combining the data of the multiple dimension dimensions in the dimension training data according to the combination rule to form multiple dimension training sets;
and respectively inputting the plurality of size training sets into a size and gender identification training network for training until convergence so as to obtain a plurality of sub-size and gender identification network models through training.
As an alternative implementation, in the first aspect of the present invention, the method further includes:
acquiring text related data and image related data of a target clothing commodity;
inputting the text related data into a text gender identification network model to obtain a text identification result;
inputting the image related data into an image gender identification network model to obtain an image identification result;
and determining the gender category corresponding to the target clothing commodity according to the text recognition result, the size recognition result and the image recognition result.
As an optional implementation manner, in the first aspect of the present invention, the determining, according to the text recognition result, the size recognition result, and the image recognition result, a gender category corresponding to the target clothing item includes:
determining final confidence scores of the target clothing commodity under a plurality of gender categories according to the text recognition result, the size recognition result and the image recognition result;
and determining the gender category corresponding to the target clothing commodity according to the final confidence score of the target clothing commodity under the plurality of gender categories.
A second aspect of the invention discloses a device for gender classification of clothing products based on size data, which comprises:
the size acquisition module, used for acquiring size-related data of the target clothing product;
the size combination module, used for extracting and combining the data of a plurality of size dimensions in the size-related data according to a preset combination rule to form a plurality of size combination data;
the size identification module, used for inputting the plurality of size combination data into a plurality of corresponding sub-size gender identification network models in the size gender identification network model to obtain a plurality of sub-size identification results;
the size determining module, used for determining a size identification result corresponding to the size-related data according to the plurality of sub-size identification results; the size recognition result is used to indicate a gender classification of the size-related data.
As an alternative embodiment, in the second aspect of the present invention, the size-related data includes height size data and/or weight size data; and/or, the plurality of size dimensions includes at least two of a maximum height dimension, a minimum height dimension, a maximum weight dimension, and a minimum weight dimension.
As an alternative embodiment, in the second aspect of the present invention, the sub-size recognition result includes confidence scores of the corresponding size combination data under a plurality of gender categories; the size recognition result comprises confidence scores of the size-related data under a plurality of gender categories; the size determining module determines a specific manner of the size identification result corresponding to the size-related data according to the plurality of sub-size identification results, and the specific manner includes:
determining the size sub-model weight of the sub-size gender identification network model corresponding to each sub-size identification result; the size sub-model weight is related to the training prediction accuracy of the corresponding sub-size gender identification network model;
and determining the size identification result corresponding to the size related data according to the plurality of sub-size identification results and the corresponding size sub-model weight.
As an optional implementation manner, in the second aspect of the present invention, a specific manner of determining the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results and the corresponding size sub-model weights by the size determination module includes:
for each sub-size recognition result, multiplying the confidence scores of the corresponding size combination data under a plurality of gender categories with the corresponding size sub-model weight to obtain the weighted confidence scores of the corresponding size combination data under a plurality of gender categories;
calculating a total weighted confidence score corresponding to each gender category according to the weighted confidence scores of the size combination data corresponding to all the sub-size recognition results under a plurality of gender categories;
sorting all the gender categories according to the total weighted confidence score from high to low to obtain a gender category sequence;
and determining the gender categories with the preset number in the front of the gender category sequence and the corresponding total weighted confidence scores as the size identification results corresponding to the size-related data.
As an optional implementation manner, in the second aspect of the present invention, the size gender identification network model is trained based on the following steps:
acquiring size training data of clothing commodities;
extracting and combining the data of the multiple dimension dimensions in the dimension training data according to the combination rule to form multiple dimension training sets;
and respectively inputting the plurality of size training sets into a size and gender identification training network for training until convergence so as to obtain a plurality of sub-size and gender identification network models through training.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further comprises:
the acquisition module is used for acquiring text related data and image related data of the target clothing commodity;
the text recognition module is used for inputting the text related data into a text gender recognition network model to obtain a text recognition result;
the image identification module is used for inputting the image related data into an image gender identification network model so as to obtain an image identification result;
and the gender determining module is used for determining the gender category corresponding to the target clothing commodity according to the text recognition result, the size recognition result and the image recognition result.
As an optional implementation manner, in the second aspect of the present invention, the specific manner in which the gender determining module determines the gender category corresponding to the target clothing item according to the text recognition result, the size recognition result, and the image recognition result includes:
determining final confidence scores of the target clothing commodity under a plurality of gender categories according to the text recognition result, the size recognition result and the image recognition result;
and determining the gender category corresponding to the target clothing commodity according to the final confidence score of the target clothing commodity under the plurality of gender categories.
A third aspect of the invention discloses another device for gender classification of clothing products based on size data, which comprises:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute part or all of the steps of the method for classifying the gender of the clothing commodity based on the size data disclosed by the first aspect of the embodiment of the invention.
In a fourth aspect of the embodiments of the present invention, a computer storage medium is disclosed, where the computer storage medium stores computer instructions, and when the computer instructions are called, the computer instructions are used to perform some or all of the steps in the method for classifying clothing article gender based on size data disclosed in the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, size-related data of the target clothing product is obtained; data of a plurality of size dimensions in the size-related data is extracted and combined according to a preset combination rule to form a plurality of size combination data; the plurality of size combination data is input into a plurality of corresponding sub-size gender recognition network models within a size gender recognition network model to obtain a plurality of sub-size recognition results; and a size recognition result corresponding to the size-related data is determined according to the plurality of sub-size recognition results, the size recognition result indicating a gender classification of the size-related data. The invention thus combines data of multiple dimensions in the size data of the product and performs prediction and recognition through multiple sub-network models, so gender recognition can draw on information of multiple dimensions in the size data of the clothing product. This helps improve the accuracy of gender recognition by using the gender information implicit in size information, and broadens the range and variety of information usable for gender recognition, improving its efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a method for classifying the gender of a clothing article based on size data according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another method for classifying the gender of a clothing article based on size data according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a device for classifying the gender of a clothing article based on size data according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of another apparatus for classifying the sex of an article of clothing based on size data according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another device for classifying the sex of clothing articles based on size data according to an embodiment of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," and the like in the description and claims of the present invention and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, article, or article that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or article.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The invention discloses a method and device for gender classification of clothing products based on size data, which combine data of multiple dimensions in the product's size data and perform prediction and recognition through a plurality of sub-network models, so that gender recognition can draw on information of multiple dimensions in the size data of the clothing product. This helps improve the accuracy of gender recognition by using the gender information implicit in size information, and broadens the range and variety of information usable for gender recognition, improving its efficiency. Details follow.
Example one
Referring to fig. 1, fig. 1 is a schematic flow chart illustrating a method for classifying the gender of a clothing product based on size data according to an embodiment of the present invention. The method described in fig. 1 is applied to a clothing gender prediction device based on a neural network, where the prediction device may be a corresponding prediction terminal, prediction device or server, and the server may be a local server or a cloud server, and the embodiment of the present invention is not limited thereto. As shown in fig. 1, the method for classifying the gender of the clothing article based on the size data can comprise the following operations:
101. and acquiring the size related data of the target clothing product.
In the embodiment of the invention, the data related to the size of the target clothing product can be obtained through text information or image information in a product page of the target clothing product, and the size preset parameter of the merchant can also be directly obtained through a specific interface, wherein the size data in the image information in the page of the clothing product can be identified through an image identification algorithm such as an OCR algorithm. Optionally, the size-related data comprises height size data and/or weight size data.
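As a minimal illustrative sketch of the acquisition step, size-related data could be pulled out of product-page text (or OCR output) with regular expressions. The `Height: 160-175cm, Weight: 50-65kg` text format, the function name `parse_size_text`, and the patterns below are assumptions for illustration, not part of the patent; real product pages vary widely.

```python
import re

def parse_size_text(text):
    """Extract (min, max) height in cm and weight in kg from free-form size text.

    Hypothetical helper: the expected text format is an assumption; OCR output
    from product images would typically need richer patterns and cleanup.
    """
    result = {}
    height = re.search(r"[Hh]eight[:\s]*(\d+)\s*-\s*(\d+)\s*cm", text)
    weight = re.search(r"[Ww]eight[:\s]*(\d+)\s*-\s*(\d+)\s*kg", text)
    if height:
        result["min_height"], result["max_height"] = map(int, height.groups())
    if weight:
        result["min_weight"], result["max_weight"] = map(int, weight.groups())
    return result

# Example: parse_size_text("Height: 160-175cm, Weight: 50-65kg")
# yields the four size dimensions used in the steps below.
```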
102. Extract and combine data of a plurality of size dimensions in the size-related data according to a preset combination rule to form a plurality of size combination data.
In the embodiment of the present invention, the preset combination rule may randomly extract and combine data of multiple size dimensions multiple times; optionally, a plurality of size combination data may be formed by repeatedly drawing several non-repeating items from the data of the multiple size dimensions.
In embodiments of the invention, the plurality of dimension dimensions may include at least two of a maximum height dimension, a minimum height dimension, a maximum weight dimension, and a minimum weight dimension.
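The extraction-and-combination step above can be sketched as follows. Enumerating every two-dimension subset is one possible fixed combination rule; the patent also allows repeated random sampling. The dictionary layout, function name, and example values are illustrative assumptions.

```python
from itertools import combinations

def combine_sizes(data, r=2):
    """Form all size combination data of r non-repeating size dimensions.

    data: mapping of size-dimension name to value, e.g. the four dimensions
    named in the text (max/min height, max/min weight). Each returned dict is
    one "size combination data" fed to its matching sub-model.
    """
    return [dict(pairs) for pairs in combinations(sorted(data.items()), r)]

size_related = {"max_height": 175, "min_height": 160,
                "max_weight": 65, "min_weight": 50}
combos = combine_sizes(size_related)
# 4 dimensions taken 2 at a time -> 6 size combination data
```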
103. Input the plurality of size combination data into the plurality of corresponding sub-size gender recognition network models in the size gender recognition network model to obtain a plurality of sub-size recognition results.
In the embodiment of the present invention, each sub-size gender recognition network model corresponds to one combination of size dimensions, i.e., to one type of size combination data; preferably, each sub-size gender recognition network model is trained on size training data having the same size-dimension combination as its corresponding size combination data. The input format of each sub-size gender recognition network model therefore matches that of its corresponding size combination data.
104. Determine the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results.
In an embodiment of the invention, the size recognition result is used to indicate a gender classification of the size-related data.
The method described in the embodiment of the invention can therefore combine data of multiple dimensions in the product's size data and perform prediction and recognition through multiple sub-network models, so that gender recognition draws on information of multiple dimensions in the size data of the clothing product. This improves the accuracy of gender recognition by using the gender information implicit in size information, and broadens the range and variety of information usable for gender recognition, improving its efficiency.
As an alternative embodiment, the sub-size recognition result includes confidence scores of the corresponding size combination data under a plurality of gender categories; the size recognition result includes confidence scores for the size-related data for a plurality of gender categories. In step 104, determining a size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results includes:
determining the size sub-model weight of the sub-size gender identification network model corresponding to each sub-size identification result;
and determining the size identification result corresponding to the size related data according to the plurality of sub-size identification results and the corresponding size sub-model weights.
In the embodiment of the invention, the size sub-model weight is related to the training prediction accuracy of the corresponding sub-size gender recognition network model. Optionally, the training prediction accuracy can be calculated from the model's historical training prediction data, for example as the average accuracy over multiple training runs. Preferably, the size sub-model weight is proportional to the training prediction accuracy, i.e., the higher a sub-model's training prediction accuracy, the greater the share of its sub-size recognition result in the final size recognition result.
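One simple way to realize weights proportional to training prediction accuracy is to normalize the accuracies to sum to 1. The normalization itself is an assumption for illustration; the patent only requires that weight grow with accuracy.

```python
def submodel_weights(accuracies):
    """Size sub-model weights proportional to training prediction accuracy.

    accuracies: per-sub-model average training prediction accuracy, e.g.
    averaged over multiple training runs as described above. Normalizing to
    sum 1 is an illustrative choice, not mandated by the patent.
    """
    total = sum(accuracies)
    return [a / total for a in accuracies]

# A more accurate sub-model receives a larger weight:
# submodel_weights([0.9, 0.6]) -> [0.6, 0.4]
```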
Therefore, by implementing the optional implementation mode, the size identification result corresponding to the size-related data can be determined according to the plurality of sub-size identification results and the corresponding size sub-model weights, so that the accuracy rate of gender identification can be improved by using the gender information implicit in the size information, and the range and the category of the usable information for gender identification can be further increased, thereby improving the efficiency of gender identification.
As an optional implementation manner, in the foregoing step, determining the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results and the corresponding size sub-model weights includes:
for each sub-size recognition result, multiplying the confidence scores of the corresponding size combination data under a plurality of gender categories by the corresponding size sub-model weight to obtain the weighted confidence scores of the corresponding size combination data under the plurality of gender categories;
calculating a total weighted confidence score corresponding to each gender category according to the weighted confidence scores of the size combination data corresponding to all the sub-size recognition results under the gender categories;
sorting all gender categories according to the total weighted confidence score from high to low to obtain gender category sequences;
and determining the gender categories with the preset number in front of the gender category sequence and the corresponding total weighted confidence scores as the size identification results corresponding to the size-related data.
Alternatively, the sum of all the weighted confidence scores for each gender category may be determined as the total weighted confidence score for each gender category.
Optionally, the preset number may be 1, in which case the gender category with the highest total weighted confidence score, together with that score, is determined as the size recognition result corresponding to the size-related data. Alternatively, the preset number may be determined from actual conditions or empirical values.
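The weighting, summing, sorting, and top-k selection described above can be sketched as follows. The dictionary-based score format and the function name are illustrative assumptions.

```python
def fuse_sub_results(sub_results, weights, top_k=1):
    """Weighted fusion of sub-size recognition results.

    sub_results: list of {gender: confidence score} dicts, one per sub-model.
    weights:     size sub-model weights (e.g. derived from training accuracy).
    Returns the top_k (gender, total weighted confidence) pairs sorted from
    high to low, i.e. the size recognition result.
    """
    totals = {}
    for scores, w in zip(sub_results, weights):
        for gender, conf in scores.items():
            # Weighted confidence score for this gender under this sub-model.
            totals[gender] = totals.get(gender, 0.0) + conf * w
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_k]

# With preset number 1, only the highest-scoring gender category is kept.
```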
Therefore, by implementing this optional implementation, the total weighted confidence score corresponding to each gender category can be calculated from the weighted confidence scores, under the plurality of gender categories, of the size combination data corresponding to all the sub-size recognition results, and the size recognition result corresponding to the size-related data can then be determined. The gender information implicit in the size information can thus be used to improve the accuracy of gender recognition, and the range and categories of information available for gender recognition are increased, improving the efficiency of gender recognition.
As an optional implementation manner, the size and gender identification network model is obtained by training based on the following steps:
acquiring size training data of clothing commodities;
extracting and combining data of multiple size dimensions in the size training data according to a combination rule to form multiple size training sets;
and respectively inputting the plurality of size training sets into a size and gender identification training network for training until convergence so as to obtain a plurality of sub-size and gender identification network models through training.
In the embodiment of the present invention, the combination rule according to which the size training data is combined should be the same as the combination rule in step 102, so that in step 102, each size combination data must have a corresponding sub-size gender identification network model.
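One possible form of such a combination rule is repeatedly drawing a fixed-size, non-repeating subset of size dimensions; the dimension names, subset size, and seeded sampling scheme below are illustrative assumptions.

```python
import random

# Illustrative size dimensions; the disclosure does not fix a specific list.
SIZE_DIMENSIONS = ["height", "bust", "waist", "hip", "shoulder", "sleeve"]

def make_combinations(dimensions, subset_size, n_combinations, seed=0):
    """Draw distinct, non-repeating subsets of size dimensions.

    A fixed seed keeps the rule identical between training and prediction,
    so each size combination data has a corresponding sub-model.
    """
    rng = random.Random(seed)
    combos = []
    while len(combos) < n_combinations:
        # Sample without repetition within one combination.
        combo = tuple(sorted(rng.sample(dimensions, subset_size)))
        if combo not in combos:  # keep the combinations themselves distinct
            combos.append(combo)
    return combos

combos = make_combinations(SIZE_DIMENSIONS, 3, 4)
```

Because the same seeded rule is applied to both the size training data and the size-related data at prediction time, the i-th combination always feeds the i-th sub-size gender identification network model.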
Optionally, the structure of the size and gender identification training network is the same as the network structure of the sub-size and gender identification network model obtained by training, and both may adopt a decision tree network structure. Preferably, the sub-size gender identification network models each adopt a decision tree network structure, together forming a random forest network model. Optionally, a decision tree network is a model that makes decision judgments based on a tree structure: the data set is classified through a series of conditional discrimination steps until the required result is obtained. The starting point of the decision tree is the root node, the intermediate decision flow consists of internal nodes, and the classification results are leaf nodes. A decision tree comprises one root node, a plurality of internal nodes and a plurality of leaf nodes; the root node contains the complete sample set, the leaf nodes correspond to decision results (which gender label is output), and every other node corresponds to an attribute test (for example, whether the maximum value of the input height is smaller than a given value), with the sample set contained in each node divided among its child nodes according to the results of the attribute test. The path from the root node to each leaf node corresponds to a decision test sequence. That is, the input data is finally classified into different categories through a series of successive classifications.
Optionally, the structure of the size and gender identification training network may also adopt a gradient boosted decision tree (GBDT) based on a boosting ensemble learning strategy, and the like. Optionally, the structure is not limited to traditional machine learning methods; a deep learning model may also be used to capture the relationship between the gender label and the size data, for example deep learning networks such as RNN, LSTM and GRU.
Optionally, the size training data and the size-related data may require size data processing before training or prediction, including but not limited to one or more of decoding, transcoding, and regular-expression extraction of the size training data or the size-related data. Optionally, the size data processing includes:
judging whether data of a certain size dimension is missing in the data of a plurality of size dimensions of the size-related data;
when the judgment result is yes, determining the data of the size dimension with missing data as the average value of the data of the other size dimensions. Through this implementation, the missing size dimension of the size-related data can be filled with the average value of the other size dimensions, which helps complete the information of the size-related data, facilitates subsequent gender prediction using the size-related data, and improves the accuracy of gender identification by using the gender information implicit in the size information.
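The mean-imputation step above can be sketched as follows; the field names are illustrative assumptions.

```python
def impute_missing(size_record):
    """Fill any missing size dimension with the mean of the present ones.

    size_record: dict mapping size dimension name -> value or None (missing).
    """
    present = [v for v in size_record.values() if v is not None]
    mean = sum(present) / len(present)
    return {k: (v if v is not None else mean) for k, v in size_record.items()}

record = {"height": 170.0, "bust": 90.0, "waist": None}
filled = impute_missing(record)
# waist is filled with (170 + 90) / 2 = 130.0
```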
Therefore, by implementing this optional implementation, the plurality of size training sets can be respectively input into the size and gender identification training network and trained until convergence, so that a plurality of sub-size gender identification network models are obtained, improving the efficiency and accuracy of subsequent size-based gender identification tasks performed with these sub-models.
As an optional implementation manner, in the above step, inputting the plurality of size training sets to the size and gender identification training network respectively for training until convergence, so as to train and obtain a plurality of sub-size and gender identification network models, including:
respectively inputting the plurality of size training sets into a size and gender recognition training network to obtain a random forest model consisting of a plurality of decision tree models;
and selecting and training a plurality of decision tree models in the random forest model based on grid searching and/or cross validation to obtain a plurality of sub-size gender recognition network models.
Specifically, for grid search, a set of candidate model parameters can be set manually, and the program exhaustively runs every combination of these parameters. When training a decision tree model, the parameters to be tuned include the maximum number of leaf nodes, the weight of each category, a threshold on the Gini coefficient, the depth of the decision tree, and the minimum value of the weight sum of all samples at a leaf node.
Specifically, for cross validation, all data can be divided into K parts; one part is selected as the test set and the remaining K-1 parts are used as the training set. This is repeated K times until each of the K parts has served as the test set, training K models and obtaining K accuracy scores.
Specifically, grid search and cross validation can be combined, all parameter combinations are listed through the grid search, the parameter combinations are divided into N parameter combinations, the model uses each parameter combination to perform k-fold cross validation in turn, k models are trained, and the average value of the prediction accuracy of the k models is used as the comprehensive score of the parameter combination. The scores of the N parameter combinations are then compared, and the parameter combination with the highest score is used as the final model parameter.
Specifically, the random forest model is a combination of a plurality of decision tree models, and the classification results of the decision tree models are combined by voting, thereby forming a strong classifier.
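The grid search and cross validation procedure above can be sketched with scikit-learn (the library choice and the synthetic size data are assumptions; the disclosure does not name an implementation):

```python
# Sketch: tune a random forest of decision trees by exhaustive grid search,
# scoring each parameter combination with k-fold cross validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
# Synthetic (height, bust, waist) rows: class 0 ~ "female", class 1 ~ "male".
X = np.vstack([rng.normal([160, 88, 70], 4, (40, 3)),
               rng.normal([178, 100, 84], 4, (40, 3))])
y = np.array([0] * 40 + [1] * 40)

# Parameters named in the text map onto scikit-learn names: leaf node count
# (max_leaf_nodes), tree depth (max_depth), Gini-coefficient threshold
# (min_impurity_decrease); class_weight and min_weight_fraction_leaf could
# be added to the grid the same way.
param_grid = {
    "max_leaf_nodes": [4, 8],
    "max_depth": [3, 5],
    "min_impurity_decrease": [0.0, 0.01],
}
search = GridSearchCV(RandomForestClassifier(n_estimators=10, random_state=0),
                      param_grid, cv=5)  # 5-fold cross validation per combo
search.fit(X, y)
```

`GridSearchCV` scores every parameter combination by its mean k-fold accuracy and exposes the best combination as `search.best_params_`, mirroring the "comprehensive score" comparison described above.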
Therefore, this optional implementation can select and train the multiple decision tree models based on grid search and cross validation, so that training converges faster, improving the efficiency and accuracy of subsequent size-based gender identification tasks performed with the multiple sub-size gender identification network models.
Example two
Referring to fig. 2, fig. 2 is a schematic flow chart illustrating another method for classifying the gender of a clothing product based on size data according to an embodiment of the present invention. The method described in fig. 2 is applied to a clothing gender prediction device based on a neural network, where the prediction device may be a corresponding prediction terminal, prediction device or server, and the server may be a local server or a cloud server, and the embodiment of the present invention is not limited thereto. As shown in fig. 2, the method for classifying the gender of the clothing article based on the size data may include the following operations:
201. and acquiring the size related data of the target clothing product.
202. And extracting and combining the data of a plurality of size dimensions in the size-related data according to a preset combination rule to form a plurality of size combination data.
203. And inputting the plurality of size combination data into a plurality of corresponding sub-size and gender identification network models in the size and gender identification network model to obtain a plurality of sub-size identification results.
204. And determining the size identification result corresponding to the size related data according to the plurality of sub-size identification results.
The detailed technical details and technical noun explanations of the steps 201-204 can refer to the description of the steps 101-104 in the first embodiment, which will not be described herein again.
205. And acquiring text related data and image related data of the target clothing commodity.
In the embodiment of the invention, the text related data of the target clothing commodity can be acquired from the text or the image of the commodity page corresponding to the target clothing commodity. In the embodiment of the present invention, the text related data of the target clothing item may be one or a combination of multiple kinds of data in the item introduction, item publicity and merchant information on an item detail page; for example, it may include one or more of the item title, item recommendation, item store name, item image text, item style, item price, item origin, item brand information, item store contact information and item shipping address information, and the present invention is not limited thereto.
In the embodiment of the invention, the image related data of the target clothing commodity can be obtained from the image material of the commodity page of the target clothing commodity. Preferably, the image-related data comprises all images of the item-related page of the target item of clothing. In the embodiment of the invention, the image related data of the target clothing article can be an image in a commodity display page or an image in a commodity detail page, and the image can be one or more of a clothing body image, a clothing detail image and a model dressing image.
206. And inputting the text related data into a text gender identification network model to obtain a text identification result.
In an embodiment of the invention, the text recognition result comprises a confidence score of the text related data in at least one gender category.
207. And inputting the image related data into the image gender identification network model to obtain an image identification result.
In an embodiment of the invention, the image recognition result comprises a confidence score of the image-related data in at least one gender category.
208. And determining the gender category corresponding to the target clothing commodity according to the text recognition result, the size recognition result and the image recognition result.
Therefore, by implementing the method described in the embodiment of the invention, gender identification can be performed separately on multi-modal data such as the text data, image data and size data of the clothing commodity, and the gender of the clothing commodity can be finally determined from the identification results corresponding to the multiple modalities. Clothing gender identification is thus based on characteristics of the clothing commodity at more modal levels, which effectively improves the accuracy and efficiency of gender identification of clothing commodities and effectively solves the problem of lower accuracy in the prior art caused by performing clothing gender identification with single-modal data only.
As an alternative embodiment, in the step 206, inputting the text related data into the text gender recognition network model to obtain the text recognition result includes:
extracting and combining data of a plurality of text dimensions in the text related data according to a preset combination rule to form a plurality of text combined data;
inputting the plurality of text combination data into a plurality of corresponding sub-text type identification network models in the text type identification network model to obtain a plurality of sub-text identification results;
and determining a text recognition result corresponding to the text related data according to the plurality of sub-text recognition results.
Wherein the text recognition result is used for indicating the gender classification of the text related data. In the embodiment of the present invention, the preset combination rule may be that data of a plurality of text dimensions are randomly extracted and combined for a plurality of times, and optionally, a plurality of text combination data may be formed by randomly extracting a plurality of non-repeating data from the data of the plurality of text dimensions for a plurality of times.
In the embodiment of the invention, the plurality of text dimensions may include at least two of a commodity title, a commodity recommendation, a commodity store name, text in a commodity image, a commodity code, a commodity style, a commodity price, a commodity place of production, commodity brand information, commodity store contact information and commodity shipping address information, wherein the text in the commodity image may be identified from an image in a page of the clothing commodity through an image identification algorithm such as an OCR algorithm.
In the embodiment of the present invention, each sub-text gender identification network model corresponds to the combination of text dimensions of one text combination data, and preferably, each sub-text gender identification network model is obtained by training on text training data having the same text dimension combination as the corresponding text combination data. The data format of the input to each sub-text gender identification network model is therefore the same as the format of the corresponding text combination data.
Therefore, by implementing the optional implementation mode, data of multiple dimensions in the text data of the commodity can be combined, and the prediction and identification can be performed through the multiple sub-network models respectively, so that the gender identification can be performed by comprehensively combining information of multiple dimensions in the text data of the commodity, the identification accuracy can be improved, and the problem of identification errors or identification incapability caused by the fact that the keyword matching is simply used in the prior art is solved.
As an alternative embodiment, the sub-text recognition result includes confidence scores of the corresponding text combination data under a plurality of gender categories; the text recognition result includes confidence scores for the text-related data for a plurality of gender categories. In the above step, determining the text recognition result corresponding to the text related data according to the plurality of sub-text recognition results includes:
determining the text sub-model weight of the sub-text gender identification network model corresponding to each sub-text recognition result;
and determining a text recognition result corresponding to the text related data according to the plurality of sub-text recognition results and the corresponding text sub-model weights.
In the embodiment of the invention, the text sub-model weight is related to the training prediction accuracy of the corresponding sub-text gender identification network model. Optionally, the training prediction accuracy can be calculated from historical training prediction data of the sub-text gender identification network model; for example, the average training prediction accuracy of the sub-text gender identification network model over multiple training runs can be computed. Preferably, the text sub-model weight is directly proportional to the training prediction accuracy of the corresponding sub-text gender identification network model; that is, the higher the training prediction accuracy of a sub-text gender identification network model, the greater the proportion of its output sub-text recognition result in the final text recognition result.
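One way the accuracy-proportional sub-model weights described above might be computed (the normalization to a sum of 1 and the sample accuracies are assumptions):

```python
def submodel_weights(history):
    """Weights proportional to each sub-model's mean training accuracy.

    history: list of per-sub-model lists of accuracies from multiple
    training runs (the "historical training prediction data").
    """
    means = [sum(runs) / len(runs) for runs in history]
    total = sum(means)
    # Normalize so the weights sum to 1 (an assumed convention).
    return [m / total for m in means]

weights = submodel_weights([[0.92, 0.88], [0.58, 0.62]])
# mean accuracies 0.9 and 0.6 -> weights 0.6 and 0.4
```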
Therefore, the optional implementation mode can determine the text recognition result corresponding to the text related data according to the plurality of sub-text recognition results and the corresponding text sub-model weights, so that a more accurate and reasonable text recognition result can be obtained, the recognition accuracy can be improved, and the problem that in the prior art, the recognition is wrong or cannot be recognized due to the fact that the keyword matching is simply used is solved.
As an optional implementation manner, in the foregoing step, determining a text recognition result corresponding to text-related data according to a plurality of sub-text recognition results and corresponding text sub-model weights includes:
for each sub-text recognition result, multiplying the confidence score of the corresponding text combination data under a plurality of gender categories by the corresponding text sub-model weight to obtain the weighted confidence score of the corresponding text combination data under the plurality of gender categories;
calculating a total weighted confidence score corresponding to each gender category according to the weighted confidence scores of the text combination data corresponding to all the sub-text recognition results under the gender categories;
sorting all gender categories according to the total weighted confidence score from high to low to obtain gender category sequences;
and determining the gender categories with the preset number in front of the gender category sequence and the corresponding total weighted confidence score as text recognition results corresponding to the text related data.
Alternatively, the sum of all the weighted confidence scores for each gender category may be determined as the total weighted confidence score for each gender category.
Optionally, the preset number may be 1, and at this time, the gender category with the highest total weighted confidence score and the corresponding total weighted confidence score are selected to be determined as the text recognition result corresponding to the text related data. Alternatively, the preset number may be determined according to actual conditions or empirical values.
Therefore, by implementing the optional implementation mode, the total weighted confidence score corresponding to each gender category can be calculated according to the weighted confidence scores of the text combination data corresponding to all the sub-text recognition results under the gender categories, and the text recognition result corresponding to the text related data can be further determined, so that a more accurate and reasonable text recognition result can be obtained, the recognition accuracy can be improved, and the problem of recognition error or recognition incapability caused by the fact that the keyword matching is simply used in the prior art is solved.
As an optional implementation manner, the text gender identification network model is obtained by training based on the following steps:
acquiring text training data of clothing commodities;
extracting and combining data of a plurality of text dimensions in the text training data according to a combination rule to form a plurality of text training sets;
and respectively inputting the plurality of text training sets into a text gender identification training network for training until convergence so as to obtain a plurality of subfile gender identification network models through training.
In the embodiment of the present invention, the combination rule according to which the text training data is extracted and combined should be the same as the combination rule in the above step. In practice, however, some text dimensions of the text-related data in the prediction stage may be absent or empty; in this case, the number of text combination data obtained through the combination rule may be smaller than the number of text training sets. Since each text training set is used to train one sub-text gender identification network model, some sub-text gender identification network models may receive no corresponding data input in the above step, but each text combination data necessarily has one corresponding sub-text gender identification network model.
Optionally, the structure of the text gender identification training network is the same as the network structure of the trained subfile gender identification network model, and a neural network structure such as a CNN structure which can be used for text identification can be adopted. Preferably, the structure of the Text gender identification training Network can adopt a deep learning Network structure based on TextCNN (Text Convolutional Neural Network), which includes a Convolutional layer, a pooling layer and a softmax classification layer.
Optionally, the structure of the text gender identification training network may also adopt a fixed word-vector representation model such as FastText, Word2Vec or GloVe, or a recurrent neural network structure such as TextRNN or TextRNN + Attention, or a dynamic word-vector representation model such as Transformer, ERNIE, BERT, ELMo, BART, Bort, T5 or XLNet, as well as combinations of these network structures such as DPCNN, TextRCNN, BERT + RNN, BERT + RCNN, BERT + DPCNN, BERT + CNN, and the like. Optionally, the text training data and the text related data may require text data processing before training or prediction, including but not limited to one or more of word segmentation, cleaning, word vector mapping, and word vector concatenation of the text training data or the text related data.
Therefore, by implementing this optional implementation, the plurality of text training sets can be respectively input into the text gender identification training network and trained until convergence, so that a plurality of sub-text gender identification network models are obtained, improving the efficiency and accuracy of subsequent text gender identification tasks performed with these sub-models.
As an alternative embodiment, the step of inputting each text training set into the text gender identification training network for training until convergence includes:
inputting text information related to clothing commodities;
integrating the domain dictionaries disclosed by various mechanisms or various word segmenters, performing word frequency statistics and related word expansion, and constructing the domain dictionaries and the stop word lists of various industries;
importing the constructed domain dictionary into a word segmentation tool of HanLP (Han Language Processing) or the jieba tokenizer as prior knowledge for the tokenizer, and adding the stop word list to segment and clean the input text;
mapping the word segmentation result of the text into a K-dimensional word vector by using a pre-training word vector;
splicing M word vectors in each sentence to form an M multiplied by K matrix as the input of a text gender identification training network;
setting the number J and the size S of convolution kernels of convolution layers in a text gender recognition training network, and performing convolution calculation on an input matrix to extract features;
mapping the M multiplied by K vector matrix into an x multiplied by y characteristic matrix f through an activation function after convolution, and outputting a characteristic matrix after the text passes through each convolution kernel;
inputting the characteristic matrix into a pooling layer to perform pooling operation to obtain a pooling result, so as to reduce the number of parameters and accelerate calculation under the condition of keeping main characteristics;
after multiple rounds of convolution and pooling, splicing the final outputs and inputting them into the softmax layer for classification to obtain the confidence of each gender category; training is performed through a cross entropy loss function, and the gender category corresponding to the maximum confidence is output. A threshold is set for the confidence of each gender category, and any output greater than the threshold is taken as a gender label of the clothing commodity text, with the gender score being the corresponding confidence.
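A shape-level numpy sketch of the TextCNN forward pass described in the steps above (convolution over the M x K word-vector matrix, activation, 1-max pooling, softmax classification); all dimensions and random weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 16          # M word vectors of dimension K per sentence
J, S = 4, 3           # J convolution kernels of size S (height S, width K)
n_classes = 3         # e.g. male / female / neutral (assumed categories)

sentence = rng.normal(size=(M, K))   # the M x K input matrix
kernels = rng.normal(size=(J, S, K))

# Convolution: each kernel slides over S-word windows of the sentence,
# producing M - S + 1 feature values per kernel.
features = np.empty((J, M - S + 1))
for j in range(J):
    for i in range(M - S + 1):
        features[j, i] = np.sum(sentence[i:i + S] * kernels[j])
features = np.maximum(features, 0.0)  # activation function (ReLU assumed)

# Pooling: 1-max pooling keeps the strongest response of each kernel,
# reducing parameters while retaining the main features.
pooled = features.max(axis=1)         # shape (J,)

# Softmax classification layer over the pooled features.
W = rng.normal(size=(J, n_classes))
logits = pooled @ W
probs = np.exp(logits - logits.max())
probs /= probs.sum()                  # per-gender-category confidences
```

Each kernel here plays the role of one convolution kernel of size S in the text, and `probs` corresponds to the per-category confidences fed to the cross entropy loss during training.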
As an alternative implementation, in step 207, inputting the image-related data into the image gender identification network model to obtain an image identification result, including:
inputting the image related data into an image gender identification network model to obtain confidence scores of the image related data under a plurality of gender categories;
determining an image identification result corresponding to the image related data according to the confidence scores of the image related data under a plurality of gender categories and the weight information corresponding to each gender category; the image recognition result is used to indicate gender classification of the image-related data.
Therefore, by implementing the optional implementation mode, the gender of the clothing commodity can be determined by combining the weight information of a plurality of gender categories of the clothing commodity and a plurality of confidence degree prediction results, and the gender accuracy of the clothing commodity can be effectively improved.
As an alternative embodiment, the image-related data includes a plurality of product images corresponding to the target clothing product, and in the above step, the inputting the image-related data into the image gender identification network model to obtain confidence scores of the image-related data under a plurality of gender categories includes:
and respectively inputting the plurality of commodity images into the image gender identification network model to obtain the sub-prediction gender category corresponding to each commodity image and the corresponding image confidence score.
In the embodiment of the invention, the image gender recognition network model can adopt a pre-trained image recognition network. Optionally, the image gender recognition network model can adopt an image recognition model pre-trained on the ImageNet image data set, such as a MobileNetV3, ResNet, EfficientNet or ShuffleNet model, on which transfer learning for the gender classification task is carried out using the image training data set of clothing commodities. Specifically, the model takes an RGB three-channel image training data set as input, is trained with a softmax cross entropy loss, and finally outputs confidences corresponding to all categories; the gender category with the highest confidence that also exceeds a preset threshold is regarded as the predicted gender category corresponding to the input image.
Therefore, by implementing the optional implementation mode, the plurality of commodity images can be respectively input into the image gender identification network model to obtain the sub-prediction gender category corresponding to each commodity image and the corresponding image confidence score, so that the final image identification result can be determined according to the sub-prediction gender category corresponding to each commodity image and the corresponding image confidence score, and the efficiency and the accuracy of the image gender identification task are improved.
As an alternative embodiment, the determining the image recognition result corresponding to the image-related data according to the confidence scores of the image-related data under the multiple gender categories and the weight information corresponding to each gender category in the step of determining the image recognition result corresponding to the image-related data includes:
determining a category confidence score corresponding to each gender category according to the sub-prediction gender categories corresponding to all the commodity images and the corresponding image confidence scores;
determining the category weight corresponding to each gender category according to the sub-prediction gender categories corresponding to all the commodity images and the corresponding image confidence scores;
determining the product of the category confidence score corresponding to each gender category and the category weight as the final category confidence score corresponding to each gender category;
and determining the target predicted gender category and the target confidence score corresponding to the image related data according to the final category confidence scores corresponding to all gender categories.
In the embodiment of the present invention, the average value of the confidence scores of all the images corresponding to each sub-predicted gender category may be determined as the category confidence score of the gender category corresponding to that sub-predicted gender category. The average confidence is used instead of directly taking the highest confidence of each category in order to solve the problem that the final category information is wrong when a certain image material is misjudged by the model; calculating the average confidence reduces the model's output error for each category.
Optionally, the category weight may include a quantitative weight corresponding to a gender category and/or a category weight decay factor.
Therefore, by implementing the optional implementation mode, the target prediction gender category and the target confidence score corresponding to the image related data can be determined according to the final category confidence scores corresponding to all gender categories, and the efficiency and the accuracy of the image gender identification task can be effectively improved.
As an optional implementation manner, in the foregoing step, determining a category weight corresponding to each gender category according to the sub-predicted gender categories corresponding to all the commodity images and the corresponding image confidence scores includes:
and determining the number of images of all commodity images belonging to each sub-prediction gender category, and determining the proportion of the number of images to the total number of all commodity images as the number weight corresponding to the gender category corresponding to the sub-prediction gender category.
Therefore, this optional implementation can determine the ratio of the number of commodity images belonging to a given gender category to the total number of commodity images as that category's quantity weight. This effectively corrects the confidence score of the gender category, raising the scores of categories with many images and lowering the scores of categories with few, thereby improving the efficiency and accuracy of the image gender identification task.
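The quantity weight above can be sketched in a few lines, assuming the per-image sub-predictions are available as a simple list of category names (the names themselves are illustrative):

```python
from collections import Counter

def quantity_weights(sub_predictions):
    """Share of images predicted as each gender category."""
    counts = Counter(sub_predictions)   # number of images per predicted category
    total = len(sub_predictions)
    return {cat: n / total for cat, n in counts.items()}
```

For example, `quantity_weights(["male", "male", "female", "other"])` yields a weight of 0.5 for the male category and 0.25 for each of the others.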
As an optional implementation manner, in the foregoing step, determining a category weight corresponding to each gender category according to the sub-predicted gender categories corresponding to all the commodity images and the corresponding image confidence scores includes:
a class weight attenuation factor corresponding to each gender class is determined.
The category weight attenuation factor is related to the proportion, within the image-related data, of the image materials corresponding to the given gender category. Preferably, the attenuation factor for the male and female categories may be set to 1, and the factor for the neutral and other categories to 0.9. This is because the multiple image materials of the same product may contain interference images that do not show clothing, such as size tables and scene shots. When the number of clothing images is close to the number of interference images, the difference between the finally calculated category confidence scores becomes small. The attenuation factor is therefore introduced to slightly lower the score values of the neutral and other categories, reducing their contribution to the final output gender.
Therefore, the optional implementation method can determine the class weight attenuation factor corresponding to each gender class, and can effectively correct the confidence score of the gender class so as to improve the efficiency and accuracy of the image gender identification task.
As an alternative embodiment, determining the target predicted gender category and the target confidence score corresponding to the image-related data according to the final category confidence scores corresponding to all gender categories includes:
sorting all final category confidence scores from high to low to obtain a confidence score sequence;
calculating a score difference value between the first two confidence scores of the confidence score sequence, and judging whether the score difference value is greater than a preset score difference value threshold value to obtain a score judgment result;
when the score judgment result is yes, determining a higher confidence score and a gender category corresponding to the higher confidence score in the previous two confidence scores as a target confidence score and a target predicted gender category corresponding to the image related data;
when the score judgment result is negative, judging whether the gender categories corresponding to the first two confidence scores are male and female respectively, to obtain a gender judgment result;
when the gender judgment result is yes, determining the average of the first two confidence scores as the target confidence score and the neutral gender category as the target predicted gender category corresponding to the image-related data;
and when the gender judgment result is negative, determining the average of the first two confidence scores as the target confidence score and the other gender category as the target predicted gender category corresponding to the image-related data.
The other gender category refers to a category for which gender cannot be determined; for example, when the input image is an unrelated image containing no gender information, such as a text image or a merchant LOGO, this gender category is used to indicate that the image is irrelevant to gender.
Therefore, the alternative embodiment can be implemented to judge the difference between the first two confidence scores of the confidence score sequence and the gender categories corresponding to the two confidence scores respectively, so as to determine the final gender identification result more accurately.
Next, a specific embodiment of the above disclosed method for inputting image-related data into an image gender identification network model for identification will be described, which comprises the following operations:
The gender classification of the clothing image is performed by a deep-learning method based on a convolutional neural network. The model is MobileNetV3 pre-trained on the ImageNet image dataset, and transfer learning for the gender classification task is then performed on a clothing image dataset. The model takes an RGB three-channel clothing image as input, is trained with softmax cross-entropy loss, and finally outputs a confidence for each category; the category with the highest confidence, provided it exceeds a preset threshold, is taken as the predicted gender of the input image.
Since a commodity page usually contains multiple image materials, an association must be constructed between the gender of each single image material and the final gender of the commodity page: the image interface receives all image materials of the same commodity page as input and produces a stored set of image gender labels, in which the model-predicted gender and corresponding confidence of each image material are stored. After this label set is obtained, the average confidence of each category and the weight proportion of each category within the whole label set are calculated. Finally, the average confidence of each category is multiplied by its weight proportion and by a preset category attenuation factor to obtain the final score of that category. The scores of all categories are sorted from high to low, the first two are taken, and their score difference is calculated. When the difference is greater than or equal to the threshold, the image interface outputs the gender category to which the highest score belongs, with that category's score as the output gender score. When the difference is smaller than the threshold, it must be judged whether the two highest-scoring genders are male and female at the same time; if so, the image interface outputs the neutral category, with the average of the two scores as the gender score, otherwise the image interface outputs the other category and its corresponding score. The specific steps are as follows:
The merchandise page A contains n image materials. For the i-th image material, the model predicts a gender category C_i and a confidence S_i. Finally, commodity page A obtains a storage set L of image labels, where the set L stores the gender category and confidence of each image material.
Calculate the average confidence of each category in the set L, where l denotes a category and n_l denotes the number of image materials belonging to category l:

S̄_l = (1 / n_l) × Σ_{i: C_i = l} S_i
Calculate the proportion of each category in the whole label set L to obtain the quantity weight W_l:

W_l = n_l / n
Multiply the average confidence of each category by its quantity weight and a preset category weight attenuation factor α; the result is the final score S_l of the category:

S_l = S̄_l × W_l × α
Sort the scores of all categories from high to low and take the highest score S_l1 and the second highest score S_l2. Calculate the difference between the two; when the difference is greater than or equal to a threshold k, the category C_l1 to which S_l1 belongs is the gender output by the image interface. When the difference is smaller than the threshold k, it must be judged whether C_l1 and C_l2 are male and female at the same time; if so, the image interface outputs the neutral category, otherwise the image interface outputs the other category.
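The aggregation steps above can be sketched as follows. This is a minimal illustration, assuming the label set L is a list of (category, confidence) pairs; the decay factors (1 for male/female, 0.9 for neutral/other) and the threshold k follow the values suggested in the description, not a definitive implementation:

```python
from collections import defaultdict

DECAY = {"male": 1.0, "female": 1.0, "neutral": 0.9, "other": 0.9}

def aggregate_gender(labels, k=0.1):
    """labels: list of (gender_category, confidence) pairs, one per image material."""
    per_class = defaultdict(list)
    for cat, conf in labels:
        per_class[cat].append(conf)
    n = len(labels)
    scores = {}
    for cat, confs in per_class.items():
        avg = sum(confs) / len(confs)               # average confidence of the category
        weight = len(confs) / n                     # quantity weight W_l
        scores[cat] = avg * weight * DECAY.get(cat, 1.0)  # final score S_l
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    c1, s1 = ranked[0]
    c2, s2 = ranked[1] if len(ranked) > 1 else (None, 0.0)
    if s1 - s2 >= k:                                # clear winner
        return c1, s1
    if {c1, c2} == {"male", "female"}:              # ambiguous male/female -> neutral
        return "neutral", (s1 + s2) / 2
    return "other", (s1 + s2) / 2                   # otherwise the other category
```

For instance, two male predictions and one interference image classified as "other" still resolve to the male category, since the male score dominates after weighting.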
As an alternative implementation manner, in the step 208, determining the gender category corresponding to the target clothing item according to the text recognition result, the size recognition result, and the image recognition result includes:
determining final confidence scores of the target clothing commodity under a plurality of gender categories according to the text recognition result, the size recognition result and the image recognition result;
and determining the gender category corresponding to the target clothing goods according to the final confidence score of the target clothing goods under the plurality of gender categories.
Gender categories in embodiments of the present invention may include one or a combination of male, female, neutral, or others.
Therefore, by implementing the optional implementation mode, the gender category corresponding to the target clothing commodity can be determined according to the final confidence scores of the target clothing commodity under the multiple gender categories, so that the accuracy and efficiency of the clothing commodity gender identification are improved, and the problem of lower accuracy caused by the fact that only single modal data is used for clothing gender identification in the prior art can be effectively solved.
As an optional implementation manner, in the above steps, determining the final confidence scores of the target clothing item in the multiple gender categories according to the text recognition result, the size recognition result, and the image recognition result includes:
respectively adjusting the confidence scores of the text related data, the size related data and the image related data under at least one gender category according to corresponding confidence correction formulas to obtain final confidence scores of the text related data, the size related data and the image related data under at least one gender category;
and determining the final confidence scores of the text related data, the size related data and the image related data under at least one gender category as the final confidence scores of the target clothing commodity under a plurality of gender categories.
In the embodiment of the present invention, the confidence correction formula is related to the prediction accuracy of the gender identification network model corresponding to the confidence score and/or to a confidence score threshold difference. Optionally, the confidence score threshold difference is the difference between the confidence score being weighted and the identification confidence threshold of the corresponding gender identification network model.
Therefore, by implementing this optional implementation, the confidence scores of the text-related data, size-related data, and image-related data under at least one gender category can each be adjusted by the confidence correction formula, and the adjusted scores determined as the final confidence scores of the target clothing commodity under multiple gender categories. This improves the rationality and accuracy of the final confidence scores, improves the accuracy and efficiency of subsequent clothing commodity gender identification, and effectively overcomes the low accuracy caused by using only single-modality data for clothing gender identification in the prior art.
As an alternative embodiment, the confidence correction formula is:
S = S_o × b × γ;
where S is the final confidence score, S_o is the output confidence score, b is the confidence normalization factor, and γ is the interface weight factor.
In the embodiment of the invention, the interface weight factor is related to the prediction accuracy of the gender identification network model corresponding to the confidence score, and measures the importance of each gender identification network model. For example, when considering the confidence score of text-related data, historical prediction accuracy information of the text gender identification network model is selected to determine the interface weight factor; likewise, the confidence scores of size-related data and image-related data use the historical prediction accuracy information of the size and image gender identification network models respectively. Optionally, the historical prediction accuracy information may be determined by calculating the average prediction accuracy of the corresponding gender identification network model over a historical time period or over historical prediction tasks.
Optionally, the interface weight factor may also be determined according to the inherent judgment characteristics of each gender identification network model. For example, the image gender identification network model judges gender based on the image content of the clothing commodity and receives multiple image materials when determining the output gender category. Its gender classification accuracy is higher than that of the text and size gender identification network models, so it is considered more reliable and more capable of classification, and its interface weight factor may be set to the maximum. The text and size gender identification network models are then assigned lower interface weight factors, determined according to actual conditions such as experimental results.
As an alternative embodiment, the confidence normalization factor is determined based on the following formula:
b = log_a(x);
where x is related to the confidence score, a is related to the confidence threshold of the gender identification network model corresponding to the confidence score, and both x and a are greater than 1. Setting a greater than 1 keeps the logarithm well defined; setting x greater than 1 ensures that the resulting b, i.e. the confidence normalization factor, is greater than 0, which avoids data errors when the confidence is subsequently corrected with this factor.
Optionally, x may be transformed from the confidence score; for example, x may equal the confidence score plus 1. Similarly, a may be transformed from the confidence threshold of the gender identification network model; for example, a may be the confidence threshold plus 1. To ensure the effect of the confidence normalization factor, the formulas by which x and a are transformed from the corresponding confidence score or confidence threshold should be consistent or similar. For example, when the confidence score is 0.88 and the confidence threshold is 0.70, x may be 1 + 0.88 = 1.88 and a may be 1 + 0.70 = 1.70, so the confidence normalization factor is b = log_1.7(1.88).
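Under the transformation just described (x = 1 + score, a = 1 + threshold), the normalization factor can be computed directly; this is only a numeric sketch of the example above:

```python
import math

def confidence_norm_factor(score, threshold):
    """b = log_a(x) with x = 1 + score and a = 1 + threshold."""
    return math.log(1 + score, 1 + threshold)

# Example from the text: score 0.88, threshold 0.70 -> b = log_1.7(1.88)
b = confidence_norm_factor(0.88, 0.70)
```

A score exactly at the threshold yields b = 1, a score above it yields b > 1, and one below it yields b < 1, which is what lets the factor rescale scores relative to each model's own threshold.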
In the embodiment of the present invention, the confidence threshold of a gender identification network model is the gender classification threshold determined during training, that is, the confidence threshold of the corresponding network (the text, size, or image gender identification network model) after final training convergence.
Optionally, the confidence normalization factor may also be a preset constant or a piecewise function instead of the above logarithmic formula. The purpose of the confidence normalization factor is as follows: each gender identification network model has its own judgment criterion for gender scores, so the output confidence scores lie in different dimensions and cannot be compared directly. The confidence normalization factor maps the confidence of each gender identification network model to the same dimension, so that the gender identification confidence scores of the models can be compared directly.
For example, suppose the text gender identification network model outputs a confidence score of 0.88 (threshold 0.85), the size gender identification network model outputs 0.6 (threshold 0.4), and the image gender identification network model outputs 0.88 (threshold 0.7), and assume the three models are equally important. If the confidence scores were compared directly, the text model's score would be the highest and the size model's the lowest, so the gender output by the text model would dominate the final output. However, the text model's score is only 3.5% above its threshold, while the size model's score is 50% above its threshold; the size model's result is clearly more reliable. The scores of the models therefore need to be adjusted by the confidence normalization factor, for example reducing the text model's gender score from 0.88 to 0.74 and raising the size model's score from 0.6 to 0.93, so that they become comparable.
Therefore, the confidence coefficient correction formula for correcting the confidence coefficient score can be determined by implementing the optional implementation mode, so that the final confidence coefficient score can be determined more accurately, the accuracy and efficiency of subsequent clothes commodity gender identification can be improved, and the problem of lower accuracy caused by the fact that only single modal data is used for clothes gender identification in the prior art can be effectively solved.
As an optional implementation manner, in the above step, determining the gender category corresponding to the target clothing item according to the final confidence scores of the target clothing item in the multiple gender categories includes:
and determining the gender category with the highest final confidence score among the final confidence scores of the target clothing goods under the multiple gender categories as the gender category corresponding to the target clothing goods.
Therefore, by implementing the optional implementation method, the gender category with the highest final confidence score of the target clothing goods under the multiple gender categories can be determined as the gender category corresponding to the target clothing goods, so that the gender category of the target clothing goods can be accurately determined, the accuracy and efficiency of the gender identification of the clothing goods are improved, and the problem of lower accuracy caused by the fact that only single modal data is used for the gender identification of clothing in the prior art can be effectively solved.
As an optional implementation manner, in the above step, determining the gender category corresponding to the target clothing item according to the final confidence scores of the target clothing item in the multiple gender categories includes:
according to the final confidence scores of the target clothing goods under the multiple gender categories, sequencing the multiple final confidence scores from high to low to obtain a score sequencing result;
determining the gender category corresponding to the target clothing commodity according to the gender category corresponding to the first N final confidence scores in the score sorting result; where N is an integer greater than 1 and N is less than the total number of final confidence scores.
Therefore, by implementing the optional implementation mode, the gender category corresponding to the target clothing commodity can be determined according to the gender category corresponding to the first N final confidence scores in the score sorting result, so that the gender category of the target clothing commodity can be accurately determined, the accuracy and efficiency of the gender identification of the clothing commodity are improved, and the problem of lower accuracy caused by the fact that only single modal data is used for the gender identification of clothing in the prior art can be effectively solved.
As an optional implementation manner, in the above step, determining the gender category corresponding to the target clothing item according to the gender categories corresponding to the top N final confidence scores in the score ranking result includes:
calculating the difference between the first two final confidence scores in the score sorting result;
judging whether the difference value is larger than a preset difference threshold value or not to obtain a first judgment result;
when the first judgment result is yes, determining the gender category corresponding to the highest final confidence score as the gender category corresponding to the target clothing commodity;
when the first judgment result is negative, judging whether the gender categories corresponding to the first two final confidence scores are respectively male and female to obtain a second judgment result;
when the second judgment result is yes, determining the gender category corresponding to the target clothing commodity as neutral;
and when the second judgment result is negative, determining the gender category corresponding to the highest final confidence score as the gender category corresponding to the target clothing commodity.
A specific embodiment of the above disclosed garment product gender identification method based on multimodal data is described as follows, which comprises the following steps:
and acquiring multi-mode data of the target clothing product, including image data, text data and size data, and respectively inputting the multi-mode data into the corresponding gender identification network model.
A preset weight factor γ and a confidence normalization factor b are introduced for each gender identification network model. The gender scores of the models are mapped to the same dimension through the confidence normalization factor b, and the importance of each gender identification network model is then adjusted with the weight factor γ. The relationship between the final gender score S of each gender identification network model and the output score S_o of its interface is S = S_o × b × γ;
comparing the final gender scores S of the gender identification network models: calculating the score difference value of the highest gender score and the second highest gender score, and if the score difference value is greater than the score difference threshold value, taking the gender category to which the highest gender score belongs as the output gender of the commodity; and if the score difference is smaller than the score difference threshold value, considering whether the gender categories to which the highest gender score and the second highest gender score belong are male and female at the same time, if so, outputting a neutral category as the gender of the commodity, otherwise, still outputting the gender category to which the highest gender score belongs as the gender of the commodity.
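The fusion steps above can be sketched as follows, assuming each model's interface reports its predicted gender, output score So, threshold, and weight γ. The normalization b = log_(1+threshold)(1 + So) follows the optional transformation described earlier; the specific γ values and the score-difference threshold are hypothetical:

```python
import math

def final_score(so, threshold, gamma):
    b = math.log(1 + so, 1 + threshold)  # confidence normalization factor b
    return so * b * gamma                # S = So × b × γ

def fuse(results, diff_threshold=0.05):
    """results: {model_name: (gender, So, threshold, gamma)}"""
    scored = sorted(
        ((g, final_score(so, th, ga)) for g, so, th, ga in results.values()),
        key=lambda x: x[1], reverse=True)
    (g1, s1), (g2, s2) = scored[0], scored[1]
    if s1 - s2 > diff_threshold:
        return g1                        # clear highest score wins
    if {g1, g2} == {"male", "female"}:
        return "neutral"                 # close male/female scores -> neutral
    return g1                            # otherwise still output the top category
```

With the three example scores from the normalization discussion (text 0.88/0.85, size 0.6/0.4, image 0.88/0.7) and equal weights, the two male-predicting models outrank the size model, so fusion outputs the male category.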
EXAMPLE III
Referring to fig. 3, fig. 3 is a schematic structural diagram of a device for classifying gender of clothing articles based on size data according to an embodiment of the present invention. The apparatus described in fig. 3 may be applied to a corresponding prediction terminal, prediction device, or server, and the server may be a local server or a cloud server, which is not limited in the embodiment of the present invention. As shown in fig. 3, the apparatus may include:
a size obtaining module 301, configured to obtain text-related data of a target clothing item;
a size combination module 302, configured to extract and combine data of multiple size dimensions in the text-related data according to a preset combination rule to form multiple size combination data;
a size recognition module 303, configured to input the multiple size combination data into multiple corresponding sub-size gender recognition network models in the text gender recognition network model to obtain multiple sub-size recognition results;
a size determining module 304, configured to determine, according to the multiple sub-size recognition results, a size recognition result corresponding to the size-related data; the size recognition result is used to indicate a gender classification of the size-related data.
As an alternative embodiment, the size-related data comprises height size data and/or weight size data; and/or, the plurality of size dimensions includes at least two of a maximum height dimension, a minimum height dimension, a maximum weight dimension, and a minimum weight dimension.
As an alternative embodiment, the sub-size recognition result includes confidence scores of the corresponding size combination data under a plurality of gender categories; the size recognition result comprises confidence scores of the size-related data under a plurality of gender categories; the size determining module 304 determines a specific manner of the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results, including:
determining the size sub-model weight of the sub-size gender identification network model corresponding to each sub-size identification result; the size sub-model weight is related to the training prediction accuracy of the corresponding sub-size gender identification network model;
and determining the size identification result corresponding to the size related data according to the plurality of sub-size identification results and the corresponding size sub-model weights.
As an optional implementation manner, the determining module 304 determines a specific manner of the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results and the corresponding weight of the size sub-model, including:
for each sub-size recognition result, multiplying the confidence scores of the corresponding size combination data under a plurality of gender categories by the corresponding size sub-model weight to obtain the weighted confidence scores of the corresponding size combination data under the plurality of gender categories;
calculating a total weighted confidence score corresponding to each gender category according to the weighted confidence scores of the size combination data corresponding to all the sub-size recognition results under the gender categories;
sorting all gender categories according to the total weighted confidence score from high to low to obtain gender category sequences;
and determining the gender categories with the preset number in front of the gender category sequence and the corresponding total weighted confidence scores as the size identification results corresponding to the size-related data.
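A minimal sketch of this weighted aggregation, assuming each sub-size model returns a dict of per-category confidence scores and the sub-model weights come from training prediction accuracy (all values here are illustrative):

```python
def combine_sub_size_results(sub_results, sub_weights, top_n=2):
    """sub_results: one {gender_category: confidence} dict per sub-size model;
    sub_weights: one weight per sub-size model (based on training accuracy)."""
    totals = {}
    for scores, w in zip(sub_results, sub_weights):
        for cat, conf in scores.items():
            totals[cat] = totals.get(cat, 0.0) + conf * w  # weighted confidence
    # sort categories by total weighted confidence, high to low
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]   # top categories and their total weighted scores
```

For example, a high-accuracy sub-model predicting male at 0.9 outweighs a weaker sub-model that is less certain, so the combined result ranks the male category first.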
As an optional implementation mode, the text gender identification network model is obtained by training based on the following steps:
acquiring size training data of clothing commodities;
extracting and combining data of multiple dimension dimensions in the dimension training data according to a combination rule to form multiple dimension training sets;
and respectively inputting the plurality of size training sets into a size and gender identification training network for training until convergence so as to obtain a plurality of sub-size and gender identification network models through training.
As an alternative embodiment, as shown in fig. 4, the apparatus further includes:
an obtaining module 305, configured to obtain text related data and image related data of a target clothing item;
the text recognition module 306 is used for inputting the text related data into the text gender recognition network model to obtain a text recognition result;
the image identification module 307 is used for inputting the image related data into the image gender identification network model to obtain an image identification result;
and the gender determination module 308 is configured to determine a gender category corresponding to the target clothing item according to the text recognition result, the size recognition result, and the image recognition result.
As an optional implementation manner, the specific manner of determining the gender category corresponding to the target clothing item by the gender determination module 308 according to the text recognition result, the size recognition result, and the image recognition result includes:
determining final confidence scores of the target clothing commodity under a plurality of gender categories according to the text recognition result, the size recognition result and the image recognition result;
and determining the gender category corresponding to the target clothing goods according to the final confidence score of the target clothing goods under the plurality of gender categories.
Example four
Referring to fig. 5, fig. 5 is a schematic structural diagram of another apparatus for classifying gender of clothing products based on size data according to an embodiment of the present invention. As shown in fig. 5, the apparatus may include:
a memory 401 storing executable program code;
a processor 402 coupled with the memory 401;
the processor 402 calls the executable program code stored in the memory 401 to execute part or all of the steps of the method for classifying the gender of the clothing article based on the size data disclosed in the first embodiment or the second embodiment of the invention.
Embodiment Five
An embodiment of the invention discloses a computer storage medium storing computer instructions which, when invoked, execute some or all of the steps of the method for classifying the gender of clothing articles based on size data disclosed in the first or second embodiment of the invention.
The above-described apparatus embodiments are merely illustrative; the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules: they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above detailed description of the embodiments, those skilled in the art will clearly understand that the embodiments may be implemented by software plus a necessary general hardware platform, or by hardware. Based on such understanding, the above technical solutions may be embodied in the form of a software product stored in a computer-readable storage medium, where the storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-Time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic disk, a magnetic tape, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the method and device for classifying the gender of clothing articles based on size data disclosed in the embodiments of the present invention are only preferred embodiments, used solely to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A method for classifying the gender of an article of clothing based on dimensional data, the method comprising:
acquiring size related data of a target clothing commodity;
extracting and combining the data of a plurality of size dimensions in the size-related data according to a preset combination rule to form a plurality of size combination data;
inputting the plurality of size combination data into a plurality of corresponding sub-size gender identification network models in a size gender identification network model to obtain a plurality of sub-size identification results;
determining a size identification result corresponding to the size-related data according to the plurality of sub-size identification results; the size recognition result is used to indicate a gender classification of the size-related data.
2. The method for classifying gender of clothing items based on size data as claimed in claim 1, wherein the size related data comprises height size data and/or weight size data; and/or, the plurality of size dimensions includes at least two of a maximum height dimension, a minimum height dimension, a maximum weight dimension, and a minimum weight dimension.
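The extraction-and-combination step of claims 1 and 2 can be sketched as follows. The patent specifies only that data of multiple size dimensions are combined "according to a preset combination rule"; the pairwise rule, the dimension key names, and the function name below are all illustrative assumptions.

```python
from itertools import combinations

# Illustrative sketch of the "preset combination rule": extract the four
# size dimensions named in claim 2 (maximum/minimum height, maximum/minimum
# weight) and form every pairwise combination. The actual rule in the
# patent is not specified; all names here are assumptions.

def build_size_combinations(size_data):
    dims = ["max_height", "min_height", "max_weight", "min_weight"]
    present = [d for d in dims if d in size_data]
    return [
        {d: size_data[d] for d in combo}
        for combo in combinations(present, 2)
    ]
```

Each resulting dict is one piece of "size combination data" and would be fed to its own sub-size gender identification network model.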
3. The method for gender classification of clothing articles based on size data as claimed in claim 2 wherein the sub-size recognition result comprises confidence scores of the corresponding size combination data under a plurality of gender categories; the size recognition result comprises confidence scores of the size-related data under a plurality of gender categories; determining the size recognition result corresponding to the size-related data according to the plurality of sub-size recognition results, including:
determining the size sub-model weight of the sub-size gender identification network model corresponding to each sub-size identification result; the size sub-model weight is related to the training prediction accuracy of the corresponding sub-size gender identification network model;
and determining the size identification result corresponding to the size related data according to the plurality of sub-size identification results and the corresponding size sub-model weight.
4. The method for classifying gender of clothing articles based on size data as claimed in claim 3, wherein the determining the size recognition result corresponding to the size related data according to the plurality of sub-size recognition results and the corresponding weight of the size sub-model comprises:
for each sub-size recognition result, multiplying the confidence scores of the corresponding size combination data under a plurality of gender categories with the corresponding size sub-model weight to obtain the weighted confidence scores of the corresponding size combination data under a plurality of gender categories;
calculating a total weighted confidence score corresponding to each gender category according to the weighted confidence scores of the size combination data corresponding to all the sub-size recognition results under a plurality of gender categories;
sorting all the gender categories according to the total weighted confidence score from high to low to obtain a gender category sequence;
and determining a preset number of gender categories at the front of the gender category sequence, together with their corresponding total weighted confidence scores, as the size identification result corresponding to the size-related data.
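The weighted aggregation described in claims 3 and 4 can be sketched as follows. The function and variable names are assumptions; each sub-size identification result is represented as a dict mapping gender category to confidence score, and each sub-model weight is a scalar derived from that sub-model's training prediction accuracy.

```python
# Sketch of claims 3-4: weight each sub-size recognition result by its
# size sub-model weight, total the weighted confidence scores per gender
# category, sort from high to low, and keep a preset number of top
# categories. All names are illustrative assumptions.

def aggregate_sub_results(sub_results, sub_weights, top_n=1):
    totals = {}
    for result, weight in zip(sub_results, sub_weights):
        for category, score in result.items():
            # weighted confidence score for this size combination data
            totals[category] = totals.get(category, 0.0) + score * weight
    # gender category sequence, sorted by total weighted confidence score
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]  # the size identification result
```

With two sub-models weighted 0.8 and 0.2, a category scored 0.7 and 0.4 respectively would total 0.7 × 0.8 + 0.4 × 0.2 = 0.64.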
5. The method for classifying the gender of clothing items based on size data as claimed in claim 1, wherein the size gender identification network model is trained based on the following steps:
acquiring size training data of clothing commodities;
extracting and combining the data of the multiple dimension dimensions in the dimension training data according to the combination rule to form multiple dimension training sets;
and respectively inputting the plurality of size training sets into a size gender identification training network for training until convergence, so as to obtain the plurality of sub-size gender identification network models.
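The training flow of claim 5 can be sketched as follows: one sub-size gender identification model is trained per size training set. The patent does not disclose the architecture of the "size gender identification training network", so a simple nearest-centroid classifier stands in for it here; the class and function names are illustrative assumptions.

```python
# Sketch of claim 5: train one sub-model per size training set.
# A nearest-centroid classifier is an illustrative stand-in for the
# unspecified size gender identification training network.

class CentroidSubModel:
    def fit(self, vectors, labels):
        sums, counts = {}, {}
        for vec, lab in zip(vectors, labels):
            s = sums.setdefault(lab, [0.0] * len(vec))
            for i, v in enumerate(vec):
                s[i] += v
            counts[lab] = counts.get(lab, 0) + 1
        # per-gender centroid of the training vectors
        self.centroids = {
            lab: [v / counts[lab] for v in s] for lab, s in sums.items()
        }
        return self

    def predict(self, vec):
        def dist(centroid):
            return sum((a - b) ** 2 for a, b in zip(vec, centroid))
        return min(self.centroids, key=lambda lab: dist(self.centroids[lab]))

def train_sub_models(training_sets):
    # one sub-size gender identification model per size training set
    return [CentroidSubModel().fit(X, y) for X, y in training_sets]
```

A real implementation would instead iterate a neural network to convergence on each training set, but the per-combination, one-model-per-set structure is the same.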
6. The method for gender classification of articles of apparel based on dimensional data as recited in claim 1, further comprising:
acquiring text related data and image related data of a target clothing commodity;
inputting the text related data into a text gender identification network model to obtain a text identification result;
inputting the image related data into an image gender identification network model to obtain an image identification result;
and determining the gender category corresponding to the target clothing commodity according to the text recognition result, the size recognition result and the image recognition result.
7. The method for classifying the sex of clothing items based on size data according to claim 6, wherein the determining the sex category corresponding to the target clothing item according to the text recognition result, the size recognition result and the image recognition result comprises:
determining final confidence scores of the target clothing commodity under a plurality of gender categories according to the text recognition result, the size recognition result and the image recognition result;
and determining the gender category corresponding to the target clothing commodity according to the final confidence score of the target clothing commodity under the plurality of gender categories.
8. An apparatus for gender sorting of articles of apparel based on dimensional data, the apparatus comprising:
the size acquisition module is used for acquiring size related data of a target clothing commodity;
the size combination module is used for extracting and combining the data of a plurality of size dimensions in the size related data according to a preset combination rule to form a plurality of size combination data;
the size identification module is used for inputting the plurality of size combination data into a plurality of corresponding sub-size gender identification network models in a size gender identification network model to obtain a plurality of sub-size identification results;
the size determining module is used for determining a size identification result corresponding to the size related data according to the plurality of sub-size identification results; the size recognition result is used to indicate a gender classification of the size-related data.
9. An apparatus for gender sorting of articles of apparel based on dimensional data, the apparatus comprising:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the method for classifying the sex of clothing articles based on size data according to any one of claims 1 to 7.
10. A computer storage medium storing computer instructions which, when invoked, perform a method for gender classification of an item of clothing based on dimensional data according to any one of claims 1 to 7.
CN202110749017.7A 2021-07-01 2021-07-01 Garment commodity gender classification method and device based on size data Pending CN113486947A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110749017.7A CN113486947A (en) 2021-07-01 2021-07-01 Garment commodity gender classification method and device based on size data

Publications (1)

Publication Number Publication Date
CN113486947A true CN113486947A (en) 2021-10-08

Family

ID=77940231

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110749017.7A Pending CN113486947A (en) 2021-07-01 2021-07-01 Garment commodity gender classification method and device based on size data

Country Status (1)

Country Link
CN (1) CN113486947A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107103514A (en) * 2017-04-25 2017-08-29 北京京东尚科信息技术有限公司 Commodity distinguishing label determines method and apparatus
CN108665457A (en) * 2018-05-16 2018-10-16 腾讯科技(深圳)有限公司 Image-recognizing method, device, storage medium and computer equipment
CN111259782A (en) * 2020-01-14 2020-06-09 北京大学 Video behavior identification method based on mixed multi-scale time sequence separable convolution operation
CN111353540A (en) * 2020-02-28 2020-06-30 创新奇智(青岛)科技有限公司 Commodity category identification method and device, electronic equipment and storage medium
CN111738245A (en) * 2020-08-27 2020-10-02 创新奇智(北京)科技有限公司 Commodity identification management method, commodity identification management device, server and readable storage medium
CN111898738A (en) * 2020-07-30 2020-11-06 北京智能工场科技有限公司 Mobile terminal user gender prediction method and system based on full-connection neural network
CN112380349A (en) * 2020-12-04 2021-02-19 有米科技股份有限公司 Commodity gender classification method and device and electronic equipment

Non-Patent Citations (1)

Title
Jing Haibin (经海斌): "Design and Implementation of the NetEase Yanxuan Personalized Recommendation System", China Master's Theses Full-text Database, Information Science and Technology, no. 2

Similar Documents

Publication Publication Date Title
CN112990432B (en) Target recognition model training method and device and electronic equipment
CN111340126B (en) Article identification method, apparatus, computer device, and storage medium
CN104778186B (en) Merchandise items are mounted to the method and system of standardized product unit
CN107683469A (en) A kind of product classification method and device based on deep learning
JP6884116B2 (en) Information processing equipment, information processing methods, and programs
US20210133439A1 (en) Machine learning prediction and document rendering improvement based on content order
JP2007128195A (en) Image processing system
CN112380349A (en) Commodity gender classification method and device and electronic equipment
WO2018196718A1 (en) Image disambiguation method and device, storage medium, and electronic device
CN108734159B (en) Method and system for detecting sensitive information in image
CN110827797B (en) Voice response event classification processing method and device
CN110716792B (en) Target detector and construction method and application thereof
WO2022121163A1 (en) User behavior tendency identification method, apparatus, and device, and storage medium
CN107403128A (en) A kind of item identification method and device
US20170039451A1 (en) Classification dictionary learning system, classification dictionary learning method and recording medium
CN110413825B (en) Street-clapping recommendation system oriented to fashion electronic commerce
CN111401343B (en) Method for identifying attributes of people in image and training method and device for identification model
CN111340051A (en) Picture processing method and device and storage medium
CN107403179A (en) A kind of register method and device of article packaged information
CN109635796B (en) Questionnaire recognition method, device and equipment
CN113486943A (en) Clothing commodity gender identification method and device based on multi-mode data
CN109101984B (en) Image identification method and device based on convolutional neural network
CN113033587B (en) Image recognition result evaluation method and device, electronic equipment and storage medium
CN113486946B (en) Clothing commodity gender classification method and device based on image data
CN113486947A (en) Garment commodity gender classification method and device based on size data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination