CN114969465A - Federated learning data processing method, device and equipment


Info

Publication number: CN114969465A
Authority: CN (China)
Prior art keywords: classification, feature, gradient information, characteristic, model
Legal status: Pending
Application number: CN202111251714.6A
Other languages: Chinese (zh)
Inventors: 刘鸿儒, 孙中伟, 宋红花, 赵国梁
Current assignee: Jingdong Technology Information Technology Co Ltd
Original assignee: Jingdong Technology Information Technology Co Ltd
Application filed by Jingdong Technology Information Technology Co Ltd
Priority: CN202111251714.6A
Publication: CN114969465A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/906 Clustering; Classification
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/602 Providing cryptographic facilities or services
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/08 Learning methods


Abstract

Embodiments of the present application provide a federated learning data processing method, apparatus and device, applied to a federated learning system. The federated learning system includes a first device and a second device, the first device holds a plurality of first features of a plurality of objects, and the second device holds a plurality of second features of the plurality of objects. The method includes: the first device obtains model parameters of a classification model corresponding to the i-th iteration, and obtains, according to the model parameters, gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature; the first device updates the object classification condition corresponding to each classification node in the classification model according to the gradient information of the first intervals and the gradient information of the second intervals; and when the classification model converges, the first device determines at least one classification interval corresponding to each first feature and each second feature according to the converged classification model. The accuracy of feature bucketing is thereby improved.

Description

Federated learning data processing method, device and equipment
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a method, a device and equipment for processing federated learning data.
Background
When a deep neural network is trained, continuous statistical features need to be divided into buckets in order to enhance the nonlinear expression capability of the features and thereby improve the training effect of the deep neural network.
At present, continuous statistical features can be bucketed by manual experience using equal-frequency or equal-width methods. For example, the continuous interval of the user age feature is 1-100 years; dividing it evenly into two buckets by manual experience yields the discrete user age intervals: ages 1 to 50 and ages 51 to 100. However, equal-width and equal-frequency bucketing based on manual experience cannot accurately partition continuous statistical features, so the accuracy of feature data bucketing is low and the training effect of the deep neural network is poor.
Disclosure of Invention
The embodiments of the present application provide a federated learning data processing method, apparatus and device, which are used to solve the technical problems in the prior art that the accuracy of feature data bucketing is low and the training effect of the deep neural network is poor.
In a first aspect, an embodiment of the present application provides a method for processing federated learning data, which is applied to a federated learning system, where the federated learning system includes a first device and a second device, the first device includes at least one first feature of a plurality of objects, and the second device includes at least one second feature of the plurality of objects, and the method includes:
the first equipment obtains model parameters of a classification model corresponding to the ith iteration, the classification model comprises a plurality of classification nodes, and each classification node corresponds to an object classification condition;
the first equipment acquires gradient information of a plurality of first intervals corresponding to each first characteristic and gradient information of a plurality of second intervals corresponding to each second characteristic according to the model parameters;
the first equipment updates the object classification condition corresponding to each classification node in the classification model according to the gradient information of a plurality of first intervals corresponding to each first characteristic and the gradient information of a plurality of second intervals corresponding to each second characteristic;
and successively increasing the number of iterations until the classification model converges, and determining, by the first device according to the converged classification model, at least one classification interval corresponding to each first feature and each second feature.
In one possible implementation, for any classification node of the plurality of classification nodes, updating the object classification condition corresponding to the classification node in the classification model according to the gradient information of the plurality of intervals corresponding to each first feature and the gradient information of the plurality of intervals corresponding to each second feature includes:
the first equipment determines a plurality of classification conditions to be selected according to a plurality of first intervals corresponding to each first characteristic and a plurality of second intervals corresponding to each second characteristic;
the first equipment determines a classification loss value corresponding to each to-be-selected classification condition;
and the first equipment updates the object classification condition corresponding to the classification node into the to-be-selected classification condition with the minimum classification loss value.
In one possible implementation, for any candidate classification condition of the plurality of candidate classification conditions, the determining, by the first device, a classification loss value corresponding to the candidate classification condition includes:
determining a first object set corresponding to the classification node, wherein the first object set comprises at least two objects, and the plurality of objects comprise the first object set;
classifying the first object set according to the classification condition to be selected to obtain a first sub-object set and a second sub-object set;
and determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set and the gradient information of each object in the first object set.
In one possible embodiment, the gradient information includes a first gradient and a second gradient; determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set, and the gradient information of each object in the first object set, including:
determining a classification loss value corresponding to the classification condition to be selected according to the following formula I:
$$L_{\mathrm{split}} = \frac{1}{2}\left[\frac{\left(\sum_{i \in I_L} g_i\right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^2}{\sum_{i \in I} h_i + \lambda}\right] - \gamma$$
where $L_{\mathrm{split}}$ is the classification loss value, $I_L$ is the first sub-object set, $I_R$ is the second sub-object set, $I$ is the first object set, $g_i$ is the first gradient of the $i$-th object, $h_i$ is the second gradient of the $i$-th object, $\lambda$ is a first preset parameter, and $\gamma$ is a second preset parameter.
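For illustration only, the following minimal Python sketch evaluates the reconstructed Formula I for one candidate classification condition on plaintext gradients. The function name and default parameter values are assumptions, not from the patent, and the sign convention follows the standard XGBoost-style form implied by the variable definitions above.

```python
def split_loss_value(g_left, h_left, g_right, h_right, lam=1.0, gamma=0.0):
    """Evaluate Formula I for one candidate classification condition.

    g_left/h_left: first/second gradients of the objects in I_L;
    g_right/h_right: the same for I_R (so I = I_L together with I_R)."""
    GL, HL = sum(g_left), sum(h_left)
    GR, HR = sum(g_right), sum(h_right)
    G, H = GL + GR, HL + HR
    return 0.5 * (GL ** 2 / (HL + lam)
                  + GR ** 2 / (HR + lam)
                  - G ** 2 / (H + lam)) - gamma
```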
In a possible implementation manner, the updating, by the first device, the object classification condition corresponding to each classification node in the classification model according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature includes:
the first equipment determines the updating sequence of the plurality of classification nodes according to the positions of the classification nodes in the classification model;
and the first equipment updates the object classification condition corresponding to each classification node in the classification model according to the updating sequence and according to the gradient information of a plurality of first intervals corresponding to each first characteristic and the gradient information of a plurality of second intervals corresponding to each second characteristic.
In a possible implementation manner, the determining, by the first device, a plurality of candidate classification conditions according to a plurality of intervals corresponding to each first feature and a plurality of intervals corresponding to each second feature includes:
the first equipment determines an endpoint value corresponding to each interval and a characteristic corresponding to each interval;
and the first equipment determines the multiple classification conditions to be selected according to the endpoint value corresponding to each interval and the characteristic corresponding to each interval.
In a possible implementation manner, the obtaining, by the first device, gradient information of a plurality of first intervals corresponding to each first feature according to the model parameter includes:
the first device determines gradient information of each object according to the model parameters and at least one first characteristic of the plurality of objects;
the first equipment determines a plurality of first intervals corresponding to each first feature;
and the first equipment determines the gradient information of a plurality of first intervals corresponding to each first characteristic according to the gradient information of each object.
In a possible implementation, for any first feature of the at least one first feature, the determining, by the first device, gradient information of a plurality of first intervals corresponding to the first feature according to the gradient information of each object includes:
for any one of the first intervals, the first device determines a target object corresponding to the first interval in the at least one object according to first features of the objects, wherein the first features of the target object are located in the first interval;
and determining the sum of the gradient information of the target object as the gradient information of the first interval.
In a possible implementation manner, the acquiring, by the first device, gradient information of a plurality of second intervals corresponding to each second feature includes:
the first device sends the gradient information of each object to the second device;
the first device receives, from the second device, gradient information of a plurality of second sections corresponding to each second feature, where the gradient information of the plurality of second sections corresponding to each second feature is determined by the second device according to the gradient information of each object.
In a possible embodiment, for any feature to be processed among the at least one first feature and the at least one second feature, the determining, by the first device, a classification interval corresponding to the feature to be processed according to the converged classification model includes:
the first equipment determines classification characteristics corresponding to each classification node in the classification model, wherein the object classification conditions corresponding to the classification nodes are conditions for classifying according to the classification characteristics;
the first equipment determines at least one target classification node in the plurality of classification nodes according to the classification characteristics corresponding to the classification nodes, wherein the classification characteristics corresponding to the target classification nodes are the characteristics to be processed;
and the first equipment determines the at least one classification interval according to a threshold value in the object classification condition corresponding to the target classification node.
In a possible implementation, after the first device determines at least one classification interval corresponding to each first feature and each second feature according to the converged classification model, the method further includes:
acquiring data to be classified, wherein the data to be classified comprises a plurality of characteristics, and the plurality of characteristics comprise part or all of the at least one first characteristic and the at least one second characteristic;
and classifying the data to be classified according to at least one classification interval corresponding to each first characteristic and each second characteristic.
In a second aspect, the present application provides a federated learning data processing apparatus, which is applied to a federated learning system, the federated learning system includes a first device and a second device, the first device includes at least one first feature of a plurality of objects therein, the second device includes at least one second feature of the plurality of objects therein, the federated learning data processing apparatus includes a first acquisition module, a second acquisition module, an update module, and a determination module, wherein:
the first obtaining module is used for obtaining, by the first device, model parameters of a classification model corresponding to the i-th iteration, where the classification model includes a plurality of classification nodes and each classification node corresponds to an object classification condition;
the second obtaining module is configured to, by the first device, obtain, according to the model parameter, gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature;
the updating module is configured to update, by the first device, the object classification condition corresponding to each classification node in the classification model according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature;
and the determining module is configured to, after the number of iterations has been successively increased until the classification model converges, determine, by the first device according to the converged classification model, at least one classification interval corresponding to each first feature and each second feature.
In a possible implementation manner, the update module is specifically configured to:
the first equipment determines a plurality of classification conditions to be selected according to a plurality of first intervals corresponding to each first characteristic and a plurality of second intervals corresponding to each second characteristic;
the first equipment determines a classification loss value corresponding to each classification condition to be selected;
and the first equipment updates the object classification condition corresponding to the classification node into the to-be-selected classification condition with the minimum classification loss value.
In a possible implementation manner, the update module is specifically configured to:
determining a first object set corresponding to the classification node, wherein the first object set comprises at least two objects, and the plurality of objects comprise the first object set;
classifying the first object set according to the classification condition to be selected to obtain a first sub-object set and a second sub-object set;
and determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set and the gradient information of each object in the first object set.
In a possible implementation manner, the update module is specifically configured to:
determining a classification loss value corresponding to the classification condition to be selected according to the following formula I:
$$L_{\mathrm{split}} = \frac{1}{2}\left[\frac{\left(\sum_{i \in I_L} g_i\right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^2}{\sum_{i \in I} h_i + \lambda}\right] - \gamma$$
where $L_{\mathrm{split}}$ is the classification loss value, $I_L$ is the first sub-object set, $I_R$ is the second sub-object set, $I$ is the first object set, $g_i$ is the first gradient of the $i$-th object, $h_i$ is the second gradient of the $i$-th object, $\lambda$ is a first preset parameter, and $\gamma$ is a second preset parameter.
In a possible implementation manner, the update module is specifically configured to:
the first equipment determines the updating sequence of the plurality of classification nodes according to the positions of the classification nodes in the classification model;
and the first equipment updates the object classification condition corresponding to each classification node in the classification model according to the updating sequence and according to the gradient information of the plurality of first intervals corresponding to each first characteristic and the gradient information of the plurality of second intervals corresponding to each second characteristic.
In a possible implementation, the update module is specifically configured to:
the first equipment determines an endpoint value corresponding to each interval and a characteristic corresponding to each interval;
and the first equipment determines the multiple classification conditions to be selected according to the endpoint value corresponding to each interval and the characteristic corresponding to each interval.
In a possible implementation manner, the second obtaining module is specifically configured to:
the first device determines gradient information of each object according to the model parameters and at least one first characteristic of the plurality of objects;
the first equipment determines a plurality of first intervals corresponding to each first characteristic;
and the first equipment determines the gradient information of a plurality of first intervals corresponding to each first characteristic according to the gradient information of each object.
In a possible implementation manner, the second obtaining module is specifically configured to:
for any one of the first intervals, the first device determines a target object corresponding to the first interval in the at least one object according to first features of the objects, wherein the first features of the target object are located in the first interval;
and determining the sum of the gradient information of the target object as the gradient information of the first interval.
In a possible implementation manner, the second obtaining module is specifically configured to:
the first device sends the gradient information of each object to the second device;
the first device receives, from the second device, gradient information of a plurality of second sections corresponding to each second feature, where the gradient information of the plurality of second sections corresponding to each second feature is determined by the second device according to the gradient information of each object.
In a possible implementation, the determining module is specifically configured to:
the first equipment determines classification characteristics corresponding to each classification node in the classification model, wherein the object classification conditions corresponding to the classification nodes are conditions for classifying according to the classification characteristics;
the first equipment determines at least one target classification node from the plurality of classification nodes according to the classification characteristics corresponding to the classification nodes, wherein the classification characteristics corresponding to the target classification nodes are the characteristics to be processed;
and the first equipment determines the at least one classification interval according to a threshold value in the object classification condition corresponding to the target classification node.
In a possible implementation, the apparatus further includes a third obtaining module, configured to:
acquiring data to be classified, wherein the data to be classified comprises a plurality of characteristics, and the plurality of characteristics comprise part or all of the at least one first characteristic and the at least one second characteristic;
and classifying the data to be classified according to at least one classification interval corresponding to each first characteristic and each second characteristic.
In a third aspect, an embodiment of the present application provides a federated learning data processing apparatus, including a processor and a memory;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the federated learning data processing method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where computer-executable instructions are stored, and when the computer-executable instructions are executed by a processor, they implement the federated learning data processing method of the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer program product, which includes a computer program that, when executed by a processor, implements the federated learning data processing method of the first aspect.
The embodiments of the present application provide a federated learning data processing method, apparatus and device, applied to a federated learning system. The federated learning system includes a first device and a second device, the first device includes at least one first feature of a plurality of objects, and the second device includes at least one second feature of the plurality of objects. The first device obtains model parameters of a classification model corresponding to the i-th iteration, where the classification model includes a plurality of classification nodes and each classification node corresponds to an object classification condition; the first device obtains, according to the model parameters, gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature; the first device updates the object classification condition corresponding to each classification node in the classification model according to that gradient information; and the number of iterations is increased successively until the classification model converges, whereupon the first device determines, according to the converged classification model, at least one classification interval corresponding to each first feature and each second feature. In this method, because the object classification condition of each classification node is determined by the classification model according to the gradient information of the first intervals of the first features and the gradient information of the second intervals of the second features, the first features and the second features can be accurately bucketed through the object classification conditions, which improves the accuracy of bucketing continuous features and thereby the training effect of the deep neural network.
Drawings
Fig. 1 is a schematic view of an application scenario of a federated learning system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for processing federated learning data according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a classification model provided in an embodiment of the present application;
fig. 4 is a schematic flow chart of another federated learning data processing method according to an embodiment of the present application;
fig. 5 is a schematic diagram of a process of determining a plurality of candidate classification conditions according to an embodiment of the present application;
fig. 6 is a schematic diagram illustrating a process of determining a classification interval according to an embodiment of the present application;
fig. 7 is a schematic process diagram of a method for processing federated learning data according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a federated learning data processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of another federated learning data processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware structure of a federated learning data processing device provided in the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the related art, bucketing of continuous statistical features can be performed by manual experience using equal-frequency or equal-width methods. For example, the user age feature is a continuous feature over the interval 1-80 years; by manual experience the continuous age feature can be divided evenly into 4 discrete intervals of identical length: 1-20 years, 21-40 years, 41-60 years and 61-80 years. However, such manual equal-width and equal-frequency bucketing cannot accurately partition continuous statistical features; for example, the numbers of users falling into two equally divided intervals may differ greatly, which results in low accuracy of feature data bucketing.
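For reference, here is a minimal numpy-based sketch of the equal-width and equal-frequency bucketing described above; the age values are made up, and this is not code from the patent:

```python
import numpy as np

ages = np.array([18, 23, 35, 41, 52, 66, 70, 79])  # hypothetical age feature, 1-80 years

# Equal-width (equidistant) bucketing: 4 buckets of identical span.
width_edges = np.linspace(1, 80, 5)                 # 5 edges -> 4 buckets
width_buckets = np.digitize(ages, width_edges[1:-1])

# Equal-frequency bucketing: 4 buckets with roughly the same number of users.
freq_edges = np.quantile(ages, [0.25, 0.5, 0.75])
freq_buckets = np.digitize(ages, freq_edges)

print(width_buckets, freq_buckets)
```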
In order to solve the technical problem of low feature-data bucketing accuracy in the related art, an embodiment of the present application provides a federated learning data processing method applied to a federated learning system. The federated learning system includes a first device and a second device, the first device includes at least one first feature of a plurality of objects, and the second device includes at least one second feature of the plurality of objects. The first device obtains model parameters of a classification model corresponding to the i-th iteration and obtains, according to the model parameters, gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature. The first device determines a plurality of candidate classification conditions according to the intervals corresponding to each first feature and each second feature, determines the classification loss value corresponding to each candidate classification condition, and updates the object classification condition corresponding to the classification node to the candidate classification condition with the minimum classification loss value. In this method, the object classification condition is determined by the classification model according to the gradient information of the first intervals of the first features and the second intervals of the second features, and its classification loss value is the minimum, so the first features and the second features can be accurately bucketed through the object classification condition, which improves the accuracy of bucketing continuous features.
Next, an application scenario of the federated learning system according to the present application will be described with reference to fig. 1.
Fig. 1 is a schematic view of an application scenario of a federated learning system provided in an embodiment of the present application. Referring to fig. 1, the scenario includes a federated learning system and a server. The federated learning system includes a first device and a second device, where the first device includes feature A and feature B of user A as well as feature C and feature D of user B, and the second device includes feature E and feature F of user A as well as feature G and feature H of user B. The server, the first device and the second device may all participate in the federated learning process.
Referring to fig. 1, in the federated learning process, the server issues a global model to the first device and the second device; the first device and the second device train the global model issued by the server on the feature data of their local users and upload the resulting local models to the server; the server aggregates the local models uploaded by the first device and the second device to obtain an updated global model; and the above process is repeated until the aggregated global model converges. In this way, the first device and the second device each keep their own training samples, train the local model on those local samples, and model training can be completed without the data ever leaving the local device.
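For orientation, the training loop just described might be sketched as follows; the parameter-averaging rule and the `local_update` placeholder are assumptions, since fig. 1 does not fix the aggregation algorithm:

```python
import numpy as np

def local_update(global_params: np.ndarray, local_data) -> np.ndarray:
    # Placeholder for one device training the issued model on its local features.
    return global_params - 0.1 * np.ones_like(global_params)

def federated_training(init_params: np.ndarray, device_data: list, rounds: int = 10) -> np.ndarray:
    """Server side of fig. 1: issue the global model, collect local models, aggregate."""
    params = init_params
    for _ in range(rounds):
        local_models = [local_update(params, data) for data in device_data]
        params = np.mean(local_models, axis=0)  # aggregate into the updated global model
    return params
```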
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 2 is a flowchart illustrating a method for processing federated learning data according to an embodiment of the present application. Referring to fig. 2, the method may include:
s201, the first device obtains model parameters of a classification model corresponding to the ith iteration.
The execution subject in the embodiments of the present application may be the first device in the federated learning system, or a federated learning data processing apparatus disposed in the first device; the apparatus may be implemented by software, or by a combination of software and hardware.
Federated machine learning (also called federated learning) can combine all parties to use data and build models collaboratively on the premise that the data does not leave the local device, and it is gradually becoming a common method in privacy-preserving computation.
In the federated learning process, the private data of the participants can be protected by exchanging parameters under an encryption mechanism; the data themselves are not transmitted, and a participant neither needs to expose its own data to the other participants nor can it reversely deduce their data. Federated learning can therefore protect user privacy and guarantee data security well, and can solve the problem of data islands.
The federated learning system includes a first device and a second device. The first device includes at least one first feature of a plurality of objects, and the second device includes at least one second feature of the plurality of objects. Optionally, the first device may be the party that holds the label feature in federated learning, and the second device may be the party without the label feature. For example, suppose the training task of the federated learning model is to predict, according to user features, whether a user's loan is overdue. The first device may then be the client device on the bank side, which may store a plurality of features of each user together with the label feature indicating whether the user's loan is overdue; the second device may be the client device on the e-commerce side, which may store a plurality of features of each user, such as purchasing habits and weekly consumption level, but which does not include the label feature indicating whether the user's loan is overdue.
Alternatively, the plurality of objects included in the first device and the second device may be users. For example, a bank's client device may store features of multiple users, and an e-commerce client device may also store features of multiple users. Optionally, in an actual federated learning application scenario, a sample alignment process first needs to be performed on the user features of the first device and the second device to obtain aligned samples. For example, if the bank's client includes the features of user A, user B and user C, while the e-commerce client includes the features of user A and user B, then the aligned samples include the features of user A and user B; that is, the first device includes features of user A and user B, and the second device includes features of user A and user B.
Optionally, the first features of an object in the first device and the second features of the object in the second device may be the same or different. For example, the first features of user A in the bank client device are: an age feature, a height feature, an occupation feature and an income feature, while the second features of user A in the e-commerce client device are: an age feature and an occupation feature. The features shared between the bank client device and the e-commerce client device are therefore the age feature and the occupation feature of user A, while the height feature and the income feature of user A are not included in the e-commerce client device. It should be noted that the feature data used in the context of the embodiments of the present application may be feature data authorized by the user, or feature data permitted for use by laws and regulations.
The classification model may be a decision tree model. A decision tree model is a tree diagram formed by decision points, strategy points (event points) and results; it is generally applied to sequential decisions, usually takes the maximum expected benefit or the minimum expected cost as the decision criterion, solves the benefit values of the various schemes under different conditions graphically, and then makes a decision by comparison.
Optionally, the classification model includes a plurality of classification nodes, and each classification node corresponds to an object classification condition. For example, the classification model may classify a plurality of objects through the classification nodes, and each time objects are classified, the classification model classifies them according to the object classification condition. For example, the object classification condition may be whether an object's age is greater than 20 years; by this condition, a plurality of objects can be classified into objects older than 20 years and objects aged 20 years or younger. Optionally, each time the classification model is split, the classification node corresponds to one object classification condition.
The model parameters of the classification model may be values corresponding to the underlying classification nodes. For example, in a decision tree model, each leaf node at the bottom of the decision tree corresponds to a value that is a model parameter of the classification model. For example, the decision tree model ends after the initial node is split once, the decision tree model includes 2 leaf nodes, and the value of each leaf node is the model parameter of the decision tree model. Optionally, determining the value of the leaf node in the decision tree model is a technical means disclosed in the prior art, and details of the embodiment of the present application are not repeated herein.
Optionally, when the first device obtains the classification model of the plurality of objects, the first device may obtain the model parameters of the classification model. The number of iterations is increased successively until the classification model converges; for example, i may take the values 1, 2, ... in order until the classification model converges. Optionally, when i is 1, the start node in the classification model is a leaf node, all objects are in the start node, and the value corresponding to the node may be a preset default value at this time.
Alternatively, the classification model convergence may be determined according to the following feasible implementation: and when i is greater than a first preset threshold value, determining that the classification model converges. For example, if the preset threshold is 3, it is determined that the classification model converges when the classification model iterates 3 times.
Alternatively, it may be determined whether the model converges according to the prediction result of the classification model. For example, if the difference between the accuracy rates of the models obtained by two or more adjacent iterations is smaller than a second preset threshold, the classification model is determined to be converged.
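The two convergence tests above fit in a few lines; the thresholds below reuse the examples from the text and are otherwise arbitrary (a sketch, not the patent's implementation):

```python
def model_converged(i: int, accuracies: list, max_iterations: int = 3, tol: float = 0.01) -> bool:
    """Converged when the iteration count i exceeds the first preset threshold,
    or when the accuracy difference between adjacent iterations is smaller
    than the second preset threshold."""
    if i > max_iterations:
        return True
    return len(accuracies) >= 2 and abs(accuracies[-1] - accuracies[-2]) < tol
```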
Next, the model parameters in the classification model will be described in detail with reference to fig. 3.
Fig. 3 is a schematic diagram of a classification model according to an embodiment of the present application. See fig. 3, including the classification model. The classification model comprises a classification node A, a classification node B and a classification node C. The object classification condition corresponding to the classification node A is whether the age is more than 20 years old, the object classification condition corresponding to the classification node B is whether the age is less than 30 years old, and the object classification condition corresponding to the classification node C is whether the weight is less than 30 kg.
Referring to fig. 3, all objects are included in the node corresponding to classification node a. Classification node a classifies all subjects as subjects older than 20 years of age and as subjects younger than or equal to 20 years of age. Classification node B classifies subjects older than 20 years into subjects older than 20 years and younger than 30 years and subjects older than or equal to 30 years. The classification node C classifies subjects whose age is less than or equal to 20 years old into subjects whose age is less than or equal to 20 years old and whose weight is less than 30kg and subjects whose age is less than or equal to 20 years old and whose weight is greater than or equal to 30 kg.
Referring to fig. 3, after classifying all the objects by 3 classification nodes, leaf nodes a1, a2, A3 and a4 are obtained, wherein a value corresponding to a1 is X1, a value corresponding to a2 is X2, a value corresponding to A3 is X3, and a value corresponding to a4 is X4, so that the model parameters of the classification model are X1, X2, X3 and X4.
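To make the fig. 3 structure concrete, here is a minimal sketch of such a classification model in Python; the leaf values 0.1-0.4 are stand-ins for X1-X4, and all names are illustrative, not from the patent:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ClassificationNode:
    feature: Optional[str] = None                         # classification feature
    condition: Optional[Callable[[float], bool]] = None   # object classification condition
    yes: Optional["ClassificationNode"] = None            # condition satisfied
    no: Optional["ClassificationNode"] = None             # condition not satisfied
    value: Optional[float] = None                         # leaf value = model parameter

leaf = ClassificationNode
model = ClassificationNode(                               # node A: age > 20?
    "age", lambda age: age > 20,
    yes=ClassificationNode("age", lambda age: age < 30,   # node B: age < 30?
                           yes=leaf(value=0.4), no=leaf(value=0.3)),
    no=ClassificationNode("weight", lambda w: w < 30,     # node C: weight < 30kg?
                          yes=leaf(value=0.2), no=leaf(value=0.1)))

def leaf_value(node: ClassificationNode, obj: dict) -> float:
    """Route an object down the tree; the leaf values are the model parameters."""
    if node.value is not None:
        return node.value
    branch = node.yes if node.condition(obj[node.feature]) else node.no
    return leaf_value(branch, obj)

print(leaf_value(model, {"age": 25, "weight": 60}))  # 0.4
```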
S202, the first equipment obtains gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature according to the model parameters.
The first feature is a feature corresponding to a user in the first device, and the second feature is a feature corresponding to a user in the second device. For example, if the feature of the user a in the first device is the feature a and the feature B, and the feature of the user a in the second device is the feature C, the first feature of the user a is the feature a and the feature B, and the second feature of the user a is the feature C.
The first interval may be an interval obtained by segmenting the first feature. For example, if the first characteristic is age and the consecutive characteristic intervals of ages are 1-80 years old, the ages are segmented into 2 first intervals, one first interval being 1-40 years old and the other first interval being 41-80 years old.
The second interval may be an interval obtained by segmenting the second feature. For example, if the second characteristic is height and the continuous characteristic interval of the height is 100cm-200cm, the height is segmented into 2 second intervals, one second interval is 100cm-150cm, and the other second interval is 151cm-200 cm.
The gradient information includes a first gradient and a second gradient; the second gradient is obtained by computing the gradient once more on the basis of the expression of the first-order gradient of the interval. Optionally, the gradient information of the plurality of first intervals corresponding to each first feature may be obtained from the model parameters in the following feasible manner: the first device determines the gradient information of each object according to the model parameters and the at least one first feature of the plurality of objects. For example, suppose the classification model includes leaf nodes a1 and a2, where leaf node a1 has the value X1 and leaf node a2 has the value X2. If, according to the at least one first feature of the plurality of objects, object A, object B and object C are classified into node a1 while object D, object E and object F are classified into node a2, then the first gradients of objects A, B and C are the first-order gradient at X1, and their second gradients are obtained by computing the gradient once more on the basis of the X1 first-order gradient; likewise, the first gradients of objects D, E and F are the first-order gradient at X2, and their second gradients are obtained by computing the gradient once more on the basis of the X2 first-order gradient.
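As an illustration of how per-object gradient information can be computed, the sketch below uses a logistic training loss; the patent text does not fix the loss function, so this choice, like the function name, is an assumption:

```python
import math

def object_gradients(leaf_value: float, label: float):
    """First-order and second-order gradients of a logistic loss, evaluated at
    the object's current prediction, i.e. the value (X1, X2, ...) of the leaf
    node it falls into."""
    p = 1.0 / (1.0 + math.exp(-leaf_value))  # predicted probability
    g = p - label                            # first gradient
    h = p * (1.0 - p)                        # second gradient
    return g, h
```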
The first device determines the plurality of first intervals corresponding to each first feature. Optionally, the plurality of first intervals corresponding to a first feature may be determined by equal-width or equal-frequency bucketing. For example, if the first feature is an age feature ranging from 1 year to 80 years, the first feature may be divided into 4 first intervals: ages 1 to 20, 21 to 40, 41 to 60, and 61 to 80.
The first device determines the gradient information of the plurality of first intervals corresponding to each first feature according to the gradient information of each object. Optionally, for any first feature of the at least one first feature, the gradient information of the plurality of first intervals corresponding to the first feature may be determined in the following feasible manner: for any one of the first intervals, the first device determines, according to the first features of the objects, the target objects corresponding to the first interval among the at least one object, where the first feature of a target object is located within the first interval. For example, suppose the first intervals corresponding to the age feature are 1 to 40 years and 41 to 80 years; if user A is 23 years old, user B is 18 years old, user C is 35 years old and user D is 66 years old, then the target objects corresponding to the interval 1 to 40 years are user A, user B and user C, and the target object corresponding to the interval 41 to 80 years is user D.
The sum of the gradient information of the target objects is determined as the gradient information of the first interval; that is, the sum of the first gradients of the target objects is determined as the first gradient of the first interval, and the sum of their second gradients as the second gradient of the first interval. For example, if the target objects corresponding to the first interval are user A and user B, the gradient information of user A is first gradient A and second gradient A, and the gradient information of user B is first gradient B and second gradient B, then the first gradient of the first interval is the sum of first gradient A and first gradient B, and the second gradient of the first interval is the sum of second gradient A and second gradient B.
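A minimal sketch of this interval-level aggregation follows; the gradient numbers are made up, and the interval encoding and names are assumptions:

```python
def interval_gradient_info(feature_values, object_gradients, interval_edges):
    """Sum per-object (g, h) pairs into the first intervals of one feature.

    feature_values[i] and object_gradients[i] belong to object i;
    interval_edges are right endpoints, e.g. [40, 80] for 1-40 / 41-80."""
    sums = [[0.0, 0.0] for _ in interval_edges]
    for value, (g, h) in zip(feature_values, object_gradients):
        k = next(j for j, edge in enumerate(interval_edges) if value <= edge)
        sums[k][0] += g  # first gradient of the interval
        sums[k][1] += h  # second gradient of the interval
    return sums

# Users A, B, C (ages 23, 18, 35) fall into 1-40; user D (age 66) into 41-80.
print(interval_gradient_info([23, 18, 35, 66],
                             [(0.1, 0.2), (0.3, 0.1), (-0.2, 0.3), (0.4, 0.2)],
                             [40, 80]))
```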
Optionally, the first device may obtain gradient information of a plurality of second intervals corresponding to each second feature according to the following feasible implementation manners: the first device sends gradient information for each object to the second device. For example, after the first device obtains the gradient information of each object according to the parameters of the classification model, the gradient information of each object may be sent to the second device.
Optionally, before the first device sends the gradient information of each object to the second device, the first device may homomorphically encrypt the gradient information of each object, so that the second device obtains only homomorphically encrypted gradient information from the first device. For example, the gradient information of each object may be encrypted with the Paillier encryption algorithm, an additively homomorphic encryption algorithm: decrypting the result of adding ciphertexts yields the same result as directly adding the original plaintexts.
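The additive homomorphism described above can be exercised, for illustration, with the open-source python-paillier (`phe`) package; the patent does not prescribe this library, and the gradient values below are made up:

```python
from phe import paillier  # pip install phe

public_key, private_key = paillier.generate_paillier_keypair()

g_a, g_b = 0.25, -0.75            # first gradients of user A and user B
enc_a = public_key.encrypt(g_a)   # the first device encrypts before sending
enc_b = public_key.encrypt(g_b)

enc_sum = enc_a + enc_b           # the second device only ever adds ciphertexts
assert abs(private_key.decrypt(enc_sum) - (g_a + g_b)) < 1e-9
```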
The first device receives, from the second device, the gradient information of the plurality of second intervals corresponding to each second feature, where this gradient information is determined by the second device according to the gradient information of each object. For example, consider the weight feature as a second feature of the users, divided into two second intervals by the equal-width bucketing method: 10kg-20kg and 21kg-30kg. If the weight of user A is 23kg and the weight of user B is 29kg, the second device can determine that the second interval corresponding to local users A and B is 21kg-30kg; and since the second device has already obtained the gradient information of user A and user B from the first device, it can obtain the gradient information of the second interval 21kg-30kg corresponding to the weight feature according to the gradient information of user A and user B. Optionally, because the gradient information of the objects obtained by the second device is homomorphically encrypted, the gradient information of the plurality of second intervals corresponding to each second feature that the first device receives from the second device is also homomorphically encrypted.
S203, the first device updates the object classification condition corresponding to each classification node in the classification model according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature.
Optionally, the object classification condition corresponding to a classification node in the classification model may be updated in the following feasible manner: the first device determines a plurality of candidate classification conditions according to the plurality of first intervals corresponding to each first feature and the plurality of second intervals corresponding to each second feature; the first device determines the classification loss value corresponding to each candidate classification condition and updates the object classification condition corresponding to the classification node to the candidate classification condition with the minimum classification loss value. For example, suppose the first device determines two candidate classification conditions, classification condition A and classification condition B, from those intervals; if the classification loss value corresponding to classification condition A is smaller than that corresponding to classification condition B, the object classification condition corresponding to the classification node is updated to classification condition A.
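Putting the pieces together, candidate selection at one classification node might look like the sketch below, where the Formula I evaluation is inlined on the aggregated gradient sums and all names are illustrative:

```python
def split_loss_value(gl, hl, gr, hr, lam=1.0, gamma=0.0):
    # Formula I evaluated on the (g, h) sums of the two sub-object sets.
    g, h = gl + gr, hl + hr
    return 0.5 * (gl ** 2 / (hl + lam)
                  + gr ** 2 / (hr + lam)
                  - g ** 2 / (h + lam)) - gamma

def best_classification_condition(candidates: dict):
    """candidates maps a condition such as ("age", 40) to the (g, h) sums of
    the two sub-object sets it produces; return the condition whose
    classification loss value is the minimum."""
    return min(candidates,
               key=lambda c: split_loss_value(*candidates[c][0], *candidates[c][1]))
```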
Optionally, it is the first device that updates the classification model, because the first device holds the label data.
And S204, when the classification model converges, the first equipment determines at least one classification interval corresponding to each first feature and each second feature according to the converged classification model.
Optionally, for any feature to be processed among the at least one first feature and the at least one second feature, the classification interval corresponding to the feature to be processed may be determined in the following feasible manner: the first device determines the classification feature corresponding to each classification node in the classification model, where the object classification condition corresponding to a classification node is a condition for classifying according to that classification feature. The first device determines at least one target classification node among the plurality of classification nodes according to the classification features corresponding to the classification nodes, where the classification feature corresponding to a target classification node is the feature to be processed. The first device then determines the at least one classification interval according to the threshold in the object classification condition corresponding to the target classification node.
Optionally, in the federated learning application scenario, the gradient information is homomorphically encrypted, and when the second device sends the plurality of second intervals of the plurality of second features to the first device, the second device also encodes and encrypts the second features and the second intervals, so the first device does not know which specific features of the second device have been encrypted; owing to the homomorphic encryption, however, the classification loss value corresponding to each classification condition can still be determined. If the classification feature corresponding to the classification condition of a classification node in the finally obtained converged classification model is a second feature in the second device, the first device cannot obtain the classification interval corresponding to that second feature through the classification model and can only obtain the classification intervals corresponding to the first features; but the first device can obtain the encoded second feature and the encoded classification interval and send them to the second device, and the second device parses the encoding to obtain the classification interval corresponding to the second feature in the second device. In this way, on the premise of data confidentiality, the classification intervals corresponding to a plurality of features can be obtained accurately, which improves both the security of using the federated learning model and the accuracy of feature data bucketing.
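To make the encrypted exchange above concrete, the following is a minimal sketch of additively homomorphic gradient aggregation. It assumes the open-source python-paillier (`phe`) package; the toy gradients, interval membership, and all variable names are illustrative assumptions, not the patented implementation.

```python
# A minimal sketch of the encrypted gradient exchange described above,
# using additively homomorphic (Paillier) encryption via the `phe` package.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# First device: encrypt the per-object gradient information before sending it.
gradients = [0.12, -0.40, 0.35, 0.08]  # toy per-object gradients
encrypted = [public_key.encrypt(g) for g in gradients]

# Second device: it cannot decrypt, but additive homomorphism lets it sum the
# encrypted gradients of the objects falling into one of its second intervals.
interval_member_ids = [0, 2]  # objects whose second feature lies in the interval
selected = [encrypted[i] for i in interval_member_ids]
interval_sum = selected[0]
for ciphertext in selected[1:]:
    interval_sum = interval_sum + ciphertext  # homomorphic addition

# The encrypted per-interval sum goes back to the first device, which decrypts
# it and uses it to evaluate candidate classification conditions.
print(private_key.decrypt(interval_sum))  # approximately 0.47
```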
The embodiment of the application provides a federated learning data processing method applied to a federated learning system. The federated learning system includes a first device and a second device; the first device includes at least one first feature of a plurality of objects, and the second device includes at least one second feature of the plurality of objects. The first device obtains the model parameters of the classification model corresponding to the i-th iteration, where the classification model includes a plurality of classification nodes and each classification node corresponds to an object classification condition. According to the model parameters, the first device obtains the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature, and updates the object classification condition corresponding to each classification node in the classification model accordingly. When the classification model converges, the first device determines at least one classification interval corresponding to each first feature and each second feature according to the converged classification model. Because the classification model determines each object classification condition from the gradient information of the first intervals of the first features and of the second intervals of the second features, and selects the condition whose classification loss value is minimum, the object classification conditions can classify the first features and the second features accurately, which improves the accuracy of continuous-feature classification and thus the training effect of the deep neural network.
Based on the embodiment shown in fig. 2, the federal learning data processing method will be described in detail below with reference to fig. 4.
Fig. 4 is a schematic flow chart of another federal learning data processing method according to an embodiment of the present application. Referring to fig. 4, the process of the method includes:
S401, the first device obtains model parameters of a classification model corresponding to the ith iteration, the classification model comprises a plurality of classification nodes, and each classification node corresponds to an object classification condition.
It should be noted that the execution process of step S401 may refer to the execution process of step S201, and details of the embodiment of the present application are not repeated herein.
S402, the first device obtains gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature according to the model parameters.
It should be noted that the execution process of step S402 may refer to the execution process of step S202, which is not described again in this embodiment of the application.
S403, the first device determines a plurality of classification conditions to be selected according to a plurality of first intervals corresponding to each first feature and a plurality of second intervals corresponding to each second feature.
Optionally, a plurality of candidate classification conditions may be determined in the following feasible implementation manner: the first device determines the endpoint values corresponding to each interval and the feature corresponding to each interval. For example, if a first interval corresponds to the age feature and ranges from 1 to 20 years old, the endpoint values of the first interval are 1 year old and 20 years old, and the feature corresponding to the first interval is age.
The first device then determines the plurality of candidate classification conditions according to the endpoint values corresponding to each interval and the feature corresponding to each interval. For example, suppose the features of the users are an age feature and a height feature; if the intervals corresponding to the age feature are 1-20 years old and 20-40 years old, and the intervals corresponding to the height feature are 100-150 cm and 150-200 cm, the candidate classification conditions include, for age: whether greater than 1 year old, whether greater than 20 years old, and whether greater than 40 years old; and for height: whether greater than 100 cm, whether greater than 150 cm, and whether greater than 200 cm.
Next, a process of determining a plurality of candidate classification conditions will be described with reference to fig. 5.
Fig. 5 is a schematic process diagram for determining a plurality of candidate classification conditions according to an embodiment of the present application. Please refer to fig. 5, which includes a feature interval set. The feature interval set comprises the intervals corresponding to age, height, and weight: age corresponds to the intervals [1,20], [20,40], [40,60]; height corresponds to [90,120], [120,150], [150,180]; and weight corresponds to [30,50], [50,70], [70,90].

Referring to fig. 5, the endpoints of all intervals in the feature interval set are traversed to obtain a feature endpoint set. The feature endpoint set comprises the endpoints of the age feature: 1, 20, 40, 60; the endpoints of the height feature: 90, 120, 150, 180; and the endpoints of the weight feature: 30, 50, 70, 90.
Referring to fig. 5, a candidate classification condition set is determined according to the feature endpoint set. Wherein, the candidate classification condition set comprises 12 candidate conditions, which are respectively: whether the age is greater than 1 year, whether the age is greater than 20 years, whether the age is greater than 40 years, whether the age is greater than 60 years, whether the height is greater than 90cm, whether the height is greater than 120cm, whether the height is greater than 150cm, whether the height is greater than 180cm, whether the weight is greater than 30kg, whether the weight is greater than 50kg, whether the weight is greater than 70kg, and whether the weight is greater than 90 kg.
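As a concrete illustration of this enumeration, the sketch below rebuilds the 12-condition candidate set of fig. 5 from the feature interval set. The data layout and function name are assumptions for illustration only, not the patented implementation.

```python
# A sketch of candidate-condition generation from the feature interval set
# of fig. 5: one "is feature > threshold?" condition per distinct endpoint.
feature_intervals = {
    "age":    [(1, 20), (20, 40), (40, 60)],
    "height": [(90, 120), (120, 150), (150, 180)],
    "weight": [(30, 50), (50, 70), (70, 90)],
}

def candidate_conditions(intervals_by_feature):
    """Traverse every interval endpoint of every feature and emit the
    corresponding (feature, threshold) candidate classification conditions."""
    candidates = []
    for feature, intervals in intervals_by_feature.items():
        endpoints = sorted({e for interval in intervals for e in interval})
        candidates += [(feature, threshold) for threshold in endpoints]
    return candidates

conds = candidate_conditions(feature_intervals)
print(len(conds))  # 12, matching the candidate set of fig. 5
```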
S404, the first device determines a classification loss value corresponding to each to-be-selected classification condition.
The classification loss value is the difference between the value of the loss function after the classification model classifies according to a classification condition and its value before that classification. For example, if the loss function value of the classification model is Y1 before it classifies the plurality of users according to the classification condition of whether the age is greater than 20 years old, and Y2 afterwards, the classification loss value is Y2 - Y1.
Optionally, for any one candidate classification condition of the multiple candidate classification conditions, the classification loss value corresponding to the candidate classification condition may be determined according to the following feasible implementation manner: a first set of objects corresponding to the classification node is determined. The first object set comprises at least two objects, and the plurality of objects comprise the first object set. For example, all the objects to be classified are included in the first object set corresponding to the starting node of the classification model.
And classifying the first object set according to the classification conditions to be selected to obtain a first sub-object set and a second sub-object set. For example, if the candidate classification condition is whether the age is greater than 20 years, the first set of sub-objects is a set of objects whose age is less than 20 years, and the second set of sub-objects is a set of objects whose age is greater than or equal to 20 years.
And determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set and the gradient information of each object in the first object set. Optionally, the gradient information includes a first gradient and a second gradient, and the classification loss value corresponding to the classification condition to be selected may be determined according to the following feasible implementation manner: determining a classification loss value corresponding to a to-be-selected classification condition according to the following formula I:
$$L_{split} = \frac{1}{2}\left[\frac{\left(\sum_{i \in I_L} g_i\right)^{2}}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^{2}}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^{2}}{\sum_{i \in I} h_i + \lambda}\right] - \gamma$$
wherein L_split is the classification loss value, I_L is the first sub-object set, I_R is the second sub-object set, I is the first object set, g_i is the first gradient of the i-th object, h_i is the second gradient of the i-th object, λ is a first preset parameter, and γ is a second preset parameter. When i ∈ I_L, g_i and h_i are the first and second gradients of the i-th object in the first sub-object set; when i ∈ I_R, g_i and h_i are the first and second gradients of the i-th object in the second sub-object set; and when i ∈ I, g_i and h_i are the first and second gradients of the i-th object in the first object set.
Because the first device can obtain the gradient information of each object, it can compute the classification loss value corresponding to each candidate classification condition according to formula one. For example, if the candidate classification condition is whether the age is greater than 20 years old, the first device determines the sums of the first and second gradients of all users younger than 20 years old, the sums of the first and second gradients of all users aged 20 years old or older, and the sums of the first and second gradients of all users before the classification, and substitutes them into formula one to obtain the classification loss value of that candidate classification condition.
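Formula one can be transcribed directly into code. The sketch below is a minimal illustration under assumed toy gradients; the function and variable names are our own, and the values of the preset parameters λ and γ are placeholders.

```python
# A direct transcription of formula one. `first_grad`/`second_grad` map
# object ids to g_i and h_i; lam and gamma are the two preset parameters.
def split_loss(I_L, I_R, I, first_grad, second_grad, lam=1.0, gamma=0.0):
    def score(ids):
        g = sum(first_grad[i] for i in ids)   # sum of first gradients
        h = sum(second_grad[i] for i in ids)  # sum of second gradients
        return g * g / (h + lam)
    return 0.5 * (score(I_L) + score(I_R) - score(I)) - gamma

# Toy usage: objects 0,1 fall left of the candidate threshold, 2,3 fall right.
g = {0: 0.2, 1: -0.1, 2: 0.4, 3: 0.3}
h = {0: 0.9, 1: 1.1, 2: 0.8, 3: 1.0}
print(split_loss([0, 1], [2, 3], [0, 1, 2, 3], g, h))  # approximately 0.0225
```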
S405, the first device updates the object classification condition corresponding to the classification node into the to-be-selected classification condition with the minimum classification loss value.
Optionally, when the classification model performs the next classification, the classification model updates the object classification condition of the classification node to the candidate classification condition with the minimum classification loss value. For example, the candidate classification conditions are a classification condition a and a classification condition B, and if the classification loss value of the classification condition a is smaller than that of the classification condition B, the classification model classifies a plurality of objects according to the classification condition a when classifying next time.
Optionally, the object classification condition corresponding to each classification node may be updated in the following feasible implementation manner: the first device determines the update order of the plurality of classification nodes according to the positions of the classification nodes in the classification model. For example, referring to fig. 3, the update order of the classification nodes is: classification node A first, then classification node B and classification node C; since classification node B and classification node C are located at the same layer of the decision tree, they can be updated simultaneously.
The first device then updates the object classification condition corresponding to each classification node in the classification model in that update order, according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature. For example, if the update order of the classification nodes is classification node A, classification node B, classification node C, the first device first updates the object classification condition corresponding to classification node A, then that corresponding to classification node B, and finally that corresponding to classification node C.
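The layer-by-layer update order can be sketched as a breadth-first traversal of the decision tree, as below. The Node class and the fig. 3-style tree are illustrative assumptions, not the patent's data structures.

```python
# A sketch of the level-order update schedule: classification nodes in the
# same decision-tree layer are grouped and can be updated simultaneously.
class Node:
    def __init__(self, name, left=None, right=None):
        self.name, self.left, self.right = name, left, right

def update_order(root):
    """Return lists of classification nodes, one list per tree layer."""
    order, layer = [], [root]
    while layer:
        order.append(layer)
        layer = [child for node in layer
                 for child in (node.left, node.right) if child is not None]
    return order

tree = Node("A", Node("B"), Node("C"))
for layer in update_order(tree):
    print([n.name for n in layer])  # ['A'] then ['B', 'C']
```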
S406, the first device determines the classification characteristics corresponding to the classification nodes in the classification model.
The object classification condition corresponding to a classification node is a condition for classifying according to the classification feature. For example, if the object classification condition is whether the age is greater than 20 years old, the classification feature is age; if the object classification condition is whether the height is greater than 100 cm, the classification feature is height.
Optionally, when the classification model converges, the first device may obtain an object classification condition corresponding to each classification node of the classification model, and then determine a classification feature corresponding to each classification node according to each object classification condition.
S407, the first device determines at least one target classification node in the plurality of classification nodes according to the classification features corresponding to the classification nodes.
Optionally, taking any one feature to be processed in the at least one first feature and the at least one second feature as an example, the classification feature corresponding to a target classification node is that feature to be processed. For example, if the first device determines the classification interval corresponding to the age feature of the objects, the classification feature corresponding to the target classification node is the age feature.
Optionally, in an actual application process, each time the classification model splits, it may split on the same classification feature or on a different classification feature; a target classification node is a node that classifies according to the feature to be processed.
S408, the first device determines at least one classification interval according to a threshold value in the object classification condition corresponding to the target classification node.
Optionally, the first device determines the at least one classification interval according to the threshold in the object classification condition corresponding to the target classification node. For example, if the object classification condition is whether the age is greater than 20 years old, the classification intervals are: age less than or equal to 20 years old, and age greater than 20 years old.
Next, a process of determining the classification section will be described with reference to fig. 6.
Fig. 6 is a schematic process diagram for determining a classification interval according to an embodiment of the present application. See fig. 6, which includes a converged classification model. The classification model comprises a classification node A, a classification node B, a classification node C and a classification node D. The object classification condition corresponding to the classification node A is whether the age is more than 20 years old, the object classification condition corresponding to the classification node B is whether the age is less than 30 years old, the object classification condition corresponding to the classification node C is whether the weight is less than 30kg, and the object classification condition corresponding to the classification node D is whether the height is less than 180 cm. A1, A2, A3, A4 and A5 are leaf nodes of the classification model.
Referring to fig. 6, in the classification model, the classification feature corresponding to classification node A is age, that of classification node B is age, that of classification node C is weight, and that of classification node D is height. Therefore, the target classification nodes corresponding to the age feature are classification node A and classification node B, the target classification node corresponding to the weight feature is classification node C, and the target classification node corresponding to the height feature is classification node D.
Referring to fig. 6, the age-corresponding classification interval is: an age of less than or equal to 20 years, an age of greater than 20 years and less than 30 years, an age of greater than or equal to 30 years; the classification interval corresponding to the body weight is as follows: the weight is less than 30kg, and the weight is more than or equal to 30 kg; the classification interval corresponding to the height is as follows: the height is less than 180cm, and the height is greater than or equal to 180 cm.
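Deriving the classification intervals from node thresholds amounts to sorting the distinct thresholds of one feature's target classification nodes, as the sketch below illustrates for the age feature of fig. 6. The function and variable names are assumptions for illustration.

```python
import math

def classification_intervals(thresholds):
    """Sorted distinct thresholds t1 < ... < tn yield the intervals
    (-inf, t1], (t1, t2], ..., (tn, +inf)."""
    bounds = [-math.inf] + sorted(set(thresholds)) + [math.inf]
    return list(zip(bounds[:-1], bounds[1:]))

# Age feature of fig. 6: target nodes A and B carry the thresholds 20 and 30.
print(classification_intervals([20, 30]))
# [(-inf, 20), (20, 30), (30, inf)]
```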
S409, obtaining data to be classified, wherein the data to be classified comprises a plurality of characteristics, and the plurality of characteristics comprise part or all of at least one first characteristic and at least one second characteristic.
Optionally, the data to be classified may be objects to be classified. For example, if the first features and second features corresponding to the plurality of objects are age, height, weight, occupation, and income, the data to be classified may include only the features age and height, or it may include all of the features age, height, weight, occupation, and income.
And S410, classifying the data to be classified according to at least one classification interval corresponding to each first characteristic and each second characteristic.
Optionally, the data to be classified is classified according to at least one classification interval. For example, if the data to be classified is age 23, height 175cm, and weight 70kg, the first device may classify the data to be classified according to the classification interval of the age feature, the classification interval of the height feature, and the classification interval of the weight feature.
Optionally, after the first device obtains the converged classification model, the combined features of the users at each leaf node can be obtained from the classification model. For example, referring to fig. 6, if user A falls in leaf node A2 after the plurality of users are classified, the combined feature corresponding to user A is: age greater than or equal to 30 years old and height greater than or equal to 180 cm; if user A falls in leaf node A5, the combined feature corresponding to user A is: age less than or equal to 20 years old and weight greater than or equal to 30 kg.
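Classifying a sample against the derived intervals is then a per-feature bucket lookup. The sketch below uses the fig. 6 intervals (age thresholds 20 and 30, height threshold 180, weight threshold 30); the names and the bucket-index representation are assumptions, not the patent's output format.

```python
import math

INF = math.inf
# Per-feature classification intervals derived from the fig. 6 thresholds.
intervals = {
    "age":    [(-INF, 20), (20, 30), (30, INF)],
    "height": [(-INF, 180), (180, INF)],
    "weight": [(-INF, 30), (30, INF)],
}

def bucketize(value, feature_intervals):
    """Return the index of the half-open interval (low, high] containing value."""
    for idx, (low, high) in enumerate(feature_intervals):
        if low < value <= high:
            return idx
    raise ValueError("no interval contains the value")

sample = {"age": 23, "height": 175, "weight": 70}
print({f: bucketize(v, intervals[f]) for f, v in sample.items()})
# {'age': 1, 'height': 0, 'weight': 1}
```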
The embodiment of the application provides a federated learning data processing method. The first device obtains the model parameters of the classification model corresponding to the i-th iteration and, according to the model parameters, obtains the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature. According to the plurality of first intervals corresponding to each first feature and the plurality of second intervals corresponding to each second feature, the first device determines a plurality of candidate classification conditions and the classification loss value corresponding to each candidate classification condition, and updates the object classification condition corresponding to each classification node to the candidate classification condition with the minimum classification loss value. The first device then determines the classification feature corresponding to each classification node in the classification model, determines at least one target classification node among the plurality of classification nodes according to those classification features, and determines at least one classification interval according to the thresholds in the object classification conditions corresponding to the target classification nodes. Finally, the first device obtains the data to be classified and classifies it according to the at least one classification interval corresponding to each first feature and each second feature. In this method, when the classification model converges, the first device determines at least one classification interval corresponding to each first feature and each second feature according to the converged classification model. Because the classification model determines each object classification condition from the gradient information of the first intervals of the first features and of the second intervals of the second features, and selects the condition whose classification loss value is minimum, traversing the classification intervals of the target classification nodes corresponding to each classification feature classifies the first features and the second features accurately, which improves the accuracy of continuous-feature classification and thus the training effect of the deep neural network.
On the basis of any of the above embodiments, the following describes in detail the process of the above federal learning data processing method with reference to fig. 7.
Fig. 7 is a process diagram of a federated learning data processing method according to an embodiment of the present application. Please refer to fig. 7, which includes the classification model at i = 1. Because i is 1, the classification model has not yet started iterating, no decision tree has been built, and the classification model consists of only an initial node. A feature interval set is determined. The feature interval set comprises the intervals corresponding to age, height, and weight: age corresponds to [1,20], [20,40], [40,60]; height corresponds to [90,120], [120,150], [150,180]; and weight corresponds to [30,50], [50,70], [70,90].
Referring to fig. 7, the gradient information of each interval is determined according to the model parameters at i = 1, and the classification model is updated according to the gradient information of each interval. The classification model is iterated N times according to these steps until it converges, yielding the converged classification model. The converged classification model comprises classification node A, classification node B, and classification node C, as well as leaf nodes A1, A2, A3, and A4.
Referring to fig. 7, classification node A corresponds to whether the age is greater than 20 years old, classification node B corresponds to whether the age is less than 40 years old, and classification node C corresponds to whether the weight is less than 30 kg. The classification feature corresponding to each classification node is then determined: the classification feature of classification node A is age, that of classification node B is age, and that of classification node C is weight.
Referring to fig. 7, the target classification nodes corresponding to the age feature are classification node A and classification node B, and the target classification node corresponding to the weight feature is classification node C. The classification intervals corresponding to the age feature and the weight feature are obtained according to the thresholds of classification node A, classification node B, and classification node C. In the classification interval set, the classification intervals of the age feature are: less than or equal to 20 years old, greater than 20 and less than 40 years old, and greater than or equal to 40 years old; the classification intervals of the weight feature are: less than 30 kg, and greater than or equal to 30 kg. In this way, the classification model can be iteratively updated with the gradient information of each interval, and because the object classification condition determined from that gradient information has the minimum classification loss value, the classification model can classify the plurality of objects accurately according to the features, further improving the accuracy of continuous-feature classification.
Fig. 8 is a schematic structural diagram of a federated learning data processing apparatus according to an embodiment of the present application. Referring to fig. 8, the federal learning data processing apparatus 10 includes a first obtaining module 11, a second obtaining module 12, an updating module 13 and a determining module 14, wherein:
the first obtaining module 11 is configured to obtain a model parameter of a classification model corresponding to an ith iteration, where the classification model includes a plurality of classification nodes, and each classification node corresponds to an object classification condition;
the second obtaining module 12 is configured to, by the first device, obtain, according to the model parameter, gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature;
the updating module 13 is configured to update, by the first device, the object classification condition corresponding to each classification node in the classification model according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature;
the determining module 14 is configured to sequentially increment the number of iterations until the classification model converges, and to determine, according to the converged classification model, at least one classification interval corresponding to each first feature and each second feature.
In a possible implementation manner, the update module 13 is specifically configured to:
the first equipment determines a plurality of classification conditions to be selected according to a plurality of first intervals corresponding to each first characteristic and a plurality of second intervals corresponding to each second characteristic;
the first equipment determines a classification loss value corresponding to each to-be-selected classification condition;
and the first equipment updates the object classification condition corresponding to the classification node into the to-be-selected classification condition with the minimum classification loss value.
In a possible implementation manner, the update module 13 is specifically configured to:
determining a first object set corresponding to the classification node, wherein the first object set comprises at least two objects, and the plurality of objects comprise the first object set;
classifying the first object set according to the classification condition to be selected to obtain a first sub-object set and a second sub-object set;
and determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set and the gradient information of each object in the first object set.
In a possible implementation manner, the update module 13 is specifically configured to:
determining a classification loss value corresponding to the classification condition to be selected according to the following formula I:
$$L_{split} = \frac{1}{2}\left[\frac{\left(\sum_{i \in I_L} g_i\right)^{2}}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^{2}}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^{2}}{\sum_{i \in I} h_i + \lambda}\right] - \gamma$$
wherein L_split is the classification loss value, I_L is the first sub-object set, I_R is the second sub-object set, I is the first object set, g_i is the first gradient of the i-th object, h_i is the second gradient of the i-th object, λ is a first preset parameter, and γ is a second preset parameter.
In a possible implementation manner, the update module 13 is specifically configured to:
the first equipment determines the updating sequence of the plurality of classification nodes according to the positions of the classification nodes in the classification model;
and the first equipment updates the object classification condition corresponding to each classification node in the classification model according to the updating sequence and according to the gradient information of the plurality of first intervals corresponding to each first characteristic and the gradient information of the plurality of second intervals corresponding to each second characteristic.
In a possible implementation manner, the update module 13 is specifically configured to:
the first equipment determines an endpoint value corresponding to each interval and a characteristic corresponding to each interval;
and the first equipment determines the multiple classification conditions to be selected according to the endpoint value corresponding to each interval and the characteristic corresponding to each interval.
In a possible implementation manner, the second obtaining module 12 is specifically configured to:
the first device determines gradient information of each object according to the model parameters and at least one first characteristic of the plurality of objects;
the first equipment determines a plurality of first intervals corresponding to each first characteristic;
and the first equipment determines the gradient information of a plurality of first intervals corresponding to each first characteristic according to the gradient information of each object.
In a possible implementation manner, the second obtaining module 12 is specifically configured to:
for any one of the first intervals, the first device determines a target object corresponding to the first interval in the at least one object according to first features of the objects, wherein the first features of the target object are located in the first interval;
and determining the sum of the gradient information of the target object as the gradient information of the first interval.
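A minimal sketch of this per-interval aggregation follows: the gradient information of a first interval is the sum of the gradient information of the target objects whose first feature falls in that interval. The object table, gradient tuples, and function name are assumptions used only for illustration.

```python
# Sum the (first gradient, second gradient) pairs of the objects whose
# feature value lies in the half-open interval (low, high].
def interval_gradient(objects, feature, low, high, gradients):
    target_ids = [oid for oid, feats in objects.items()
                  if low < feats[feature] <= high]
    g = sum(gradients[oid][0] for oid in target_ids)  # first gradients
    h = sum(gradients[oid][1] for oid in target_ids)  # second gradients
    return g, h

objects = {0: {"age": 15}, 1: {"age": 25}, 2: {"age": 33}}
gradients = {0: (0.2, 0.9), 1: (-0.1, 1.1), 2: (0.4, 0.8)}
print(interval_gradient(objects, "age", 20, 40, gradients))
# approximately (0.3, 1.9): only objects 1 and 2 fall in (20, 40]
```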
In a possible implementation manner, the second obtaining module 12 is specifically configured to:
the first equipment sends gradient information of each object to second equipment;
the first device receives, from the second device, gradient information of a plurality of second sections corresponding to each second feature, where the gradient information of the plurality of second sections corresponding to each second feature is determined by the second device according to the gradient information of each object.
In a possible implementation, the determining module 14 is specifically configured to:
the first equipment determines classification characteristics corresponding to each classification node in the classification model, wherein the object classification conditions corresponding to the classification nodes are conditions for classifying according to the classification characteristics;
the first equipment determines at least one target classification node in the plurality of classification nodes according to the classification characteristics corresponding to the classification nodes, wherein the classification characteristics corresponding to the target classification nodes are the characteristics to be processed;
and the first equipment determines the at least one classification interval according to a threshold value in the object classification condition corresponding to the target classification node.
The federal learning data processing apparatus provided in the embodiment of the present application may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar and will not be described herein again.
The federal learning data processing device shown in the embodiment of the application can be a chip, a hardware module, a processor and the like. Of course, the federal learning data processing apparatus may be in other forms, and this is not particularly limited in the embodiment of the present application.
Fig. 9 is a schematic structural diagram of another federal learning data processing apparatus according to an embodiment of the present application. On the basis of the embodiment shown in fig. 8, please refer to fig. 9, the federal learning data processing apparatus 10 further includes a third obtaining module 15, where the third obtaining module 15 is configured to:
acquiring data to be classified, wherein the data to be classified comprises a plurality of characteristics, and the plurality of characteristics comprise part or all of the at least one first characteristic and the at least one second characteristic;
and classifying the data to be classified according to at least one classification interval corresponding to each first characteristic and each second characteristic.
The federal learning data processing apparatus provided in the embodiment of the present application may execute the technical solutions shown in the above method embodiments, and the implementation principles and beneficial effects thereof are similar and will not be described herein again.
The federated learning data processing device shown in the embodiment of the application can be a chip, a hardware module, a processor and the like. Of course, the federated learning data processing apparatus may take other forms, and this is not particularly limited in the embodiment of the present application.
Fig. 10 is a schematic diagram of a hardware structure of a federal learning data processing device provided in the present application. Referring to fig. 10, the federal learning data processing apparatus 20 may include: a processor 21 and a memory 22, wherein the processor 21 and the memory 22 may communicate; illustratively, the processor 21 and the memory 22 communicate via a communication bus 23, the memory 22 is configured to store program instructions, and the processor 21 is configured to call the program instructions in the memory to execute the federated learning data processing method shown in any of the above-described method embodiments.
Optionally, the federal learning data processing device 20 may also include a communications interface, which may include a transmitter and/or a receiver.
Optionally, the Processor may be a Central Processing Unit (CPU), or may be another general-purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of a method disclosed in connection with the present application may be embodied directly in a hardware processor, or in a combination of the hardware and software modules in the processor.
An embodiment of the present application provides a readable storage medium having a computer program stored thereon; the computer program is used to implement the federated learning data processing method described in any of the above method embodiments.
An embodiment of the present application provides a computer program product, which includes instructions that, when executed, cause a computer to execute the above federal learning data processing method.
All or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The aforementioned program may be stored in a readable memory. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned memory (storage medium) includes: read-only memory (ROM), RAM, flash memory, hard disk, solid state disk, magnetic tape (magnetic tape), floppy disk (flexible disk), optical disk (optical disk), and any combination thereof.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processing unit of a general purpose computer, special purpose computer, embedded processor, or other programmable terminal device to produce a machine, such that the instructions, which execute via the processing unit of the computer or other programmable terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable terminal equipment to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable terminal device to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present application and their equivalents, the present application is also intended to encompass such modifications and variations.
In the present application, the terms "include" and variations thereof may refer to non-limiting inclusions; the term "or" and variations thereof may mean "and/or". The terms "first," "second," and the like in this application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. In the present application, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.

Claims (15)

1. A method for processing federated learning data is applied to a federated learning system, the federated learning system includes a first device and a second device, the first device includes at least one first feature of a plurality of objects, the second device includes at least one second feature of the plurality of objects, the method includes:
the first equipment obtains model parameters of a classification model corresponding to the ith iteration, the classification model comprises a plurality of classification nodes, and each classification node corresponds to an object classification condition;
the first equipment acquires gradient information of a plurality of first intervals corresponding to each first characteristic and gradient information of a plurality of second intervals corresponding to each second characteristic according to the model parameters;
the first equipment updates the object classification condition corresponding to each classification node in the classification model according to the gradient information of a plurality of first intervals corresponding to each first characteristic and the gradient information of a plurality of second intervals corresponding to each second characteristic;
and sequentially increasing the number of iterations until the classification model converges, and determining, by the first device according to the converged classification model, at least one classification interval corresponding to each first feature and each second feature.
2. The method of claim 1, wherein, for any one classification node of the plurality of classification nodes, the updating of the object classification condition corresponding to the classification node in the classification model according to the gradient information of the plurality of intervals corresponding to each first feature and the gradient information of the plurality of intervals corresponding to each second feature comprises:
the first equipment determines a plurality of classification conditions to be selected according to a plurality of first intervals corresponding to each first characteristic and a plurality of second intervals corresponding to each second characteristic;
the first equipment determines a classification loss value corresponding to each to-be-selected classification condition;
and the first equipment updates the object classification condition corresponding to the classification node into a to-be-selected classification condition with the minimum classification loss value.
3. The method of claim 2, wherein, for any one candidate classification condition of the plurality of candidate classification conditions, the determining, by the first device, of the classification loss value corresponding to the candidate classification condition comprises:
determining a first object set corresponding to the classification node, wherein the first object set comprises at least two objects, and the plurality of objects comprise the first object set;
classifying the first object set according to the classification condition to be selected to obtain a first sub-object set and a second sub-object set;
and determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set and the gradient information of each object in the first object set.
4. The method of claim 3, wherein the gradient information comprises a first gradient and a second gradient; determining a classification loss value corresponding to the classification condition to be selected according to the gradient information of each object in the first sub-object set, the gradient information of each object in the second sub-object set, and the gradient information of each object in the first object set, including:
determining a classification loss value corresponding to the classification condition to be selected according to the following formula I:
$$L_{split} = \frac{1}{2}\left[\frac{\left(\sum_{i \in I_L} g_i\right)^{2}}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left(\sum_{i \in I_R} g_i\right)^{2}}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left(\sum_{i \in I} g_i\right)^{2}}{\sum_{i \in I} h_i + \lambda}\right] - \gamma$$
wherein L_split is the classification loss value, I_L is the first sub-object set, I_R is the second sub-object set, I is the first object set, g_i is the first gradient of the i-th object, h_i is the second gradient of the i-th object, λ is a first preset parameter, and γ is a second preset parameter.
5. The method according to any one of claims 1 to 4, wherein the updating, by the first device, the object classification condition corresponding to each classification node in the classification model according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature comprises:
the first equipment determines the updating sequence of the plurality of classification nodes according to the positions of the classification nodes in the classification model;
and the first equipment updates the object classification condition corresponding to each classification node in the classification model according to the updating sequence and according to the gradient information of the plurality of first intervals corresponding to each first characteristic and the gradient information of the plurality of second intervals corresponding to each second characteristic.
6. The method according to any one of claims 2 to 4, wherein the determining, by the first device, a plurality of candidate classification conditions according to a plurality of intervals corresponding to each first feature and a plurality of intervals corresponding to each second feature comprises:
the first equipment determines an endpoint value corresponding to each interval and a characteristic corresponding to each interval;
and the first equipment determines the multiple classification conditions to be selected according to the endpoint value corresponding to each interval and the characteristic corresponding to each interval.
7. The method according to any one of claims 1 to 4, wherein the obtaining, by the first device, gradient information of a plurality of first intervals corresponding to each first feature according to the model parameter includes:
the first device determines gradient information of each object according to the model parameters and at least one first characteristic of the plurality of objects;
the first equipment determines a plurality of first intervals corresponding to each first characteristic;
and the first equipment determines the gradient information of a plurality of first intervals corresponding to each first characteristic according to the gradient information of each object.
8. The method of claim 7, wherein, for any one first feature of the at least one first feature, the determining, by the first device according to the gradient information of each object, of the gradient information of the plurality of first intervals corresponding to the first feature comprises:
for any one first interval in the multiple first intervals, the first device determines a target object corresponding to the first interval in the at least one object according to first features of the multiple objects, wherein the first features of the target object are located in the first interval;
and determining the sum of the gradient information of the target object as the gradient information of the first interval.
9. The method according to any one of claims 1 to 4, wherein the obtaining, by the first device, gradient information of a plurality of second intervals corresponding to each second feature includes:
the first equipment sends gradient information of each object to second equipment;
the first device receives, from the second device, gradient information of a plurality of second sections corresponding to each second feature, where the gradient information of the plurality of second sections corresponding to each second feature is determined by the second device according to the gradient information of each object.
10. The method according to any one of claims 1-4, characterized in that, for any feature to be processed in the at least one first feature and the at least one second feature, the determining, by the first device according to the converged classification model, of the classification interval corresponding to the feature to be processed comprises:
the first equipment determines classification characteristics corresponding to each classification node in the classification model, wherein the object classification conditions corresponding to the classification nodes are conditions for classifying according to the classification characteristics;
the first equipment determines at least one target classification node from the plurality of classification nodes according to the classification characteristics corresponding to the classification nodes, wherein the classification characteristics corresponding to the target classification nodes are the characteristics to be processed;
and the first equipment determines the at least one classification interval according to a threshold value in the object classification condition corresponding to the target classification node.
11. The method according to any one of claims 1-4, wherein after the first device determines at least one classification interval corresponding to each first feature and each second feature according to the converged classification model, the method further comprises:
acquiring data to be classified, wherein the data to be classified comprises a plurality of characteristics, and the plurality of characteristics comprise part or all of the at least one first characteristic and the at least one second characteristic;
and classifying the data to be classified according to at least one classification interval corresponding to each first characteristic and each second characteristic.
12. A federated learning data processing apparatus, characterized in that it is applied to a federated learning system, the federated learning system comprises a first device and a second device, the first device comprises at least one first feature of a plurality of objects, the second device comprises at least one second feature of the plurality of objects, and the federated learning data processing apparatus comprises a first obtaining module, a second obtaining module, an updating module and a determining module, wherein:
the first obtaining module is used for obtaining model parameters of a classification model corresponding to the ith iteration, wherein the classification model comprises a plurality of classification nodes, and each classification node corresponds to an object classification condition;
the second obtaining module is configured to, by the first device, obtain, according to the model parameter, gradient information of a plurality of first intervals corresponding to each first feature and gradient information of a plurality of second intervals corresponding to each second feature;
the updating module is configured to update, by the first device, the object classification condition corresponding to each classification node in the classification model according to the gradient information of the plurality of first intervals corresponding to each first feature and the gradient information of the plurality of second intervals corresponding to each second feature;
the determining module is configured to sequentially increment the number of iterations until the classification model converges, and to determine, according to the converged classification model, at least one classification interval corresponding to each first feature and each second feature.
13. A federated learning data processing device, characterized in that the federated learning data processing device comprises: a memory, a processor, and a federated learning data processing program stored in the memory and executable on the processor, wherein the federated learning data processing program, when executed by the processor, implements the steps of the federated learning data processing method according to any one of claims 1-11.
14. A computer readable storage medium having stored thereon a federal learning data processing program which, when executed by a processor, performs the steps of the federal learning data processing method as claimed in any one of claims 1 to 11.
15. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the federal learning data processing method as claimed in any of claims 1 to 11.
CN202111251714.6A 2021-10-25 2021-10-25 Federal learning data processing method, device and equipment Pending CN114969465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111251714.6A CN114969465A (en) 2021-10-25 2021-10-25 Federal learning data processing method, device and equipment

Publications (1)

Publication Number Publication Date
CN114969465A (en) 2022-08-30

Family

ID=82974371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111251714.6A Pending CN114969465A (en) 2021-10-25 2021-10-25 Federal learning data processing method, device and equipment

Country Status (1)

Country Link
CN (1) CN114969465A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024094094A1 (en) * 2022-11-02 2024-05-10 Huawei Technologies Co., Ltd. Model training method and apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination