WO2020220823A1 - Method and device for constructing decision trees - Google Patents

Method and device for constructing decision trees

Info

Publication number
WO2020220823A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
client
target category
statistical information
split
Prior art date
Application number
PCT/CN2020/077579
Other languages
English (en)
French (fr)
Inventor
刘洋
张钧波
陈明鑫
刘颖婷
郑宇
Original Assignee
京东城市(南京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东城市(南京)科技有限公司
Priority to US17/607,299 priority Critical patent/US20220230071A1/en
Priority to EP20798206.7A priority patent/EP3965023A4/en
Publication of WO2020220823A1 publication Critical patent/WO2020220823A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Definitions

  • the embodiments of the present disclosure relate to the field of computer technology, in particular to methods and devices for constructing decision trees.
  • The decision tree is one of the most commonly used algorithms in machine learning. Unlike the inexplicable nature of neural networks, a decision tree provides a reliable and reasonable model explanation through feature importance, which gives government departments and the financial industry a more reliable basis for decision-making. For example, when a bank rejects a loan application or a government approves a certification, the law requires the corresponding department to provide a reliable basis, such as the reason for rejecting or approving the application, and a decision tree can provide such a basis. With the growing awareness of data privacy protection, joint modeling tree models based on multiple data platforms have emerged.
  • the embodiments of the present disclosure propose methods and devices for constructing a decision tree.
  • the embodiments of the present disclosure provide a method for constructing a decision tree, applied to a control terminal, including: sending a request for obtaining statistical information of attribute information of a target category to at least one client; receiving the statistical information of the attribute information of the target category of the samples respectively stored by each client; generating split point information according to the statistical information of the attribute information of the target category of the samples stored by each client; and sending the split point information to the at least one client.
  • the statistical information includes the maximum value and the minimum value
  • the split point information includes the split value
  • generating the split point information according to the statistical information of the attribute information of the target category of the samples stored by each client includes: integrating the maximum values and minimum values of the attribute information of the target category of the samples stored by each client to obtain a system maximum value and a system minimum value of the attribute information of the target category; and selecting the split value between the system maximum value and the system minimum value.
  • the statistical information further includes label statistical information
  • the split point information also includes a split attribute; generating the split point information according to the statistical information of the attribute information of the target category of the samples stored by each client further includes: for a candidate category in the candidate category set, obtaining, according to the label statistical information of the attribute information of the candidate category of the samples stored by each client, the drop value of the data impurity after splitting according to the candidate category; and determining the candidate category with the largest drop value as the split attribute.
  • sending a request for obtaining statistical information of the attribute information of the target category to at least one client includes: if there is no split attribute, randomly selecting a category from the candidate category set as the target category; otherwise, The split attribute is determined as the target category; a request for obtaining statistical information of the attribute information of the target category is sent to at least one client.
  • the method further includes: communicating with at least one client in an encrypted manner.
  • the embodiments of the present disclosure provide a method for constructing a decision tree, applied to a client, including: receiving a request sent by a control terminal for obtaining statistical information of attribute information of a target category; and, based on the target category, performing the following building steps: sending the statistical information of the attribute information of the target category of the locally stored samples to the control terminal; receiving the split point information returned by the control terminal, splitting the respectively stored samples according to the split point information, and storing the nodes obtained by splitting to build a decision tree; if the node meets the preset termination condition, outputting the decision tree; and if the node does not meet the preset termination condition, updating the target category based on the split point information and continuing the above steps based on the updated target category.
  • the termination condition includes at least one of the following: the sum of the numbers of samples of the same node in the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree that has been built exceeds a preset depth value.
  • the method further includes: if the number of samples of the node is empty, receiving an information broadcast from a node whose number of samples is not empty to continue building a decision tree.
  • the statistical information includes label statistical information; and the method further includes: encrypting the label statistical information.
  • encrypting the label statistical information includes: encrypting the label statistical information in a homomorphic encryption manner.
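  • As a rough, non-authoritative sketch of how label statistics could be encrypted homomorphically and aggregated without exposing any single client's counts, the snippet below uses the python-paillier (phe) package; the package choice and the example counts (taken from the default-label example later in this description) are assumptions for illustration only.

      # Sketch: clients encrypt their label counts with a shared Paillier public key;
      # ciphertexts can be added, so aggregated totals are obtained without revealing
      # per-client numbers (assumes the `phe` / python-paillier package is installed).
      from phe import paillier

      public_key, private_key = paillier.generate_paillier_keypair()

      # Client A: 50 defaults, 50 non-defaults; client B: 30 defaults, 70 non-defaults
      enc_a = [public_key.encrypt(50), public_key.encrypt(50)]
      enc_b = [public_key.encrypt(30), public_key.encrypt(70)]

      # Additive homomorphism: sum the ciphertexts pairwise
      enc_total = [a + b for a, b in zip(enc_a, enc_b)]

      defaults, non_defaults = (private_key.decrypt(c) for c in enc_total)
      print(defaults, non_defaults, defaults / (defaults + non_defaults))  # 80 120 0.4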
  • the method further includes: randomly selecting different sample subsets to generate at least one decision tree; and composing the at least one decision tree into a random forest model.
  • the method further includes: receiving user information of the user to be predicted, wherein the user information includes at least one attribute information; voting on the user information through a random forest model to obtain the label of the user to be predicted.
  • embodiments of the present disclosure provide a system for building a decision tree, including a control terminal and at least one client, wherein the control terminal is configured to implement the method as in any one of the first aspect; A client is configured to implement any method as in the second aspect.
  • the embodiments of the present disclosure provide an apparatus for constructing a decision tree, which is applied to a control terminal and includes: a request unit configured to send statistics for obtaining attribute information of a target category to at least one client Information request; the statistical information receiving unit is configured to receive the statistical information of the attribute information of the target category of the sample stored in each client; the splitting unit is configured to the attribute information of the target category of the sample stored in each client The statistic information generates split point information; the sending unit is configured to send split point information to at least one client.
  • the statistical information includes a maximum value and a minimum value
  • the split point information includes a split value
  • the splitting unit is further configured to: each client stores the maximum value and the minimum value of the attribute information of the target category of the sample. Perform integration to obtain the system maximum and system minimum of the attribute information of the target category; select the split value between the system maximum and the system minimum.
  • the statistical information further includes label statistical information
  • the split point information also includes split attributes
  • the splitting unit is further configured to: for the candidate category in the candidate category set, according to the candidate category of each sample stored by each client The label statistics information of the attribute information of, obtain the drop value of the impurity of the data after splitting according to the candidate category; the candidate category with the largest drop value is determined as the split attribute.
  • the request unit is further configured to: if there is no split attribute, randomly select a category from the candidate category set as the target category; otherwise, determine the split attribute as the target category; and send the request for obtaining statistical information of the attribute information of the target category to the at least one client.
  • the device further includes an encryption and decryption unit configured to communicate with the at least one client in an encrypted manner.
  • the embodiments of the present disclosure provide an apparatus for constructing a decision tree, which is applied to a client and includes: a request receiving unit configured to receive statistics sent by a control terminal for obtaining attribute information of a target category Information request; the tree building unit is configured to perform the following tree building steps based on the target category: send the statistical information of the attribute information of the target category of the locally stored sample to the control terminal; receive the split point information returned by the control terminal, according to the split point information Split the respective stored samples, and store the split nodes to build a decision tree; if the node meets the preset termination condition, the decision tree is output; the loop unit is configured to if the node does not meet the preset termination condition, The target category is updated according to the split point information, and the above tree building steps are continued to be performed based on the updated target category.
  • the termination condition includes at least one of the following: the sum of the numbers of samples of the same node in the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree that has been built exceeds a preset depth value.
  • the tree building unit is further configured to: if the number of samples of the node is empty, receive the information broadcast from the node whose number of samples is not empty to continue building the decision tree.
  • the statistical information includes label statistical information; and the device further includes an encryption and decryption unit configured to encrypt the label statistical information.
  • the encryption and decryption unit is further configured to encrypt the label statistical information in a homomorphic encryption manner.
  • the device further includes a combination unit configured to randomly select different sample subsets to generate at least one decision tree; and compose the at least one decision tree into a random forest model.
  • the device further includes a prediction unit configured to: receive user information of the user to be predicted, wherein the user information includes at least one kind of attribute information; and vote on the user information through the random forest model to obtain the label of the user to be predicted.
  • the embodiments of the present disclosure provide an electronic device, including: one or more processors; a storage device, on which one or more programs are stored, when one or more programs are processed by one or more The processor executes, so that one or more processors implement the method as in any one of the first aspect.
  • an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, where the program is executed by a processor to implement the method as in any one of the first aspect.
  • the method and device for constructing a decision tree provided by the embodiments of the present disclosure adjust the tree-building process of existing joint modeling tree models on the basis of the parallel prediction algorithm of the multi-data-platform joint modeling tree model, introduce the extreme random forest and improve upon it, greatly reducing the content of information interaction, protecting privacy while improving the efficiency of the model, and making broad deployment of the joint modeling tree model possible.
  • FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure can be applied
  • Fig. 2 is a flowchart of an embodiment of a method for constructing a decision tree according to the present disclosure
  • FIG. 3 is a flowchart of another embodiment of the method for constructing a decision tree according to the present disclosure
  • 4a and 4b are schematic diagrams of split point selection in the method for constructing a decision tree according to the present disclosure.
  • Fig. 5 is a schematic diagram of an application scenario of the method for constructing a decision tree according to the present disclosure
  • Fig. 6 is a schematic structural diagram of an embodiment of an apparatus for constructing a decision tree according to the present disclosure
  • FIG. 7 is a schematic structural diagram of another embodiment of an apparatus for constructing a decision tree according to the present disclosure.
  • Fig. 8 is a schematic structural diagram of a computer system suitable for implementing an electronic device of an embodiment of the present disclosure.
  • FIG. 1 shows an exemplary system architecture 100 to which an embodiment of a method for building a decision tree or an apparatus for building a decision tree of the present disclosure can be applied.
  • the system architecture 100 may include clients 101, 102, 103 and a control terminal 104.
  • the network is used as a medium for providing communication links between the clients 101, 102, 103 and the control terminal 104.
  • the network may include various connection types, such as wired, wireless communication links, or fiber optic cables.
  • the user can use the client terminals 101, 102, and 103 to interact with the control terminal 104 through the network to receive or send messages and so on.
  • Clients 101, 102, 103 can store samples for training decision trees. Each sample includes attribute information and labels.
  • the statistical information of attribute information and the statistical information of labels can be obtained by neural network or statistical methods. Among them, the label can be encrypted into a cipher text form, and then reported to the control terminal together with the statistical information of the attribute information.
  • Decision Tree is a predictive model. It represents a mapping relationship between object attributes and object values.
  • the forked path in the tree represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to the leaf node.
  • the decision tree has only a single output. If you want to have multiple outputs, you can build an independent tree to process and generate different outputs.
  • the application scenario of the present disclosure is one of federated learning-horizontal federated learning.
  • Horizontal federated learning requires that the characteristics of users contained in each platform are basically the same, but the samples of users are different.
  • Take the regional bank loan business as an example: regional bank A holds some customers' age information, asset information, wealth management and fund product information, loan repayment information, etc. These data are stored in the client 101.
  • Regional bank B holds information with the same features about other customers, and these data are stored in the client 102.
  • Regional bank C holds information with the same features about yet other customers. These data are stored in the client 103.
  • However, the data owned by regional banks A, B, and C individually is not sufficient to construct a complete and reliable discriminant model for determining whether to grant a loan to a certain customer.
  • the control terminal 104 may receive the statistical information of the attribute information sent by the clients 101, 102, and 103.
  • the control terminal 104 may perform analysis and other processing on data such as statistics information of the received attribute information (if it is encrypted data, it needs to be decrypted), and feed back the processing result (for example, split point and split attribute) to the client.
  • the client uses split points and split attributes to build a decision tree. Each client can randomly use a subset of the sample to generate multiple single decision trees, and then these single decision trees are integrated by voting to form a random forest model.
  • Decision trees, as well as the random forests and tree-based GBMs (gradient boosting models) derived from them, are all composed of basic single decision trees.
  • In the process of building a decision tree, finding the optimal split point, that is, the one after which the impurity of the data (which may be measured by the Gini coefficient, information gain, etc.) drops the most, carries the largest computational cost.
  • the parallel algorithm of the joint modeling tree model based on multiple data platforms also requires a lot of time to find the optimal split point. Therefore, how to reduce the computational cost of finding the optimal split point becomes one of the problems to be solved.
  • the extreme random forest in ensemble learning applies randomness to the selection of split points. The effectiveness of random forests is mainly reflected in the reduction of variance: multiple decision trees built from sub-data selected with replacement perform majority voting to decide the final prediction result. The extreme random forest uses the same implementation, except that its randomness lies in the selection of split points; it does not emphasize the optimal split point of a single decision tree, but makes an integrated judgment through multiple decision trees, reducing the judgment error overall.
  • The control terminal can be hardware or software.
  • When the control terminal is hardware, it can be implemented as a distributed control terminal cluster composed of multiple control terminals, or as a single control terminal.
  • When the control terminal is software, it can be implemented as multiple pieces of software or software modules (for example, multiple pieces of software or software modules for providing distributed services), or as a single piece of software or software module. There is no specific limitation here.
  • the method for constructing a decision tree provided by the embodiments of the present disclosure may be executed by the clients 101, 102, 103 and the control terminal 104 together.
  • the device for constructing a decision tree can be set in the client 101, 102, 103 and the control terminal 104. There is no specific limitation here.
  • FIG. 1 the numbers of clients and control terminals in FIG. 1 are merely illustrative. According to implementation needs, there can be any number of clients and control terminals.
  • FIG. 2 there is shown a process 200 of an embodiment in which the method for constructing a decision tree according to the present disclosure is applied to the control end.
  • the method for building a decision tree includes the following steps:
  • Step 201 Send a request for obtaining statistical information of attribute information of a target category to at least one client.
  • In this embodiment, the executing body of the method for constructing a decision tree (for example, the control terminal shown in FIG. 1) may send a request for obtaining statistical information of the attribute information of the target category to at least one client through a wired or wireless connection. The attribute information may include features such as {age, income} = {32, 22K}.
  • the target category refers to the category of the sample's attribute information, for example, age, income.
  • the control terminal can randomly select a category from multiple categories of candidates as the target category.
  • Optionally, after step 203 is performed and the split attribute is obtained, the split attribute is determined as the target category.
  • Step 202 Receive statistical information of the attribute information of the target category of the sample stored in each client.
  • the method of the present disclosure does not require specific attribute information, but statistical information of attribute information.
  • Each client only needs to send the statistical information of a certain piece of attribute information to the control terminal. For example, for the age feature of client A's data the maximum value is 60 and the minimum value is 40, and for the age feature of client B's data the minimum is 20 and the maximum is 50; then A and B only need to send these values to the control terminal respectively.
  • the statistical information may also include label statistical information of at least one category.
  • the label statistics may include the number of labels and the percentage of labels. For example, the number and percentage of tags of samples belonging to the target category and belonging to other categories.
  • the statistical information received by the control terminal may be encrypted and needs to be decrypted before use.
  • Step 203 Generate split point information according to the statistical information of the attribute information of the target category of the sample stored in each client.
  • the split point information may include split values and split attributes.
  • the maximum value and minimum value of the attribute information of the target category of the sample stored by each client are integrated to obtain the system maximum value and the system minimum value of the attribute information of the target category. Select the split value between the system maximum and system minimum.
  • the split value can be randomly selected between the system maximum and the system minimum.
  • the average value of the maximum value and the minimum value of the system can also be used as the split value. Or determine the intermediate value as the split value according to the maximum and minimum uploaded by different clients.
  • the selection method of the split value is not limited here. If the received data is encrypted, it needs to be decrypted by the control terminal before it can be used.
  • the control terminal decrypts and processes the received encrypted data, and obtains that the minimum value of the system is 20 years old, and the maximum value of the system is 60 years old, and then randomly selects a split value of 42.4 between 20 and 60 years old, and then The corresponding split value is delivered to the nodes where each client participates, and each client divides the sample data according to the obtained split value. To ensure the security of the information, the split value can be encrypted and sent to the client.
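  • A minimal sketch of this control-terminal step is shown below, assuming each client reports its local minimum and maximum as a pair; the function name choose_split_value and the use of Python's random module are illustrative assumptions, not part of the disclosure.

      import random

      def choose_split_value(client_stats):
          """client_stats: list of (min, max) pairs reported by the clients for the
          target category, e.g. [(40, 60), (20, 50)] in the age example above."""
          system_min = min(lo for lo, _ in client_stats)   # 20 in the example
          system_max = max(hi for _, hi in client_stats)   # 60 in the example
          # Extreme-random-forest style: pick the split value at random in the range
          return random.uniform(system_min, system_max)

      split_value = choose_split_value([(40, 60), (20, 50)])  # e.g. 42.4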
  • In some optional implementations of this embodiment, for each candidate category in the candidate category set, the drop value of the data impurity after splitting according to that candidate category is obtained according to the label statistical information of the attribute information of the candidate category of the samples stored by each client.
  • the candidate category with the largest drop value is determined as the split attribute.
  • the control terminal integrates the label statistical information reported by at least one client to obtain the system label statistical information. Take the default label as an example.
  • For the label statistical information of the attribute information belonging to the target category and to category X: client A reports 50 defaults, 50 non-defaults, and a default rate of 50%.
  • Client B reports 30 defaults, 70 non-defaults, and a default rate of 30%.
  • After integration, the system label statistical information of 80 defaults, 120 non-defaults, and a default rate of 40% is obtained.
  • For the label statistical information of the attribute information belonging to the target category and to category Y: client A reports 40 defaults, 60 non-defaults, and a default rate of 40%.
  • Client B reports 20 defaults, 80 non-defaults, and a default rate of 20%.
  • After integration, the system label statistical information of 60 defaults, 140 non-defaults, and a default rate of 30% is obtained.
  • The impurity can be expressed by the Gini value, as shown below:
  • Gini = 1 - Σ_{i=1}^{n} p_i^2
  • where Gini represents the Gini value, p_i represents the proportion of the number of samples of class i, and n represents the number of classes. Taking binary classification as an example, when the two classes are equal in number the Gini value is 0.5; when the data of a node all belong to the same class the Gini value is 0. The larger the Gini value, the less pure the data.
  • The initial impurity is calculated from the label statistics reported for the target category. Then, the expected impurity is calculated in turn for each candidate category in the candidate category set. The difference between the initial impurity and the expected impurity is taken as the drop in impurity. The candidate category with the largest drop is determined as the split attribute.
  • the specific implementation method can refer to the prior art, which will not be repeated here.
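  • Purely as an illustrative sketch (not the claimed implementation), the Gini values and impurity drops could be computed from the aggregated system label statistics as follows; the helper name gini and the illustrative initial counts are assumptions.

      def gini(counts):
          """Gini value from per-class sample counts: Gini = 1 - sum(p_i ** 2)."""
          total = sum(counts)
          return 0.0 if total == 0 else 1.0 - sum((c / total) ** 2 for c in counts)

      # Aggregated system label statistics (defaults, non-defaults) per candidate
      # category, as in the example above
      expected = {"X": gini([80, 120]),   # 1 - (0.4**2 + 0.6**2) = 0.48
                  "Y": gini([60, 140])}   # 1 - (0.3**2 + 0.7**2) = 0.42
      initial = gini([100, 100])          # illustrative initial impurity of the node
      drops = {name: initial - g for name, g in expected.items()}
      split_attribute = max(drops, key=drops.get)  # "Y": largest drop in impurity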
  • Step 204 Send the split point information to at least one client.
  • the calculated split point information can be sent to each client that reports statistical information.
  • Each client builds a decision tree based on the split point information.
  • the decision tree built by each client is the same.
  • When a client determines that the termination condition for tree building is not met, it still reports, in batches, the statistical information of the attribute information of the re-divided samples. The control terminal then performs steps 202-204 to generate split point information in batches and returns it to the clients.
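  • The control-terminal loop over steps 201-204 could be sketched roughly as follows; the client interface (request_stats, send_split), the dictionary keys, and the select_attribute callable are hypothetical names used only for illustration.

      import random

      def control_terminal_round(clients, candidate_categories, select_attribute,
                                 target_category=None):
          """One round of steps 201-204 (sketch): request statistics from every
          client, generate split point information, and send it back."""
          if target_category is None:                    # first split: random pick
              target_category = random.choice(candidate_categories)
          stats = [c.request_stats(target_category) for c in clients]  # steps 201/202
          system_min = min(s["min"] for s in stats)      # integrate reported minima
          system_max = max(s["max"] for s in stats)      # integrate reported maxima
          split_info = {
              "value": random.uniform(system_min, system_max),           # step 203
              "attribute": select_attribute(stats, candidate_categories),
          }
          for c in clients:                                              # step 204
              c.send_split(split_info)
          return split_info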
  • FIG. 3 there is shown a process 300 of an embodiment in which the method for building a decision tree according to the present disclosure is applied to a client.
  • the method for building a decision tree includes the following steps:
  • Step 301 Receive a request for obtaining statistical information of the attribute information of the target category sent by the control terminal.
  • In this embodiment, the executing body of the method for constructing a decision tree (for example, a client shown in FIG. 1) may receive the request for obtaining statistical information of the attribute information of the target category from the control terminal through a wired or wireless connection. The attribute information may include features such as {age, income} = {32, 22K}.
  • the target category refers to the category of the sample's attribute information, for example, age, income.
  • the control terminal can randomly select a category from multiple categories of candidates as the target category for the first split.
  • Step 302 Send locally stored statistical information of the attribute information of the target category of the sample to the control terminal.
  • the method of the present disclosure does not require specific attribute information, but statistical information of attribute information.
  • Each client only needs to send the statistical information of a certain piece of attribute information to the control terminal. For example, for the age feature of client A's data the maximum value is 60 and the minimum value is 40, and for the age feature of client B's data the minimum is 20 and the maximum is 50; then A and B only need to send these values to the control terminal respectively.
  • the statistical information may also include label statistical information of at least one category.
  • the label statistics may include the number of labels and the percentage of labels. For example, the number and percentage of tags of samples belonging to the target category and belonging to other categories.
  • Optionally, for information that is easy to leak, the client can send it to the control terminal in an encrypted manner, for example, label information such as default flags and loan flags.
  • the client terminal may encrypt the number and/or proportion of tags of samples in a certain category and send it to the control terminal. For example, there are 20 default samples and 30 non-default samples under the age category in client A.
  • the statistical information 40% default rate, 20 defaults, and 30 non-defaults are encrypted and sent to the control terminal.
  • the client can also encrypt all statistical information and send it to the control terminal.
  • Step 303 Receive split point information returned by the control terminal, split the respective stored samples according to the split point information, and store the split nodes to build a decision tree.
  • the split point information generated in step 203 is received.
  • the split point information may include split values and split attributes. After re-dividing the sample according to the split attribute and split value, the statistical information is reported in batches.
  • the attribute information of the sample is divided into two types: discrete and continuous. For discrete data, it is split according to the attribute value, and each attribute value corresponds to a split node.
  • For a continuous attribute, the general approach is to sort the data by that attribute and then divide it into several intervals, such as [0,10], [10,20], [20,30], and so on; one interval corresponds to one node, and if the attribute value of a data item falls within a certain interval, the data item belongs to the corresponding node. These nodes constitute the decision tree.
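  • A minimal client-side sketch of this splitting step is given below, assuming samples are stored as dictionaries and the split point information carries the split attribute and split value; the names are illustrative assumptions.

      def split_samples(samples, split_info):
          """Partition locally stored samples by the received split point information:
          continuous attributes go left/right of the split value, discrete attributes
          get one child node per attribute value (sketch)."""
          attr, value = split_info["attribute"], split_info.get("value")
          children = {}
          for sample in samples:
              v = sample[attr]
              if isinstance(v, (int, float)) and value is not None:   # continuous
                  key = "left" if v <= value else "right"
              else:                                                   # discrete
                  key = v
              children.setdefault(key, []).append(sample)
          return children  # each child node stores its subset of the samples

      nodes = split_samples([{"age": 45, "income": 10}, {"age": 25, "income": 22}],
                            {"attribute": "age", "value": 42.4})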
  • the control terminal continues to calculate the split point information, and then sends it to the client until the preset conditions for terminating the establishment of the tree are met.
  • Step 304 Determine whether the node meets a preset condition for terminating tree establishment.
  • the termination condition includes at least one of the following: the sum of the numbers of samples of the same node in the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree that has been built exceeds a preset depth value.
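  • A sketch of how these termination conditions might be evaluated is given below; the parameter names min_samples and max_depth are assumptions chosen for illustration.

      def should_terminate(per_client_node_counts, depth, min_samples=10, max_depth=8):
          """True if tree building should stop at this node: the total number of
          samples for the same node across all clients is below min_samples, or the
          decision tree has already exceeded the preset depth (sketch)."""
          return sum(per_client_node_counts) < min_samples or depth > max_depth

      # e.g. three clients hold 4, 0 and 3 samples at the same node, at depth 5:
      should_terminate([4, 0, 3], depth=5)   # True, because 4 + 0 + 3 < 10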
  • the control terminal can obtain the number of samples under each node of each client. Even if the number of samples under a certain node of a certain client is empty, it can also use the number of non-empty samples under the node of other clients to continue the tree building operation. As shown in Figure 4b.
  • a client with an empty sample number can receive broadcast information from a client or a control terminal with a non-empty sample number, and the broadcast information includes split point information.
  • When the termination condition is met, the participating nodes terminate the tree-building process on the current platform, which ensures that the decision trees established by the participating nodes are the same.
  • Step 305 If the node satisfies the preset tree-building termination condition, output the decision tree.
  • the final decision tree is obtained.
  • When making predictions with the final decision tree, since the decision tree structure established by each client is the same, there is no need for information exchange and only local prediction is required. For example, input the user information age 40, income 10K into the decision tree; it can then be predicted that this user's label is default.
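  • What this purely local prediction could look like is sketched below, assuming each tree node is stored as a dictionary holding the split attribute, split value, and children; the structure and the example tree are illustrative assumptions.

      def predict_label(node, user_info):
          """Walk the locally stored decision tree from the root to a leaf and return
          the leaf's label, with no information exchange between clients (sketch)."""
          while "label" not in node:   # internal node: keep descending
              branch = "left" if user_info[node["attribute"]] <= node["value"] else "right"
              node = node["children"][branch]
          return node["label"]

      tree = {"attribute": "age", "value": 42.4,
              "children": {
                  "left": {"attribute": "income", "value": 12,
                           "children": {"left": {"label": "default"},
                                        "right": {"label": "non-default"}}},
                  "right": {"label": "non-default"}}}
      predict_label(tree, {"age": 40, "income": 10})   # -> "default", as above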
  • Step 306 If the node does not meet the preset conditions for terminating tree building, update the target category according to the split point information, and continue to perform the above tree building steps 302-306 based on the updated target category.
  • the tree-building process is repeated until the established node meets the preset conditions for terminating the tree-building.
  • different sample subsets are randomly selected to generate at least one decision tree.
  • At least one decision tree is formed into a random forest model.
  • A random forest is a classifier that contains multiple decision trees, and its output category is determined by the mode of the categories output by the individual trees. Each time, only a random subset of the samples is used to build a decision tree, and multiple decision trees are formed in this way. For example, a client has 100 samples and generates three decision trees from 50 samples drawn in each of three rounds, which together form a random forest model.
  • the method further includes: receiving user information of the user to be predicted, wherein the user information includes at least one attribute information; voting on the user information through a random forest model, Obtain the label of the user to be predicted.
  • the voting mechanism may be a veto system, simple majority voting (the minority obeys the majority), or weighted majority voting.
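  • The following is a rough sketch of how a client could assemble its single decision trees into a random forest and vote on a prediction with simple majority voting; build_single_tree stands in for the jointly built tree procedure and, like the parameter values, is an assumption of this example.

      import random
      from collections import Counter

      def build_random_forest(samples, build_single_tree, n_trees=3, subset_size=50):
          """Each tree is built from a randomly chosen sample subset (for example 50
          of 100 samples, three times); together the trees form the forest (sketch)."""
          return [build_single_tree(random.sample(samples, min(subset_size, len(samples))))
                  for _ in range(n_trees)]

      def predict(forest, user_info):
          """Majority vote over the labels predicted by the individual trees."""
          votes = Counter(tree(user_info) for tree in forest)
          return votes.most_common(1)[0][0]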
  • the method provided by the above-mentioned embodiments of the present disclosure is a joint modeling model based on the extreme random forest, and introduces the randomness of the extreme random forest into the joint modeling process.
  • the characteristics of the decision tree structure are used to ensure that the decision trees constructed by the multiple participating nodes are identical and unique.
  • FIG. 5 is a schematic diagram of an application scenario of the method for constructing a decision tree according to this embodiment.
  • the control terminal first randomly selects a target category (for example, age) from the candidate category set, and then sends a request for obtaining statistical information of the attribute information of the target category to the client participating in the decision tree training .
  • Each client needs to send the statistical information of the target category's attribute information (such as age) and label (such as default), including the maximum and minimum values, to the control terminal.
  • the control terminal integrates the statistical information to obtain the unified maximum value (upper bound) and minimum value (lower bound) of the attribute information across all clients.
  • the control terminal randomly selects a value between the maximum value and the minimum value as the split value, and calculates the split attribute according to the label statistical information in the statistical information.
  • the split value and the split attribute are combined into split point information and sent to all participating nodes, and the clients construct a decision tree according to the split point information. This building process is repeated until the preset termination conditions are met.
  • Each client can get a single decision tree. By randomly selecting a sample subset, each client can generate multiple decision trees and form a random forest. Use random forest to predict the user's label.
  • With further reference to FIG. 6, the present disclosure provides an embodiment of an apparatus for constructing a decision tree applied to the control end. This apparatus embodiment is similar to the method embodiment shown in FIG. 2, and the apparatus can be specifically applied to various electronic devices.
  • the apparatus 600 for constructing a decision tree in this embodiment includes: a request unit 601, a statistical information receiving unit 602, a splitting unit 603, and a sending unit 604.
  • the request unit 601 is configured to send a request for obtaining statistical information of the attribute information of the target category to at least one client.
  • the statistical information receiving unit 602 is configured to receive the statistical information of the attribute information of the target category of the sample stored by each client.
  • the splitting unit 603 is configured to generate split point information according to the statistical information of the attribute information of the target category of the sample stored in each client.
  • the sending unit 604 is configured to send split point information to at least one client.
  • for the specific processing of the request unit 601, the statistical information receiving unit 602, the splitting unit 603, and the sending unit 604 of the apparatus 600 for constructing a decision tree, reference can be made to step 201, step 202, step 203, and step 204 in the corresponding embodiment of FIG. 2.
  • the statistical information includes the maximum value and the minimum value
  • the split point information includes the split value
  • the splitting unit 603 is further configured to: integrate the maximum values and minimum values of the attribute information of the target category of the samples stored by each client to obtain the system maximum value and the system minimum value of the attribute information of the target category; and select the split value between the system maximum value and the system minimum value.
  • the statistical information further includes label statistical information
  • the split point information also includes split attributes
  • the splitting unit 603 is further configured to: for a candidate category in the candidate category set, obtain, according to the label statistical information of the attribute information of the candidate category of the samples stored by each client, the drop value of the data impurity after splitting according to the candidate category; and determine the candidate category with the largest drop value as the split attribute.
  • the request unit 601 is further configured to: if there is no split attribute, randomly select a category from the candidate category set as the target category; otherwise, determine the split attribute as the target category ; Send a request for obtaining statistical information of the attribute information of the target category to at least one client.
  • the device 600 further includes an encryption and decryption unit (not shown in the drawings), which is configured to communicate with at least one client in an encrypted manner.
  • the present disclosure provides an embodiment in which a device for constructing a decision tree is applied to a client.
  • the device embodiment is similar to the method embodiment shown in FIG. 3.
  • the device can be specifically applied to various electronic devices.
  • the apparatus 700 for constructing a decision tree in this embodiment includes: a request receiving unit 701, a tree building unit 702, and a loop unit 703.
  • the request receiving unit 701 is configured to receive a request sent by the control terminal for obtaining statistical information of the attribute information of the target category.
  • the tree building unit 702 is configured to perform the following tree building steps based on the target category: send the statistical information of the attribute information of the target category of the locally stored sample to the control terminal; receive the split point information returned by the control terminal, and store each according to the split point information The samples of is split, and the split nodes are stored to build a decision tree; if the node meets the preset conditions for terminating tree building, the decision tree is output.
  • the recurring unit 703 is configured to update the target category according to the split point information if the node does not meet the preset tree-building termination condition, and continue to perform the above-mentioned tree-building step based on the updated target category.
  • the termination condition includes at least one of the following: the sum of the numbers of samples of the same node in the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree that has been built exceeds a preset depth value.
  • the tree building unit 702 is further configured to: if the number of samples of the node is empty, receive information broadcasts from nodes whose number of samples is not empty to continue building the decision tree.
  • the statistical information includes label statistical information; and the device further includes an encryption and decryption unit (not shown in the figure), configured to encrypt the label statistical information.
  • the encryption and decryption unit is further configured to encrypt the label statistical information in a homomorphic encryption manner.
  • the device 700 further includes a combining unit (not shown in the drawings), configured to randomly select different sample subsets to generate at least one decision tree, and to compose the at least one decision tree into a random forest model.
  • the apparatus 700 further includes a prediction unit (not shown in the drawings), configured to receive user information of the user to be predicted, where the user information includes at least one kind of attribute information, and to vote on the user information through the random forest model to obtain the label of the user to be predicted.
  • FIG. 8 shows a schematic structural diagram of an electronic device (for example, the control terminal or the client in FIG. 1) 800 suitable for implementing the embodiments of the present disclosure.
  • the terminal device/server shown in FIG. 8 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 800 may include a processing device (such as a central processing unit, a graphics processor, etc.) 801, which may execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803.
  • the RAM 803 also stores various programs and data required for the operation of the electronic device 800.
  • the processing device 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
  • An input/output (I/O) interface 805 is also connected to the bus 804.
  • the following devices can be connected to the I/O interface 805: input devices 806 such as a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 such as a liquid crystal display (LCD), speaker, vibrator, etc.; storage devices 808 such as a magnetic tape and a hard disk; and a communication device 809.
  • the communication device 809 may allow the electronic device 800 to perform wireless or wired communication with other devices to exchange data.
  • FIG. 8 shows an electronic device 800 having various devices, it should be understood that it is not required to implement or have all the illustrated devices. It may alternatively be implemented or provided with more or fewer devices.
  • Each block shown in Figure 8 can represent one device, or can represent multiple devices as needed.
  • the process described above with reference to the flowchart can be implemented as a computer software program.
  • the embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication device 809, or installed from the storage device 808, or installed from the ROM 802.
  • when the computer program is executed by the processing device 801, the above-mentioned functions defined in the method of the embodiments of the present disclosure are executed.
  • the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable read only memory (EPROM or flash memory), optical fiber, portable compact disk read only memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein. This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable signal medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wire, optical cable, RF (Radio Frequency), etc., or any suitable combination of the above.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device: sends a request for obtaining statistical information of attribute information of a target category to at least one client; receives the statistical information of the attribute information of the target category of the samples stored by each client; generates split point information according to the statistical information of the attribute information of the target category of the samples stored by each client; and sends the split point information to the at least one client.
  • Or the electronic device: receives a request sent by the control terminal for obtaining statistical information of the attribute information of the target category; and, based on the target category, performs the following building steps: sending the locally stored statistical information of the attribute information of the target category of the samples to the control terminal; receiving the split point information returned by the control terminal, splitting the respectively stored samples according to the split point information, and storing the nodes obtained by splitting to build a decision tree; if the node meets the preset termination condition, outputting the decision tree; and if the node does not meet the preset termination condition for tree building, updating the target category according to the split point information and continuing the above tree-building steps based on the updated target category.
  • the computer program code for performing the operations of the embodiments of the present disclosure can be written in one or more programming languages or a combination thereof, the programming languages including object-oriented programming languages such as Java, Smalltalk, C++, It also includes conventional procedural programming languages-such as "C" language or similar programming languages.
  • the program code can be executed entirely on the user's computer, partly on the user's computer, executed as an independent software package, partly on the user's computer and partly executed on a remote computer, or entirely executed on the remote computer or server.
  • the remote computer can be connected to the user’s computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, using an Internet service provider to pass Internet connection).
  • each block in the flowchart or block diagram can represent a module, program segment, or part of code, and the module, program segment, or part of code contains one or more for realizing the specified logical function Executable instructions.
  • the functions marked in the block may also occur in a different order from the order marked in the drawings. For example, two blocks shown in succession can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and the combination of the blocks in the block diagram and/or flowchart can be implemented by a dedicated hardware-based system that performs the specified function or operation Or it can be realized by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure can be implemented in software or hardware.
  • the described unit may also be provided in the processor.
  • a processor includes a request unit, a statistical information receiving unit, a splitting unit, and a sending unit.
  • the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • the request unit can also be described as "a unit that sends a request to at least one client for obtaining statistical information of the attribute information of the target category".

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Embodiments of the present disclosure disclose a method and device for constructing a decision tree. A specific implementation of the method comprises: sending a request for obtaining statistical information of attribute information of a target category to at least one client; receiving the statistical information of the attribute information of the target category of the samples respectively stored by each client; generating split point information according to the statistical information of the attribute information of the target category of the samples respectively stored by each client; and sending the split point information to the at least one client. This implementation greatly reduces the content of information interaction, protecting privacy while improving the efficiency of the model.

Description

Method and device for constructing decision trees
This patent application claims priority to Chinese Patent Application No. 201910362975.1, filed on April 30, 2019 and entitled "Method and device for constructing decision trees", the entire content of which is incorporated into this application by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to methods and devices for constructing decision trees.
Background Art
In the era of artificial intelligence, data is like the energy driving the development of an industry. The core of artificial intelligence is to let algorithms learn corresponding models from given data; without sufficient and effective data, artificial intelligence cannot be applied efficiently to people's daily lives. On the other hand, excessive collection of data brings a crisis of privacy and security. Therefore, how to use data reasonably and lawfully to provide people with efficient services while guaranteeing personal privacy has become a research hotspot.
The decision tree is one of the most commonly used algorithms in machine learning. Unlike the inexplicable nature of neural networks, a decision tree provides a reliable and reasonable model explanation through feature importance, which gives government departments and the financial industry a more reliable basis for decision-making. For example, when a bank rejects a loan application or a government approves a certification, the law requires the corresponding department to provide a reliable basis, such as the reason for rejecting or approving the application, and a decision tree can provide such a basis. With the growing awareness of data privacy protection, joint modeling tree models based on multiple data platforms have emerged. Although existing joint tree-building models perform modeling under the premise of protecting user privacy, the data is distributed across different platforms, so the heterogeneity of each platform's samples, the imbalance of the data, and differences in network capability all lead to high communication costs during tree building, which directly affects the performance of the joint modeling tree model, puts great pressure on network communication in practical applications, and cannot satisfy the needs of existing scenarios well.
Summary of the Invention
Embodiments of the present disclosure propose methods and devices for constructing decision trees.
In a first aspect, embodiments of the present disclosure provide a method for constructing a decision tree, applied to a control terminal, comprising: sending a request for obtaining statistical information of attribute information of a target category to at least one client; receiving the statistical information of the attribute information of the target category of the samples respectively stored by each client; generating split point information according to the statistical information of the attribute information of the target category of the samples respectively stored by each client; and sending the split point information to the at least one client.
In some embodiments, the statistical information includes a maximum value and a minimum value, and the split point information includes a split value; and generating split point information according to the statistical information of the attribute information of the target category of the samples respectively stored by each client comprises: integrating the maximum values and minimum values of the attribute information of the target category of the samples respectively stored by each client to obtain a system maximum value and a system minimum value of the attribute information of the target category; and selecting the split value between the system maximum value and the system minimum value.
In some embodiments, the statistical information further includes label statistical information, and the split point information further includes a split attribute; generating split point information according to the statistical information of the attribute information of the target category of the samples respectively stored by each client further comprises: for a candidate category in a candidate category set, obtaining, according to the label statistical information of the attribute information of the candidate category of the samples respectively stored by each client, the drop value of the data impurity after splitting according to the candidate category; and determining the candidate category with the largest drop value as the split attribute.
In some embodiments, sending a request for obtaining statistical information of attribute information of a target category to at least one client comprises: if no split attribute exists, randomly selecting a category from the candidate category set as the target category; otherwise, determining the split attribute as the target category; and sending the request for obtaining statistical information of the attribute information of the target category to the at least one client.
In some embodiments, the method further comprises: communicating with the at least one client in an encrypted manner.
In a second aspect, embodiments of the present disclosure provide a method for constructing a decision tree, applied to a client, comprising: receiving a request sent by a control terminal for obtaining statistical information of attribute information of a target category; and, based on the target category, performing the following tree-building steps: sending statistical information of the attribute information of the target category of locally stored samples to the control terminal; receiving split point information returned by the control terminal, splitting the respectively stored samples according to the split point information, and storing the nodes obtained by splitting to build a decision tree; if a node satisfies a preset termination condition for tree building, outputting the decision tree; and if the node does not satisfy the preset termination condition for tree building, updating the target category according to the split point information and continuing to perform the above tree-building steps based on the updated target category.
In some embodiments, the termination condition for tree building includes at least one of the following: the sum of the numbers of samples of the same node in the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree that has been built exceeds a preset depth value.
In some embodiments, the method further comprises: if the number of samples of a node is empty, receiving an information broadcast from a node whose number of samples is not empty to continue building the decision tree.
In some embodiments, the statistical information includes label statistical information; and the method further comprises: encrypting the label statistical information.
In some embodiments, encrypting the label statistical information comprises: encrypting the label statistical information in a homomorphic encryption manner.
In some embodiments, the method further comprises: randomly selecting different sample subsets to generate at least one decision tree; and composing the at least one decision tree into a random forest model.
In some embodiments, the method further comprises: receiving user information of a user to be predicted, wherein the user information includes at least one kind of attribute information; and voting on the user information through the random forest model to obtain the label of the user to be predicted.
In a third aspect, embodiments of the present disclosure provide a system for constructing a decision tree, comprising a control terminal and at least one client, wherein the control terminal is configured to implement the method of any one of the first aspect, and the at least one client is configured to implement the method of any one of the second aspect.
In a fourth aspect, embodiments of the present disclosure provide an apparatus for constructing a decision tree, applied to a control terminal, comprising: a request unit configured to send a request for obtaining statistical information of attribute information of a target category to at least one client; a statistical information receiving unit configured to receive the statistical information of the attribute information of the target category of the samples respectively stored by each client; a splitting unit configured to generate split point information according to the statistical information of the attribute information of the target category of the samples respectively stored by each client; and a sending unit configured to send the split point information to the at least one client.
In some embodiments, the statistical information includes a maximum value and a minimum value, and the split point information includes a split value; and the splitting unit is further configured to: integrate the maximum values and minimum values of the attribute information of the target category of the samples respectively stored by each client to obtain a system maximum value and a system minimum value of the attribute information of the target category; and select the split value between the system maximum value and the system minimum value.
In some embodiments, the statistical information further includes label statistical information, and the split point information further includes a split attribute; the splitting unit is further configured to: for a candidate category in a candidate category set, obtain, according to the label statistical information of the attribute information of the candidate category of the samples respectively stored by each client, the drop value of the data impurity after splitting according to the candidate category; and determine the candidate category with the largest drop value as the split attribute.
In some embodiments, the request unit is further configured to: if no split attribute exists, randomly select a category from the candidate category set as the target category; otherwise, determine the split attribute as the target category; and send the request for obtaining statistical information of the attribute information of the target category to the at least one client.
In some embodiments, the apparatus further comprises an encryption and decryption unit configured to communicate with the at least one client in an encrypted manner.
In a fifth aspect, embodiments of the present disclosure provide an apparatus for constructing a decision tree, applied to a client, comprising: a request receiving unit configured to receive a request sent by a control terminal for obtaining statistical information of attribute information of a target category; a tree building unit configured to perform, based on the target category, the following tree-building steps: sending statistical information of the attribute information of the target category of locally stored samples to the control terminal; receiving split point information returned by the control terminal, splitting the respectively stored samples according to the split point information, and storing the nodes obtained by splitting to build a decision tree; and, if a node satisfies a preset termination condition for tree building, outputting the decision tree; and a loop unit configured to, if the node does not satisfy the preset termination condition for tree building, update the target category according to the split point information and continue to perform the above tree-building steps based on the updated target category.
In some embodiments, the termination condition for tree building includes at least one of the following: the sum of the numbers of samples of the same node in the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree that has been built exceeds a preset depth value.
In some embodiments, the tree building unit is further configured to: if the number of samples of a node is empty, receive an information broadcast from a node whose number of samples is not empty to continue building the decision tree.
In some embodiments, the statistical information includes label statistical information; and the apparatus further comprises an encryption and decryption unit configured to encrypt the label statistical information.
In some embodiments, the encryption and decryption unit is further configured to: encrypt the label statistical information in a homomorphic encryption manner.
In some embodiments, the apparatus further comprises a combination unit configured to: randomly select different sample subsets to generate at least one decision tree; and compose the at least one decision tree into a random forest model.
In some embodiments, the apparatus further comprises a prediction unit configured to: receive user information of a user to be predicted, wherein the user information includes at least one kind of attribute information; and vote on the user information through the random forest model to obtain the label of the user to be predicted.
In a sixth aspect, embodiments of the present disclosure provide an electronic device, comprising: one or more processors; and a storage device on which one or more programs are stored, which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of the first aspect.
In a seventh aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of the first aspect.
The methods and devices for constructing a decision tree provided by the embodiments of the present disclosure adjust the tree-building process of existing joint modeling tree models on the basis of the parallel prediction algorithm of the multi-data-platform joint modeling tree model, introduce the extreme random forest and improve upon it, greatly reducing the content of information interaction, protecting privacy while improving the efficiency of the model, and making broad deployment of joint modeling tree models possible.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture diagram in which an embodiment of the present disclosure can be applied;
FIG. 2 is a flowchart of an embodiment of a method for constructing a decision tree according to the present disclosure;
FIG. 3 is a flowchart of another embodiment of a method for constructing a decision tree according to the present disclosure;
FIGS. 4a and 4b are schematic diagrams of split point selection in the method for constructing a decision tree according to the present disclosure;
FIG. 5 is a schematic diagram of an application scenario of the method for constructing a decision tree according to the present disclosure;
FIG. 6 is a schematic structural diagram of an embodiment of an apparatus for constructing a decision tree according to the present disclosure;
FIG. 7 is a schematic structural diagram of another embodiment of an apparatus for constructing a decision tree according to the present disclosure;
FIG. 8 is a schematic structural diagram of a computer system suitable for implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
The present disclosure is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the relevant invention, not to limit that invention. It should also be noted that, for ease of description, only the parts related to the relevant invention are shown in the drawings.
It should be noted that the embodiments in the present disclosure and the features in the embodiments may be combined with each other as long as there is no conflict. The present disclosure will be described in detail below with reference to the drawings and in conjunction with the embodiments.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the method for constructing a decision tree or the apparatus for constructing a decision tree of the present disclosure can be applied.
As shown in FIG. 1, the system architecture 100 may include clients 101, 102, 103 and a control terminal 104. A network serves as the medium for providing communication links between the clients 101, 102, 103 and the control terminal 104. The network may include various connection types, such as wired or wireless communication links, or fiber optic cables.
A user may use the clients 101, 102, 103 to interact with the control terminal 104 through the network to receive or send messages and so on. The clients 101, 102, 103 may store samples for training decision trees. Each sample includes attribute information and a label. The statistical information of the attribute information and the statistical information of the labels can be obtained by neural network or statistical methods. The labels can be encrypted into ciphertext form and then reported to the control terminal together with the statistical information of the attribute information.
A decision tree is a predictive model. It represents a mapping relationship between object attributes and object values. A forked path in the tree represents a possible attribute value, and each leaf node corresponds to the value of the object represented by the path from the root node to that leaf node. A decision tree has only a single output; if multiple outputs are desired, independent trees can be built to generate the different outputs.
When multiple data owners (for example, enterprises, governments, and other institutions) want to jointly train a machine learning model with their respective data, a unified model must be built while the original data owned by each party does not leave its premises, and the gap between the effect of this model and that of a model trained on aggregated data must be sufficiently small.
The application scenario of the present disclosure is a branch of federated learning: horizontal federated learning. Horizontal federated learning requires that the user features contained by each platform are basically the same, while the user samples differ. Take the regional bank loan business as an example: regional bank A holds some customers' age information, asset information, wealth management and fund product information, loan repayment information, etc., and these data are stored in the client 101. Regional bank B holds information with the same features about other customers, stored in the client 102. Regional bank C holds information with the same features about yet other customers, stored in the client 103. However, the data owned by regional banks A, B, and C individually is not sufficient to construct a complete and reliable discriminant model for deciding whether to grant a loan to a given customer. Banks A, B, and C therefore all wish to use each other's data for joint modeling, but legal constraints prevent the parties' data from being aggregated together. In this case, modeling based on multiple data platforms without exchanging the original data is the key method for solving this problem.
The control terminal 104 may receive the statistical information of the attribute information sent by the clients 101, 102, 103. The control terminal 104 may analyze and otherwise process the received data such as the statistical information of the attribute information (decrypting it first if it is encrypted), and feed the processing results (for example, the split point and the split attribute) back to the clients. The clients use the split points and split attributes to build a decision tree. Each client may randomly use subsets of its samples to generate multiple single decision trees, and these single decision trees are then integrated by voting to form a random forest model.
Decision trees, as well as the random forests and tree-based GBMs (gradient boosting models) derived from them, are composed of basic single decision trees. In the process of building a decision tree, finding the optimal split point, that is, the one after which the data impurity (which may be the Gini coefficient, information gain, etc.) drops the most, carries the largest computational cost. The parallel algorithm of the joint modeling tree model based on multiple data platforms likewise needs to spend a great deal of time searching for the optimal split point, so how to reduce the computational cost of finding the optimal split point becomes one of the problems to be solved. The extreme random forest in ensemble learning applies randomness to the selection of split points. The effectiveness of random forests is mainly reflected in the reduction of variance; multiple decision trees built from sub-data selected with replacement perform majority voting to decide the final prediction result. The extreme random forest uses the same implementation, except that its randomness lies in the selection of split points: it does not emphasize the optimal split point of a single decision tree, but instead makes an integrated judgment through multiple decision trees, reducing the judgment error overall.
需要说明的是,控制端可以是硬件,也可以是软件。当控制端为硬件时,可以实现成多个控制端组成的分布式控制端集群,也可以实现成单个控制端。当控制端为软件时,可以实现成多个软件或软件模块(例如用来提供分布式服务的多个软件或软件模块),也可以实现成单个软件或软件模块。在此不做具体限定。
需要说明的是,本公开的实施例所提供的用于构建决策树的方法可以由客户端101、102、103和控制端104共同执行。相应地,用于构建决策树的装置可以设置于客户端101、102、103和控制端104中。在此不做具体限定。
应该理解,图1中的客户端和控制端的数目仅仅是示意性的。根据实现需要,可以具有任意数目的客户端和控制端。
With continued reference to Fig. 2, a flow 200 of an embodiment of the method for constructing a decision tree applied to a control terminal according to the present disclosure is shown. The method for constructing a decision tree includes the following steps:
Step 201: sending, to at least one client, a request for obtaining statistical information of attribute information of a target category.
In this embodiment, the executing body of the method for constructing a decision tree (for example, the control terminal shown in Fig. 1) may send the request for obtaining statistical information of attribute information of the target category to at least one client through a wired or wireless connection. The attribute information may include features such as {age, income} = {32, 22K}. The target category refers to a category of the attribute information of a sample, for example, age or income. For the first split, the control terminal may randomly select one category from multiple candidate categories as the target category.
Optionally, after the splitting attribute is obtained in step 203, the splitting attribute is determined as the target category.
Step 202: receiving the statistical information of the attribute information of the target category of the samples stored by each client.
In this embodiment, the approach of the present disclosure does not require the specific attribute information, only statistical information of the attribute information. Each client only needs to send the statistical information of a given piece of attribute information to the control terminal. For example, for the age feature of client A's data the maximum is 60 and the minimum is 40, while for the age feature of client B's data the minimum is 20 and the maximum is 50; A and B then only need to send these values to the control terminal respectively. A sketch of this client-side computation is given below.
The statistical information may further include label statistical information of at least one category. The label statistical information may include label counts and label proportions, for example, the label counts and label proportions of samples that belong to the target category and to another category.
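Purely as an illustration of the statistics described in steps 201-202 (not as the disclosed implementation), the following Python sketch computes the values a client might report for one attribute category; the sample layout and the field names "age" and "default" are assumptions made for the example:

```python
from collections import Counter

def local_statistics(samples, target_category, label_key="default"):
    """Compute what a client reports for one attribute category:
    the local minimum, maximum, and per-label counts/proportions."""
    values = [s[target_category] for s in samples]
    labels = Counter(s[label_key] for s in samples)
    total = sum(labels.values())
    return {
        "min": min(values),
        "max": max(values),
        "label_counts": dict(labels),
        "label_ratio": {k: v / total for k, v in labels.items()},
    }

# Toy example: a client's local age statistics (hypothetical data).
client_a = [{"age": 40, "default": 1}, {"age": 60, "default": 0}]
print(local_statistics(client_a, "age"))  # {'min': 40, 'max': 60, ...}
```

Only this summary, never the raw sample values, would leave the client in the scheme described above.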
Optionally, the statistical information received by the control terminal may be encrypted and needs to be decrypted before use.
Step 203: generating splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client.
In this embodiment, the splitting point information may include a splitting value and a splitting attribute. The maximum values and minimum values of the attribute information of the target category of the samples stored by each client are integrated to obtain a splitting maximum and a splitting minimum of the attribute information of the target category, and a splitting value is selected between the splitting maximum and the splitting minimum. The splitting value may be chosen at random between the splitting maximum and the splitting minimum; the average of the splitting maximum and the splitting minimum may also be used as the splitting value; or an intermediate value may be determined as the splitting value according to the maximum and minimum values, and their counts, uploaded by the different clients. The way the splitting value is chosen is not limited here. If the received data is encrypted, the control terminal needs to decrypt it before use. As shown in Fig. 4a, the control terminal decrypts and processes the received encrypted data and obtains a splitting minimum of 20 and a splitting maximum of 60, then randomly selects a splitting value of 42.4 between 20 and 60, and sends the corresponding splitting value down to the nodes in which each client participates; each client partitions its sample data according to the received splitting value. To keep the information secure, the splitting value may be encrypted before being sent to the clients.
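For illustration only, a minimal sketch of how the control terminal could integrate the reported minima and maxima and draw a random splitting value, mirroring the 20-to-60 age example above; the structure of the reports is an assumption, and the random choice is only one of the selection strategies mentioned:

```python
import random

def choose_split_value(reports, rng=None):
    """Integrate per-client (min, max) reports into a global splitting
    minimum/maximum and draw a splitting value uniformly between them."""
    rng = rng or random.Random(0)
    split_min = min(r["min"] for r in reports)
    split_max = max(r["max"] for r in reports)
    return split_min, split_max, rng.uniform(split_min, split_max)

# Hypothetical reports from two clients (client A and client B above).
reports = [{"min": 40, "max": 60}, {"min": 20, "max": 50}]
print(choose_split_value(reports))  # (20, 60, <value between 20 and 60>)
```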
When encrypting the statistical information of the attribute information, the above scheme may use, in addition to conventional key-based encryption, other encryption methods such as homomorphic encryption, so that the control terminal cannot directly access the data content.
In some optional implementations of this embodiment, for each candidate category in the candidate category set, the decrease in data impurity after splitting by that candidate category is obtained according to the label statistical information of the attribute information of that candidate category of the samples stored by each client, and the candidate category with the largest decrease is determined as the splitting attribute.
The control terminal integrates the label statistical information reported by the at least one client to obtain splitting label statistical information. Taking a default label as an example, for the label statistical information of attribute information belonging to the target category and to category X: client A reports 50 defaults, 50 non-defaults and a default proportion of 50%; client B reports 30 defaults, 70 non-defaults and a default proportion of 30%; after integration the splitting label statistical information is 80 defaults, 120 non-defaults and a default proportion of 40%. For the label statistical information of attribute information belonging to the target category and to category Y: client A reports 40 defaults, 60 non-defaults and a default proportion of 40%; client B reports 20 defaults, 80 non-defaults and a default proportion of 20%; after integration the splitting label statistical information is 60 defaults, 140 non-defaults and a default proportion of 30%.
Impurity may be expressed by the Gini value as follows:
Gini = 1 - Σ_{i=1}^{n} P_i^2
where Gini denotes the Gini value, P_i denotes the proportion of samples in class i, and n denotes the number of classes. Taking binary classification as an example, when the two classes are equal in number the Gini value equals 0.5; when the data at a node all belong to the same class the Gini value equals 0. The larger the Gini value, the more impure the data.
The initial impurity is computed from the label statistical information reported for the target category. The expected impurity is then computed in turn for each candidate category in the candidate category set. The difference between the initial impurity and the expected impurity is taken as the decrease in impurity, and the candidate category with the largest decrease is determined as the splitting attribute. For specific implementations, reference may be made to the prior art, which is not repeated here.
For the above example, from the label statistical information of attribute information belonging to the target category and to category X, the Gini value is 1 - (0.4^2 + 0.6^2) = 0.48. From the label statistical information of attribute information belonging to the target category and to category Y, the Gini value is 1 - (0.3^2 + 0.7^2) = 0.42. Thus the impurity of attribute information belonging to the target category and to category X is greater than that of attribute information belonging to the target category and to category Y. Their initial impurity is the same, so category Y, which has the smaller impurity, is selected as the splitting attribute.
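The following short sketch reproduces the impurity computation just described; the aggregated counts mirror the X/Y example, and the helper names are illustrative only. Because the initial impurity is the same for every candidate, choosing the candidate with the smallest post-split Gini value is equivalent to choosing the largest decrease:

```python
def gini(counts):
    """Gini value 1 - sum(p_i^2) for a dict of label counts."""
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

def pick_split_attribute(aggregated):
    """Choose the candidate category whose split yields the smallest
    post-split Gini value, i.e. the largest impurity decrease here."""
    return min(aggregated, key=lambda cat: gini(aggregated[cat]))

# Aggregated label statistics from the example above (80/120 vs 60/140).
aggregated = {
    "X": {"default": 80, "ok": 120},   # Gini = 1 - (0.4^2 + 0.6^2) = 0.48
    "Y": {"default": 60, "ok": 140},   # Gini = 1 - (0.3^2 + 0.7^2) = 0.42
}
print(gini(aggregated["X"]), gini(aggregated["Y"]))  # 0.48 0.42
print(pick_split_attribute(aggregated))              # 'Y'
```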
Step 204: sending the splitting point information to the at least one client.
In this embodiment, the computed splitting point information may be sent to every client that reported statistical information, and each client builds the decision tree according to the splitting point information; the decision trees built by all clients are identical. When a client determines that the condition for terminating tree building is not yet satisfied, it continues to report, batch by batch, the statistical information of the attribute information of the re-partitioned samples; the control terminal then performs steps 202-204 again to generate splitting point information batch by batch and returns it to the clients.
With continued reference to Fig. 3, a flow 300 of an embodiment of the method for constructing a decision tree applied to a client according to the present disclosure is shown. The method for constructing a decision tree includes the following steps:
Step 301: receiving a request sent by a control terminal for obtaining statistical information of attribute information of a target category.
In this embodiment, the executing body of the method for constructing a decision tree (for example, a client shown in Fig. 1) may receive the request for obtaining statistical information of attribute information of the target category from the control terminal through a wired or wireless connection. The attribute information may include features such as {age, income} = {32, 22K}. The target category refers to a category of the attribute information of a sample, for example, age or income. The control terminal may randomly select one category from multiple candidate categories as the target category for the first split.
Based on the target category, the following tree building steps 302-306 are performed:
Step 302: sending the statistical information of the attribute information of the target category of the locally stored samples to the control terminal.
In this embodiment, the approach of the present disclosure does not require the specific attribute information, only statistical information of the attribute information. Each client only needs to send the statistical information of a given piece of attribute information to the control terminal. For example, for the age feature of client A's data the maximum is 60 and the minimum is 40, while for the age feature of client B's data the minimum is 20 and the maximum is 50; A and B then only need to send these values to the control terminal respectively.
The statistical information may further include label statistical information of at least one category. The label statistical information may include label counts and label proportions, for example, the label counts and label proportions of samples that belong to the target category and to another category.
Optionally, for information that leaks easily, a client may encrypt it before sending it to the control terminal, for example, label information such as default flags and loan flags. A client may encrypt the count and/or proportion of labels of samples under a given category and then send them to the control terminal. For example, client A has 20 default samples and 30 non-default samples under the age category; the statistical information, namely a default proportion of 40%, 20 defaults and 30 non-defaults, is encrypted together and sent to the control terminal. A client may also encrypt all the statistical information together before sending it to the control terminal.
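As a hedged illustration of encrypting label statistics before reporting, the sketch below uses the third-party python-paillier package (`phe`) as one possible additively homomorphic scheme; the disclosure does not mandate this library, and the key arrangement shown (a single keypair whose private key sits with the party allowed to see aggregates) is an assumption made for the example:

```python
# Assumption: python-paillier (pip install phe), an additively
# homomorphic Paillier implementation, stands in for "homomorphic encryption".
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# Client side: encrypt local default / non-default counts before reporting.
enc_defaults = public_key.encrypt(20)
enc_non_defaults = public_key.encrypt(30)

# Additive homomorphism lets counts from several clients be summed
# without decrypting any individual contribution.
enc_total_defaults = enc_defaults + public_key.encrypt(30)  # hypothetical client B count

# Only the private-key holder decrypts the aggregate.
print(private_key.decrypt(enc_total_defaults))  # 50
```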
Step 303: receiving the splitting point information returned by the control terminal, splitting the locally stored samples according to the splitting point information, and storing the resulting nodes to build a decision tree.
In this embodiment, the splitting point information generated in step 203 is received. The splitting point information may include a splitting value and a splitting attribute. After the samples are re-partitioned according to the splitting attribute and the splitting value, statistical information is again reported batch by batch. The attribute information of a sample may be discrete or continuous. For discrete data, splitting is performed by attribute value, with each attribute value corresponding to one split node. For continuous attributes, the usual practice is to sort the data by that attribute and divide it into several intervals, such as [0,10], [10,20], [20,30], ..., with one interval corresponding to one node; if a data item's attribute value falls into an interval, the item belongs to the corresponding node. These nodes make up the decision tree. The control terminal keeps computing splitting point information and sending it to the clients until the preset condition for terminating tree building is satisfied.
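An illustrative sketch of how a client might partition its samples once the splitting point information arrives; the message keys "attribute" and "value" are assumptions about the format, and a simple two-way threshold split stands in for the continuous case:

```python
from collections import defaultdict

def split_samples(samples, split_point):
    """Partition local samples by the splitting point information.
    A continuous attribute splits on a threshold value; a discrete
    attribute splits into one child node per attribute value."""
    attr = split_point["attribute"]
    children = defaultdict(list)
    if "value" in split_point:                      # continuous attribute
        threshold = split_point["value"]
        for s in samples:
            side = "left" if s[attr] <= threshold else "right"
            children[side].append(s)
    else:                                           # discrete attribute
        for s in samples:
            children[s[attr]].append(s)
    return dict(children)

# Hypothetical local samples split on age at the value 42.4 from Fig. 4a.
samples = [{"age": 25, "default": 0}, {"age": 55, "default": 1}]
print(split_samples(samples, {"attribute": "age", "value": 42.4}))
```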
Step 304: determining whether a node satisfies the preset condition for terminating tree building.
In this embodiment, the condition for terminating tree building includes at least one of the following: the sum of the numbers of samples at the same node across the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree built so far exceeds a preset depth value. The control terminal can obtain the number of samples at each node of each client; even if the number of samples at some node of one client is zero, the non-zero sample counts at that node on other clients can still be used to continue the tree building operation, as shown in Fig. 4b. A client whose sample count is zero can receive broadcast information, which includes the splitting point information, from clients whose sample counts are non-zero or from the control terminal.
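A small sketch of the two stop conditions just listed, assuming the control terminal has collected per-client sample counts for one node; the parameter names and values are illustrative only:

```python
def should_stop(per_client_counts, depth, min_samples=10, max_depth=5):
    """Stop when the summed sample count of the same node across all
    clients falls below min_samples, or the tree depth exceeds max_depth."""
    return sum(per_client_counts) < min_samples or depth > max_depth

print(should_stop([0, 4, 3], depth=2))   # True: 0 + 4 + 3 = 7 < 10
print(should_stop([50, 60], depth=6))    # True: depth 6 exceeds 5
```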
Once the condition for terminating tree building is satisfied, the participating nodes terminate the tree building process on the current platform, which ensures that the decision trees built by the participating nodes are identical.
Step 305: outputting the decision tree if the node satisfies the preset condition for terminating tree building.
In this embodiment, if the node satisfies the preset condition for terminating tree building, the final decision tree is obtained. When this final decision tree is used for sample prediction, since the decision trees built by the clients have identical structures, no information exchange is required and prediction is done entirely locally. For example, inputting user information of age 40 and income 10K into the decision tree may yield the predicted label "default".
Step 306: if the node does not satisfy the preset condition for terminating tree building, updating the target category according to the splitting point information, and continuing to perform the above tree building steps 302-306 based on the updated target category.
In this embodiment, the tree building process is repeated until the nodes built satisfy the preset condition for terminating tree building.
In some optional implementations of this embodiment, different sample subsets are randomly selected to generate at least one decision tree, and the at least one decision tree is combined into a random forest model. A random forest is a classifier containing multiple decision trees, whose output class is the mode of the classes output by the individual trees. Each time, only a random subset of the samples is taken to build one decision tree, and multiple decision trees are formed in this way. For example, a client with 100 samples may build three decision trees from 50 samples each in three rounds, forming a random forest model.
In some optional implementations of this embodiment, the method further includes: receiving user information of a user to be predicted, where the user information includes at least one kind of attribute information; and voting on the user information with the random forest model to obtain the label of the user to be predicted. At prediction time, the information to be predicted is input into the three decision trees of the above example, the individual results are obtained, and a vote decides which class the data belongs to (the voting mechanism may be a one-vote veto, simple majority, or weighted majority). A sketch of the subset sampling and voting is given below.
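To make the subset sampling and majority voting concrete, the sketch below uses scikit-learn's DecisionTreeClassifier purely as a stand-in for the locally built trees; in the disclosure itself the trees are built through the control-terminal protocol described above, and the toy data and parameters are assumptions:

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 samples, 2 attributes
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy default / non-default label

# Randomly select different sample subsets and build one tree per subset.
trees = []
for _ in range(3):
    idx = rng.choice(len(X), size=50, replace=False)
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

def forest_predict(trees, x):
    """Majority vote of the individual trees (the mode of their outputs)."""
    votes = [int(t.predict(x.reshape(1, -1))[0]) for t in trees]
    return Counter(votes).most_common(1)[0][0]

print(forest_predict(trees, np.array([0.5, 0.2])))
```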
The method provided by the above embodiments of the present disclosure, a joint modeling scheme based on extremely randomized forests, brings the randomness of extremely randomized forests into the joint modeling process and, on the premise of ensuring the privacy of user data, uses the structural characteristics of decision trees to guarantee that the decision trees built by the multiple participating nodes are identical and unique.
With continued reference to Fig. 5, Fig. 5 is a schematic diagram of an application scenario of the method for constructing a decision tree according to this embodiment. In the application scenario of Fig. 5, the control terminal first randomly selects a target category (for example, age) from the candidate category set and then sends, to the clients participating in decision tree training, a request for obtaining statistical information of attribute information of the target category. Each client sends the statistical information of the attribute information (for example, age) and of the labels (for example, default) of the target category, including the maximum and minimum values, to the control terminal. After receiving the statistical information, the control terminal integrates it to obtain the maximum (upper bound) and minimum (lower bound) of the unified attribute information across all clients. The control terminal then randomly selects a value between the maximum and the minimum as the splitting value, computes the splitting attribute from the label statistical information contained in the statistical information, combines the splitting value and the splitting attribute into splitting point information, and sends it down to all participating nodes; the clients construct the decision tree according to the splitting point information. The tree building process is repeated until the nodes built satisfy the set parameter conditions. Each client thus obtains a single decision tree. By randomly selecting sample subsets, each client can generate multiple decision trees and combine them into a random forest, which is used to predict users' labels.
With further reference to Fig. 6, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for constructing a decision tree applied to a control terminal. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 6, the apparatus 600 for constructing a decision tree of this embodiment includes: a request unit 601, a statistical information receiving unit 602, a splitting unit 603 and a sending unit 604. The request unit 601 is configured to send, to at least one client, a request for obtaining statistical information of attribute information of a target category. The statistical information receiving unit 602 is configured to receive the statistical information of the attribute information of the target category of the samples stored by each client. The splitting unit 603 is configured to generate splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client. The sending unit 604 is configured to send the splitting point information to the at least one client.
In this embodiment, for the specific processing of the request unit 601, the statistical information receiving unit 602, the splitting unit 603 and the sending unit 604 of the apparatus 600 for constructing a decision tree, reference may be made to steps 201, 202, 203 and 204 in the embodiment corresponding to Fig. 2.
In some optional implementations of this embodiment, the statistical information includes a maximum value and a minimum value, and the splitting point information includes a splitting value; and the splitting unit 603 is further configured to: integrate the maximum values and minimum values of the attribute information of the target category of the samples stored by each client to obtain a splitting maximum and a splitting minimum of the attribute information of the target category; and select a splitting value between the splitting maximum and the splitting minimum.
In some optional implementations of this embodiment, the statistical information further includes label statistical information, and the splitting point information further includes a splitting attribute; the splitting unit 603 is further configured to: for each candidate category in the candidate category set, obtain, according to the label statistical information of the attribute information of that candidate category of the samples stored by each client, the decrease in data impurity after splitting by that candidate category; and determine the candidate category with the largest decrease as the splitting attribute.
In some optional implementations of this embodiment, the request unit 601 is further configured to: if no splitting attribute exists, randomly select a category from the candidate category set as the target category; otherwise, determine the splitting attribute as the target category; and send, to the at least one client, a request for obtaining statistical information of attribute information of the target category.
In some optional implementations of this embodiment, the apparatus 600 further includes an encryption/decryption unit (not shown in the drawings) configured to communicate with the at least one client in an encrypted manner.
With further reference to Fig. 7, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an apparatus for constructing a decision tree applied to a client. This apparatus embodiment corresponds to the method embodiment shown in Fig. 3, and the apparatus may be applied to various electronic devices.
As shown in Fig. 7, the apparatus 700 for constructing a decision tree of this embodiment includes: a request receiving unit 701, a tree building unit 702 and a looping unit 703. The request receiving unit 701 is configured to receive a request sent by a control terminal for obtaining statistical information of attribute information of a target category. The tree building unit 702 is configured to perform, based on the target category, the following tree building step: sending the statistical information of the attribute information of the target category of the locally stored samples to the control terminal; receiving the splitting point information returned by the control terminal, splitting the locally stored samples according to the splitting point information, and storing the resulting nodes to build a decision tree; and outputting the decision tree if a node satisfies a preset condition for terminating tree building. The looping unit 703 is configured to, if the node does not satisfy the preset condition for terminating tree building, update the target category according to the splitting point information and continue the above tree building step based on the updated target category.
In some optional implementations of this embodiment, the condition for terminating tree building includes at least one of the following: the sum of the numbers of samples at the same node across the at least one client controlled by the control terminal is less than a predetermined parameter value; or the depth of the decision tree built so far exceeds a preset depth value.
In some optional implementations of this embodiment, the tree building unit 702 is further configured to: if the number of samples at a node is zero, receive an information broadcast from a node whose number of samples is non-zero and continue building the decision tree.
In some optional implementations of this embodiment, the statistical information includes label statistical information; and the apparatus further includes an encryption/decryption unit (not shown in the drawings) configured to encrypt the label statistical information.
In some optional implementations of this embodiment, the encryption/decryption unit is further configured to encrypt the label statistical information by means of homomorphic encryption.
In some optional implementations of this embodiment, the apparatus 700 further includes a combining unit (not shown in the drawings) configured to: randomly select different sample subsets to generate at least one decision tree; and combine the at least one decision tree into a random forest model.
In some optional implementations of this embodiment, the apparatus 700 further includes a prediction unit (not shown in the drawings) configured to: receive user information of a user to be predicted, where the user information includes at least one kind of attribute information; and vote on the user information with the random forest model to obtain the label of the user to be predicted.
Referring now to Fig. 8, a schematic structural diagram of an electronic device 800 (for example, the control terminal or a client in Fig. 1) suitable for implementing embodiments of the present disclosure is shown. The terminal device/server shown in Fig. 8 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 8, the electronic device 800 may include a processing apparatus (for example, a central processing unit, a graphics processing unit, etc.) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing apparatus 801, the ROM 802 and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Generally, the following apparatuses may be connected to the I/O interface 805: an input apparatus 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output apparatus 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; a storage apparatus 808 including, for example, a magnetic tape, a hard disk, etc.; and a communication apparatus 809. The communication apparatus 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 8 shows the electronic device 800 having various apparatuses, it should be understood that it is not required to implement or possess all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided. Each block shown in Fig. 8 may represent one apparatus or multiple apparatuses as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network via the communication apparatus 809, or installed from the storage apparatus 808, or installed from the ROM 802. When the computer program is executed by the processing apparatus 801, the above-described functions defined in the methods of the embodiments of the present disclosure are performed. It should be noted that the computer-readable medium described in the embodiments of the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device. In the embodiments of the present disclosure, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, which can send, propagate or transmit a program for use by or in combination with an instruction execution system, apparatus or device. The program code contained in the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: a wire, an optical cable, RF (radio frequency), etc., or any suitable combination of the above.
The computer-readable medium may be contained in the above electronic device, or it may exist alone without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: send, to at least one client, a request for obtaining statistical information of attribute information of a target category; receive the statistical information of the attribute information of the target category of the samples stored by each client; generate splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client; and send the splitting point information to the at least one client. Or the programs cause the electronic device to: receive a request sent by a control terminal for obtaining statistical information of attribute information of a target category; based on the target category, perform the following tree building step: sending the statistical information of the attribute information of the target category of the locally stored samples to the control terminal; receiving the splitting point information returned by the control terminal, splitting the locally stored samples according to the splitting point information, and storing the resulting nodes to build a decision tree; and outputting the decision tree if a node satisfies a preset condition for terminating tree building; and if the node does not satisfy the preset condition for terminating tree building, updating the target category according to the splitting point information and continuing to perform the above tree building step based on the updated target category.
Computer program code for performing the operations of the embodiments of the present disclosure may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In cases involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the architectures, functions and operations of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor; for example, a processor may be described as including a request unit, a statistical information receiving unit, a splitting unit and a sending unit. The names of these units do not, in some cases, limit the units themselves; for example, the request unit may also be described as "a unit that sends, to at least one client, a request for obtaining statistical information of attribute information of a target category".
The above description is merely of preferred embodiments of the present disclosure and an explanation of the technical principles employed. It should be understood by those skilled in the art that the scope of the invention involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.

Claims (18)

  1. A method for constructing a decision tree, applied to a control terminal, comprising:
    sending, to at least one client, a request for obtaining statistical information of attribute information of a target category;
    receiving the statistical information of the attribute information of the target category of the samples stored by each client;
    generating splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client; and
    sending the splitting point information to the at least one client.
  2. The method according to claim 1, wherein the statistical information comprises a maximum value and a minimum value, and the splitting point information comprises a splitting value; and
    the generating splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client comprises:
    integrating the maximum values and minimum values of the attribute information of the target category of the samples stored by each client to obtain a splitting maximum and a splitting minimum of the attribute information of the target category; and
    selecting a splitting value between the splitting maximum and the splitting minimum.
  3. The method according to claim 2, wherein the statistical information further comprises label statistical information, and the splitting point information further comprises a splitting attribute;
    the generating splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client further comprises:
    for a candidate category in a candidate category set, obtaining, according to the label statistical information of the attribute information of the candidate category of the samples stored by each client, a decrease in data impurity after splitting by the candidate category; and
    determining the candidate category with the largest decrease as the splitting attribute.
  4. The method according to claim 3, wherein the sending, to the at least one client, a request for obtaining statistical information of attribute information of a target category comprises:
    if no splitting attribute exists, randomly selecting a category from the candidate category set as the target category;
    otherwise, determining the splitting attribute as the target category; and
    sending, to the at least one client, the request for obtaining statistical information of attribute information of the target category.
  5. The method according to any one of claims 1-4, wherein the method further comprises:
    communicating with the at least one client in an encrypted manner.
  6. A method for constructing a decision tree, applied to a client, comprising:
    receiving a request sent by a control terminal for obtaining statistical information of attribute information of a target category;
    based on the target category, performing the following tree building step: sending the statistical information of the attribute information of the target category of locally stored samples to the control terminal; receiving splitting point information returned by the control terminal, splitting the respectively stored samples according to the splitting point information, and storing the resulting nodes to build a decision tree; and outputting the decision tree if the node satisfies a preset condition for terminating tree building; and
    if the node does not satisfy the preset condition for terminating tree building, updating the target category according to the splitting point information, and continuing to perform the above tree building step based on the updated target category.
  7. The method according to claim 6, wherein the condition for terminating tree building comprises at least one of the following:
    a sum of the numbers of samples at the same node across the at least one client controlled by the control terminal is less than a predetermined parameter value; or
    a depth of the decision tree built so far exceeds a preset depth value.
  8. The method according to claim 6, wherein the method further comprises:
    if the number of samples at the node is zero, receiving an information broadcast from a node whose number of samples is non-zero and continuing to build the decision tree.
  9. The method according to claim 6, wherein the statistical information comprises label statistical information;
    and the method further comprises:
    encrypting the label statistical information.
  10. The method according to any one of claims 6-9, wherein the method further comprises:
    randomly selecting different sample subsets to generate at least one decision tree; and
    combining the at least one decision tree into a random forest model.
  11. A system for constructing a decision tree, comprising a control terminal and at least one client, wherein
    the control terminal is configured to implement the method according to any one of claims 1-5; and
    the at least one client is configured to implement the method according to any one of claims 6-10.
  12. An apparatus for constructing a decision tree, applied to a control terminal, comprising:
    a request unit configured to send, to at least one client, a request for obtaining statistical information of attribute information of a target category;
    a statistical information receiving unit configured to receive the statistical information of the attribute information of the target category of the samples stored by each client;
    a splitting unit configured to generate splitting point information according to the statistical information of the attribute information of the target category of the samples stored by each client; and
    a sending unit configured to send the splitting point information to the at least one client.
  13. The apparatus according to claim 12, wherein the apparatus further comprises an encryption/decryption unit configured to:
    communicate with the at least one client in an encrypted manner.
  14. An apparatus for constructing a decision tree, applied to a client, comprising:
    a request receiving unit configured to receive a request sent by a control terminal for obtaining statistical information of attribute information of a target category;
    a tree building unit configured to perform, based on the target category, the following tree building step: sending the statistical information of the attribute information of the target category of locally stored samples to the control terminal; receiving splitting point information returned by the control terminal, splitting the respectively stored samples according to the splitting point information, and storing the resulting nodes to build a decision tree; and outputting the decision tree if the node satisfies a preset condition for terminating tree building; and
    a looping unit configured to, if the node does not satisfy the preset condition for terminating tree building, update the target category according to the splitting point information and continue to perform the above tree building step based on the updated target category.
  15. The apparatus according to claim 14, wherein the statistical information comprises label statistical information;
    and the apparatus further comprises an encryption/decryption unit configured to:
    encrypt the label statistical information.
  16. The apparatus according to claim 14 or 15, wherein the apparatus further comprises a combining unit configured to:
    randomly select different sample subsets to generate at least one decision tree; and
    combine the at least one decision tree into a random forest model.
  17. An electronic device, comprising:
    one or more processors; and
    a storage apparatus storing one or more programs thereon,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-10.
  18. A computer-readable medium storing a computer program thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1-10.