CN114741593A - Network model training method and device, electronic equipment and storage medium - Google Patents

Network model training method and device, electronic equipment and storage medium

Info

Publication number
CN114741593A
CN114741593A
Authority
CN
China
Prior art keywords
participant
model
network model
training
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210373218.6A
Other languages
Chinese (zh)
Inventor
郭慧杰
廖旺胜
黄琳莉
黄倩颖
庄恩瀚
丁平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202210373218.6A priority Critical patent/CN114741593A/en
Publication of CN114741593A publication Critical patent/CN114741593A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/62Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245Protecting personal data, e.g. for financial or medical purposes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a network model training method and apparatus, an electronic device, and a storage medium, which can be applied to the field of big data. In the training method, a ring network is first constructed from the participants, where each participant is an institution that has joined a federated platform. Then, within the ring network, the input model of each participant is trained in turn on that participant's local data to produce its output model, until a preset end condition is met and a final neural network model is obtained. The input model of the first participant is a pre-constructed initial neural network model; for every participant other than the first, the input model is the output model sent by the preceding participant. In this way, multiple participants jointly train the network model on their local data, yielding a more accurate network model.

Description

Network model training method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of big data technologies, and in particular, to a method and an apparatus for training a network model, an electronic device, and a storage medium.
Background
As financial institutions expand their business, the variety of financial products keeps growing. To provide better user services, products suited to each user need to be recommended to different users, which also improves users' purchase rates.
In the prior art, when performing financial product recommendation, each financial institution holds only its own data, and client-privacy and regulatory requirements prevent data sharing between institutions. A comprehensive and accurate client product recommendation model is therefore difficult to build from any single institution's data, so the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
Disclosure of Invention
In view of this, the present application provides a network model training method and apparatus, an electronic device, and a storage medium, to address the prior-art problem that a comprehensive and accurate client product recommendation model is difficult to construct, so that the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
In order to achieve the above purpose, the present application provides the following technical solutions:
the first aspect of the present application discloses a method for training a network model, which includes:
constructing a ring network from the participants, wherein each participant is an institution that has joined a federated platform;
in the ring network, training the input model of each participant in turn using that participant's local data to obtain the participant's output model, until a preset end condition is met and a final neural network model is obtained; the input model of the first participant is a pre-constructed initial neural network model, and for every participant other than the first, the input model is the output model sent by the preceding participant.
Optionally, before the method constructs a ring network according to each participant, the method further includes:
and carrying out data preprocessing on the user data of each participant.
Optionally, when the transmission of the output model is performed between the participants, the method includes:
and carrying out encryption transmission on the output model.
Optionally, in the above method, the final neural network model is used to predict users' product preferences for recommendation.
The second aspect of the present application discloses a training apparatus for a network model, comprising:
a construction unit, configured to construct a ring network from the participants, wherein each participant is an institution that has joined a federated platform;
a training unit, configured to train the input model of each participant in the ring network in turn using that participant's local data to obtain the participant's output model, until a preset end condition is met and a final neural network model is obtained; the input model of the first participant is a pre-constructed initial neural network model, and for every participant other than the first, the input model is the output model sent by the preceding participant.
Optionally, the above apparatus further includes:
and the preprocessing unit is used for preprocessing the data of the user data of each participant.
Optionally, in the above apparatus, the training unit includes:
and the encryption subunit is used for carrying out encryption transmission on the output model.
Optionally, the above apparatus further includes:
and the recommending unit is used for recommending the product preference of the user by utilizing the final neural network model.
A third aspect of the present application discloses an electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of the first aspects of the present application.
A fourth aspect of the present application discloses a computer storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method according to any of the first aspects of the present application.
According to the technical solution above, in the network model training method, a ring network is first constructed from the participants, where each participant is an institution that has joined a federated platform. Then, within the ring network, the input model of each participant is trained in turn on that participant's local data to produce its output model, until a preset end condition is met and a final neural network model is obtained. The input model of the first participant is a pre-constructed initial neural network model; for every participant other than the first, the input model is the output model sent by the preceding participant. In this way, multiple participants jointly train the network model on their local data, yielding a more accurate network model. All participants sit in a ring network structure and pass the model among themselves without introducing a coordinator or a central server, which reduces the cost of parameter transmission between master and slave parties; each participant's raw data never leaves its own premises, which preserves the integrity and privacy of user data and protects data security to a greater extent. This solves the prior-art problem that a comprehensive and accurate client product recommendation model is difficult to construct, so that the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for training a network model according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a training apparatus for network models according to another embodiment of the present disclosure;
fig. 3 is a schematic diagram of an electronic device according to another embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Moreover, in this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
As noted in the background, in the prior art each financial institution holds only its own data, and client-privacy and regulatory requirements prevent data sharing between institutions. A comprehensive and accurate client product recommendation model is therefore difficult to build from any single institution's data, so the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
In view of this, the present application provides a network model training method and apparatus, an electronic device, and a storage medium, to address the prior-art problem that a comprehensive and accurate client product recommendation model is difficult to construct, so that the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
It should be noted that the network model training method and apparatus, electronic device, and storage medium provided by the present application can be used in the field of big data. This is only an example and does not limit the fields in which they may be applied.
The embodiment of the present application provides a method for training a network model, as shown in fig. 1, specifically including:
s101, constructing a ring network according to each participant, wherein the participants are mechanisms joining the federal platform.
It should be noted that information about the participants on the federated platform is obtained, and a ring network is constructed from those participants, where each participant is an institution that has joined the platform. On the platform, the process of exchanging model information among participants is carefully designed and fully encrypted under the premise of ensuring user privacy and data security, so that no participant can infer the private data of any other institution, while the goal of joint modeling is still achieved.
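The ring construction step can be sketched as follows. This is a minimal Python illustration, not part of the patent: the `Participant` structure and its fields are assumptions, and the ring is represented simply as the ordered list of (sender, receiver) pairs.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Participant:
    """One institution that has joined the federated platform (hypothetical fields)."""
    name: str
    endpoint: str  # address used for encrypted model exchange

def build_ring(participants: List[Participant]) -> List[Tuple[Participant, Participant]]:
    """Arrange participants in a ring: each one sends its output model to the
    next, and the last participant sends back to the first, closing the loop."""
    n = len(participants)
    if n < 2:
        raise ValueError("a ring requires at least two participants")
    return [(participants[i], participants[(i + 1) % n]) for i in range(n)]
```

The modular index `(i + 1) % n` is what makes the topology a ring rather than a chain: the final pair wraps around to the first participant, so the model can circulate indefinitely until the end condition is met.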
Optionally, in another embodiment of the present application, before performing step S101, the method may further include:
and carrying out data preprocessing on the user data of each participant.
It should be noted that because the participants serve different user groups, their business patterns are similar but their data dimensions are diverse and dispersed. To facilitate data processing, the user data of each participant is therefore preprocessed, and a feature space for the user data is constructed under encryption, ensuring that the participants hold different user IDs but a consistent feature space.
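The "different user IDs, consistent feature space" requirement can be sketched as a local preprocessing pass that each participant runs on its own raw records. The feature names and the min-max scaling below are assumptions for illustration; the patent does not specify the schema or the normalization.

```python
from typing import Dict, List

# Assumed feature space agreed by all participants (illustrative names).
SHARED_FEATURES = ["age", "income", "num_products", "risk_score"]

def preprocess(records: List[Dict[str, float]]) -> List[Dict[str, float]]:
    """Project each raw user record onto the agreed feature space:
    drop extra columns, fill missing ones with 0.0, then min-max scale
    each feature so all participants train on comparable value ranges."""
    rows = [{f: float(r.get(f, 0.0)) for f in SHARED_FEATURES} for r in records]
    for f in SHARED_FEATURES:
        vals = [r[f] for r in rows]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid division by zero for constant columns
        for r in rows:
            r[f] = (r[f] - lo) / span
    return rows
```

Because every participant emits rows with the same keys and ranges, the circulating model sees a consistent input layout even though the underlying user populations never overlap or leave their institutions.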
S102, in the ring network, training the input model of each participant in turn using that participant's local data to obtain the participant's output model, until a preset end condition is met and a final neural network model is obtained; the input model of the first participant is a pre-constructed initial neural network model, and for every participant other than the first, the input model is the output model sent by the preceding participant.
It should be noted that, in the ring network, the local data of each participant is used in turn to train that participant's input model, yielding its output model, until the preset end condition is met and the final neural network model is obtained; the input model of the first participant is a pre-constructed initial neural network model, and for every participant other than the first, the input model is the output model sent by the preceding participant.
Specifically, the first participant in the ring network trains the pre-constructed initial neural network model on its local data to obtain a trained model, i.e., its output model, and then sends it to the second participant — that is, sends the trained model's parameters. After receiving the output model from the first participant, the second participant trains it on its own local data and sends the result to the next participant: participant 1 to participant 2, participant 2 to participant 3, ..., participant (k-1) to participant k, and participant k back to participant 1, forming a loop from the viewpoint of the network structure. Training circulates in this way until a preset end condition is met; the condition can be set according to the actual situation, for example until the model converges, a set number of training rounds is reached, or the maximum allowed training time elapses. Finally, the resulting neural network model is encrypted and shared with all participants. Each participant may evaluate the performance of the final model on its local test data set.
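The circulating training procedure above can be sketched as a short driver loop. This is a minimal illustration, not the patent's implementation: `train_step` stands in for each participant's local training (its signature is an assumption), and the end condition shown is loss-improvement convergence with a round cap.

```python
from typing import Callable, List, Tuple

def ring_train(participants: List,
               initial_model,
               train_step: Callable[[object, object], Tuple[object, float]],
               max_rounds: int = 10,
               tol: float = 1e-4):
    """Sequential ring training: the model circulates through the ring; each
    participant fine-tunes it on local data and passes it on. Stops after
    max_rounds full circuits, or earlier once the per-circuit loss
    improvement drops below tol (the 'preset end condition')."""
    model = initial_model
    prev_loss = float("inf")
    for _ in range(max_rounds):
        for p in participants:            # participant 1 -> 2 -> ... -> k -> back to 1
            model, loss = train_step(p, model)
        if prev_loss - loss < tol:        # converged: improvement is negligible
            break
        prev_loss = loss
    return model
```

Note that only the model (its parameters) moves between participants; each participant's data is touched only inside its own `train_step`, which is what keeps raw data local.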
The method adopts a distributed application architecture in which tasks and workloads are shared equally among the participants; every node in the network plays the same role, with no master-slave distinction. The approach is decentralized, scalable, robust, and privacy-preserving.
Optionally, in another embodiment of the present application, when the transmission of the output model is performed between the participants, the method includes:
and carrying out encrypted transmission on the output model.
For data security, when the output model is transmitted between participants in the ring network, it is first encrypted and the encrypted output model is then transmitted.
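The patent does not name a cipher, so the following is only a toy encrypt-then-transmit sketch for serialized model parameters, built from the Python standard library (a hash-derived keystream authenticated with HMAC). It illustrates the shape of the step — serialize, encrypt under a shared key, authenticate, decrypt and verify on receipt — but a real deployment would use a vetted authenticated cipher such as AES-GCM instead.

```python
import hashlib
import hmac
import json
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from key + nonce (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_model(params: dict, key: bytes) -> dict:
    """Serialize model parameters, XOR them with a fresh-nonce keystream,
    and attach an HMAC tag over nonce + ciphertext."""
    plaintext = json.dumps(params).encode()
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).hexdigest()
    return {"nonce": nonce.hex(), "ciphertext": ct.hex(), "tag": tag}

def decrypt_model(msg: dict, key: bytes) -> dict:
    """Verify the tag, then undo the XOR to recover the parameters."""
    nonce, ct = bytes.fromhex(msg["nonce"]), bytes.fromhex(msg["ciphertext"])
    expected = hmac.new(key, nonce + ct, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(msg["tag"], expected):
        raise ValueError("authentication failed")
    pt = bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
    return json.loads(pt.decode())
```

The receiving participant decrypts, trains locally, and re-encrypts before forwarding, so only ciphertext ever crosses the ring.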
Optionally, in another embodiment of the present application, after the step S102 is executed, the method may further include:
and recommending the product preference of the user by using the final neural network model.
It should be noted that after the final neural network model is obtained, it is shared by all participants. When a financial product recommendation is needed, a participant feeds its local client list into the final model, which outputs each client's product preference. The preferences are divided into five categories (the scheme can be designed flexibly): category one is fund preference, category two is insurance preference, category three is precious-metal preference, category four is wealth-management preference, and category five is loan-product preference. A client manager then makes precise product recommendations according to the client preferences output by the model.
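The five-category recommendation step can be sketched as below. The class names and the model interface (a callable returning one score per category) are assumptions for illustration; the patent only says the categories can be designed flexibly.

```python
from typing import Callable, Dict, List

# Illustrative labels for the five preference categories named in the text.
PREFERENCE_CLASSES = ["fund", "insurance", "precious_metal",
                      "wealth_management", "loan"]

def recommend(model: Callable[[object], List[float]],
              clients: Dict[str, object]) -> Dict[str, str]:
    """Score each local client with the shared final model and return the
    highest-scoring preference category per client, for the client
    manager to act on."""
    results = {}
    for client_id, features in clients.items():
        scores = model(features)  # five scores, one per category
        best = max(range(len(PREFERENCE_CLASSES)), key=lambda i: scores[i])
        results[client_id] = PREFERENCE_CLASSES[best]
    return results
```

Because every participant holds the same final model, each can run this locally on its own client list; no client data needs to be shared for inference either.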
In the network model training method provided by this embodiment, a ring network is first constructed from the participants, where each participant is an institution that has joined a federated platform. Then, within the ring network, the input model of each participant is trained in turn on that participant's local data to produce its output model, until a preset end condition is met and a final neural network model is obtained. The input model of the first participant is a pre-constructed initial neural network model; for every participant other than the first, the input model is the output model sent by the preceding participant. In this way, multiple participants jointly train the network model on their local data, yielding a more accurate network model. All participants sit in a ring network structure and pass the model among themselves without introducing a coordinator or a central server, which reduces the cost of parameter transmission between master and slave parties; each participant's raw data never leaves its own premises, which preserves the integrity and privacy of user data and protects data security to a greater extent. This solves the prior-art problem that a comprehensive and accurate client product recommendation model is difficult to construct, so that the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
Optionally, another embodiment of the present application further provides a training apparatus for a network model, specifically as shown in fig. 2, including:
a building unit 201, configured to build a ring network according to each participant, where the participant is an organization joining the federation platform.
A training unit 202, configured to train an input model of each participant in the ring network sequentially using local data of each participant to obtain an output model of the participant until a preset end condition is met, and obtain a final neural network model; the input model of the first participant is a pre-constructed initial neural network model, and the input model of each participant except the first participant is an output model sent by the previous participant.
In this embodiment, for the specific implementation processes of the constructing unit 201 and the training unit 202, reference may be made to the contents of the method embodiment corresponding to fig. 1, which are not described herein again.
In the network model training apparatus provided by this embodiment, the constructing unit 201 first constructs a ring network from the participants, where each participant is an institution that has joined a federated platform. Then the training unit 202 trains the input model of each participant in the ring network in turn on that participant's local data to produce its output model, until a preset end condition is met and a final neural network model is obtained. The input model of the first participant is a pre-constructed initial neural network model; for every participant other than the first, the input model is the output model sent by the preceding participant. In this way, multiple participants jointly train the network model on their local data, yielding a more accurate network model. All participants sit in a ring network structure and pass the model among themselves without introducing a coordinator or a central server, which reduces the cost of parameter transmission between master and slave parties; each participant's raw data never leaves its own premises, which preserves the integrity and privacy of user data and protects data security to a greater extent. This solves the prior-art problem that a comprehensive and accurate client product recommendation model is difficult to construct, so that the resulting model cannot accurately recommend financial products to users and is hard to apply to product recommendation services.
Optionally, in another embodiment of the present application, the training apparatus for the network model may further include:
and the preprocessing unit is used for preprocessing the data of the user data of each participant.
In this embodiment, the specific execution process of the preprocessing unit may refer to the content of the method embodiment described above, and is not described herein again.
Optionally, in another embodiment of the present application, an implementation manner of the training unit 202 may include:
and the encryption subunit is used for carrying out encryption transmission on the output model.
In this embodiment, the specific execution process of the encryption subunit may refer to the content of the method embodiment described above, and is not described herein again.
Optionally, in another embodiment of the present application, the training apparatus for the network model may further include:
and the recommending unit is used for recommending the product preference of the user by utilizing the final neural network model.
In this embodiment, for the specific execution process of the recommending unit, reference may be made to the contents of the above method embodiments, and details are not described here.
Another embodiment of the present application further provides an electronic device, as shown in fig. 3, specifically including:
one or more processors 301.
A storage device 302 having one or more programs stored thereon.
The one or more programs, when executed by the one or more processors 301, cause the one or more processors 301 to implement the method as in any one of the embodiments described above.
Another embodiment of the present application further provides a computer storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method in any one of the above embodiments.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, the system or system embodiments are substantially similar to the method embodiments and therefore are described in a relatively simple manner, and reference may be made to some of the descriptions of the method embodiments for related points. The above-described system and system embodiments are merely illustrative, wherein units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for training a network model, comprising:
constructing a ring network from the participants, wherein each participant is an institution that has joined a federated platform;
in the ring network, training the input model of each participant in turn using that participant's local data to obtain the participant's output model, until a preset end condition is met and a final neural network model is obtained; the input model of the first participant is a pre-constructed initial neural network model, and for every participant other than the first, the input model is the output model sent by the preceding participant.
2. The method of claim 1, wherein before constructing a ring network from the participants, further comprising:
and carrying out data preprocessing on the user data of each participant.
3. The method of claim 1, wherein the transmitting of the output model between the participants comprises:
and carrying out encryption transmission on the output model.
4. The method of claim 1, wherein after obtaining the final neural network model, further comprising:
and recommending the product preference of the user by using the final neural network model.
5. An apparatus for training a network model, comprising:
a construction unit, configured to construct a ring network from the participants, wherein each participant is an institution that has joined a federated platform;
a training unit, configured to train the input model of each participant in the ring network in turn using that participant's local data to obtain the participant's output model, until a preset end condition is met and a final neural network model is obtained; the input model of the first participant is a pre-constructed initial neural network model, and for every participant other than the first, the input model is the output model sent by the preceding participant.
6. The apparatus of claim 5, further comprising:
a preprocessing unit configured to perform data preprocessing on the user data of each participant.
7. The apparatus of claim 5, wherein the training unit comprises:
an encryption subunit configured to transmit the output model in encrypted form.
8. The apparatus of claim 5, further comprising:
a recommending unit configured to generate product-preference recommendations for users using the final neural network model.
9. An electronic device, comprising:
one or more processors;
a storage device on which one or more programs are stored;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-4.
10. A computer storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 4.
CN202210373218.6A 2022-04-11 2022-04-11 Network model training method and device, electronic equipment and storage medium Pending CN114741593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210373218.6A CN114741593A (en) 2022-04-11 2022-04-11 Network model training method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114741593A true CN114741593A (en) 2022-07-12

Family

ID=82281114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210373218.6A Pending CN114741593A (en) 2022-04-11 2022-04-11 Network model training method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114741593A (en)

Similar Documents

Publication Publication Date Title
US11580417B2 (en) System and method for processing data and managing information
Zhou et al. Blockchain-based decentralized reputation system in E-commerce environment
US20180232828A1 (en) Apparatus and method for providing and/or for processing information for, regarding, and/or for facilitating, the commercialization, development, marketing, sale, transfer, licensing, and/or monetization, of intellectual property
US10997671B2 (en) Methods, systems and computer program products for collaborative tax return preparation
Liu et al. A blockchain-empowered federated learning in healthcare-based cyber physical systems
CA3093718C (en) Method, apparatus, and computer program product for encryption key management within a group-based communication system
CN113127916A (en) Data set processing method, data processing device and storage medium
CN104620276A (en) Attendee suggestion for events based on profile information on a social networking site
US11797948B2 (en) Method, apparatus and computer program product for improving event creation and modification in a group-based communication platform
CN114154194A (en) Information sharing method, device and system
CN111369260A (en) Privacy-protecting risk prediction method and device
US20180247717A1 (en) System and method for facilitating a promotional event
CN115169576B (en) Model training method and device based on federal learning and electronic equipment
CN110610098A (en) Data set generation method and device
Pramanik et al. Crowd computing: The computing revolution
CN108023870B (en) System and method for remotely presenting
CN113807157A (en) Method, device and system for training neural network model based on federal learning
CN117094773A (en) Online migration learning method and system based on blockchain privacy calculation
CN109934567B (en) Knowledge sharing method, device, system and storage medium
CN114741593A (en) Network model training method and device, electronic equipment and storage medium
CN111242431A (en) Information processing method and device, and method and device for constructing customer service conversation workflow
Gershon Intelligent networks and international business communication: A systems theory interpretation
KR102172419B1 (en) Cryptocurrency transaction management method
CN115470958A (en) Federal learning method and device, computer readable medium and electronic equipment
CN112884560A (en) Data processing method, related node and system in public welfare management block chain

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination