CN111985000A - Model service output method, device, equipment and storage medium - Google Patents


Info

Publication number
CN111985000A
Authority
CN
China
Prior art keywords
model
modeling
party
cooperative
request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010855640.6A
Other languages
Chinese (zh)
Inventor
林文珍
Current Assignee
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date
Filing date
Publication date
Application filed by WeBank Co Ltd
Priority to CN202010855640.6A
Publication of CN111985000A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models


Abstract

The invention discloses a model service output method, device, equipment, and storage medium, wherein the method comprises the following steps: sending a federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federal modeling request; publishing the model service when modeling completion information returned by the second cooperative modeling parties is received; forwarding each received model service calling request to the second cooperative modeling parties, so that each second cooperative modeling party performs model calculation according to its pre-constructed model and returns a model calculation result; and summarizing the model calculation results and returning them to the initiator of the model service calling request. Because the model data involved in model construction and model calculation never crosses the local secure network boundary, it is difficult to propagate or leak, which greatly protects the data privacy and security of the model service provider; at the same time, the user side need not seek data partnerships or heavy technical development, and its business capability is improved.

Description

Model service output method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a model service output method, a model service output device, model service output equipment and a storage medium.
Background
At present, many small internet enterprises and institutions hold only limited data, whose diversity and coverage can rarely support their business on their own; their demand for data is therefore large, and a lack of modeling or related technical capability prevents them from running their business well and quickly.
In the prior art, to improve the business capability of such internet enterprises and institutions, the usual solution is to directly access and call an existing, mature model service. Currently, providers of model services typically offer them to users in two ways:
(1) The model is encrypted and deployed or pushed to the user who needs it. The disadvantage of this approach is that the provided model information, which has high business value, is easily leaked, and the technical access requirements on the user are relatively high.
(2) The data of all parties participating in modeling is concentrated in a trusted environment, joint modeling is performed there, and the result is published as a model service for users to call. The disadvantage of this approach is that each party participating in the modeling must export data from its own network domain into the trusted environment, yet such so-called trusted environments are only relatively trustworthy and cannot guarantee the absolute security of the data; in addition, when users access the model service in the trusted environment, the technical cost and requirements are higher than for accessing an ordinary system service.
The above is provided only to assist understanding of the technical solution of the present invention, and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a model service output method, device, equipment, and storage medium, so as to solve the technical problems in the prior art that the security of a model service provider's model data cannot be guaranteed, and that a model service user faces high technical development difficulty and cost when using a model service.
In order to achieve the above object, the present invention provides a model service output method, comprising the steps of:
a first cooperative modeling party sends a federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federal modeling request, wherein the first cooperative modeling party and the second cooperative modeling parties include but are not limited to a product platform;
the first cooperation modeling party issues the model service according to the modeling completion information when receiving the modeling completion information returned by all the second cooperation modeling parties;
the first cooperation modeling party respectively forwards the received model service calling requests to the second cooperation modeling party, so that the second cooperation modeling party carries out model calculation according to a pre-constructed model and returns a model calculation result;
and the first cooperative modeling party collects the received model calculation results and returns the collected results to the initiator of the model service calling request.
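The four steps above can be sketched as a simple coordinator loop on the first cooperative modeling party. This is a minimal illustrative sketch under stated assumptions, not the patent's implementation: the `StubParty` class, the string status messages, and summation as the summarizing rule are all invented for the example.

```python
class StubParty:
    """Stand-in for a second cooperative modeling party (an assumption for
    this sketch; the patent does not specify a concrete API)."""
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight

    def build_partial_model(self, request):
        # Joint modeling happens here in the real system; we only report done.
        return "modeling_complete"

    def compute(self, call_request):
        # Partial model calculation on this party's local model and data.
        return self.weight * call_request["x"]


def broadcast_modeling_request(parties, request):
    """Step 1: send the federal modeling request to every second party and
    collect the modeling completion information each returns."""
    return {p.name: p.build_partial_model(request) for p in parties}


def publish_service(completions):
    """Step 2: publish the model service only once all parties report done."""
    if not all(c == "modeling_complete" for c in completions.values()):
        raise RuntimeError("joint modeling not finished")
    return "service_published"


def handle_invocation(parties, call_request):
    """Steps 3-4: forward the calling request to every second party, then
    summarize the partial results for the request's initiator."""
    return sum(p.compute(call_request) for p in parties)
```

Note that only requests and results cross party boundaries here; each party's model stays inside `StubParty`, mirroring the claim that model data never leaves the local network.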
Optionally, the step of sending, by the first cooperative modeling party, a federal modeling request to a plurality of second cooperative modeling parties, so that each of the second cooperative modeling parties jointly models according to the federal modeling request includes:
and the first cooperative modeling party sends the federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party reads model construction data from a local database according to the federal modeling request and performs combined modeling according to the model construction data.
Optionally, when receiving modeling completion information returned by all second cooperating modeling parties, the first cooperating modeling party performs model service release according to the modeling completion information, including:
the first cooperation modeling party receives modeling completion information returned by all the second cooperation modeling parties;
the first cooperative modeling party sends a model issuing request to each second cooperative modeling party, so that each second cooperative modeling party performs model inference on its pre-constructed model according to the model issuing request and returns model inference completion information;
and the first cooperative modeling party publishes the model service according to the model inference completion information.
Optionally, before the step of forwarding the received model service invocation request to the second cooperative modeler by the first cooperative modeler respectively, so that the second cooperative modeler performs model computation according to a pre-constructed model, and returns a model computation result, the method further includes:
the first cooperation modeling party receives a model service calling request and acquires an identity verification parameter contained in the model service calling request;
the first cooperation modeling party carries out authority verification on the model service calling request according to the identity verification parameters;
and when the authority verification passes, the first cooperation modeling party executes the step of respectively forwarding the received model service calling requests to the second cooperation modeling party.
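The permission check described above can be illustrated with a small sketch. The patent only states that the calling request carries "identity verification parameters", so the HMAC-signature scheme, field names, and key registry below are assumptions for the example.

```python
import hashlib
import hmac

# Hypothetical key registry mapping caller_id -> shared secret.
AUTHORIZED_KEYS = {"user-001": b"shared-secret"}


def verify_call(request):
    """Return True when the request's identity verification parameters pass
    the permission check, False otherwise (fail closed)."""
    params = request.get("auth", {})
    key = AUTHORIZED_KEYS.get(params.get("caller_id"))
    if key is None:
        return False  # unknown caller
    # Recompute the signature over the payload and compare in constant time.
    expected = hmac.new(key, request["payload"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params.get("signature", ""))
```

Only when `verify_call` returns True would the first party go on to forward the calling request to the second cooperative modeling parties.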
In addition, in order to achieve the above object, the present invention further provides a method for outputting a model service, the method comprising the steps of:
a cooperative modeling party obtains modeling demand parameters carried in a federal modeling request, and reads locally pre-stored model construction data according to the modeling demand parameters, wherein the cooperative modeling party comprises but is not limited to a product platform;
the cooperation modeling party executes model building operation according to the model building data to obtain partial model results;
the cooperative modeling party pushes modeling completion information corresponding to the partial model result to the product platform, and executes a model inference operation when receiving a model issuing request sent by the product platform;
and the cooperative modeling party sends the model inference completion information generated by the model inference operation to the product platform, so that the product platform publishes the model service according to the model inference completion information.
Optionally, the step of obtaining, by the cooperative modeling party, a modeling demand parameter carried in a federal modeling request, and reading locally pre-stored model construction data according to the modeling demand parameter includes:
a cooperative modeling party acquires application identification information, sample information and modeling configuration information contained in a federal modeling request;
the cooperation modeling party matches the application identification information, and when the matching is successful, a model training sample is obtained in a local database according to the sample information;
the step that the cooperation modeling party executes model construction operation according to the model construction data to obtain partial model results comprises the following steps:
and the cooperative modeling party executes model construction operation according to the model training sample and the modeling configuration information to obtain a partial model result.
Optionally, the step of the cooperative modeling party matching the application identification information, and when the matching is successful, obtaining a model training sample in a local database according to the sample information includes:
the cooperative modeling party reads application identifications of all participants of the federal modeling from the application identification information;
the cooperation modeling party matches the local application identification according to the application identification, and when the matching is successful, a model training sample is obtained in a local database according to the sample matching identification contained in the sample information;
the step that the cooperative modeling party executes model construction operation according to the model training sample and the modeling configuration information to obtain partial model results comprises the following steps:
the cooperative modeling party determines other cooperative modeling parties of the federal modeling according to the rest application identifiers;
the cooperation modeling party performs intersection matching on the model training sample and the model training samples obtained by the other cooperation modeling parties according to the sample matching identification to obtain a target model training sample;
and the cooperative modeling party executes model construction operation according to the target model training sample and the modeling configuration information to obtain a partial model result.
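The intersection matching of training samples by sample matching ID can be sketched as below. A plain set intersection is used as an illustrative stand-in: in the patented scheme the alignment would run under privacy-preserving protocols so raw IDs never leave each party's local network, and the function and argument names are assumptions.

```python
def align_training_samples(local_samples, other_parties_ids):
    """local_samples maps sample matching ID -> local feature row;
    other_parties_ids is the list of ID sets reported by the other
    cooperative modeling parties."""
    shared = set(local_samples)
    for ids in other_parties_ids:
        shared &= set(ids)  # keep only IDs present at every participant
    # The target model training samples: intersected IDs in a stable order.
    return {sid: local_samples[sid] for sid in sorted(shared)}
```

After alignment, every party trains its partial model on rows for the same shared IDs, which is what gives the training samples their consistency across parties.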
Optionally, the step of pushing, by the cooperative modeling party, modeling completion information corresponding to the partial model result to the product platform, and executing a model inference operation when receiving a model publishing request sent by the product platform, includes:
the cooperative modeling party pushes modeling completion information corresponding to the partial model result to the product platform;
the cooperation modeling party acquires model identification information carried in a model release request when receiving the model release request sent by the product platform;
the cooperative modeling party reads the trained partial model from its model library according to the model identification information;
and the cooperative modeling party performs model inference on the trained partial model to obtain model inference completion information corresponding to the trained partial model.
Optionally, after the step of sending the model inference completion information generated by the model inference operation to the product platform for the product platform to issue the model service according to the model inference completion information, the method further includes:
the method comprises the steps that when a cooperation modeling party receives a model service calling request, a model calling path and model calculation parameters contained in the model service calling request are obtained;
the cooperation modeling party determines a model to be called according to the model calling path, and inputs the model calculation parameters into the model to be called for model calculation to obtain a model calculation result;
and the cooperative modeling party sends the model calculation result to a sender of the model service calling request so that the sender can summarize all the received model calculation results to obtain a summarized result.
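A second party's handling of a forwarded call, resolving the model from the model calling path and running it on the model calculation parameters, might look like the sketch below. The registry, the path, the parameter names, and the linear scoring model are illustrative assumptions, not taken from the patent.

```python
# Hypothetical registry mapping model calling paths to local partial models.
MODEL_REGISTRY = {
    "/models/credit-score/v1":
        lambda params: 0.5 * params["income"] + 0.1 * params["age"],
}


def handle_model_call(call_request):
    """Determine the model to be called from the calling path, input the
    model calculation parameters, and return the model calculation result."""
    model = MODEL_REGISTRY.get(call_request["model_path"])
    if model is None:
        raise KeyError("no model at " + call_request["model_path"])
    return model(call_request["params"])
```

The result returned here is only this party's partial contribution; the sender (the product platform) summarizes all such results before answering the original caller.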
Further, to achieve the above object, the present invention also provides a model service output apparatus including:
the sending module is used for sending the federal modeling request to a plurality of second cooperative modeling parties so that each second cooperative modeling party can carry out joint modeling according to the federal modeling request;
the receiving module is used for issuing model service according to the modeling completion information when the modeling completion information returned by all the second cooperative modeling parties is received;
the forwarding module is used for respectively forwarding the received model service calling requests to the second cooperative modeling party so as to enable the second cooperative modeling party to perform model calculation according to a pre-constructed model and return a model calculation result;
and the summarizing module is used for summarizing the received model calculation results and returning the summarizing results to the initiator of the model service calling request.
Further, to achieve the above object, the present invention also provides a model service output apparatus including:
the reading module is used for acquiring modeling demand parameters carried in a federal modeling request and reading locally pre-stored model construction data according to the modeling demand parameters;
the construction module is used for executing model construction operation according to the model construction data to obtain partial model results;
the reasoning module is used for pushing the modeling completion information corresponding to the partial model result to a product platform and executing model reasoning operation when receiving a model issuing request sent by the product platform;
and the sending module is used for sending the model inference completion information generated by the model inference operation to the product platform, so that the product platform publishes the model service according to the model inference completion information.
In addition, to achieve the above object, the present invention also provides a model service output apparatus, including: a memory, a processor and a model service output program stored on the memory and executable on the processor, the model service output program configured to implement the steps of the model service output method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium having a model service output program stored thereon, which when executed by a processor implements the steps of the model service output method as described above.
In the invention, a first cooperative modeling party sends a federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federal modeling request, wherein the first cooperative modeling party and the second cooperative modeling parties include but are not limited to a product platform; when modeling completion information returned by all second cooperative modeling parties is received, the model service is published according to the modeling completion information; each received model service calling request is forwarded to the second cooperative modeling parties, so that each second cooperative modeling party performs model calculation according to its pre-constructed model and returns a model calculation result; and the received model calculation results are summarized and the summarized result is returned to the initiator of the model service calling request. Compared with the prior art, in which the model is directly encrypted and pushed to the user side or provided in a trusted environment for the user side to use, in the invention the model data involved in each cooperative modeling party's model construction and model calculation never crosses the local secure network boundary, so the model data is difficult to propagate or leak, which greatly protects the data privacy and security of the model service provider; meanwhile, the user side can efficiently improve its business capability at low cost without seeking data cooperation or heavy technical development.
Drawings
FIG. 1 is a schematic diagram of a model service output device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a diagram of a scene architecture implemented by the model service output method of the present invention;
FIG. 3 is a flowchart illustrating a first exemplary embodiment of a method for outputting a model service according to the present invention;
FIG. 4 is a flowchart illustrating a second exemplary embodiment of a method for outputting a model service;
FIG. 5 is a block diagram showing the structure of a first embodiment of a model service output apparatus according to the present invention;
FIG. 6 is a block diagram of a second embodiment of a model service output apparatus according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a model service output device of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the model service output device may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as disk storage. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of the model service output device, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
As shown in fig. 1, the memory 1005, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a model service output program.
In the model service output device shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server; the user interface 1003 is mainly used for data interaction with a user; the processor 1001 and the memory 1005 in the model service output device of the present invention may be provided in a model service output device that calls a model service output program stored in the memory 1005 through the processor 1001 and executes a model service output method provided in each of the embodiments of the present invention described below.
Referring to fig. 2, fig. 2 is a view illustrating a scene architecture implemented by the model service output method of the present invention.
As shown in fig. 2, the scenario architecture for implementing the model service output method of the present invention includes a product platform and a secure federated network formed by a plurality of (at least two) cooperative modeling parties. In practical application, a user (i.e., a user of the model service) can call the model service he or she wants to use through the product platform and obtain a model calculation result. The product platform can be an application platform or an application client capable of model publishing, disclosure of model service calling information, and summarization of model calculation results. A cooperative modeling party can be a computing service device that can perform joint modeling with other cooperative modeling parties based on its own original data and can perform model calculation with the model it owns; the computing service device can be a personal computer, a smart phone, and the like.
Of course, the product platform in this embodiment may be an application platform independent of the secure federated network, or may be assumed by any cooperative modeling party in the secure federated network that has the function of the product platform, and before a certain cooperative modeling party assumes the role of the product platform, it needs to be agreed and authorized by other cooperative modeling parties.
Based on the above scenario architecture, various embodiments of a model service output method are provided.
FIG. 3 is a flowchart illustrating a first embodiment of a model service output method according to the present invention.
While a logical order is shown in the flow chart, in some cases, the steps shown or described may be performed in an order different than presented herein. The executing subject of the model service output method of the present embodiment may be the first cooperative modeling party, and the first and second are only for convenience of describing the cooperative modeling party currently serving as the product platform and the cooperative modeling party not serving as the product platform, and do not form a specific limitation on them.
The model service output method provided by this embodiment is applied to a first partner (i.e., a partner serving as a product platform), and the model service output method includes the following steps:
step S10: the method comprises the steps that a first cooperative modeling party sends a federal modeling request to a plurality of second cooperative modeling parties so that the second cooperative modeling parties can carry out combined modeling according to the federal modeling request, and the first cooperative modeling party and the second cooperative modeling parties comprise but are not limited to a product platform;
in this embodiment, the federal modeling request may be initiated by a first cooperating modeling party (product platform), or may be initiated by any (second) cooperating modeling party in the secure federal network, and when a certain cooperating modeling party in the secure federal network initiates the federal modeling request, the cooperating modeling party defaults to be the first cooperating modeling party. The federal modeling request carries modeling requirement parameters required during modeling, and the modeling requirement parameters can be used for determining data required by joint modeling, such as identity information or application identification information (part ID) of all cooperative modeling parties participating in the current modeling, sample information (for example, sample matching ID for determining model training samples, sample expression data for judging model prediction results), modeling configuration information (for example, related configurations such as algorithm types, parameters, modeling steps and the like), and the like.
In a specific implementation, the first cooperative modeling party may send the federal modeling request to the corresponding plurality of second cooperative modeling parties, so that the second cooperative modeling parties receiving the request perform joint modeling according to the received federal modeling request.
Secure Multi-Party Computation (SMC, also written MPC) provides strong underlying technical support for data-fusion scenarios: it is highly extensible in practical applications, can be adapted flexibly, has broad prospects, and ensures that the data of each cooperative modeling party never leaves its own domain, guaranteeing data privacy and security. In this embodiment, each cooperative modeling party therefore performs model construction and training with secure multi-party computation, for example MPC techniques or Federated Learning joint modeling. For instance, a cooperative modeling party can use MPC to combine multiple data sources, with MPC preprocessing ensuring uniform data formats and the privacy of the data; a model trained on this richer data is more accurate and therefore provides more reasonable predictions for unknown situations.
Furthermore, considering that federated learning allows multiple participants to carry out machine learning while protecting data privacy and meeting legal compliance requirements, it solves the problem of data silos. This embodiment takes federated learning as an example to outline the joint modeling process.
In the joint modeling process based on federated learning, each cooperative modeling party (the first and the second cooperative modeling parties) can judge, from the application identification information (i.e., the party IDs) contained in the federal modeling request, whether it participates in the current modeling, and at the same time determine the other participating cooperative modeling parties. After its identity is determined, each cooperative modeling party obtains, from the sample information (i.e., the sample matching IDs) contained in the federal modeling request, the model training samples used to train the model it constructs; that is, through the sample matching IDs, all cooperative modeling parties participating in the current modeling train their respective (partial) models on the same aligned training samples, which ensures the consistency and reliability of the model training samples.
In addition, in this embodiment, when constructing its model, each cooperative modeling party builds a partial model according to the algorithm type and parameters carried in the modeling configuration information and following the prescribed modeling steps. The partial models built by all cooperative modeling parties together form the complete model. After model construction, each cooperative modeling party returns modeling completion information to the first cooperative modeling party to inform it that modeling has finished.
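One natural reading of how partial models compose into the complete model, used here purely as an illustrative assumption, is a vertically partitioned linear model: each party holds the weights for its own feature slice, and the complete model's output is the sum of the per-party partial scores plus a bias.

```python
def partial_score(weights, features):
    """Partial model output computed entirely on one party's local data."""
    return sum(weights[k] * features[k] for k in weights)


def full_score(partial_scores, bias=0.0):
    """Combine the per-party partial results into the complete model output."""
    return bias + sum(partial_scores)
```

Under this reading, a party never needs another party's weights or features, only the scalar partial scores, which is consistent with the claim that model data stays inside each local network.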
Step S20: the first cooperation modeling party issues the model service according to the modeling completion information when receiving the modeling completion information returned by all the second cooperation modeling parties;
it should be noted that the model service publishing means to publish the model service invocation information of the complete model built on the network for different users to select and use according to their own needs, and in this embodiment, the model service invocation information does not include the model information stored locally in the partner modeling party.
Further, in order to ensure that the prediction result of the constructed model can have higher accuracy, in this embodiment, before the first cooperative modeling party issues the model service, a model issue request is also issued to each second cooperative modeling party, so that each second cooperative modeling party verifies the prediction result, executes related model inference, and then pushes the model to the locally centralized model information storage area for storage.
Step S30: the first cooperative modeling party forwards a received model service calling request to each second cooperative modeling party, so that each second cooperative modeling party performs model calculation according to its pre-constructed model and returns a model calculation result;
it will be appreciated that the model service calling request may be a request, initiated by a user to the first cooperative modeling party, to invoke the model service. When receiving the request, the first cooperative modeling party forwards the model service calling request to each second cooperative modeling party; each second cooperative modeling party performs model calculation on the request according to its own constructed partial model and then returns the model calculation result to the first cooperative modeling party.
Step S40: the first cooperative modeling party summarizes the received model calculation results and returns the summarized result to the initiator of the model service calling request.
It should be understood that after each second cooperative modeling party calculates its corresponding model calculation result according to the model service calling request and feeds it back to the first cooperative modeling party, the first cooperative modeling party can summarize these model calculation results and return the summarized result to the initiator of the model service calling request, thereby responding to the request.
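The summarizing step can be illustrated with a minimal sketch. Treating the partial results as numeric scores that are summed is an assumption made purely for illustration; the method itself does not fix a particular aggregation rule.

```python
# Illustrative aggregation on the first cooperative modeling party's side:
# combine the partial model calculation results returned by the second parties
# into a single summarized result for the request initiator.

def aggregate(partial_results):
    """Merge per-party partial scores into one response for the caller."""
    total = sum(r["partial_score"] for r in partial_results)
    return {"score": total, "parties": [r["party_id"] for r in partial_results]}

summary = aggregate([
    {"party_id": "party_B", "partial_score": 0.42},
    {"party_id": "party_C", "partial_score": 0.18},
])
```

Only the partial scores cross the network here; the models that produced them stay inside each party's secure boundary.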
In this embodiment, a first cooperative modeling party sends a federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federal modeling request, wherein the first cooperative modeling party and the second cooperative modeling parties include but are not limited to a product platform; when modeling completion information returned by all the second cooperative modeling parties is received, the model service is issued according to the modeling completion information; received model service calling requests are forwarded to the second cooperative modeling parties, so that each second cooperative modeling party performs model calculation according to its pre-constructed model and returns a model calculation result; and the received model calculation results are summarized and the summarized result is returned to the initiator of the model service calling request. Compared with the prior art, in which the model is encrypted and pushed directly to the user side or provided to the user side in a trusted environment, in the model service output method of this embodiment the model data involved in each cooperative modeling party's model construction and model calculation never crosses the boundary of its local secure network, so the model data is difficult to propagate or leak, which greatly protects the data privacy and security of the model service provider; at the same time, the user side can efficiently improve its own service capability at low cost, without having to seek data cooperation or undertake heavy technical development.
Further, the step S10 includes:
step S101: the first cooperative modeling party sends the federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party reads model building data from its local database according to the federal modeling request and performs joint modeling according to that data.
In this step, because the model building data used by each cooperative modeling party comes from its local database, the original modeling data never crosses the party's own secure network boundary, which fundamentally resolves the data privacy and security problem.
Further, the step S20 includes:
step S201: the first cooperative modeling party receives the modeling completion information returned by all the second cooperative modeling parties;
step S202: the first cooperative modeling party sends a model release request to each second cooperative modeling party, so that each second cooperative modeling party performs model inference on its pre-constructed model according to the model release request and returns model inference completion information;
step S203: the first cooperative modeling party issues the model service according to the model inference completion information.
It should be noted that the sample performance data carried in the federal modeling request can be used to check the model prediction results, i.e., to verify whether the effect of the trained model meets the actual accuracy requirement. In this embodiment, the prediction results of the model are verified before the model service is issued, and model inference is performed when the verification passes. Model inference here means converting the trained model into a computer service similar to an application program interface (API).
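The notion of model inference used here, turning a trained model into an API-like service, can be sketched as follows. The registry, the path scheme, and the linear "model" are all illustrative assumptions rather than the patent's actual mechanism.

```python
# Hypothetical registry that exposes trained models under API-style paths.
API_REGISTRY = {}

def publish(model_id, model_fn):
    """Register a trained model under an API-style path (model inference step)."""
    API_REGISTRY[f"/api/v1/models/{model_id}/predict"] = model_fn

def trained_model(features):
    # Stand-in for a trained partial model: an assumed linear scorer.
    return 0.5 * features["x"] + 0.1

publish("credit_risk", trained_model)
result = API_REGISTRY["/api/v1/models/credit_risk/predict"]({"x": 2.0})
```

The point of the conversion is that callers see only the endpoint, never the model parameters or training data behind it.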
Further, before the step S30, the method includes:
step S31: the first cooperative modeling party receives a model service calling request and obtains the identity verification parameters contained in the model service calling request;
step S32: the first cooperative modeling party performs authority verification on the model service calling request according to the identity verification parameters;
step S33: when the authority verification passes, the first cooperative modeling party executes the step of forwarding the received model service calling request to each second cooperative modeling party.
It can be understood that, to ensure data security when the model service is called, the first cooperative modeling party in this embodiment verifies the request authority whenever it receives a model service calling request sent by a user. The identity verification parameters include, but are not limited to, the account id, account key, and service id corresponding to the account initiating the request: the account id and account key are used for identity authentication, and the service id is used to determine the specific model service that the request initiator wants to call.
Further, to enable the first cooperative modeling party to quickly and accurately determine, from the service id, the model service that the user currently needs to call, a mapping table recording the correspondence between service ids and model service APIs can be maintained on the first cooperative modeling party's side, improving the efficiency of the whole model service invocation.
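The authority verification plus the serviceid-to-API mapping table described above might look like the following sketch. The credential store, field names, and table entries are invented for illustration only.

```python
# Hypothetical credential store and serviceid -> model-service-API mapping table.
ACCOUNTS = {"acct_123": "secret_key"}            # accountid -> accountkey
SERVICE_API_MAP = {"svc_001": "/api/v1/models/credit_risk/predict"}

def verify_and_route(request):
    """Return the target API path if the caller's identity parameters check out."""
    if ACCOUNTS.get(request["account_id"]) != request["account_key"]:
        return None                              # authority verification failed
    return SERVICE_API_MAP.get(request["service_id"])

path = verify_and_route({
    "account_id": "acct_123",
    "account_key": "secret_key",
    "service_id": "svc_001",
})
```

A real deployment would of course use hashed or tokenized credentials rather than plain keys; the sketch only shows where the check and the table lookup sit in the flow.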
In this embodiment, model inference is performed on the built model, model inference completion information is generated, and the model service is issued, so that only the model service is exposed to users; the original detailed data used in modeling truly never leaves the secure network boundary, which fundamentally solves the privacy and security problem. Meanwhile, the authority verification performed on model service calling requests greatly improves data security during model invocation.
Referring to fig. 4, fig. 4 is a flowchart illustrating a second embodiment of a model service output method according to the present invention.
While a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in an order different from the one presented here. The executing entity of the model service output method in this embodiment may be the (second) cooperative modeling party in fig. 3; for convenience of description, the following second embodiment is described with the cooperative modeling party as the executing entity.
The model service output method provided by this embodiment is applied to a (second) cooperative modeling party not serving as a product platform in a secure federated network, and includes the following steps:
step S10': the cooperative modeling party obtains the modeling requirement parameters carried in a federal modeling request and reads locally pre-stored model building data according to these parameters, wherein the cooperative modeling party includes but is not limited to a product platform;
in this embodiment, the federal modeling request may be initiated by a cooperative modeling party that itself serves as a product platform and participates in the current federal modeling, or by a product platform that is independent of the secure federated network and does not participate in the modeling. The federal modeling request carries the modeling requirement parameters needed for modeling, such as the identity information or application identification information (partyID) of all cooperative modeling parties participating in the modeling, sample information (for example, a sample matching ID for determining model training samples and sample performance data for judging model prediction results), and modeling configuration information (for example, related configurations such as the algorithm type, parameters, and modeling steps).
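For illustration only, a federal modeling request carrying the three groups of modeling requirement parameters described above might be shaped as below. Every key and value is an assumed example, not a format required by the method.

```python
# Assumed shape of a federal modeling request; keys mirror the three parameter
# groups named in the text (application identification info, sample info,
# modeling configuration info).
federated_modeling_request = {
    "party_ids": ["party_A", "party_B", "party_C"],        # application identification info
    "sample_info": {
        "sample_match_id": "match_2020_08",                # selects model training samples
        "sample_performance_data": {"auc_floor": 0.75},    # judges prediction quality
    },
    "modeling_config": {
        "algorithm": "hetero_logistic_regression",         # algorithm type (assumed)
        "params": {"learning_rate": 0.05, "epochs": 20},   # algorithm parameters
        "steps": ["sample_alignment", "train", "evaluate"],  # modeling steps
    },
}
```

Because the request carries only identifiers and configuration, never raw samples, it can be broadcast to all parties without moving any sample data across network boundaries.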
The identity information or application identification information is used by each cooperative modeling party to verify its own identity and, at the same time, to determine the other cooperative modeling parties participating in the modeling. The sample information is used to determine the sample data required for model training and for verifying model prediction results. To ensure the privacy and security of the sample data and avoid data leakage during modeling, the sample data is stored in each cooperative modeling party's own local database and can be read and used, when needed, through the sample information carried in the federal modeling request. For example, a cooperative modeling party may use the sample matching ID to search its local data for samples for model training, may judge the prediction results of the trained model using the sample performance data, and may decide from that judgment whether the model can be used directly or needs further optimization.
It can be understood that the modeling configuration information includes the basic data needed to build the model, as well as the specific procedural steps of modeling. In actual operation, a cooperative modeling party can perform model construction and model training according to model building data such as the model training samples and the modeling configuration information.
Step S20': the cooperation modeling party executes model building operation according to the model building data to obtain partial model results;
it should be understood that the accuracy of a data model depends on the amount, type, and quality of the data: for example, when the amount of training-sample data is small, the application range of the trained model is narrow and its prediction results are not accurate enough; when the different types of data in the training samples differ greatly, the trained model generalizes poorly and tends to over-fit.
Secure multi-party computation provides strong underlying technical support for real-world data-fusion scenarios: it is highly extensible and flexibly adaptable in practical applications, has broad prospects, and at the same time ensures that each cooperative modeling party's own data never leaves its domain, guaranteeing data privacy and security. Therefore, in this embodiment, when the cooperative modeling parties perform model construction and training, the model is built through joint modeling based on secure multi-party computation (for example, MPC technology or federated learning). For example, multi-party data can be combined using MPC technology, with MPC preprocessing ensuring uniform data formats and data privacy; a model trained on such multi-party data is more accurate and therefore provides more reasonable predictions for unknown situations.
Furthermore, federated learning enables multiple participants to carry out machine learning while protecting data privacy and meeting legal compliance requirements, thereby solving the data-silo problem. This embodiment takes federated learning as an example to roughly describe the joint modeling process.
In the federated modeling process based on federated learning, each cooperative modeling party needs to judge whether it belongs to the current modeling parties according to the application identification information (namely, the partyID) contained in the federal modeling request, and at the same time determine the other cooperative modeling parties participating in the current modeling. After the identity is determined, the cooperative modeling party can obtain model training samples for training the constructed model according to the sample information (namely, the sample matching ID) contained in the federal modeling request; this process may be called intersection matching or sample intersection. (Partial) model training is then performed with the obtained model training samples, and finally the quality of the model prediction results is judged against the sample performance data.
In this embodiment, after completing the construction of the partial model, each cooperative modeling party may also push the completed partial model to a local centralized model information storage area for storage.
Step S30': the cooperative modeling party pushes modeling completion information corresponding to the partial model results to the product platform and executes model reasoning operation when receiving a model issuing request sent by the product platform;
it should be noted that the model inference operation is to convert the trained model into a computer service similar to an application program interface API, so that the user can access the calling model service.
In a specific implementation, after each cooperative modeling party completes the construction of its partial model, it can push the modeling completion information corresponding to the partial model result to the product platform; when receiving the model release request sent by the product platform, it converts the trained model into a computer service similar to an application program interface (API).
Step S40': the cooperative modeling party sends the model inference completion information generated by the model inference operation to the product platform, so that the product platform can issue the model service according to the model inference completion information.
After a cooperative modeling party generates the model inference completion information corresponding to its partial model, it can push this information to the product platform; the product platform then publishes it and notifies different users, who can access and invoke the built model.
Step S50': when receiving a model service calling request, the cooperative modeling party obtains the model calling path and model calculation parameters contained in the model service calling request;
it should be noted that the model calculation parameters are the model input parameters provided by the user who calls the model service; each cooperative modeling party involved in the model calculation can input these parameters into its model for calculation to obtain its part of the model calculation result.
Further, to ensure data security when the model service is called, in this embodiment the product platform verifies the request authority when it receives a model service calling request sent by a user. Specifically, when a model service calling request is received, the identity verification parameters contained in it are obtained; authority verification is then performed on the request according to these parameters; and when the authority verification passes, the model service calling request is forwarded to each cooperative modeling party.
The identity verification parameters include, but are not limited to, an account id, an account key, a service id, and the like corresponding to the account initiating the request. The account id and the account key are used for identity authentication, and the service id is used for determining the specific model service required to be called by the request initiator.
Furthermore, in order to quickly and accurately determine the model service which needs to be called by the user at present according to the service id, a mapping table containing the corresponding relation between the service id and the model service API can be maintained on the product platform side, so that the efficiency of calling the whole model service is improved.
Step S60': the cooperative modeling party determines the model to be called according to the model calling path, and inputs the model calculation parameters into that model for model calculation to obtain a model calculation result;
it should be appreciated that a cooperative modeling party may hold a very large number of models; to accurately determine which model a given user wants to invoke, the determination can be made from the model calling path included in the model service calling request.
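Resolving the model calling path to a locally stored partial model and feeding it the model calculation parameters, as in steps S50' and S60', can be sketched as follows. The store layout, path format, and model body are assumptions made for illustration.

```python
# Hypothetical local store mapping model calling paths to partial models.
MODEL_STORE = {
    "models/credit_risk/v1": lambda p: 0.3 * p["income"] + 0.2 * p["age"],
}

def handle_call(call_request):
    """Resolve the calling path to a local model and run the partial calculation."""
    model = MODEL_STORE[call_request["model_path"]]   # determine the model to be called
    return model(call_request["calc_params"])         # partial model calculation

score = handle_call({
    "model_path": "models/credit_risk/v1",
    "calc_params": {"income": 1.0, "age": 2.0},
})
```

Keying the store by path lets one party host many models while each request unambiguously names the one it needs.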
Step S70': and the cooperative modeling party sends the model calculation result to a sender of the model service calling request so that the sender can summarize all the received model calculation results to obtain a summarized result.
It should be understood that after each cooperative modeling party calculates a corresponding model calculation result according to the model service invocation request and feeds the model calculation result back to a sender of the model service invocation request (which may be the product platform), the sender of the model service invocation request may summarize the model calculation results and return the summarized results to the user, so as to implement a response to the model service invocation request.
In this embodiment, the cooperative modeling party obtains the modeling requirement parameters carried in the federal modeling request and reads locally pre-stored model building data according to these parameters; performs the model construction operation according to the model building data to obtain a partial model result; pushes the modeling completion information corresponding to the partial model result to the product platform, and executes the model inference operation when receiving the model release request sent by the product platform; sends the model inference completion information generated by the model inference operation to the product platform for release; when receiving a model service calling request forwarded by the product platform, obtains the model calling path and model calculation parameters contained in the request; determines the model to be called according to the model calling path, and inputs the model calculation parameters into that model for model calculation to obtain a model calculation result; and sends the model calculation result to the product platform, so that the product platform can summarize all the received model calculation results into a summarized result. Compared with the prior art, in which the model is encrypted and pushed directly to the user side or provided to the user side in a trusted environment, in the model service output method of this embodiment the model data involved in each cooperative modeling party's model construction and model calculation never crosses the boundary of its local secure network, so the model data is difficult to propagate or leak, which greatly protects the data privacy and security of the model service provider; at the same time, the user side can efficiently improve its own service capability at low cost, without having to seek data cooperation or undertake heavy technical development.
Further, the step S10' includes:
step S101': a cooperative modeling party acquires application identification information, sample information and modeling configuration information contained in a federal modeling request;
it should be understood that matching the application identification information means judging whether the locally stored application identifier is one of the application identifiers recorded in the application identification information; if so, the matching succeeds and the subsequent model construction and training operations can begin.
Step S102': the cooperative modeling party matches the application identification information, and when the matching succeeds, obtains a model training sample from its local database according to the sample information;
in a specific implementation, when the local application identifier is matched successfully, the cooperative modeling party can obtain the model training sample from its local database according to the sample information carried in the request. Because the sample information carried in the federal modeling request is the same for all parties, having the different cooperative modeling parties obtain their training samples through this sample information ensures both the accuracy of sample acquisition and the efficiency of model construction. The sample information in this step may contain only a sample matching identifier, which can be a sample name, serial number ID, or anything else that characterizes the uniqueness of the sample.
Further, in order to guarantee the consistency and reliability of model training, in this embodiment the cooperative modeling party can also read the application identifiers of all participants of the federal modeling from the application identification information; match the local application identifier against them and, when the matching succeeds, obtain a model training sample from the local database according to the sample matching identifier contained in the sample information; determine the other cooperative modeling parties of the federal modeling from the remaining application identifiers; and then perform intersection matching between its model training sample and the model training samples obtained by the other cooperative modeling parties according to the same sample matching identifier, to obtain the target model training sample.
Intersection matching, which may also be called encrypted sample alignment, is an encryption-based user-sample alignment technique (e.g., RSA-based): party A and party B confirm their common data without either party disclosing its own data, and the non-overlapping data is never exposed, so that modeling can be performed by combining the features of the common data.
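As a greatly simplified stand-in for encrypted sample alignment (not the RSA-based protocol itself), the following sketch intersects hashed sample-matching IDs so that raw IDs are never exchanged directly. All IDs are invented; a real deployment would use a blind-signature or other private set intersection protocol so that even the hashes of non-overlapping IDs stay hidden.

```python
import hashlib

def blind(ids):
    """Map each sample ID to its SHA-256 hash (simplified 'blinding')."""
    return {hashlib.sha256(i.encode()).hexdigest(): i for i in ids}

def align(local_ids, peer_hashes):
    """Keep only the local samples whose hashed ID the peer also holds."""
    mine = blind(local_ids)
    return sorted(mine[h] for h in mine.keys() & peer_hashes)

party_a_ids = ["u1", "u2", "u3"]
party_b_hashes = set(blind(["u2", "u3", "u4"]).keys())   # only hashes cross the network
common = align(party_a_ids, party_b_hashes)
```

After alignment, both parties train on the same intersection of samples, which is exactly the consistency property the target model training sample is meant to provide.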
The step S20' includes:
step S201': and the cooperative modeling party executes model construction operation according to the model training sample and the modeling configuration information to obtain partial model results.
In a specific implementation, when constructing the model, each cooperative modeling party needs to build it according to the modeling steps and the algorithm type and parameters carried in the modeling configuration information, and then perform model training with the model training samples. Of course, to guarantee the model effect, the cooperative modeling party may also judge the prediction results of the trained model using the sample performance data carried in the sample information, and output the trained partial model (result) if the model effect is good.
Further, step S30' includes:
step S301': the cooperative modeling party pushes modeling completion information corresponding to part of the model results to a product platform;
in specific implementation, after each cooperative modeling party completes construction of each partial model, corresponding modeling completion information can be pushed to a product platform to inform that modeling is finished, and model release can be started.
Step S302': when receiving a model release request sent by the product platform, the cooperative modeling party obtains the model identification information carried in the model release request;
it should be noted that the model identification information may be a character or a character combination for characterizing model uniqueness, such as a number, a code, a name, and the like of the model.
Step S303': the cooperative modeling party reads the trained partial model from the model base according to the model identification information;
the partner modeler may read the trained partial model in the model library (i.e., the model information storage area described above) based on the model identification information. Of course, in order to enable the cooperative modeling party to read the model according to the model identification information, when the model is stored, the model identification information and the storage path of the model may be associated first.
Step S304': the cooperative modeling party performs model inference on the trained partial model to obtain the model inference completion information corresponding to the trained partial model.
After each cooperative modeling party completes the construction of its partial model, it can push the modeling completion information corresponding to the partial model result to the product platform; when a model release request sent by the product platform is received, the trained model is converted into a computer service similar to an application program interface (API), and the corresponding model inference completion information is generated.
In this embodiment, the cooperative modeling parties participating in the modeling can be accurately identified through the application identification information carried in the federal modeling request; the data contained in the modeling configuration information ensures that the joint modeling proceeds in an orderly manner; and determining the model training samples through the sample information and aligning those samples ensures the consistency and reliability of model training.
Furthermore, an embodiment of the present invention further provides a storage medium, where a model service output program is stored, and the model service output program, when executed by a processor, implements the steps of the model service output method as described above.
Referring to fig. 5, fig. 5 is a block diagram illustrating a first exemplary embodiment of a model service output apparatus according to the present invention.
As shown in fig. 5, the model service output apparatus according to the embodiment of the present invention includes:
a sending module 501, configured to send a federal modeling request to multiple second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federal modeling request;
a receiving module 502, configured to, when modeling completion information returned by all second cooperative modeling parties is received, issue a model service according to the modeling completion information;
a forwarding module 503, configured to forward the received model service invocation requests to the second cooperative modeling parties respectively, so that the second cooperative modeling parties perform model calculation according to a pre-constructed model, and return a model calculation result;
and the summarizing module 504 is configured to summarize the received model calculation results, and return the summarizing results to the initiator of the model service invocation request.
Further, in this embodiment, the sending module 501 is further configured to send a federal modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party reads model building data from a local database according to the federal modeling request and performs joint modeling according to the model building data.
Further, in this embodiment, the receiving module 502 is further configured to receive modeling completion information returned by all the second partner modelers; sending a model issuing request to each second cooperative modeling party so that each second cooperative modeling party performs model reasoning on a pre-constructed model according to the model issuing request and returns model reasoning finishing information; and issuing the model service according to the model reasoning finishing information.
Further, in this embodiment, the model service output device further includes a verification module, configured to receive a model service invocation request, and obtain an identity verification parameter included in the model service invocation request; and performing authority verification on the model service calling request according to the identity verification parameters.
Other embodiments or specific implementations of the model service output device of the present invention may refer to the first method embodiment, and are not described herein again.
Referring to fig. 6, fig. 6 is a block diagram illustrating a second embodiment of a model service output apparatus according to the present invention.
As shown in fig. 6, the model service output apparatus according to the embodiment of the present invention includes:
the reading module 601 is used for acquiring modeling demand parameters carried in a federal modeling request and reading locally pre-stored model construction data according to the modeling demand parameters;
a construction module 602, configured to perform a model construction operation according to the model construction data to obtain a partial model result;
the inference module 603 is configured to push modeling completion information corresponding to the partial model result to a product platform, and execute a model inference operation when receiving a model release request sent by the product platform;
the sending module 604 is configured to send model inference completion information generated by the model inference operation to the product platform, so that the product platform issues the model service according to the model inference completion information.
Further, in this embodiment, the reading module 601 is further configured to obtain application identification information, sample information, and modeling configuration information included in the federal modeling request; matching the application identification information, and acquiring a model training sample in a local database according to the sample information when the matching is successful; correspondingly, the building module 602 is further configured to perform a model building operation according to the model training sample and the modeling configuration information, and obtain a partial model result.
Further, in this embodiment, the reading module 601 is further configured to read application identifiers of all participants of the federal modeling from the application identifier information; matching the local application identifier according to the application identifier, and acquiring a model training sample in a local database according to the sample matching identifier contained in the sample information when the matching is successful; correspondingly, the building module 602 is further configured to determine other cooperative modeling parties for federated modeling according to the remaining application identifiers; performing intersection matching on the model training sample and the model training samples obtained by the other cooperative modelers according to the sample matching identifier to obtain a target model training sample; and executing model construction operation according to the target model training sample and the modeling configuration information to obtain partial model results.
Further, in this embodiment, the inference module 603 is further configured to push modeling completion information corresponding to the partial model result to a product platform; when a model publishing request sent by the product platform is received, obtain model identification information carried in the model publishing request; read the trained partial model from a model library according to the model identification information; and perform model inference on the trained partial model to obtain model inference completion information corresponding to the trained partial model.
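A hedged sketch of this publish flow: on a publishing request carrying a model identifier, look up the trained partial model in a local model library, run a model inference pass, and produce inference-completion information. The `MODEL_LIBRARY` store and the trivial linear model are illustrative stand-ins, not the patent's actual model format.

```python
# stand-in model library: trained partial models keyed by model identifier
MODEL_LIBRARY = {
    "credit_model_v1": {"weights": [0.5, -0.2], "bias": 0.1},
}

def handle_publish_request(publish_request: dict) -> dict:
    model_id = publish_request["model_id"]   # carried in the publishing request
    model = MODEL_LIBRARY.get(model_id)      # read the trained partial model
    if model is None:
        return {"model_id": model_id, "status": "unknown_model"}
    # a trivial inference pass over a probe input to confirm the model runs
    probe = [1.0, 1.0]
    score = sum(w * x for w, x in zip(model["weights"], probe)) + model["bias"]
    return {"model_id": model_id, "status": "inference_complete",
            "probe_score": round(score, 6)}

info = handle_publish_request({"model_id": "credit_model_v1"})
print(info)  # {'model_id': 'credit_model_v1', 'status': 'inference_complete', 'probe_score': 0.4}
```

The returned dictionary plays the role of the model inference completion information sent back to the product platform.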
Further, in this embodiment, the model service output device further includes a calculation module configured to, when a model service call request is received, obtain a model call path and model calculation parameters contained in the model service call request; determine a model to be called according to the model call path, and input the model calculation parameters into the model to be called for model calculation to obtain a model calculation result; and send the model calculation result to the sender of the model service call request, so that the sender aggregates all the received model calculation results to obtain an aggregated result.
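The calculation module and the caller-side aggregation can be sketched as follows: each party resolves a model by its call path, computes a partial result from the supplied parameters, and the request sender combines the partial results. The call paths, parameter names, and additive aggregation are assumptions for illustration; the patent does not prescribe a particular aggregation rule.

```python
# stand-in registry mapping model call paths to callable partial models
MODELS = {
    "/models/party_a/credit": lambda params: 0.3 * params["income"],
    "/models/party_b/credit": lambda params: 0.7 * params["repayment_rate"],
}

def compute(call_path: str, params: dict) -> float:
    """One party's calculation module: resolve the model by path and run it."""
    model = MODELS[call_path]
    return model(params)

def aggregate(partial_results: list) -> float:
    """Caller side: combine the partial results from all parties."""
    return sum(partial_results)

r1 = compute("/models/party_a/credit", {"income": 2.0})
r2 = compute("/models/party_b/credit", {"repayment_rate": 1.0})
print(round(aggregate([r1, r2]), 6))  # 1.3
```

Each party sees only its own call path and parameters; the sender of the model service call request is the only side that holds the aggregated result.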
Other embodiments or specific implementations of the model service output device of the present invention may refer to the second method embodiment, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element preceded by the phrase "comprising a(n)" does not exclude the presence of other identical elements in the process, method, article, or system that comprises that element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware alone; in most cases, however, the former is the better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (13)

1. A model service output method, characterized in that the model service output method comprises:
the method comprises the steps that a first cooperative modeling party sends a federated modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federated modeling request, wherein the first cooperative modeling party and the second cooperative modeling parties comprise product platforms;
when receiving modeling completion information returned by all the second cooperative modeling parties, the first cooperative modeling party publishes the model service according to the modeling completion information;
the first cooperative modeling party forwards each received model service call request to the second cooperative modeling parties respectively, so that each second cooperative modeling party performs model calculation according to a pre-constructed model and returns a model calculation result;
and the first cooperative modeling party aggregates the received model calculation results and returns the aggregated result to the initiator of the model service call request.
2. The model service output method of claim 1, wherein the step of the first cooperative modeling party sending a federated modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federated modeling request, comprises:
the first cooperative modeling party sends the federated modeling request to the plurality of second cooperative modeling parties, so that each second cooperative modeling party reads model construction data from a local database according to the federated modeling request and performs joint modeling according to the model construction data.
3. The model service output method of claim 1, wherein the step of the first cooperative modeling party publishing the model service according to the modeling completion information when receiving the modeling completion information returned by all the second cooperative modeling parties comprises:
the first cooperative modeling party receives the modeling completion information returned by all the second cooperative modeling parties;
the first cooperative modeling party sends a model publishing request to each second cooperative modeling party, so that each second cooperative modeling party performs model inference on a pre-constructed model according to the model publishing request and returns model inference completion information;
and the first cooperative modeling party publishes the model service according to the model inference completion information.
4. The model service output method of any one of claims 1 to 3, wherein before the step of forwarding each received model service call request to the second cooperative modeling parties respectively, so that each second cooperative modeling party performs model calculation according to the pre-constructed model and returns a model calculation result, the method further comprises:
the first cooperative modeling party receives a model service call request and obtains an identity verification parameter contained in the model service call request;
the first cooperative modeling party performs authority verification on the model service call request according to the identity verification parameter;
and when the authority verification passes, the first cooperative modeling party executes the step of forwarding each received model service call request to the second cooperative modeling parties respectively.
5. A model service output method, characterized in that the model service output method comprises:
a cooperative modeling party obtains modeling demand parameters carried in a federated modeling request and reads locally pre-stored model construction data according to the modeling demand parameters, wherein the cooperative modeling party comprises a product platform;
the cooperative modeling party performs a model building operation according to the model construction data to obtain a partial model result;
the cooperative modeling party pushes modeling completion information corresponding to the partial model result to the product platform, and executes a model inference operation when receiving a model publishing request sent by the product platform;
and the cooperative modeling party sends model inference completion information generated by the model inference operation to the product platform, so that the product platform publishes the model service according to the model inference completion information.
6. The model service output method of claim 5, wherein the step of the cooperative modeling party obtaining modeling demand parameters carried in a federated modeling request and reading locally pre-stored model construction data according to the modeling demand parameters comprises:
the cooperative modeling party obtains application identification information, sample information, and modeling configuration information contained in the federated modeling request;
the cooperative modeling party matches the application identification information and, when the matching succeeds, obtains a model training sample from a local database according to the sample information;
and the step of the cooperative modeling party performing a model building operation according to the model construction data to obtain a partial model result comprises:
the cooperative modeling party performs a model building operation according to the model training sample and the modeling configuration information to obtain a partial model result.
7. The model service output method of claim 6, wherein the step of the cooperative modeling party matching the application identification information and, when the matching succeeds, obtaining a model training sample from a local database according to the sample information comprises:
the cooperative modeling party reads the application identifiers of all participants of the federated modeling from the application identification information;
the cooperative modeling party matches the local application identifier against those application identifiers and, when the matching succeeds, obtains a model training sample from a local database according to the sample matching identifier contained in the sample information;
and the step of the cooperative modeling party performing a model building operation according to the model training sample and the modeling configuration information to obtain a partial model result comprises:
the cooperative modeling party determines the other cooperative modeling parties of the federated modeling according to the remaining application identifiers;
the cooperative modeling party performs intersection matching between the model training sample and the model training samples obtained by the other cooperative modeling parties according to the sample matching identifier to obtain a target model training sample;
and the cooperative modeling party performs a model building operation according to the target model training sample and the modeling configuration information to obtain a partial model result.
8. The model service output method of any one of claims 5 to 7, wherein the step of the cooperative modeling party pushing modeling completion information corresponding to the partial model result to the product platform and executing a model inference operation when receiving a model publishing request sent by the product platform comprises:
the cooperative modeling party pushes the modeling completion information corresponding to the partial model result to the product platform;
when receiving a model publishing request sent by the product platform, the cooperative modeling party obtains model identification information carried in the model publishing request;
the cooperative modeling party reads the trained partial model from a model library according to the model identification information;
and the cooperative modeling party performs model inference on the trained partial model to obtain model inference completion information corresponding to the trained partial model.
9. The model service output method of any one of claims 5 to 7, wherein after the step of sending model inference completion information generated by the model inference operation to the product platform, so that the product platform publishes the model service according to the model inference completion information, the method further comprises:
when receiving a model service call request, the cooperative modeling party obtains a model call path and model calculation parameters contained in the model service call request;
the cooperative modeling party determines a model to be called according to the model call path, and inputs the model calculation parameters into the model to be called for model calculation to obtain a model calculation result;
and the cooperative modeling party sends the model calculation result to the sender of the model service call request, so that the sender aggregates all the received model calculation results to obtain an aggregated result.
10. A model service output apparatus, characterized in that the model service output apparatus comprises:
the sending module is used for sending a federated modeling request to a plurality of second cooperative modeling parties, so that each second cooperative modeling party performs joint modeling according to the federated modeling request;
the receiving module is used for publishing the model service according to modeling completion information when the modeling completion information returned by all the second cooperative modeling parties is received;
the forwarding module is used for forwarding each received model service call request to the second cooperative modeling parties respectively, so that each second cooperative modeling party performs model calculation according to a pre-constructed model and returns a model calculation result;
and the aggregation module is used for aggregating the received model calculation results and returning the aggregated result to the initiator of the model service call request.
11. A model service output apparatus, characterized in that the model service output apparatus comprises:
the reading module is used for obtaining modeling demand parameters carried in a federated modeling request and reading locally pre-stored model construction data according to the modeling demand parameters;
the building module is used for performing a model building operation according to the model construction data to obtain a partial model result;
the inference module is used for pushing modeling completion information corresponding to the partial model result to a product platform, and executing a model inference operation when receiving a model publishing request sent by the product platform;
and the sending module is used for sending model inference completion information generated by the model inference operation to the product platform, so that the product platform publishes the model service according to the model inference completion information.
12. A model service output device, characterized in that the device comprises: a memory, a processor, and a model service output program stored on the memory and executable on the processor, the model service output program configured to implement the steps of the model service output method of any of claims 1 to 4, or 5 to 9.
13. A storage medium having stored thereon a model service output program which, when executed by a processor, implements the steps of the model service output method of any one of claims 1 to 4, or 5 to 9.
CN202010855640.6A 2020-08-21 2020-08-21 Model service output method, device, equipment and storage medium Pending CN111985000A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010855640.6A CN111985000A (en) 2020-08-21 2020-08-21 Model service output method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111985000A true CN111985000A (en) 2020-11-24

Family

ID=73443633

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010855640.6A Pending CN111985000A (en) 2020-08-21 2020-08-21 Model service output method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111985000A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112434334A (en) * 2020-11-25 2021-03-02 深圳前海微众银行股份有限公司 Data processing method, device, equipment and storage medium
WO2022222152A1 (en) * 2021-04-23 2022-10-27 Oppo广东移动通信有限公司 Federated learning method, federated learning system, first device, and third device
WO2023125879A1 (en) * 2021-12-30 2023-07-06 维沃移动通信有限公司 Data processing method and apparatus, and communication device
CN116611113A (en) * 2023-06-25 2023-08-18 福建润楼数字科技有限公司 Credit scoring card model construction method based on privacy protection calculation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109167695A (en) * 2018-10-26 2019-01-08 深圳前海微众银行股份有限公司 Alliance Network construction method, equipment and readable storage medium storing program for executing based on federation's study
CN109189825A (en) * 2018-08-10 2019-01-11 深圳前海微众银行股份有限公司 Lateral data cutting federation learning model building method, server and medium
CN111444523A (en) * 2020-03-26 2020-07-24 国网信通亿力科技有限责任公司 Artificial intelligence modeling and service management platform


Similar Documents

Publication Publication Date Title
CN110633805B (en) Longitudinal federal learning system optimization method, device, equipment and readable storage medium
CN109167695B (en) Federal learning-based alliance network construction method and device and readable storage medium
CN108650270B (en) Data sharing method and system based on alliance chain and incentive mechanism
CN111985000A (en) Model service output method, device, equipment and storage medium
CN110555029B (en) Ticket management method, device and storage medium based on block chain
CN110096857B (en) Authority management method, device, equipment and medium for block chain system
CN101356773B (en) Ad-hoc creation of group based on contextual information
CN109635585A (en) Method, agent node and the medium of Transaction Information are inquired in block chain network
CN109146679A (en) Intelligent contract call method and device, electronic equipment based on block chain
CN113409045B (en) Data processing method and device based on block chain and electronic equipment
CN113837761A (en) Block chain and trusted execution environment based federated learning method and system
CN111080270A (en) Collaborative system application design method, system, device, server and storage medium
CN110535648A (en) Electronic certificate is generated and verified and key controlling method, device, system and medium
CN111047321A (en) Service processing method and device, electronic equipment and storage medium
CN110365711B (en) Multi-platform user identity association method and device, computer equipment and computer readable storage medium
CN109743321A (en) Block chain, application program, the user authen method of application program and system
CN111294356A (en) Block chain based method and system for organizing node uplink
CN111178840A (en) Service processing method, device, system, electronic equipment and storage medium
CN113569263A (en) Secure processing method and device for cross-private-domain data and electronic equipment
CN111163467A (en) Method for 5G user terminal to access 5G network, user terminal equipment and medium
CN114266680A (en) Block chain-based electronic contract signing method, device and system
CN110880157B (en) Map data processing method and device, electronic equipment and storage medium
CN116932617A (en) Cross-platform data processing method, device, equipment and medium
CN116521509A (en) Intelligent contract testing method, device, equipment, storage medium and product
CN111832055A (en) Authority verification system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination