CN109635422A - Joint modeling method, device, equipment and computer readable storage medium - Google Patents

Joint modeling method, device, equipment and computer readable storage medium

Info

Publication number
CN109635422A
CN109635422A (application CN201811501956.4A)
Authority
CN
China
Prior art keywords
gradient value
loss gradient
back end
model
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811501956.4A
Other languages
Chinese (zh)
Other versions
CN109635422B (en)
Inventor
刘洋
范涛
陈天健
杨强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN201811501956.4A priority Critical patent/CN109635422B/en
Publication of CN109635422A publication Critical patent/CN109635422A/en
Priority to PCT/CN2019/116081 priority patent/WO2020114184A1/en
Application granted granted Critical
Publication of CN109635422B publication Critical patent/CN109635422B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Geometry (AREA)
  • Game Theory and Decision Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Hardware Design (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a joint modeling method, device, equipment and storage medium, comprising: initializing model parameters and passing the initialized model parameters to each data node; obtaining a random loss gradient value at a neutral coordinator, dividing the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distributing one first loss gradient value to each data node; obtaining a second loss gradient value of each data node based on the model parameters and the first loss gradient value; transmitting each second loss gradient value to the neutral coordinator, and determining the true loss gradient value of the data nodes according to the second loss gradient values and the random loss gradient value; updating the model parameters based on the true loss gradient value to build the model, and judging whether the model converges; if the model converges, the model has been built. The invention enables multiple companies to share information without leaking their private data, while still supporting prediction over different types of data.

Description

Joint modeling method, device, equipment and computer readable storage medium
Technical field
The present invention relates to the field of computer technology, and more particularly to a joint modeling method, device, equipment and computer-readable storage medium.
Background technique
Internet finance has developed rapidly in recent years, and financial technology companies of all kinds have flourished. With the growth of the industry, the data resources held by each company have also grown; however, no single company currently possesses all of the data required for risk control, so the need for information sharing follows. When multiple companies share information directly, their own private information is easily leaked to some extent, so although there is a demand for joint modeling among multiple companies, in practice models are usually still built individually rather than jointly with other companies. Moreover, although joint modeling techniques are mentioned in the literature, they exist largely in theory, are rarely applied in actual production, and are usually aimed at solving a single specific problem. Therefore, how to let multiple companies share information while guaranteeing that each company's private data is not leaked, and at the same time support prediction for different types of data, has become an urgent technical problem to be solved.
Summary of the invention
The main purpose of the present invention is to provide a joint modeling method, device, equipment and computer storage medium, aiming to let multiple companies share information while guaranteeing that each company's private data is not leaked, and at the same time support prediction for different types of data.
To achieve the above object, the present invention provides a joint modeling method, device, equipment and computer-readable storage medium. The joint modeling method includes:
obtaining the model parameters of the model and the total number of data nodes, and initializing the model parameters based on the data nodes to determine primary model parameters;
obtaining a random loss gradient value at a neutral coordinator, dividing the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distributing one first loss gradient value to each data node;
obtaining the total loss gradient value of the data nodes based on the first loss gradient value of each data node;
updating the primary model parameters based on the total loss gradient value and the random loss gradient value, and judging whether the model converges;
if the model converges, the model has been built.
Optionally, the step of obtaining the second loss gradient value of each data node based on the model parameters and the first loss gradient value includes:
calculating the iteration loss gradient value of the data node based on the model parameters;
obtaining the sum of the iteration loss gradient value and the first loss gradient value, and taking the sum as the second loss gradient value of the data node.
Optionally, the step of determining the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value includes:
obtaining the sum of the second loss gradient values at the neutral coordinator, and taking the sum as the total loss gradient value;
determining the true loss gradient value of the data nodes based on the total loss gradient value and the random loss gradient value.
Optionally, the step of determining the true loss gradient value of the data nodes based on the total loss gradient value and the random loss gradient value includes:
obtaining the difference between the total loss gradient value and the random loss gradient value, and taking the difference as the true loss gradient value of the data nodes.
Optionally, after the step of judging whether the model converges, the method includes:
if the model does not converge, continuing to obtain a new true loss gradient value of the data nodes, and updating the updated model parameters of the model until the model converges.
Optionally, the step of, if the model does not converge, continuing to obtain a new true loss gradient value of the data nodes includes:
if the model does not converge, obtaining the updated model parameters of the model, and transmitting the updated model parameters to each data node, so as to obtain a new true loss gradient value of the data nodes.
Optionally, after the step of determining, if the model converges, that the model has been built, the method includes:
obtaining the sample features to be predicted in each data node, and inputting the sample features to be predicted into the built model for online prediction, so as to obtain a prediction result.
In addition, to achieve the above object, the present invention also provides a joint modeling device, the joint modeling device including:
a transfer module, configured to initialize model parameters and pass the initialized model parameters to each data node;
a distribution module, configured to obtain a random loss gradient value at a neutral coordinator, divide the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distribute one first loss gradient value to each data node;
an obtaining module, configured to obtain the second loss gradient value of each data node based on the model parameters and the first loss gradient value;
a determining module, configured to transmit each second loss gradient value to the neutral coordinator, and determine the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value;
a judging module, configured to update the model parameters based on the true loss gradient value to build the model, and judge whether the model converges;
a convergence module, configured to determine that the model has been built if the model converges.
In addition, to achieve the above object, the present invention also provides a joint modeling equipment;
the joint modeling equipment includes: a memory, a sense channel, a processor and a computer program stored on the memory and executable on the processor, wherein:
the steps of the joint modeling method described above are realized when the computer program is executed by the processor.
In addition, to achieve the above object, the present invention also provides a computer storage medium;
a computer program is stored in the computer storage medium, and the steps of the joint modeling method described above are realized when the computer program is executed by a processor.
The joint modeling method, device, equipment and readable storage medium proposed in the embodiments of the present invention initialize the model parameters and pass the initialized model parameters to each data node; obtain a random loss gradient value at the neutral coordinator, divide the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distribute one first loss gradient value to each data node; obtain the second loss gradient value of each data node based on the model parameters and the first loss gradient value; transmit each second loss gradient value to the neutral coordinator, and determine the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value; update the model parameters based on the true loss gradient value to build the model, and judge whether the model converges; if the model converges, the model has been built. In this scheme the loss gradient value of each data node is obtained, which guarantees that every data node participates in the joint modeling; before the loss gradient values of the data nodes are collected, a first loss gradient value is transmitted to each data node, and only afterwards are all the loss gradient values of the data nodes gathered and computed at a third party, namely the neutral coordinator, to obtain the true loss gradient value of the data nodes. This guarantees the privacy of each data node's data, and because the model is built by updating the model parameters with the loss gradient values obtained from every data node, the model can also solve the different prediction problems of the individual data nodes. This solves the problem of enabling the companies corresponding to the data nodes to share information and predict answers for different types of data while guaranteeing that each company's private data is not leaked.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the terminal device in the hardware running environment involved in the embodiments of the present invention;
Fig. 2 is a schematic flow diagram of the first embodiment of the joint modeling method of the present invention;
Fig. 3 is a schematic flow diagram of the second embodiment of the joint modeling method of the present invention;
Fig. 4 is a schematic system structure diagram of an embodiment of the joint modeling device of the present invention;
Fig. 5 is a schematic diagram of a scenario in the joint modeling method of the present invention in which each remote working node transmits data to a local working node.
The realization of the objects, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the terminal in the hardware running environment involved in the embodiments of the present invention.
The terminal of the embodiments of the present invention is a joint modeling equipment.
As shown in Fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to realize the connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module and the like. The sensors include, for example, light sensors, motion sensors and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display screen according to the brightness of the ambient light, and the proximity sensor can turn off the display screen and/or the backlight when the terminal device is moved close to the ear. Of course, the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which will not be repeated here.
Those skilled in the art will understand that the terminal structure shown in Fig. 1 does not constitute a limitation of the terminal; it may include more or fewer components than illustrated, or combine certain components, or have a different component arrangement.
As shown in Fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module and a joint modeling program.
In the terminal shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server and perform data communication with the background server; the user interface 1003 is mainly used to connect to a client (user terminal) and perform data communication with the client; and the processor 1001 can be used to call the joint modeling program stored in the memory 1005 and perform the following operations:
initializing model parameters, and passing the initialized model parameters to each data node;
obtaining a random loss gradient value at a neutral coordinator, dividing the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distributing one first loss gradient value to each data node;
obtaining the second loss gradient value of each data node based on the model parameters and the first loss gradient value;
transmitting each second loss gradient value to the neutral coordinator, and determining the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value;
updating the model parameters based on the true loss gradient value to build the model, and judging whether the model converges;
if the model converges, the model has been built.
The present invention provides a joint modeling method. In the first embodiment of the joint modeling method, referring to Fig. 2, the joint modeling method includes the following steps:
Step S10, initializing model parameters, and passing the initialized model parameters to each data node;
In the system, the sample features X_owner of each party are first obtained, and the corresponding class labels Y_owner are assigned to the samples so that they can be distinguished, where X_owner is an N × dim matrix, N is the number of samples held by owner o, and dim denotes the dimension of the sample features; the feature dimension dim is the same for every party, and each feature dimension has the same meaning, for example [loan amount, loan duration, debt situation]. Since each set of sample features corresponds to one data node, the total number of data nodes is also obtained when the system obtains the number of sets of sample features. At the same time, a set of model parameters also needs to be obtained and initialized, and the initialized model parameters are then transmitted to each data node.
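As a minimal illustration of step S10, the following Python sketch (not taken from the patent; the names `init_params` and `DataNode` are illustrative, and the small random initialization is only an assumed choice) shows coordinator-side initialization and the broadcast of the initialized parameters to the data nodes:

```python
import numpy as np

def init_params(dim, seed=0):
    """Randomly initialize model parameters of feature dimension `dim`."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=0.01, size=dim)

class DataNode:
    """Holds one party's private samples X (n_i x dim) and labels y (n_i,)."""
    def __init__(self, X, y):
        self.X, self.y = X, y
        self.params = None  # current model parameters received from the coordinator

    def receive_params(self, params):
        # The initialized (or later updated) parameters are passed to the node.
        self.params = params.copy()
```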
Step S20, obtaining a random loss gradient value at the neutral coordinator, dividing the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distributing one first loss gradient value to each data node;
The random loss gradient value may be a loss function value and a gradient value generated at random in the system. A neutral coordinator is used in the system; the neutral coordinator generates the random values loss_init (a random loss value) and grad_init (a random gradient value), i.e. the random loss gradient value at the neutral coordinator is obtained. According to the total number of data nodes obtained earlier, this random loss gradient value needs to be divided into first loss gradient values whose number is the same as the total number of data nodes, i.e. each data node has one corresponding first loss gradient value, and the first loss gradient values are then distributed one by one to the data nodes, so that every data node is guaranteed to receive the first loss gradient value passed over by the neutral coordinator. For example, after the neutral coordinator generates the random values loss_init and grad_init, loss_init and grad_init are divided into n parts, where n is the total number of nodes after the data is split; since every part of the data of each data-owning party corresponds to one working node, the neutral coordinator transmits <loss_init>i and <grad_init>i to the i-th node, guaranteeing that every working node has its corresponding share of loss_init and grad_init.
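A sketch of this masking step in Python; the patent only says the random value is divided into n parts, so the assumption here is that the n shares are random values that sum exactly to loss_init and grad_init (the function name `make_masks` is illustrative):

```python
import numpy as np

def make_masks(dim, n_nodes, seed=None):
    """Coordinator side: draw a random loss value and gradient vector
    (loss_init, grad_init) and split each into n additive shares, one per node."""
    rng = np.random.default_rng(seed)
    loss_init = rng.normal()
    grad_init = rng.normal(size=dim)
    # Random shares whose last entry is adjusted so the shares sum
    # exactly to loss_init / grad_init.
    loss_shares = rng.normal(size=n_nodes)
    loss_shares[-1] = loss_init - loss_shares[:-1].sum()
    grad_shares = rng.normal(size=(n_nodes, dim))
    grad_shares[-1] = grad_init - grad_shares[:-1].sum(axis=0)
    return loss_init, grad_init, list(zip(loss_shares, grad_shares))
```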
Step S30, obtaining the second loss gradient value of each data node based on the model parameters and the first loss gradient value;
The second loss gradient value may include a second loss function value and a second gradient value. The second loss function value is obtained by computing, on the data samples of a certain data node, the iteration loss function value of the current iteration and adding this iteration loss function value to the first loss function value received by this data node. Similarly, the second gradient value is obtained by computing, on the data samples of that data node, the iteration gradient value of the current iteration and adding this iteration gradient value to the first gradient value received by this data node. It should be noted that the second loss function value and the second gradient value are obtained at the same time, and that they are obtained in the same way on all data nodes in the system, i.e. the way of obtaining the second loss gradient value corresponding to each data node is identical. The iteration loss function value and the iteration gradient value are obtained by computation with the model parameters on the data node.
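A node-side sketch of step S30; the patent does not fix the model form, so a logistic-regression loss is assumed here for illustration, and `node_second_loss_grad` is an illustrative name (it reuses the `DataNode` sketch above):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def node_second_loss_grad(node, loss_share, grad_share):
    """Data-node side: compute the local (iteration) loss and gradient on the
    node's own samples, then add the received first loss/gradient share so
    that only masked values ever leave the node."""
    X, y, w = node.X, node.y, node.params
    p = sigmoid(X @ w)
    n = len(y)
    local_loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    local_grad = X.T @ (p - y) / n
    return local_loss + loss_share, local_grad + grad_share
```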
Step S40, transmitting each second loss gradient value to the neutral coordinator, and determining the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value;
After the second loss gradient value of each data node has been computed, the second loss gradient values on all data nodes also need to be transmitted to the neutral coordinator, i.e. the masked loss values and gradient values obtained on all nodes are transmitted to the neutral coordinator, and the sum of the second loss gradient values, namely the total loss gradient value, is computed at the neutral coordinator; the random loss gradient value is then subtracted from the total loss gradient value at the neutral coordinator, so as to obtain the true node loss gradient value. It should be noted that when the random loss gradient value is subtracted from the total loss gradient value, only loss function values are subtracted from loss function values and gradient values from gradient values. The total loss gradient value may include a total loss value and a total gradient value. The true loss gradient value can be regarded as the aggregate of the loss values and the aggregate of the gradient values produced by the data nodes themselves.
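A coordinator-side sketch of the aggregation and unmasking in step S40, assuming the masking scheme sketched above (`coordinator_unmask` is an illustrative name):

```python
import numpy as np

def coordinator_unmask(masked_pairs, loss_init, grad_init):
    """Coordinator side: sum the masked (second) loss/gradient values sent by
    all nodes, then subtract the random loss/gradient to recover the true
    aggregates -- loss is subtracted from loss and gradient from gradient."""
    total_loss = sum(loss for loss, _ in masked_pairs)               # total loss value
    total_grad = np.sum([grad for _, grad in masked_pairs], axis=0)  # total gradient value
    return total_loss - loss_init, total_grad - grad_init            # true loss, true gradient
```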
Step S50, updating the model parameters based on the true loss gradient value to build the model, and judging whether the model converges;
After the true loss gradient value of all data nodes has been obtained at the neutral coordinator, the model parameters also need to be updated accordingly with the true loss gradient value. It should be noted that updating the model parameters with the true loss gradient value does not mean directly replacing the model parameters with the true loss gradient value; rather, new model parameters are obtained through a certain calculation, and these new model parameters are transmitted to each data node. The model is built by continuously updating the model parameters; the updating of the model parameters stops only when the model converges, at which point the model can be considered built. In other words, it is also necessary to judge whether the model converges during the process of building the model.
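One possible form of the parameter update and convergence test in step S50; the patent does not fix the update rule, so plain gradient descent with a loss-change tolerance is an assumption, and the learning rate and tolerance are illustrative defaults:

```python
def update_and_check(params, true_grad, true_loss, prev_loss, lr=0.1, tol=1e-6):
    """Coordinator side: gradient-descent update of the model parameters plus a
    simple convergence test on the change of the true loss between rounds."""
    new_params = params - lr * true_grad
    converged = prev_loss is not None and abs(prev_loss - true_loss) < tol
    return new_params, converged
```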
Step S60, if the model converges, the model has been built.
When it is found by judgment that the model has converged, the model has been built, so the collection of the loss gradient values of the data nodes can be stopped, and the built model parameters need to be passed to each data node; each data node then inputs its samples to be predicted into this model for computation, so as to obtain the prediction result of each data node.
In this embodiment, the model parameters are initialized and the initialized model parameters are passed to each data node; a random loss gradient value is obtained at the neutral coordinator, the random loss gradient value is divided into first loss gradient values whose number equals the total number of data nodes, and the first loss gradient values are distributed to the data nodes; the second loss gradient value of each data node is obtained based on the model parameters and the first loss gradient value; each second loss gradient value is transmitted to the neutral coordinator, and the true loss gradient value of the data nodes is determined according to each second loss gradient value and the random loss gradient value; the model parameters are updated based on the true loss gradient value to build the model, and it is judged whether the model converges; if the model converges, the model has been built. In this scheme the loss gradient value of each data node is obtained, which guarantees that every data node participates in the joint modeling; before the loss gradient values of the data nodes are collected, a first loss gradient value is transmitted to each data node, and only afterwards are all the loss gradient values of the data nodes gathered and computed at a third party, namely the neutral coordinator, to obtain the true loss gradient value of the data nodes, which guarantees the privacy of each data node's data; and because the model parameters are updated with the loss gradient values obtained from every data node, the model can also solve the different prediction problems of the individual data nodes. This solves the problem of enabling the companies corresponding to the data nodes to share information and predict answers for different types of data while guaranteeing that each company's private data is not leaked.
Further, on the basis of the first embodiment of the present invention, a second embodiment of the joint modeling method of the present invention is proposed. This embodiment is a refinement of step S30 of the first embodiment. Referring to Fig. 3, step S30 includes:
Step S31, calculating the iteration loss gradient value of the data node based on the model parameters;
After the model parameters sent over by the neutral coordinator have been obtained at the data node, the iteration loss gradient value on the data node also needs to be calculated according to these model parameters. It should be noted that every data node obtains its iteration loss gradient value in the same way, namely by computation with the model parameters. The iteration loss gradient value can be regarded as the true loss gradient value produced by the data node itself.
Step S32, obtaining the sum of the iteration loss gradient value and the first loss gradient value, and taking the sum as the second loss gradient value of the data node.
The second loss gradient value may include a second loss function value and a second gradient value. The second loss function value is obtained by computing, on the data samples of a certain data node, the iteration loss function value of the current iteration and adding this iteration loss function value to the first loss function value received by this data node. Similarly, the second gradient value is obtained by computing, on the data samples of that data node, the iteration gradient value of the current iteration and adding this iteration gradient value to the first gradient value received by this data node. It should be noted that the second loss function value and the second gradient value are obtained at the same time, and that they are obtained in the same way on all data nodes in the system, i.e. the way of obtaining the second loss gradient value corresponding to each data node is identical.
In this embodiment, the total loss gradient value is determined by obtaining the second loss gradient value of each data node, which guarantees the accuracy of the obtained total loss gradient value; and since only the second loss gradient value of each data node is obtained, the data privacy protection of each data node is ensured.
Further, on the basis of any one of the first and second embodiments of the present invention, a third embodiment of the joint modeling method of the present invention is proposed. This embodiment is a refinement of step S40 of the first embodiment, namely the step of determining the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value, including:
Step S41, obtaining the sum of the second loss gradient values at the neutral coordinator, and taking the sum as the total loss gradient value;
After the second loss gradient values passed over by the data nodes have been obtained at the neutral coordinator, all second loss gradient values also need to be added up at the neutral coordinator to obtain their sum, and this sum is taken as the total loss gradient value. It should be noted that when calculating the total loss gradient value, loss values and gradient values need to be calculated separately, so as to obtain the total loss value and the total gradient value.
To assist in understanding the working principle of obtaining the total loss gradient value, a specific example is given below:
For example, as shown in Fig. 5, when there is one neutral coordinator and four data nodes, and the four data nodes transmit their respective second loss gradient values to the neutral coordinator, the second loss gradient values of these four data nodes need to be added up at the neutral coordinator to obtain the total loss gradient value.
Step S42, determining the true loss gradient value of the data nodes based on the total loss gradient value and the random loss gradient value.
After the total loss gradient value has been obtained, the random loss gradient value also needs to be subtracted from the total loss gradient value at the neutral coordinator to obtain the true loss gradient value of all the data nodes. That is, at the neutral coordinator the random loss function value in the random loss gradient value is subtracted from the total loss function value in the total loss gradient value to obtain its difference, and at the same time the random gradient value in the random loss gradient value is subtracted from the total gradient value in the total loss gradient value; these differences are the true loss gradient value of the data nodes.
In this embodiment, the true loss gradient value of the data nodes is determined by obtaining the difference between the total loss gradient value and the random loss gradient value at the neutral coordinator, and the model parameters of the model are updated according to this true loss gradient value, which ensures the privacy protection of each data node's data and prevents the data from being leaked during transmission.
Further, on the basis of any one of the first to third embodiments of the present invention, a fourth embodiment of the joint modeling method of the present invention is proposed. This embodiment follows step S50 of the first embodiment, i.e. after the step of judging whether the model converges, and includes:
Step A10, if the model does not converge, continuing to obtain a new true loss gradient value of the data nodes, and updating the updated model parameters of the model until the model converges.
When it is found by judgment that the model does not converge after the model parameters are updated, a new round of iteration continues: the updated model parameters are transmitted to each data node to replace the original model parameters, and the neutral coordinator continues to transmit new random loss gradient shares to the data nodes; on each data node, the sum of the iteration loss gradient value of the data node and the new random loss gradient share distributed to it by the neutral coordinator is calculated again, and all of these are gathered at the neutral coordinator to obtain a new total loss gradient value; the true loss gradient value of each data node in this iteration is then obtained at the neutral coordinator in the same way, and the updated model parameters of the model are updated again, until the model is found to converge or the maximum number of iterations is reached, at which point the collection of the loss gradient values of the nodes is stopped, i.e. the updating of the model parameters is stopped.
In this embodiment, whether the built model converges is judged to determine whether to update the model parameters, which ensures that the model can be built quickly and accurately and improves the efficiency of model building.
Specifically, the step of, if the model does not converge, continuing to obtain a new true loss gradient value of the data nodes includes:
Step A11, if the model does not converge, obtaining the updated model parameters of the model, and transmitting the updated model parameters to each data node, so as to obtain a new true loss gradient value of the data nodes.
When it is found by judgment that the built model does not converge, the updated model parameters in the model need to be obtained again, and these updated model parameters are transmitted to each data node to replace the original model parameters; the loss gradient value on each data node is then calculated again according to the updated model parameters and transmitted to the neutral coordinator, and the new true loss gradient value of the data nodes is again determined at the neutral coordinator. In other words, each time the model parameters are updated, the steps for obtaining the true loss function value of each data node are the same. The iterative protocol of this embodiment is summarized in the sketch below.
In this embodiment, the updated model parameters are transmitted to each data node, which ensures that the true loss gradient values obtained each time are different and improves the efficiency of model building.
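Putting the steps of these embodiments together, a hedged end-to-end sketch of the iterative protocol; it reuses the helper functions sketched above, all names are illustrative, and the maximum iteration count, learning rate and tolerance are assumed defaults rather than values given by the patent:

```python
def joint_train(nodes, dim, max_iter=100, lr=0.1, tol=1e-6, seed=0):
    """Fresh random masks are drawn each round, nodes return masked losses and
    gradients, the coordinator unmasks, updates the parameters and rebroadcasts
    them, until convergence or the iteration cap is reached."""
    params = init_params(dim, seed)
    prev_loss = None
    for it in range(max_iter):
        for node in nodes:                       # broadcast current parameters
            node.receive_params(params)
        loss_init, grad_init, shares = make_masks(dim, len(nodes), seed=seed + it + 1)
        masked = [node_second_loss_grad(node, ls, gs)
                  for node, (ls, gs) in zip(nodes, shares)]
        true_loss, true_grad = coordinator_unmask(masked, loss_init, grad_init)
        params, converged = update_and_check(params, true_grad, true_loss,
                                             prev_loss, lr, tol)
        prev_loss = true_loss
        if converged:
            break
    return params
```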
Further, on the basis of any one of the first to fourth embodiments of the present invention, a fifth embodiment of the joint modeling method of the present invention is proposed. This embodiment follows step S60 of the first embodiment, i.e. after the step of determining, if the model converges, that the model has been built, and includes:
Step S80, obtaining the sample features to be predicted in each data node, and inputting the sample features to be predicted into the built model for online prediction, so as to obtain a prediction result.
When it is found by judgment that the established model has reached the convergence state, the trained parameters of the model need to be obtained and transmitted to each data node; then, on each data node, the sample features to be predicted of that data node are obtained, and these sample features to be predicted are input into the built model for online prediction.
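A node-side sketch of the online prediction in step S80, reusing the illustrative logistic-regression assumption above; the 0.5 decision threshold is likewise an assumption:

```python
def node_predict(node, final_params, threshold=0.5):
    """Data-node side after training: score the node's own samples to be
    predicted with the finished model and return class predictions."""
    scores = sigmoid(node.X @ final_params)
    return (scores >= threshold).astype(int)
```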
In this embodiment, the samples to be predicted in each data node are input into the model for online prediction, so that accurate prediction for multiple data parties is realized and the user experience is improved.
In addition, referring to Fig. 4, an embodiment of the present invention also proposes a joint modeling device, the joint modeling device including:
a transfer module, configured to initialize model parameters and pass the initialized model parameters to each data node;
a distribution module, configured to obtain a random loss gradient value at a neutral coordinator, divide the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distribute one first loss gradient value to each data node;
an obtaining module, configured to obtain the second loss gradient value of each data node based on the model parameters and the first loss gradient value;
a determining module, configured to transmit each second loss gradient value to the neutral coordinator, and determine the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value;
a judging module, configured to update the model parameters based on the true loss gradient value to build the model, and judge whether the model converges;
a convergence module, configured to determine that the model has been built if the model converges.
Optionally, the obtaining module is also configured to:
calculate the iteration loss gradient value of the data node based on the model parameters;
obtain the sum of the iteration loss gradient value and the first loss gradient value, and take the sum as the second loss gradient value of the data node.
Optionally, the determining module is also configured to:
obtain the sum of the second loss gradient values at the neutral coordinator, and take the sum as the total loss gradient value;
determine the true loss gradient value of the data nodes based on the total loss gradient value and the random loss gradient value.
Optionally, the determining module is also configured to:
obtain the difference between the total loss gradient value and the random loss gradient value, and take the difference as the true loss gradient value of the data nodes.
Optionally, the joint modeling device is further configured to:
if the model does not converge, continue to obtain a new true loss gradient value of the data nodes, and update the updated model parameters of the model until the model converges.
Optionally, the joint modeling device is further configured to:
if the model does not converge, obtain the updated model parameters of the model, and transmit the updated model parameters to each data node, so as to obtain a new true loss gradient value of the data nodes.
Optionally, the joint modeling device is further configured to:
obtain the sample features to be predicted in each data node, and input the sample features to be predicted into the built model for online prediction, so as to obtain a prediction result.
The steps implemented by each functional module of the joint modeling device can refer to the embodiments of the joint modeling method of the present invention, and will not be repeated here.
The present invention also provides a terminal. The terminal includes: a memory, a sense channel, a processor, a communication bus and a joint modeling program stored on the memory, wherein:
the communication bus is used to realize the connection and communication between the processor and the memory;
the processor is used to execute the joint modeling program, so as to realize the steps of each embodiment of the joint modeling method described above.
The present invention also provides a computer-readable storage medium. The computer-readable storage medium stores one or more programs, and the one or more programs can also be executed by one or more processors to realize the steps of each embodiment of the joint modeling method described above.
The specific implementation of the computer-readable storage medium of the present invention is basically the same as the embodiments of the joint modeling method described above, and will not be repeated here.
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article or system that includes a series of elements not only includes those elements, but also includes other elements not explicitly listed, or also includes elements inherent to such a process, method, article or system. In the absence of more restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the superiority or inferiority of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner or a network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent structural or equivalent process transformation made by using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included in the scope of patent protection of the present invention.

Claims (10)

1. A joint modeling method, characterized in that the joint modeling method comprises the following steps:
initializing model parameters, and passing the initialized model parameters to each data node;
obtaining a random loss gradient value at a neutral coordinator, dividing the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distributing one first loss gradient value to each data node;
obtaining the second loss gradient value of each data node based on the model parameters and the first loss gradient value;
transmitting each second loss gradient value to the neutral coordinator, and determining the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value;
updating the model parameters based on the true loss gradient value to build the model, and judging whether the model converges;
if the model converges, the model has been built.
2. The joint modeling method according to claim 1, characterized in that the step of obtaining the second loss gradient value of each data node based on the model parameters and the first loss gradient value comprises:
calculating the iteration loss gradient value of the data node based on the model parameters;
obtaining the sum of the iteration loss gradient value and the first loss gradient value, and taking the sum as the second loss gradient value of the data node.
3. The joint modeling method according to claim 1, characterized in that the step of determining the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value comprises:
obtaining the sum of the second loss gradient values at the neutral coordinator, and taking the sum as the total loss gradient value;
determining the true loss gradient value of the data nodes based on the total loss gradient value and the random loss gradient value.
4. The joint modeling method according to claim 3, characterized in that the step of determining the true loss gradient value of the data nodes based on the total loss gradient value and the random loss gradient value comprises:
obtaining the difference between the total loss gradient value and the random loss gradient value, and taking the difference as the true loss gradient value of the data nodes.
5. The joint modeling method according to claim 1, characterized in that, after the step of judging whether the model converges, the method comprises:
if the model does not converge, continuing to obtain a new true loss gradient value of the data nodes, and updating the updated model parameters of the model until the model converges.
6. The joint modeling method according to claim 5, characterized in that the step of, if the model does not converge, continuing to obtain a new true loss gradient value of the data nodes comprises:
if the model does not converge, obtaining the updated model parameters of the model, and transmitting the updated model parameters to each data node, so as to obtain a new true loss gradient value of the data nodes.
7. The joint modeling method according to claim 1, characterized in that, after the step of determining, if the model converges, that the model has been built, the method comprises:
obtaining the sample features to be predicted in each data node, and inputting the sample features to be predicted into the built model for online prediction, so as to obtain a prediction result.
8. A joint modeling device, characterized in that the joint modeling device comprises:
a transfer module, configured to initialize model parameters and pass the initialized model parameters to each data node;
a distribution module, configured to obtain a random loss gradient value at a neutral coordinator, divide the random loss gradient value into first loss gradient values whose number equals the total number of data nodes, and distribute one first loss gradient value to each data node;
an obtaining module, configured to obtain the second loss gradient value of each data node based on the model parameters and the first loss gradient value;
a determining module, configured to transmit each second loss gradient value to the neutral coordinator, and determine the true loss gradient value of the data nodes according to each second loss gradient value and the random loss gradient value;
a judging module, configured to update the model parameters based on the true loss gradient value to build the model, and judge whether the model converges;
a convergence module, configured to determine that the model has been built if the model converges.
9. A joint modeling equipment, characterized in that the joint modeling equipment comprises: a memory, a processor and a joint modeling program stored on the memory and executable on the processor, wherein the steps of the joint modeling method according to any one of claims 1 to 7 are realized when the joint modeling program is executed by the processor.
10. A computer-readable storage medium, characterized in that a joint modeling program is stored on the computer-readable storage medium, and the steps of the joint modeling method according to any one of claims 1 to 7 are realized when the joint modeling program is executed by a processor.
CN201811501956.4A 2018-12-07 2018-12-07 Joint modeling method, device, equipment and computer readable storage medium Active CN109635422B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811501956.4A CN109635422B (en) 2018-12-07 2018-12-07 Joint modeling method, device, equipment and computer readable storage medium
PCT/CN2019/116081 WO2020114184A1 (en) 2018-12-07 2019-11-06 Joint modeling method, apparatus and device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811501956.4A CN109635422B (en) 2018-12-07 2018-12-07 Joint modeling method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN109635422A true CN109635422A (en) 2019-04-16
CN109635422B CN109635422B (en) 2023-08-25

Family

ID=66072239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811501956.4A Active CN109635422B (en) 2018-12-07 2018-12-07 Joint modeling method, device, equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN109635422B (en)
WO (1) WO2020114184A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020114184A1 (en) * 2018-12-07 2020-06-11 深圳前海微众银行股份有限公司 Joint modeling method, apparatus and device, and computer-readable storage medium
CN112182636A (en) * 2019-07-03 2021-01-05 北京百度网讯科技有限公司 Method, device, equipment and medium for realizing joint modeling training
CN112435755A (en) * 2020-11-23 2021-03-02 平安科技(深圳)有限公司 Disease analysis method, disease analysis device, electronic device, and storage medium
WO2021092980A1 (en) * 2019-11-14 2021-05-20 深圳前海微众银行股份有限公司 Longitudinal federated learning optimization method, apparatus and device, and storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113178191B (en) * 2021-04-25 2024-07-12 平安科技(深圳)有限公司 Speech characterization model training method, device, equipment and medium based on federal learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110046924A1 (en) * 2009-08-24 2011-02-24 International Business Machines Corporation Method for joint modeling of mean and dispersion
CN108133294A (en) * 2018-01-10 2018-06-08 阳光财产保险股份有限公司 Forecasting Methodology and device based on information sharing
WO2018217635A1 (en) * 2017-05-20 2018-11-29 Google Llc Application development platform and software development kits that provide comprehensive machine learning services
US20180365089A1 (en) * 2015-12-01 2018-12-20 Preferred Networks, Inc. Abnormality detection system, abnormality detection method, abnormality detection program, and method for generating learned model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330516B (en) * 2016-04-29 2021-06-25 腾讯科技(深圳)有限公司 Model parameter training method, device and system
US20180225391A1 (en) * 2017-02-06 2018-08-09 Neural Algorithms Ltd. System and method for automatic data modelling
CN108491928B (en) * 2018-03-29 2019-10-25 腾讯科技(深圳)有限公司 Model parameter sending method, device, server and storage medium
CN108520220B (en) * 2018-03-30 2021-07-09 百度在线网络技术(北京)有限公司 Model generation method and device
CN109635422B (en) * 2018-12-07 2023-08-25 深圳前海微众银行股份有限公司 Joint modeling method, device, equipment and computer readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110046924A1 (en) * 2009-08-24 2011-02-24 International Business Machines Corporation Method for joint modeling of mean and dispersion
US20180365089A1 (en) * 2015-12-01 2018-12-20 Preferred Networks, Inc. Abnormality detection system, abnormality detection method, abnormality detection program, and method for generating learned model
WO2018217635A1 (en) * 2017-05-20 2018-11-29 Google Llc Application development platform and software development kits that provide comprehensive machine learning services
US20200125956A1 (en) * 2017-05-20 2020-04-23 Google Llc Application Development Platform and Software Development Kits that Provide Comprehensive Machine Learning Services
CN108133294A (en) * 2018-01-10 2018-06-08 阳光财产保险股份有限公司 Forecasting Methodology and device based on information sharing

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
STEPHEN HARDY et al.: "Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption", arXiv:1711.10766v1, pages 1 - 60 *
张玉清; 董颖; 柳彩云; 雷柯楠; 孙鸿宇: "Current status, trends and prospects of deep learning applied to cyberspace security" (深度学习应用于网络空间安全的现状、趋势与展望), Journal of Computer Research and Development (计算机研究与发展), no. 06, pages 1117 - 1142 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020114184A1 (en) * 2018-12-07 2020-06-11 深圳前海微众银行股份有限公司 Joint modeling method, apparatus and device, and computer-readable storage medium
CN112182636A (en) * 2019-07-03 2021-01-05 北京百度网讯科技有限公司 Method, device, equipment and medium for realizing joint modeling training
CN112182636B (en) * 2019-07-03 2023-08-15 北京百度网讯科技有限公司 Method, device, equipment and medium for realizing joint modeling training
WO2021092980A1 (en) * 2019-11-14 2021-05-20 深圳前海微众银行股份有限公司 Longitudinal federated learning optimization method, apparatus and device, and storage medium
CN112435755A (en) * 2020-11-23 2021-03-02 平安科技(深圳)有限公司 Disease analysis method, disease analysis device, electronic device, and storage medium

Also Published As

Publication number Publication date
WO2020114184A1 (en) 2020-06-11
CN109635422B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN109635422A (en) Joint modeling method, device, equipment and computer readable storage medium
CN102356375B (en) Smooth layout animation of continuous and non-continuous properties
CN113010896B (en) Method, apparatus, device, medium and program product for determining abnormal object
CN111582504A (en) Federal modeling method, device, equipment and computer readable storage medium
CN104899136A (en) Method and device used for generating test case
WO2022135138A1 (en) Robot task deployment method and system, device, and storage medium
CN105407323A (en) Screen splitting method and device of monitor video
TWI552067B (en) Techniques for multiple pass rendering
CN103218112A (en) Information processing method and information processing system
CN110211017B (en) Image processing method and device and electronic equipment
CN111617473A (en) Display method and device of virtual attack prop, storage medium and electronic equipment
CN113205601B (en) Roaming path generation method and device, storage medium and electronic equipment
CN109976744B (en) Visual programming method, system and terminal equipment
CN112000259A (en) Method and device for controlling camera based on touch event of mobile terminal
KR102445530B1 (en) Method and apparatus for visualization of public welfare activities
CN107729686B (en) Building model component display method and device, electronic equipment and storage medium
CN110648402A (en) Method, device and equipment for placing virtual object along curve
US9344733B2 (en) Feature-based cloud computing architecture for physics engine
CN105203092A (en) Information processing method and device and electronic equipment
US11281890B2 (en) Method, system, and computer-readable media for image correction via facial ratio
CN115951852A (en) Information display method and device, electronic equipment and storage medium
CN107038176B (en) Method, device and equipment for rendering web graph page
CN114139731A (en) Longitudinal federated learning modeling optimization method, apparatus, medium, and program product
CN113010939A (en) Processing method of visual BIM model and related product thereof
CN106304410A (en) A kind of data migration method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant