CN114679316A - Safety prediction method and system for personnel mobility, client device and server - Google Patents

Safety prediction method and system for personnel mobility, client device and server

Info

Publication number
CN114679316A
CN114679316A
Authority
CN
China
Prior art keywords
client
data
server
output
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210299560.6A
Other languages
Chinese (zh)
Inventor
柳林
付绍静
黄雪伦
罗玉川
王勇军
赵文涛
陈荣茂
施江勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN202210299560.6A
Publication of CN114679316A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/04: Network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0407: Confidential data exchange wherein the identity of one or more communicating identities is hidden
    • H04L 63/0428: Confidential data exchange wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08: Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0816: Key establishment, i.e. cryptographic processes or cryptographic protocols whereby a shared secret becomes available to two or more parties, for subsequent use
    • H04L 9/0838: Key agreement, i.e. key establishment technique in which a shared key is derived by parties as a function of information contributed by, or associated with, each of these
    • H04L 9/0861: Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L 9/0869: Generation of secret information involving random numbers or seeds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention provides a safety prediction method and system for personnel mobility, together with a client device and a server. The method comprises: obtaining trajectory data of the user to whom the client belongs, and randomly splitting the trajectory data into first secret-shared data; applying a secure interaction protocol to interact with the server and perform inference on a pre-trained personnel mobility prediction model, according to the trajectory data, second secret-shared data obtained by the server randomly splitting the model parameters of the personnel mobility prediction model, and a first set of random numbers generated by a trusted third party, to obtain a first inference result; and receiving a second inference result obtained by the server through interactive inference on the personnel mobility prediction model according to the first secret-shared data, the model parameters, and a second set of random numbers generated by the trusted third party, and combining the first and second inference results to obtain the prediction result. The invention realizes safe and effective prediction, and reduces communication overhead while ensuring accuracy.

Description

Safety prediction method and system for personnel mobility, client device and server
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a method and a system for safely predicting personnel mobility, client equipment and a server.
Background
With the widespread use of smart mobile devices (e.g., smartphones, smart watches) and their growing data collection capabilities, people's movement data is being collected in large quantities. Analyzing and using the collected data, for example to predict personnel mobility, can improve quality of life and has attracted considerable attention in both academia and industry. Personnel mobility prediction refers to predicting the next location of people who are continually moving within an area. The prediction results are significant for intelligent transportation, socio-economic models in city planning, resource management in mobile communication, personalized recommendation systems, mobile medical services, and so on. For example, by predicting the next place people tend to go, governments can design better traffic planning and scheduling strategies to alleviate congestion; a taxi platform can better anticipate a user's trip and provide better service; it is even possible to predict the next place a criminal may go, so that public safety measures can be prepared in advance.
With the rapid development of computer technology and the deepening of neural network research, many enterprises provide widely used neural network prediction services. However, current neural network prediction services carry a risk of privacy disclosure: existing services require either that the user provide sensitive information to the service provider, or that the service provider store its neural network model on the user's device. Once information is disclosed, it can cause immeasurable loss of property and privacy for the user or the service provider. To address this privacy protection problem, homomorphic encryption, secure multiparty computation, and differential privacy are common techniques. Homomorphic encryption enables the linear operations in a neural network to be executed directly on encrypted data, but it incurs high computation and storage overhead and is inefficient. Differential privacy hides the characteristics of samples by adding perturbations to the data, but as a consequence the accuracy of the neural network is affected. Secure multiparty computation (SMC) allows multiple participants to operate on data without actually revealing their inputs; its basic cryptographic protocols include oblivious transfer (OT), garbled circuits (GC), secret sharing (SS), and the like. The operations in a neural network mainly comprise linear operations and nonlinear activation function computations. Nonlinear operations are often implemented by approximate conversion into linear operations, or by GC, so many privacy-preserving machine learning schemes are built by mixing multiple basic SMC protocols. In a secret-sharing-based secure multiparty computation scheme, each additional party increases both communication overhead and the risk of attack.
To date, no work has combined personnel mobility prediction with privacy protection, and existing schemes compute activation functions either with garbled circuits (GC) or by approximately converting nonlinear functions into linear ones; the former is inefficient and poorly reusable, while the latter degrades the accuracy of the neural network to some extent.
Disclosure of Invention
The invention provides a method and a system for safely predicting personnel mobility, a client device, and a server, aiming to solve the problems that existing personnel mobility prediction is not sufficiently safe and effective and that its communication cost is high.
Based on the above purpose, an embodiment of the present invention provides a method for safely predicting personnel mobility, including: acquiring trajectory data of the user to whom the client belongs, and randomly splitting the trajectory data into first secret-shared data; applying a secure interaction protocol to interact with the server and perform inference on the pre-trained personnel mobility prediction model, according to the trajectory data, second secret-shared data transmitted by the server, and a first set of random numbers generated by a trusted third party, to obtain a first inference result, wherein the second secret-shared data is obtained by the server randomly splitting the model parameters of the personnel mobility prediction model; and receiving a second inference result obtained by the server through interactive inference on the personnel mobility prediction model according to the first secret-shared data, the model parameters, and a second set of random numbers generated by the trusted third party, and obtaining the prediction result by combining the first inference result and the second inference result.
Optionally, the secure interaction protocol includes: at least one of a multiply-to-add conversion protocol, a secure sigmoid protocol, a secure tanh protocol, a secure softmax protocol, and a secure re-add protocol.
Optionally, obtaining the first inference result includes: applying the secure interaction protocol to interact with the server, according to the current trajectory data, the second secret-shared data, and the first set of random numbers, to perform gated recurrent unit inference, obtaining a first gating output at the client while a second gating output is obtained at the server; applying the secure interaction protocol to interact with the server, according to the historical trajectory data, the first gating output, the second gating output secret-shared by the server, the second secret-shared data, and the first set of random numbers, to perform inference of the historical attention module, obtaining a first normalized output at the client while a second normalized output is obtained at the server; and applying the secure interaction protocol to interact with the server, according to the first gating output, the first normalized output, the second gating output and second normalized output secret-shared by the server, the second secret-shared data, and the first set of random numbers, to perform fully connected layer processing, obtaining the first inference result.
Optionally, the gated recurrent unit comprises a reset gate, an update gate, a candidate state, and an update rule, which respectively satisfy the following relations:

r_t = σ(W_ir * x_t + b_ir + W_hr * h_{t-1} + b_hr),
z_t = σ(W_iz * x_t + b_iz + W_hz * h_{t-1} + b_hz),
n_t = tanh(W_in * x_t + b_in + r_t ⊙ (W_hn * h_{t-1} + b_hn)),
h_t = (1 - z_t) ⊙ n_t + z_t ⊙ h_{t-1},

where W_ir, b_ir, W_hr, b_hr, W_iz, b_iz, W_hz, b_hz, W_in, b_in, W_hn, b_hn are model parameters, x_t is the input vector at the t-th time step, h_{t-1} is the information stored at the previous time step t-1, r_t is the output of the reset gate at the t-th time step, z_t is the output of the update gate at the t-th time step, n_t is the new memory content of the t-th time step, h_t is the information saved at the t-th time step, and ⊙ denotes element-wise multiplication.
Based on the same inventive concept, the embodiment of the invention also provides a safety prediction method for personnel mobility, comprising: obtaining model parameters of a pre-trained personnel mobility prediction model, and randomly splitting the model parameters into second secret-shared data; applying a secure interaction protocol to interact with the client and perform inference on the pre-trained personnel mobility prediction model, according to the model parameters, first secret-shared data transmitted by the client, and a second set of random numbers generated by a trusted third party, to obtain a second inference result, wherein the first secret-shared data is obtained by the client randomly splitting the trajectory data of the user to whom the client belongs; and transmitting the second inference result to the client, so that the client obtains the prediction result by combining the first inference result obtained during the interactive inference with the second inference result.
Optionally, obtaining the second inference result includes: applying the secure interaction protocol to interact with the client, according to the current-trajectory shared data, the model parameters, and the second set of random numbers, to perform gated recurrent unit inference, obtaining a second gating output at the server while a first gating output is obtained at the client; applying the secure interaction protocol to interact with the client, according to the historical-trajectory shared data, the second gating output, the first gating output secret-shared by the client, the model parameters, and the second set of random numbers, to perform inference of the historical attention module, obtaining a second normalized output at the server while a first normalized output is obtained at the client; and applying the secure interaction protocol to interact with the client, according to the second gating output, the second normalized output, the first gating output and first normalized output secret-shared by the client, the model parameters, and the second set of random numbers, to perform fully connected layer processing, obtaining the second inference result.
Based on the same inventive concept, an embodiment of the present invention further provides a client device, including: a first data acquisition unit, configured to acquire trajectory data of the user to whom the client belongs and randomly split the trajectory data into first secret-shared data; a first prediction unit, configured to apply a secure interaction protocol to interact with the server and perform inference on the pre-trained personnel mobility prediction model, according to the trajectory data, second secret-shared data transmitted by the server, and a first set of random numbers generated by a trusted third party, to obtain a first inference result, wherein the second secret-shared data is obtained by the server randomly splitting the model parameters of the personnel mobility prediction model; and a result obtaining unit, configured to receive a second inference result obtained by the server through interactive inference on the personnel mobility prediction model according to the first secret-shared data, the model parameters, and a second set of random numbers generated by the trusted third party, and to obtain the prediction result by combining the first inference result and the second inference result.
Based on the same inventive concept, an embodiment of the present invention further provides a server, including: a second data acquisition unit, configured to acquire model parameters of a pre-trained personnel mobility prediction model and randomly split the model parameters into second secret-shared data; a second prediction unit, configured to apply a secure interaction protocol to interact with the client and perform inference on the pre-trained personnel mobility prediction model, according to the model parameters, first secret-shared data transmitted by the client, and a second set of random numbers generated by a trusted third party, to obtain a second inference result, wherein the first secret-shared data is obtained by the client randomly splitting the trajectory data of the user to whom the client belongs; and a data sending unit, configured to transmit the second inference result to the client, so that the client obtains the prediction result by combining the first inference result obtained during the interactive inference with the second inference result.
Based on the same inventive concept, the embodiment of the invention also provides a safety prediction system for personnel mobility, which comprises: a trusted third party, the aforementioned client device and the aforementioned server.
Based on the same inventive concept, an embodiment of the present invention further provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the method described in any one of the above items.
The technical effects of the invention are as follows. As can be seen from the above, in the method, system, client device, and server for safely predicting personnel mobility provided by the embodiments of the invention, the client acquires the trajectory data of the user to whom it belongs and randomly splits it into first secret-shared data; it obtains a first inference result by applying a secure interaction protocol to interact with the server and perform inference on the pre-trained personnel mobility prediction model, according to the trajectory data, second secret-shared data transmitted by the server (obtained by the server randomly splitting the model parameters), and a first set of random numbers generated by a trusted third party; and it receives a second inference result obtained by the server through interactive inference according to the first secret-shared data, the model parameters, and a second set of random numbers generated by the trusted third party, combining the two inference results to obtain the prediction result. In this way, safe and effective prediction can be realized, and communication overhead is reduced while accuracy is ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flow chart of a safety prediction method for personnel mobility in an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of the personnel mobility prediction model in an embodiment of the present invention;
FIG. 3 is a schematic flow chart of step S12 in FIG. 1;
FIG. 4 is a graph of the top-1 and top-5 accuracy of multiple different schemes in an embodiment of the present invention;
FIG. 5 is a diagram of the total runtime of the client and the server under different schemes in an embodiment of the present invention;
FIG. 6 is a diagram of the time consumption of the sub-processes of the client in an embodiment of the present invention;
FIG. 7 is a diagram of the time consumption of the sub-processes of the server in an embodiment of the present invention;
FIG. 8 is a schematic diagram of the overall runtime of the various schemes in an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a client device in an embodiment of the present invention;
FIG. 10 is a schematic flow chart of another safety prediction method for personnel mobility in an embodiment of the present invention;
FIG. 11 is a block diagram of a server in an embodiment of the present invention;
FIG. 12 is a diagram of an electronic device in an embodiment of the present invention.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It is to be noted that technical terms or scientific terms used in the embodiments of the present invention should have the ordinary meanings as understood by those having ordinary skill in the art to which the present disclosure belongs, unless otherwise defined. The use of "first," "second," and similar language in the embodiments of the present invention does not denote any order, quantity, or importance, but rather the terms "first," "second," and similar language are used to distinguish one element from another. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items. The terms "connected" or "coupled" and the like are not restricted to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "upper", "lower", "left", "right", and the like are used merely to indicate relative positional relationships, and when the absolute position of the object being described is changed, the relative positional relationships may also be changed accordingly.
The embodiment of the invention also provides a safety prediction method for personnel mobility, applied to a client device. The client device may be a smart mobile device such as a smartphone, a smart watch, or a mobile terminal. As shown in Fig. 1, the safety prediction method for personnel mobility includes:
step S11: the method comprises the steps of obtaining track data of a user to which a client belongs, and randomly dividing the track data into first secret shared data according to the track data.
In the embodiment of the invention, the trajectory data of the user to whom the client belongs comprises current trajectory data and historical trajectory data; each trajectory record includes a location, a time, and a user ID. The trajectory data that needs to be shared with the server is randomly split at the client into first secret-shared data, which therefore includes current-trajectory shared data and historical-trajectory shared data. In step S11, the trajectory data is first preprocessed by normalization; the specific preprocessing is the same as the data preprocessing of existing neural network models and is not repeated here.
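As a hedged illustration (this is not the patent's code), the random splitting of a value into two additive secret shares, and its later reconstruction, can be sketched as follows; the field modulus and the encoding of a trajectory record as integers are assumptions for the example:

```python
import secrets

PRIME = 2**61 - 1  # assumed modulus; the patent text does not specify the algebraic structure

def share(value: int) -> tuple:
    """Randomly split a value into two additive secret shares mod PRIME."""
    share1 = secrets.randbelow(PRIME)          # uniformly random first share
    share2 = (value - share1) % PRIME          # second share completes the sum
    return share1, share2

def reconstruct(share1: int, share2: int) -> int:
    """Recombine two shares to recover the original value."""
    return (share1 + share2) % PRIME

# Example: a trajectory record encoded as (location id, timestamp, user id) -- hypothetical values
trajectory_point = (1042, 1647993600, 7)
shares = [share(v) for v in trajectory_point]  # client keeps shares[i][0], sends shares[i][1]
recovered = tuple(reconstruct(s1, s2) for s1, s2 in shares)
assert recovered == trajectory_point
```

Each individual share is uniformly random, so neither party alone learns anything about the trajectory record.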
Step S12: obtaining a first inference result by applying a secure interaction protocol to interact with the server and perform inference on the pre-trained personnel mobility prediction model, according to the trajectory data, second secret-shared data transmitted by the server, and a first set of random numbers generated by a trusted third party, wherein the second secret-shared data is obtained by the server randomly splitting the model parameters of the personnel mobility prediction model.
In the embodiment of the present invention, the personnel mobility prediction model has already been pre-trained on the server side before step S12. The first set of random numbers is randomly generated by the trusted third party and transmitted to the client; the trusted third party may generate the random numbers from a standard normal distribution or a uniform distribution.
The personnel mobility prediction model of the embodiment of the invention applies the attentional recurrent model DeepMove, whose framework is shown in Fig. 2. It is divided into three stages: feature extraction and embedding; recurrence and historical attention; and prediction. By default, the first stage is completed by the client itself. All trajectory data is first embedded by a multi-modal embedding layer, which jointly embeds spatio-temporal features and personal features into a dense representation to help model complex transitions. The dense representation better captures exact semantic spatio-temporal relationships; another advantage is that such dense representations are always low-dimensional, facilitating subsequent computation.
The recurrence and historical attention stage comprises a recurrent module and a historical attention module. The current trajectory data is processed by the recurrent module, which models complex sequence information; the historical attention module processes the historical trajectory and extracts the regularity of movement. The recurrent module uses a Gated Recurrent Unit (GRU) as its basic recurrent unit and takes the current trajectory data containing spatio-temporal information as input, to capture the complex sequence information or long-term dependencies contained in the current trajectory. The historical attention module runs in parallel with the recurrent module to capture the multi-level periodicity of human movement. It first generates the regularity of the user's mobility with an attention candidate generator from the historical trajectory data containing spatio-temporal information; then the hidden result of the recurrent layer is matched against the user's movement regularity by the attention selector to obtain the mobility state. The outputs of the recurrent module and the historical attention module are then concatenated as the input of the prediction stage. The prediction stage consists of several fully connected layers and an output layer, producing the final prediction result. In the embodiment of the invention, the candidate generator takes a GRU as its basic unit, and the attention selector uses the dot-product model of the attention mechanism.
In the embodiment of the invention, the gated recurrent unit comprises a reset gate, an update gate, a candidate state, and an update rule, which respectively satisfy the following relations:

reset gate: r_t = σ(W_ir * x_t + b_ir + W_hr * h_{t-1} + b_hr),
update gate: z_t = σ(W_iz * x_t + b_iz + W_hz * h_{t-1} + b_hz),
candidate state: n_t = tanh(W_in * x_t + b_in + r_t ⊙ (W_hn * h_{t-1} + b_hn)),
update rule: h_t = (1 - z_t) ⊙ n_t + z_t ⊙ h_{t-1},

where W_ir, b_ir, W_hr, b_hr, W_iz, b_iz, W_hz, b_hz, W_in, b_in, W_hn, b_hn are model parameters, x_t is the input vector at the t-th time step, h_{t-1} is the information stored at the previous time step t-1, r_t is the output of the reset gate at the t-th time step, z_t is the output of the update gate at the t-th time step, n_t is the new memory content of the t-th time step, h_t is the information saved at the t-th time step, and ⊙ denotes element-wise (Hadamard) multiplication.
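The GRU equations can be exercised with a small self-contained sketch (illustrative only; the toy dimensions and random initialization are assumptions, not the patent's trained parameters):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU time step following the reset / update / candidate / update-rule equations."""
    r_t = sigmoid(p["W_ir"] @ x_t + p["b_ir"] + p["W_hr"] @ h_prev + p["b_hr"])          # reset gate
    z_t = sigmoid(p["W_iz"] @ x_t + p["b_iz"] + p["W_hz"] @ h_prev + p["b_hz"])          # update gate
    n_t = np.tanh(p["W_in"] @ x_t + p["b_in"] + r_t * (p["W_hn"] @ h_prev + p["b_hn"]))  # candidate state
    return (1.0 - z_t) * n_t + z_t * h_prev                                              # h_t

rng = np.random.default_rng(0)
d_in, d_h = 4, 3  # assumed toy dimensions
p = {}
for g in ("r", "z", "n"):
    p[f"W_i{g}"] = rng.standard_normal((d_h, d_in))
    p[f"W_h{g}"] = rng.standard_normal((d_h, d_h))
    p[f"b_i{g}"] = rng.standard_normal(d_h)
    p[f"b_h{g}"] = rng.standard_normal(d_h)
h_t = gru_step(rng.standard_normal(d_in), np.zeros(d_h), p)
```

Since h_t interpolates between the tanh-bounded candidate state and the previous hidden state, its entries stay in [-1, 1] when starting from a zero state.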
The computation of the attention mechanism is divided into two main steps: first, the attention distribution over all input information is computed; second, the weighted average of the input information is computed according to that distribution. They satisfy the following relations:

α_n = softmax(s(x_n, q)) = exp(s(x_n, q)) / Σ_m exp(s(x_m, q)),
att(X, q) = Σ_n α_n · x_n,

where att(X, q) is the computed weighted-average result, α_n is the attention distribution, and s(x_n, q) is the attention scoring function, which generally comprises the additive model, the dot-product model, the scaled dot-product model, and the bilinear model. The embodiment of the invention computes with the dot-product model, i.e., s(x_n, q) = x_n^T q.
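The two-step computation (dot-product scoring, softmax distribution, weighted average) can be sketched as follows (an illustrative example with made-up vectors, not the patent's code):

```python
import numpy as np

def dot_product_attention(X, q):
    """X: (N, d) input vectors; q: (d,) query. Returns the attention distribution and weighted average."""
    scores = X @ q                                 # s(x_n, q) = x_n^T q
    scores = scores - scores.max()                 # shift for numerical stability (softmax is shift-invariant)
    alpha = np.exp(scores) / np.exp(scores).sum()  # attention distribution alpha_n
    att = alpha @ X                                # att(X, q) = sum_n alpha_n * x_n
    return alpha, att

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # hypothetical hidden states
q = np.array([2.0, 0.0])                            # hypothetical query
alpha, att = dot_product_attention(X, q)
```

The weights sum to one, and inputs better aligned with the query receive larger weights.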
The secure interaction protocols of the embodiment of the invention include: at least one of a multiply-to-add conversion protocol, a secure sigmoid protocol, a secure tanh protocol, a secure softmax protocol, and a secure re-add protocol. Each secure interaction protocol is executed between the two parties; the input of each party is its own private data or a secret-shared value produced by earlier computation, and running the protocol produces a new secret-shared value. The random values used in the protocols are all generated in advance by the trusted third party, i.e., offline during the initialization phase. In the following, ⟨·⟩ denotes additive secret sharing, [·] denotes multiplicative secret sharing, subscript 1 denotes the client's share, and subscript 2 denotes the server's share.
The main idea of the multiply-to-add conversion protocol (mul _ to _ add) is that two parties calculate the product of two parties' data without knowing that data. Specifically, participant A has x, participant B has y, and after agreement is carried out, the two parties respectively obtain < xy > 1 and < xy > 2. Before running the protocol, the participant A has random numbers a and c1, and the participant B has random numbers B and c 2; the participant A locally calculates p1 as x-a, the participant B locally calculates p2 as y-B, and then the two parties exchange data p1 and p 2; participant a calculated locally < xy > 1 ═ c1+ a × p2, and participant B calculated locally < xy > 2 ═ c2+ B × p1+ p1 × p 2. Specifically, the following are shown:
mul_to_add
A: x1, a, c1;    B: x2, b, c2
p1 = x1 - a,    p2 = x2 - b
The two parties exchange p1 and p2, and then compute:
ans1 = c1 + a*p2,    ans2 = c2 + b*p1 + p1*p2
so that ans1 + ans2 = x1 * x2.
Correctness verification:
ans1 + ans2 = c1 + a*p2 + c2 + b*p1 + p1*p2
            = c1 + c2 + a*(x2 - b) + b*(x1 - a) + (x2 - b)*(x1 - a)
            = c1 + c2 + a*x2 - a*b + b*x1 - a*b + x1*x2 + a*b - a*x2 - b*x1
            = c1 + c2 - a*b + x1*x2
            = x1*x2,    since c1 + c2 = a*b.
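A minimal Python sketch of mul_to_add (the third-party randoms are generated inline here purely for illustration; in the protocol they are distributed offline during initialization):

```python
import random

def third_party_randoms():
    # Offline phase: sample a, b and additive shares c1 + c2 = a * b;
    # (a, c1) goes to participant A, (b, c2) to participant B.
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    c1 = random.uniform(0, 1)
    return (a, c1), (b, a * b - c1)

def mul_to_add(x1, x2):
    """A holds x1, B holds x2; they obtain additive shares of x1 * x2.
    Only the masked values p1, p2 would cross the wire."""
    (a, c1), (b, c2) = third_party_randoms()
    p1 = x1 - a                      # A -> B
    p2 = x2 - b                      # B -> A
    ans1 = c1 + a * p2               # computed locally by A
    ans2 = c2 + b * p1 + p1 * p2     # computed locally by B
    return ans1, ans2
```

Neither party's input appears in the exchanged values p1, p2 except masked by a fresh random number.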
The sigmoid calculation formula of the embodiment of the invention is σ(x) = 1 / (1 + e^{-x}). The value of x is additively secret-shared between the client and the server, i.e. x = <x>_1 + <x>_2, so the formula can be rewritten as σ(x) = 1 / (1 + e^{-<x>_1} · e^{-<x>_2}).
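This split relies only on the factorization of the exponential; a quick numeric check (share values chosen arbitrarily):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

# x is additively shared as x = x1 + x2, so e^{-x} = e^{-x1} * e^{-x2}
# and each party can contribute its exponential factor locally.
x1, x2 = 0.7, -0.2
lhs = sigmoid(x1 + x2)
rhs = 1.0 / (1.0 + math.exp(-x1) * math.exp(-x2))
```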
Thus, for the secure sigmoid protocol of the embodiment of the present invention, the input is: the client holds <x>_1, a, c1 and the server holds <x>_2, b, c2. The output is: the client and the server obtain [σ(x)]_1 and [σ(x)]_2 respectively, where σ(x) = [σ(x)]_1 · [σ(x)]_2. The secure sigmoid protocol proceeds as follows:
First, the server computes λ_2 and sends it to the client [formula image not reproduced]; the client then computes λ_1 and sends it to the server [formula image not reproduced]; the server then computes its output share [σ(x)]_2 and the client computes its output share [σ(x)]_1 [formula images not reproduced]. Correctness verification: [formula image not reproduced].
Here a, b, c1 and c2 are generated offline by the trusted third party and sent to the client and the server respectively, satisfying a*b = c1 + c2. Both parties hold additive secret shares of x.
The implementation of the secure tanh protocol of the embodiment of the invention is mainly based on the secure sigmoid protocol. The tanh calculation formula of the embodiment of the invention is tanh(x) = 2·σ(2x) - 1.
Thus, for the secure tanh protocol of the embodiment of the present invention, the input is: the client holds <x>_1, a, c1 and the server holds <x>_2, b, c2. The output is: the client and the server obtain <tanh(x)>_1 and <tanh(x)>_2 respectively, where tanh(x) = <tanh(x)>_1 + <tanh(x)>_2. The secure tanh protocol proceeds as follows:
First, both parties call the secure sigmoid protocol with inputs 2·<x>_1 and 2·<x>_2 respectively, obtaining outputs denoted [y]_1 and [y]_2; then both parties call the mul_to_add function with inputs [y]_1 and [y]_2, obtaining outputs <z>_1 and <z>_2; finally, the client computes <tanh(x)>_1 = 2·<z>_1 and the server computes <tanh(x)>_2 = 2·<z>_2 - 1.
Correctness verification:
<tanh(x)>_1 + <tanh(x)>_2 = 2·<z>_1 + 2·<z>_2 - 1
= 2·(<z>_1 + <z>_2) - 1 = 2·[y]_1·[y]_2 - 1
= 2·σ(2·<x>_1 + 2·<x>_2) - 1 = 2·σ(2x) - 1 = tanh(x).
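The composition above can be simulated end to end. In this sketch the secure-sigmoid subprotocol is stubbed out (we rely only on it returning multiplicative shares whose product is σ(2x)), and the third-party randoms for mul_to_add are generated inline; both are assumptions made for illustration:

```python
import math
import random

def mul_to_add(x1, x2):
    # Beaver-style multiply-to-add conversion, randoms inlined.
    a, b = random.random(), random.random()
    c1 = random.random()
    c2 = a * b - c1
    p1, p2 = x1 - a, x2 - b
    return c1 + a * p2, c2 + b * p1 + p1 * p2

def secure_tanh(x1, x2):
    """tanh over additive shares x = x1 + x2, via tanh(x) = 2*sigma(2x) - 1."""
    # Stub for secure sigmoid: multiplicative shares [y]_1 * [y]_2 = sigma(2x).
    y = 1.0 / (1.0 + math.exp(-2.0 * (x1 + x2)))
    y1 = random.uniform(0.5, 1.5)
    y2 = y / y1
    z1, z2 = mul_to_add(y1, y2)       # additive shares of sigma(2x)
    return 2.0 * z1, 2.0 * z2 - 1.0   # <tanh(x)>_1, <tanh(x)>_2
```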
The softmax calculation formula of the embodiment of the invention is softmax(x_n) = exp(s(x_n, q)) / Σ_{j=1}^{N} exp(s(x_j, q)), wherein s(x_n, q) is the attention scoring function, s(x_n, q) = x_n^T q.
Thus, for the secure softmax protocol of the embodiment of the present invention, the input is: the client holds [x]_1 = {[x_1]_1, ..., [x_N]_1} and the server holds [x]_2 = {[x_1]_2, ..., [x_N]_2}. The output is: the client and the server obtain <softmax(x)>_1 and <softmax(x)>_2 respectively. The secure softmax protocol proceeds as follows:
First, the client selects a random number r and, for each i ∈ {1, ..., N}, computes [x_i']_1 = r · [x_i]_1. Then, for each i ∈ {1, ..., N}, the two parties input [x_i']_1 and [x_i]_2 respectively and call the mul_to_add function, obtaining outputs <y_i>_1 and <y_i>_2. The client then computes its share <D>_1 of the denominator and the server computes its share <D>_2 [formula images not reproduced], after which both parties recover the denominator D = REC(<D>_1, <D>_2). Finally, for each i ∈ {1, ..., N}, the client computes <softmax(x_i)>_1 and the server computes <softmax(x_i)>_2 [formula images not reproduced]. Correctness verification: [formula image not reproduced].
Experiments show that if computation always proceeds strictly according to the above secure interaction protocols, some intermediate values exceed the limits representable by the computer, i.e. overflow to the maximum or minimum value, so that computation cannot continue and accuracy drops sharply; the secure re-add protocol (re_add) is therefore designed. For the secure re-add protocol of the embodiment of the invention, the input is: the client holds <x>_1 and the server holds <x>_2. The output is: the client and the server obtain <x'>_1 and <x'>_2 respectively. The secure re-add protocol proceeds as follows:
First, the client selects a random number r, computes mid = <x>_1 - r and sends mid to the server, setting <x'>_1 = r; the server then computes <x'>_2 = mid + <x>_2.
Correctness verification: <x'>_1 + <x'>_2 = mid + <x>_2 + r = <x>_1 - r + <x>_2 + r = <x>_1 + <x>_2.
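The protocol is a single-message re-sharing; a sketch (shares modelled as Python floats):

```python
import random

def re_add(x1, x2):
    """Re-share x = x1 + x2 so the client's new share is a fresh r in (0, 1)
    and the server's new share absorbs the remainder; the sum is unchanged."""
    r = random.uniform(0, 1)
    mid = x1 - r             # sent from the client to the server
    return r, mid + x2       # <x'>_1 = r, <x'>_2 = mid + <x>_2
```

Even if the old shares have drifted to large magnitudes of opposite sign, the new client share lies in (0, 1) and the new server share stays close to x itself.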
The random number r selected by the client is restricted to the range 0 to 1 because the data is preprocessed before the neural network is trained. Data preprocessing can remove correlations among different features, reduce the difficulty of training the neural network and improve training efficiency. A common method is normalization, i.e. mapping the data into [0, 1] or [-1, 1], or to a standard normal distribution with mean 0 and variance 1. Because neural-network data often exhibits scale invariance, preprocessing the data yields better results. Although the calculation process of the embodiment of the invention is neural-network inference and does not involve training gradients, the network parameters were obtained by training on normalized data, so the data appearing in the process of the embodiment of the invention should also satisfy the normalization requirement as far as possible; hence the restricted range of the random number.
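As a sketch of the normalization mentioned above (min-max scaling to [0, 1]; the function name is illustrative):

```python
def min_max_normalize(column):
    """Map one feature column into [0, 1]; a constant column maps to zeros."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0 for _ in column]
    return [(v - lo) / (hi - lo) for v in column]
```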
The above secure interaction protocols run throughout the inference computation of the personnel mobility prediction model. Initially, the client inputs its private data, namely trajectory data (including time and place), and the server inputs the model parameters of the neural network (the personnel mobility prediction model). It should be noted that both parties' inputs are private, and neither party obtains the other party's input data at any point during the computation.
In step S12, optionally, as shown in fig. 3, the method includes:
step S121: and applying a secure interaction protocol to interact with the server side according to the current trajectory data, the second secret shared data and the first random number set to perform gate control cycle unit reasoning to obtain a first gate control output, and simultaneously obtaining a second gate control output at the server side.
The gated recurrent unit comprises four parts: a reset gate, an update gate, a candidate state and a state update. In step S121, the client sequentially performs the computational inference of these four parts with the server according to the current trajectory data, the second secret shared data and the first random number set, applying the secure interaction protocols. The inference computation of the gated recurrent unit is as follows. Input: the client holds X = {X_1, ..., X_N}; the server holds W_ir, b_ir, W_hr, b_hr, W_iz, b_iz, W_hz, b_hz, W_in, b_in, W_hn, b_hn. Output: the two parties obtain the output shares <H>_1 = {<h_1>_1, ..., <h_N>_1} and <H>_2 = {<h_1>_2, ..., <h_N>_2} respectively.
For each cycle t ∈ {1, ..., N}:
Reset gate:
First, both parties call mul_to_add(X_t, W_ir), obtaining <X_t W_ir>_1 and <X_t W_ir>_2, wherein <X_t W_ir>_1 + <X_t W_ir>_2 = X_t · W_ir; both parties then call mul_to_add(h_{t-1}, W_hr), obtaining <h_{t-1} W_hr>_1 and <h_{t-1} W_hr>_2, wherein <h_{t-1} W_hr>_1 + <h_{t-1} W_hr>_2 = h_{t-1} · W_hr. The client then locally adds its two shares, and the server locally adds its two shares together with the biases b_ir and b_hr, so that the two sums form additive shares of X_t·W_ir + b_ir + h_{t-1}·W_hr + b_hr. Finally, both parties call the secure sigmoid protocol on these shares and then use the resulting outputs as inputs to the mul_to_add function, obtaining the additive shares <r_t>_1 and <r_t>_2 of the reset gate r_t.
Update gate:
First, both parties call mul_to_add(X_t, W_iz), obtaining <X_t W_iz>_1 and <X_t W_iz>_2, wherein <X_t W_iz>_1 + <X_t W_iz>_2 = X_t · W_iz; both parties then call mul_to_add(h_{t-1}, W_hz), obtaining <h_{t-1} W_hz>_1 and <h_{t-1} W_hz>_2, wherein <h_{t-1} W_hz>_1 + <h_{t-1} W_hz>_2 = h_{t-1} · W_hz. The client then locally adds its two shares, and the server locally adds its two shares together with the biases b_iz and b_hz, so that the two sums form additive shares of X_t·W_iz + b_iz + h_{t-1}·W_hz + b_hz. Finally, both parties call the secure sigmoid protocol on these shares and then use the resulting outputs as inputs to the mul_to_add function, obtaining the additive shares <z_t>_1 and <z_t>_2 of the update gate z_t.
Candidate state:
First, both parties call mul_to_add(X_t, W_in), obtaining <X_t W_in>_1 and <X_t W_in>_2, wherein <X_t W_in>_1 + <X_t W_in>_2 = X_t · W_in; both parties then call mul_to_add(h_{t-1}, W_hn), obtaining <h_{t-1} W_hn>_1 and <h_{t-1} W_hn>_2, wherein <h_{t-1} W_hn>_1 + <h_{t-1} W_hn>_2 = h_{t-1} · W_hn. The server then performs its local computation, folding the biases b_in and b_hn into its shares [formula image not reproduced]. Both parties then call the Had_to_add function (the Hadamard-product analogue of mul_to_add) on their cross shares of r_t and h_{t-1}·W_hn + b_hn [formula images not reproduced], and call Had_to_add again on the remaining cross shares [formula images not reproduced], thereby obtaining additive shares of r_t ⊙ (h_{t-1}·W_hn + b_hn). Finally, the client and the server each perform their local computation [formula images not reproduced] and obtain the additive shares <n_t>_1 and <n_t>_2 of the candidate state n_t = tanh(X_t·W_in + b_in + r_t ⊙ (h_{t-1}·W_hn + b_hn)).
State update:
Both parties call Had_to_add(0.5 - <z_t>_1, <n_t>_2) and Had_to_add(<n_t>_1, 0.5 - <z_t>_2) to obtain the cross shares of (1 - z_t) ⊙ n_t [formula images not reproduced], and call Had_to_add(<z_t>_1, <h_{t-1}>_2) and Had_to_add(<h_{t-1}>_1, <z_t>_2) to obtain the cross shares of z_t ⊙ h_{t-1} [formula images not reproduced]. The client then performs its local computation to obtain <h_t>_1, and the server performs its local computation to obtain <h_t>_2 [formula images not reproduced], so that <h_t>_1 + <h_t>_2 = h_t = (1 - z_t) ⊙ n_t + z_t ⊙ h_{t-1}.
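For reference, the plaintext computation that the four share-wise parts above jointly emulate is one standard GRU step, written here with the parameter names used in the text (a sketch; the list-based helpers are illustrative):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def matvec(W, x):
    # y_j = sum_i x_i * W[i][j]  (x of length n, W of shape n x m)
    return [sum(x[i] * W[i][j] for i in range(len(x)))
            for j in range(len(W[0]))]

def vadd(*vs):
    return [sum(t) for t in zip(*vs)]

def gru_cell(x_t, h_prev, p):
    """One GRU step: reset gate, update gate, candidate state, state update."""
    r = [sigmoid(v) for v in vadd(matvec(p["W_ir"], x_t), p["b_ir"],
                                  matvec(p["W_hr"], h_prev), p["b_hr"])]
    z = [sigmoid(v) for v in vadd(matvec(p["W_iz"], x_t), p["b_iz"],
                                  matvec(p["W_hz"], h_prev), p["b_hz"])]
    hn = vadd(matvec(p["W_hn"], h_prev), p["b_hn"])
    n = [math.tanh(v) for v in vadd(matvec(p["W_in"], x_t), p["b_in"],
                                    [ri * hi for ri, hi in zip(r, hn)])]
    return [(1.0 - zi) * ni + zi * hi for zi, ni, hi in zip(z, n, h_prev)]
```

The secure protocol evaluates exactly these gates, but over secret shares and with sigmoid, tanh and the Hadamard products replaced by their secure counterparts.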
By applying the above inference computations of the parts of the gated recurrent unit, the client, according to the current trajectory data, the second secret shared data and the first random number set, applies the secure interaction protocols and sequentially interacts with the server to perform the computational inference of the four parts, obtaining the first gated output; meanwhile, the server obtains the second gated output through the interactive inference with the client. It should be noted that applications of the secure re-add protocol may be inserted as needed during the gated recurrent unit inference performed interactively by the client and the server. For example, the re_add operation may be used only once at the final output of the gated recurrent unit (GRU), at each output of the GRU, at each output of all mul_to_add functions, or after each interaction with the server.
Step S122: and applying a secure interaction protocol to interact with the server end according to the historical track data, the first gating output, the second gating output shared by the server end secret, the second secret shared data and the first random number set to carry out inference of a historical attention module so as to obtain a first normalized output, and simultaneously obtaining a second normalized output at the server end.
Since the attention candidate generator is based on the GRU, the attention selector uses the dot-product model of the attention mechanism. Thus, in step S122, the client interacts with the server according to the historical trajectory data, the second secret shared data and the first random number set, applying the secure interaction protocols, to perform inference of the gated recurrent unit in the attention candidate generator, obtaining a first candidate output, while a second candidate output is obtained at the server. Then, according to the first candidate output, the first gated output, the server-side secret-shared second gated output and second candidate output, the first random number set and the second secret shared data, the secure interaction protocols are applied to interact with the server to perform inference of the secure attention model (ATTN), obtaining a first normalized output, while a second normalized output is obtained at the server. The process by which the client applies the secure interaction protocols to interact with the server for inference of the gated recurrent unit in the attention candidate generator is similar to the inference process in step S121 and is not repeated here. The inference process of the secure attention model is as follows. Input: the client holds <X>_1 = {<x_1>_1, ..., <x_n>_1} and <Y>_1 = {<y_1>_1, ..., <y_m>_1}; the server holds <X>_2 = {<x_1>_2, ..., <x_n>_2} and <Y>_2 = {<y_1>_2, ..., <y_m>_2}. Output: the client and the server obtain <softmax(X^T Y)>_1 and <softmax(X^T Y)>_2 respectively.
For each j ∈ {1, ..., m} and each i ∈ {1, ..., n}:
the two parties input <x_i>_1^T and <y_j>_2 respectively and call the mul_to_add function, obtaining outputs <f1_i>_1 and <f1_i>_2; the two parties then input <y_j>_1^T and <x_i>_2 respectively and call the mul_to_add function, obtaining outputs <f2_i>_1 and <f2_i>_2. The client then computes its share [sm_{i,j}]_1 and the server computes its share [sm_{i,j}]_2 of the score x_i^T y_j [formula images not reproduced]. Finally, for each j, the two parties input [sm_{i,j}]_1 and [sm_{i,j}]_2 respectively and call the secure softmax function, obtaining <softmax(x_i^T y_j)>_1 and <softmax(x_i^T y_j)>_2.
Here <X>_1 and <Y>_1 denote the client's first candidate output and first gated output respectively, and <X>_2 and <Y>_2 denote the server's second candidate output and second gated output respectively. <softmax(x_i^T y_j)>_1 and <softmax(x_i^T y_j)>_2 are the first normalized output obtained by the client and the second normalized output obtained by the server respectively.
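The plaintext counterpart of the ATTN inference above is dot-product scoring followed by a softmax over the candidates; a sketch (the function names and the normalization over the candidate index i for each y_j are assumptions):

```python
import math

def softmax(scores):
    m = max(scores)                          # subtract max for stability
    e = [math.exp(s - m) for s in scores]
    t = sum(e)
    return [v / t for v in e]

def attn_scores(X, Y):
    """For each y_j, compute sm[i][j] = x_i^T y_j and normalize over the
    candidates i with softmax; returns weights indexed [i][j]."""
    sm = [[sum(a * b for a, b in zip(x, y)) for y in Y] for x in X]
    cols = [softmax([sm[i][j] for i in range(len(X))]) for j in range(len(Y))]
    return [[cols[j][i] for j in range(len(Y))] for i in range(len(X))]
```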
It should be noted that applications of the secure re-add protocol may also be inserted as needed during the secure attention model inference performed interactively by the client and the server. For example, the re_add operation may be used only once at the final output of the secure attention model (ATTN), at each output of all mul_to_add functions, or after each interaction with the server.
Step S123: and applying a secure interaction protocol to interact with the server side to perform full connection layer processing according to the first gating output, the first normalization output, the second gating output and the second normalization output shared by the server side secret, the second secret sharing data and the first random number set, so as to obtain the first inference result.
In the embodiment of the present invention, before the client and the server interactively perform the full connection layer (FC) processing, the client splices the first gated output and the first normalized output to obtain a first spliced output, while the server splices the second gated output and the second normalized output to obtain a second spliced output. Then, according to the first spliced output, the server-side secret-shared second spliced output, the second secret shared data and the first random number set, the client applies the secure interaction protocols and interacts with the server to perform the full connection layer processing. The processing of the full connection layer is as follows. Input: the client holds <x>_1; the server holds <x>_2, W_fc and b_fc. Output: the client and the server obtain <y>_1 and <y>_2 respectively.
First, both parties call mul_to_add(<x>_1, W_fc), obtaining outputs <ans>_1 and <ans>_2; then <y>_1 = <ans>_1. The server then computes: <y>_2 = <ans>_2 + b_fc + <x>_2 · W_fc.
Correctness verification:
<y>_1 + <y>_2 = <ans>_1 + <ans>_2 + b_fc + <x>_2·W_fc
= <x>_1·W_fc + <x>_2·W_fc + b_fc = x·W_fc + b_fc.
Here <x>_1 and <x>_2 denote the client's first spliced output and the server's second spliced output respectively, and <y>_1 and <y>_2 denote the first inference result obtained by the client and the second inference result obtained by the server respectively.
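A scalar sketch of the full-connection-layer protocol above (the third-party randoms are inlined for illustration; in the protocol they come from the offline phase):

```python
import random

def mul_to_add(x1, x2):
    # Multiply-to-add conversion with inlined randoms (c1 + c2 = a * b).
    a, b = random.random(), random.random()
    c1 = random.random()
    c2 = a * b - c1
    p1, p2 = x1 - a, x2 - b
    return c1 + a * p2, c2 + b * p1 + p1 * p2

def secure_fc(x1, x2, W_fc, b_fc):
    """x1, x2 are additive shares of the spliced output x; the server holds
    W_fc and b_fc. Returns additive shares of y = x * W_fc + b_fc."""
    ans1, ans2 = mul_to_add(x1, W_fc)
    y1 = ans1                            # client: <y>_1 = <ans>_1
    y2 = ans2 + b_fc + x2 * W_fc         # server folds in bias and its share
    return y1, y2
```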
It should be noted that applications of the secure re-add protocol may also be inserted as needed while the client and the server interactively perform the full connection layer processing. For example, the re_add operation may be used at each output of all mul_to_add functions, or after each interaction with the server.
Step S13: and receiving a second reasoning result obtained by the server through interactive reasoning on the personnel mobility prediction model according to the first secret shared data, the model parameters and a second random number set generated by a trusted third party, and obtaining a prediction result by combining the first reasoning result and the second reasoning result.
In the embodiment of the present invention, in step S12, the client performs inference on the staff mobility prediction model pre-trained by interacting with the server according to the trajectory data, the second secret shared data transmitted by the server, and the first random number set, and the application security interaction protocol, and obtains the first inference result, and at the same time, the server performs interactive inference on the staff mobility prediction model according to the first secret shared data, the model parameters, and the second random number set generated by the trusted third party, and obtains the second inference result. And the server side sends the acquired second inference result to the client side. In step S13, the client receives the second inference result sent by the server, and obtains the predicted result by combining the first inference result obtained at the client. The prediction result indicates possible places of the user to which the client belongs and corresponding probabilities.
In embodiments of the present invention, correctness is part of the computational feasibility of the secure interaction protocols. The accuracy of the personnel mobility security prediction method of the embodiment of the invention is verified through experiments. The initialization stage can be completed offline and has little influence on the performance of the personnel mobility prediction model DeepMove, so the initialization experiment is omitted. To evaluate the performance of DeepMove, the scheme was implemented in python3; the client side ran on a computer with an Intel Core i7 at 2.8 GHz and 16 GB RAM, and the server side on a machine with a Tesla K80 and 62 GB RAM. The experimental data is the public Foursquare check-in dataset, which records users' check-ins at places over a period of time; each record comprises a user ID, a time, a user position and a point of interest. Sessions are generated from adjacent records of a user, and users with few records or sessions are filtered out. The effectiveness of the scheme is evaluated in terms of accuracy, running time and communication traffic.
FIG. 4 shows the top1 and top5 accuracy under a plurality of different schemes. In FIG. 4, a) shows the top1 accuracy of schemes r-0 to r-6, b) shows the top5 accuracy of schemes r-0 to r-6, and c) shows the top1 and top5 accuracy of schemes r-7, r-8 and ora. top1 indicates that the true location is the first predicted location, and top5 indicates that the true location is among the first five predicted locations. Schemes r-0 to r-8 all apply the personnel mobility security prediction method of the embodiment of the invention, differing only in where the re-add protocol (re_add) is used. Scheme r-0 uses no re_add; r-1 uses re_add once at the final outputs of the GRU and ATTN respectively; r-2 uses re_add only once at the final output of the GRU; r-3 uses re_add only once at the final output of ATTN; r-4 uses re_add at each output of the GRU; r-5 additionally uses re_add once at the final output of ATTN on the basis of r-4; r-6 additionally uses re_add at each output of all mul_to_add calls on the basis of r-5; r-7 uses re_add only at each output of all mul_to_add calls; and r-8 uses re_add after each interaction. Scheme ora denotes prediction with the original model. Random numbers are generated in two ways: uniformly distributed (uniform) and standard normally distributed (normal). The above schemes are tested with both kinds of random values in FIG. 4. All results in FIG. 4 are averages over ten or more runs.
As seen from a) and b) in FIG. 4, with either uniformly distributed or standard normally distributed random values, schemes r-0 to r-6 show no clear advantage: their accuracy is very low, almost negligible. In c) in FIG. 4, with uniformly distributed random values the accuracy of both scheme r-7 and scheme r-8 matches that of scheme ora, whereas with standard normally distributed random values only scheme r-8 reaches a higher accuracy. Uniformly distributed random values were therefore selected in subsequent experiments.
In addition, comparing schemes r-0, r-7 and r-8 shows that adding re_add improves prediction accuracy very significantly: without re_add the accuracy is approximately zero, while after adding re_add the accuracy is consistent with the original model. However, schemes r-1 to r-6 show that the position where re_add is inserted is critical; merely using re_add somewhere does not guarantee accuracy. Without re_add, the random numbers a, b, c1 and c2 are accumulated many times during the two parties' computation, which involves reciprocal and exponential operations; the intermediate values of the two parties then often overflow to the maximum or minimum representable value even though their sum is small. Applying re_add keeps both parties' intermediate values within a small range, guaranteeing computational accuracy.
Fig. 5 shows the total running time of the different schemes at the client and the server. The running time here excludes the transmission in the two-party interaction. As can be seen from fig. 5, scheme ora differs little between client-side and server-side running time, while schemes r-0 to r-8 show a consistent trend, with the client and server differing by about 3 s in almost all cases. Although the client-side running time is higher than the server-side, one prediction can be executed within 6 s; the time for the two parties to jointly execute once does not exceed 7 s, which is far higher than scheme ora but entirely acceptable.
As can be seen from fig. 4 and 5, adding re_add imposes little extra time, the difference being only about 1 s, while greatly improving accuracy.
Next, each safety interaction protocol and sub-stage involved in the safety prediction method for personnel mobility according to the embodiment of the present invention are analyzed. The runtime of different secure interaction protocols at the client (local) and server (server) is shown in table 1.
TABLE 1 Single-run time of the different secure interaction protocols at local and server (ms)

                             mul1        mul2        mul3        sigmoid     re_add      tanh
Single run time at local     26.04429    28.92319    25.24988    48.90807    13.78187    101.3744
Single run time at server    5.593141    4.683733    6.953319    13.66059    3.06503     28.90889
mul1, mul2 and mul3 are all applications of mul_to_add, differing only in the dimensions of the input data, and are therefore counted separately. Their running times are similar, within about 3 ms of one another. The two parties need to exchange data only once: the client (local) sends one message to the server and the server sends one to the client, in either order, so there is no waiting time beyond the data transmission itself. sigmoid and tanh have relatively long running times: the sigmoid computation involves exponentials and reciprocals and needs only one data exchange, but the server must send its data before the client. tanh additionally requires a mul_to_add call on top of the sigmoid call, so data is exchanged twice; the first exchange has an ordering requirement and the second does not.
Although the computation time of softmax is short, two interaction rounds are needed: the parties first interact within the mul_to_add call and then interact again to recover the value of D; neither round has an ordering requirement. re_add takes the least time, needing only one data exchange between the two parties, also with no ordering requirement.
Table 2 shows the traffic of the different schemes in the different sub-stages and in total. Among the three sub-stages, the gated recurrent unit (GRU), the secure attention model (ATTN) and the full connection layer (FC), the traffic of FC is the lowest, at most no more than 1.5 KB. The GRU and ATTN traffic dominates the total traffic (ALL). From scheme r-0 to scheme r-8, the traffic increases as the number of re_add calls increases. Even the total traffic with re_add added after every interaction is within about 20 MB, an acceptable range. Thus re_add guarantees accuracy without imposing a large burden on traffic.
Table 2 Traffic of the different schemes in the different sub-stages and in total

Scheme    GRU (MB)    ATTN (MB)    FC (KB)    ALL (MB)
r-0       3.53        4.00         0.5        7.53
r-1       4.01        6.02         0.75       10.03
r-3       4.01        4.54         0.75       8.56
r-5       5.20        6.02         0.5        11.22
r-7       11.74       7.50         1.25       19.24
r-8       14.12       7.40         1.25       21.52
Fig. 6 and 7 show the time of the three sub-processes at the client and the server respectively. At both the client and the server, the share of time decreases in the order ATTN, GRU, FC. This is because the ATTN process needs additional time to compute the similarity of each path and perform the sequence alignment.
Fig. 8 shows the overall running time of the different schemes at a network speed of 4.8 MB/s. The overall running time is between 18 s and 45 s, i.e. the future path prediction for a given user completes in under 1 min, which is a considerable result. Transmission accounts for the larger part of the total time, while the local running time of the two parties is short. The transmission time depends mainly on the network speed; the 4.8 MB/s used in the experiment is fairly typical in practice, and a higher-speed network can be arranged for data transmission in actual use.
The personnel mobility security prediction method above is a secret-sharing-based privacy protection method for the DeepMove neural network. Several secure interaction protocols are designed for the execution process of DeepMove; these protocols allow the two parties to convert and compute on data and to execute neural network inference. By randomly splitting the historical path and the model parameters into secret shares, computation is performed without either participant learning the underlying data, revealing neither party's data and preserving data privacy. Specifically, safe and efficient two-party secure interaction protocols are designed for the nonlinear functions in the personnel mobility prediction model, such as sigmoid, tanh and softmax; a safe and effective solution is designed for the precision problem in the computation; and the security of the protocols is proved in the semi-honest adversary model. Experiments confirm that the secure interaction protocols have no influence on inference precision, the accuracy is not reduced, and the computation and communication overheads are small.
In the embodiment of the invention, the client obtains the trajectory data of the user to which it belongs and randomly splits the trajectory data into first secret shared data; obtains a first inference result by applying the secure interaction protocols, according to the trajectory data, the second secret shared data transmitted by the server and the first random number set generated by the trusted third party, to interact with the server and perform inference of the pre-trained personnel mobility prediction model, wherein the second secret shared data is obtained by the server randomly splitting the model parameters of the personnel mobility prediction model; and receives a second inference result obtained by the server through interactive inference on the personnel mobility prediction model according to the first secret shared data, the model parameters and a second random number set generated by the trusted third party, and obtains the prediction result by combining the first inference result and the second inference result. Safe and effective prediction can thus be realized while guaranteeing accuracy and reducing communication overhead.
Based on the same inventive concept, the embodiment of the invention also provides a client device. As shown in fig. 9, the client device includes: a first data acquisition unit, a first prediction unit and a result acquisition unit. Wherein:
the first data acquisition unit is used for acquiring trajectory data of the user to which the client belongs and randomly splitting the trajectory data into first secret shared data;
the first prediction unit is used for obtaining a first inference result by applying the secure interaction protocols, according to the trajectory data, the second secret shared data transmitted by the server and the first random number set generated by the trusted third party, to interact with the server and perform inference of the pre-trained personnel mobility prediction model, wherein the second secret shared data is obtained by the server randomly splitting the model parameters of the personnel mobility prediction model;
and the result obtaining unit is used for receiving a second reasoning result obtained by the server through interactive reasoning on the personnel mobility prediction model according to the first secret shared data, the model parameters and a second random number set generated by a trusted third party, and obtaining the prediction result by combining the first reasoning result and the second reasoning result.
For convenience of description, the above client devices are described as being divided into various modules by functions, and described separately. Of course, the functions of the modules may be implemented in the same or multiple software and/or hardware in implementing embodiments of the invention.
The client device in the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
An embodiment of the invention also provides a safety prediction method for personnel mobility that is applied at the server side. As shown in fig. 10, the safety prediction method for personnel mobility comprises the following steps:
step S21: and obtaining model parameters of a pre-trained personnel mobility prediction model, and randomly dividing the model parameters into second secret shared data according to the model parameters.
In the embodiment of the invention, before step S21, the server trains the personnel mobility prediction model on historical data and obtains the model parameters of the converged model. In step S21, the server randomly splits these model parameters into second secret shared data for the subsequent interactive processing with the client.
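The random splitting in step S21 is additive secret sharing. As a minimal sketch (not the patent's exact implementation), the server draws a random mask of the same shape as each parameter tensor and keeps the complement, so that neither share alone reveals the parameters:

```python
import numpy as np

def split_into_shares(value, rng):
    """Additively secret-share `value`: value = share_a + share_b.
    Each share alone is statistically masked by the random draw."""
    share_a = rng.standard_normal(value.shape)
    share_b = value - share_a
    return share_a, share_b

rng = np.random.default_rng(seed=0)
W = np.arange(6.0).reshape(2, 3)            # stand-in model parameter matrix
W_keep, W_send = split_into_shares(W, rng)  # W_send goes to the other party
assert np.allclose(W_keep + W_send, W)      # the shares reconstruct exactly
```

Floating-point shares are used here only for illustration; deployed secret-sharing systems usually work with fixed-point values over a ring.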
Step S22: obtaining a second inference result by applying a secure interaction protocol, according to the model parameters, first secret shared data transmitted by the client, and a second random number set generated by a trusted third party, to interact with the client for inference of the pre-trained personnel mobility prediction model, wherein the first secret shared data is obtained by the client randomly splitting the trajectory data of the user to which the client belongs.
In the embodiment of the invention, before step S22, the second random number set is randomly generated by the trusted third party and transmitted to the server; the trusted third party may generate the random numbers by sampling a standard normal distribution or a uniform distribution.
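The patent only specifies that the trusted third party samples random numbers offline; a common concrete form for the randomness behind a multiply-to-add conversion is a Beaver triple (a, b, c) with c = a·b, distributed to the two parties as additive shares. A hypothetical sketch of that offline phase:

```python
import numpy as np

def beaver_triple_shares(shape, rng):
    """Offline-phase sketch: sample (a, b, c) with c = a * b and hand one
    additive share of each value to each of the two parties."""
    a = rng.standard_normal(shape)
    b = rng.standard_normal(shape)
    c = a * b
    def split(v):
        s = rng.standard_normal(shape)
        return s, v - s
    return split(a), split(b), split(c)

rng = np.random.default_rng(seed=1)
(a1, a2), (b1, b2), (c1, c2) = beaver_triple_shares((4,), rng)
# The triple is consistent across the two parties' shares:
assert np.allclose((a1 + a2) * (b1 + b2), c1 + c2)
```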
The trajectory data of the user to which the client belongs comprises current trajectory data and historical trajectory data; correspondingly, the first secret shared data comprises current trajectory shared data and historical trajectory shared data. In step S22, optionally, the server first applies a secure interaction protocol, according to the current trajectory shared data, the model parameters and the second random number set, to interact with the client for inference of a gated recurrent unit, obtaining a second gated output, with a first gated output obtained at the client. The secure interaction protocol comprises at least one of a multiply-to-add conversion protocol, a secure sigmoid protocol, a secure tanh protocol, a secure softmax protocol, and a secure re-add protocol. The inference of the gated recurrent unit performed interactively by the server and the client under the secure interaction protocol is the same as the method of step S121 and is not repeated here.
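As an illustrative sketch of the multiply-to-add conversion listed above (assuming Beaver-triple randomness, which the patent does not spell out): each party masks its input shares, the masked differences are opened, and both parties end up with additive shares of the product without seeing the other's input.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def split(v):
    """Additively share a vector between the two parties."""
    s = rng.standard_normal(v.shape)
    return s, v - s

def mul_to_add(x_shares, y_shares, triple):
    """Sketch of a multiply-to-add conversion with a Beaver triple
    (a, b, c = a*b). Only the masked values e = x - a and f = y - b
    are opened, and they reveal nothing about x or y."""
    (x1, x2), (y1, y2) = x_shares, y_shares
    (a1, a2), (b1, b2), (c1, c2) = triple
    e = (x1 - a1) + (x2 - a2)            # jointly opened
    f = (y1 - b1) + (y2 - b2)            # jointly opened
    z1 = c1 + e * b1 + f * a1 + e * f    # one party adds the public e*f
    z2 = c2 + e * b2 + f * a2
    return z1, z2                        # additive shares of x * y

x, y = np.array([2.0, -1.0]), np.array([3.0, 5.0])
a, b = rng.standard_normal(2), rng.standard_normal(2)
z1, z2 = mul_to_add(split(x), split(y), (split(a), split(b), split(a * b)))
assert np.allclose(z1 + z2, x * y)
```

The correctness follows from expanding c + eb + fa + ef with c = ab, e = x − a, f = y − b, which collapses to xy.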
Then, according to the historical trajectory shared data, the second gated output, the first gated output secret-shared by the client, the model parameters and the second random number set, the server applies a secure interaction protocol to interact with the client for inference of a historical attention module, obtaining a second normalized output, with a first normalized output obtained at the client. Since the attention candidate generator is based on the GRU, the attention selector uses the dot-product form of the attention mechanism. Therefore, the server first interacts with the client under the secure interaction protocol, according to the historical trajectory shared data, the model parameters and the second random number set, to perform inference of the gated recurrent unit in the attention candidate generator, obtaining a second candidate output while the client obtains a first candidate output. The server then interacts with the client, according to the second candidate output, the second gated output, the first gated output and first candidate output secret-shared by the client, the second random number set and the model parameters, to perform inference of the secure attention model (ATTN), obtaining the second normalized output while the client obtains the first normalized output. The specific process by which the server and the client interactively perform inference of the gated recurrent unit in the attention candidate generator and of the secure attention model is similar to the inference process of step S122 and is not repeated here.
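For reference, the plaintext computation that this secure attention inference emulates is an ordinary dot-product attention over the candidate states: the products map to multiply-to-add invocations and the normalization to the secure softmax protocol. A sketch (function name and shapes are illustrative):

```python
import numpy as np

def dot_product_attention(query, candidates):
    """Plaintext reference: score each historical candidate state against
    the current query by dot product, softmax-normalize the scores, and
    return the weighted context together with the normalized weights."""
    scores = candidates @ query                  # one score per history step
    w = np.exp(scores - scores.max())            # numerically stable softmax
    w = w / w.sum()                              # the "normalized output"
    return w @ candidates, w

rng = np.random.default_rng(seed=3)
context, weights = dot_product_attention(rng.standard_normal(3),
                                         rng.standard_normal((5, 3)))
assert np.isclose(weights.sum(), 1.0) and context.shape == (3,)
```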
Finally, according to the second gated output, the second normalized output, the first gated output and first normalized output secret-shared by the client, the model parameters and the second random number set, the server applies a secure interaction protocol to interact with the client for the processing of a fully connected layer, obtaining the second inference result. In the embodiment of the invention, before the server and the client interactively perform the processing of the fully connected layer (FC), the server splices the second gated output and the second normalized output into a second spliced output, while the client splices the first gated output and the first normalized output into a first spliced output. The server then performs the fully connected layer processing by interacting with the client under the secure interaction protocol, according to the second spliced output, the first spliced output secret-shared by the client, the model parameters and the second random number set, finally obtaining the second inference result at the server and the first inference result at the client.
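Note that the splicing step itself needs no protocol round: concatenation is linear, so each party can splice its own shares locally, and the sum of the spliced shares equals the spliced plaintext. A small check:

```python
import numpy as np

rng = np.random.default_rng(seed=4)

g = rng.standard_normal(4)                 # plaintext gated output
n = rng.standard_normal(4)                 # plaintext normalized output
g1 = rng.standard_normal(4); g2 = g - g1   # shares held by the two parties
n1 = rng.standard_normal(4); n2 = n - n1

# Each party concatenates its own shares with no interaction.
spliced1 = np.concatenate([g1, n1])
spliced2 = np.concatenate([g2, n2])
assert np.allclose(spliced1 + spliced2, np.concatenate([g, n]))
```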
It should be noted that applications of the secure re-add protocol may be inserted as needed while the server and the client interactively perform the inference of the gated recurrent unit (GRU), the secure attention model (ATTN) and the fully connected layer (FC). For example, a re-add (re_add) operation may be used only once at the final output of the gated recurrent unit, or after each output of the GRU; likewise, a re_add operation may be used only once at the final output of the secure attention model (ATTN), after each output of every mul_to_add invocation, or after each interaction with the client.
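The re-add operation can be read as a share refresh: both parties add fresh zero-sum masks supplied by the trusted third party, so shares of intermediate values cannot be correlated across protocol steps. A sketch of this reading (the patent does not give the exact construction):

```python
import numpy as np

rng = np.random.default_rng(seed=5)

def re_add(z1, z2, r1, r2):
    """Refresh the shares of z with masks satisfying r1 + r2 = 0:
    the shared value is preserved while both shares change."""
    return z1 + r1, z2 + r2

z = np.array([1.5, -0.25])
z1 = rng.standard_normal(2); z2 = z - z1   # old shares
r1 = rng.standard_normal(2); r2 = -r1      # zero-sum masks from the TTP
w1, w2 = re_add(z1, z2, r1, r2)
assert np.allclose(w1 + w2, z)             # value unchanged
assert not np.allclose(w1, z1)             # shares refreshed
```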
Step S23: transmitting the second inference result to the client, so that the client obtains a prediction result by combining the first inference result obtained during the interactive inference with the second inference result.
The server sends the second inference result obtained in step S22 to the client, where it is combined with the first inference result obtained locally by the client to yield the prediction result. The prediction result indicates the possible locations of the user to which the client belongs, together with the corresponding probabilities.
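Reconstructing the prediction at the client is a single addition of the two result shares, after which the logits can be mapped to per-location probabilities (the softmax here is illustrative; the protocol's secure softmax may already have produced probabilities):

```python
import numpy as np

logits_client = np.array([0.2, 1.1, -0.4])  # first inference result (local)
logits_server = np.array([0.3, 0.4,  0.1])  # second inference result (received)
logits = logits_client + logits_server      # reconstruct the shared output

p = np.exp(logits - logits.max())
p = p / p.sum()                             # probability per candidate place
predicted_place = int(np.argmax(p))         # index of the most likely place
assert np.isclose(p.sum(), 1.0)
```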
The foregoing describes specific embodiments of the present application. In some cases, the acts or steps recited in the embodiments may be performed in an order different from that of the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or a sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible and may be advantageous.
Based on the same inventive concept, an embodiment of the invention provides a server. As shown in fig. 11, the server comprises a second data acquisition unit, a second prediction unit and a data transmission unit, wherein:
the second data acquisition unit is configured to obtain model parameters of a pre-trained personnel mobility prediction model and to randomly split the model parameters into second secret shared data;
the second prediction unit is configured to obtain a second inference result by applying a secure interaction protocol, according to the model parameters, first secret shared data transmitted by the client, and a second random number set generated by a trusted third party, to interact with the client for inference of the pre-trained personnel mobility prediction model, wherein the first secret shared data is obtained by the client randomly splitting the trajectory data of the user to which the client belongs;
and the data transmission unit is configured to transmit the second inference result to the client, so that the client obtains a prediction result by combining the first inference result obtained during the interactive inference with the second inference result.
For convenience of description, the above server is described as being divided into functional modules. When implementing embodiments of the invention, the functions of the modules may of course be implemented in one or more pieces of software and/or hardware.
The server in the foregoing embodiment is used to implement the corresponding method in the foregoing embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Based on the same inventive concept, an embodiment of the invention further provides a safety prediction system for personnel mobility, which comprises a trusted third party, the above client device and the above server. The safety prediction system of the embodiment provides a personnel mobility prediction model built on the DeepMove framework and realizes secure personnel mobility prediction inference through interaction between the client device and the server, protecting both the privacy of the user data and the privacy of the neural network parameters without reducing prediction accuracy. The server provides the prediction service with its internal model; the client device obtains predictions simply by calling an Application Programming Interface (API) provided by the server and transmitting its own data to the server to be predicted. The server and the client device implement the secure prediction by jointly executing a security protocol: the server inputs the parameters of the neural network, and the user to which the client device belongs inputs the private data to be predicted. The trusted third party is only responsible for generating the random numbers required by the security protocol and does not participate in its concrete operations; it may therefore be a lightweight server or even a personal computer.
Before the client device and the server interactively perform inference of the pre-trained personnel mobility prediction model, the trusted third party generates a first random number set and a second random number set offline by sampling a uniform or standard normal distribution, transmitting the first random number set to the client device and the second random number set to the server. The client device locally acquires the trajectory data of the user to which it belongs and randomly splits it into first secret shared data. The server obtains the model parameters of the pre-trained personnel mobility prediction model and randomly splits them into second secret shared data.
During the interactive inference of the pre-trained personnel mobility prediction model, the client device obtains a first inference result by applying a secure interaction protocol, according to the trajectory data, the second secret shared data and the first random number set, to interact with the server. Meanwhile, the server obtains a second inference result through interactive inference of the model according to the first secret shared data, the model parameters and the second random number set.
Specifically, the interactive inference of the pre-trained personnel mobility prediction model is divided into inference of a gated recurrent unit, inference of a historical attention module and processing of a fully connected layer, where the inference of the historical attention module in turn comprises inference of a gated recurrent unit and inference of a secure attention model. The secure interaction protocol comprises at least one of a multiply-to-add conversion protocol, a secure sigmoid protocol, a secure tanh protocol, a secure softmax protocol, and a secure re-add protocol. The trajectory data comprises current trajectory data and historical trajectory data, and the corresponding first secret shared data comprises current trajectory shared data and historical trajectory shared data.
In the embodiment of the invention, the client device first applies a secure interaction protocol, according to the current trajectory data, the second secret shared data and the first random number set, to interact with the server for inference of the gated recurrent unit, obtaining a first gated output. Meanwhile, the server interacts with the client under the secure interaction protocol, according to the current trajectory shared data, the model parameters and the second random number set, for the same gated recurrent unit inference, obtaining a second gated output. For a more specific method, reference is made to the foregoing method embodiments, which are not repeated here.
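The plaintext recurrence that this interactive GRU inference emulates is the one given in claim 4. As a reference implementation (parameter names follow the claim; shapes are illustrative), where in the protocol every matrix product runs through the multiply-to-add conversion and every sigmoid/tanh through the corresponding secure protocol:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, P):
    """One plaintext GRU step following the relations in claim 4."""
    r = sigmoid(P["Wir"] @ x_t + P["bir"] + P["Whr"] @ h_prev + P["bhr"])
    z = sigmoid(P["Wiz"] @ x_t + P["biz"] + P["Whz"] @ h_prev + P["bhz"])
    n = np.tanh(P["Win"] @ x_t + P["bin"] + r * (P["Whn"] @ h_prev + P["bhn"]))
    return (1.0 - z) * n + z * h_prev    # h_t

rng = np.random.default_rng(seed=6)
d_in, d_h = 3, 4
P = {}
for k in ("Wir", "Wiz", "Win"):
    P[k] = rng.standard_normal((d_h, d_in))   # input-to-hidden weights
for k in ("Whr", "Whz", "Whn"):
    P[k] = rng.standard_normal((d_h, d_h))    # hidden-to-hidden weights
for k in ("bir", "bhr", "biz", "bhz", "bin", "bhn"):
    P[k] = rng.standard_normal(d_h)           # biases

h_t = gru_step(rng.standard_normal(d_in), np.zeros(d_h), P)
assert h_t.shape == (d_h,)
```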
Then, the client device applies a secure interaction protocol, according to the historical trajectory data, the first gated output, the second gated output secret-shared by the server, the second secret shared data and the first random number set, to interact with the server for inference of the historical attention module, obtaining a first normalized output. Meanwhile, the server interacts with the client under the secure interaction protocol, according to the historical trajectory shared data, the second gated output, the first gated output secret-shared by the client, the model parameters and the second random number set, obtaining a second normalized output. For a more specific method, reference is made to the foregoing method embodiments, which are not repeated here.
Finally, the client device applies a secure interaction protocol, according to the first gated output, the first normalized output, the second gated output and second normalized output secret-shared by the server, the second secret shared data and the first random number set, to interact with the server for the fully connected layer processing, obtaining the first inference result. Correspondingly, the server interacts with the client under the secure interaction protocol, according to the second gated output, the second normalized output, the first gated output and first normalized output secret-shared by the client, the model parameters and the second random number set, obtaining the second inference result. For a more specific method, reference is made to the foregoing method embodiments, which are not repeated here.
Based on the same inventive concept, an embodiment of the invention further provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method of any of the above embodiments.
Fig. 12 is a schematic diagram illustrating a more specific hardware structure of an electronic device according to this embodiment, where the electronic device may include: a processor 1201, a memory 1202, an input/output interface 1203, a communication interface 1204, and a bus 1205. Wherein the processor 1201, the memory 1202, the input/output interface 1203 and the communication interface 1204 enable communication connections with each other within the device via the bus 1205.
The processor 1201 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application-Specific Integrated Circuit (ASIC), or one or more integrated circuits, and is configured to execute related programs to implement the technical solutions provided by the method embodiments of the invention.
The memory 1202 may be implemented in the form of ROM (Read-Only Memory), RAM (Random Access Memory), a static storage device, a dynamic storage device, or the like. The memory 1202 may store an operating system and other application programs; when the technical solutions provided by the method embodiments of the invention are implemented in software or firmware, the relevant program code is stored in the memory 1202 and called for execution by the processor 1201.
The input/output interface 1203 is used to connect an input/output module for information input and output. The input/output module may be configured as a component within the device (not shown in the figure) or externally connected to the device to provide the corresponding function. Input devices may include a keyboard, mouse, touch screen, microphone and various sensors; output devices may include a display, speaker, vibrator, indicator light, and the like.
The communication interface 1204 is used to connect a communication module (not shown in the figure) to realize communication interaction between this device and other devices. The communication module may communicate in a wired manner (e.g., USB, network cable) or in a wireless manner (e.g., mobile network, Wi-Fi, Bluetooth).
The bus 1205 includes a path to transfer information between the various components of the device, such as the processor 1201, memory 1202, input/output interface 1203, and communication interface 1204.
It should be noted that although the above device only shows the processor 1201, the memory 1202, the input/output interface 1203, the communication interface 1204 and the bus 1205, in a specific implementation the device may also include other components necessary for normal operation. Furthermore, those skilled in the art will appreciate that the above device may also include only the components necessary to implement the embodiments of the invention, and need not include all of the components shown in the figure.
Those of ordinary skill in the art will understand that the discussion of any embodiment above is merely exemplary and is not intended to imply that the scope of the disclosure is limited to these examples; within the spirit of this application, features of the above embodiments or of different embodiments may be combined, steps may be implemented in any order, and many other variations of the different aspects of the invention exist as described above, which are not provided in detail for the sake of brevity.
The present invention is intended to embrace all such alternatives, modifications and variations that fall within the broad scope of the present application. Therefore, any omissions, modifications, equivalents, improvements and the like made within the spirit and principles of the invention are intended to be included within the scope of the disclosure.

Claims (10)

1. A method for safely predicting the mobility of a person, the method comprising:
acquiring trajectory data of a user to which a client belongs, and randomly splitting the trajectory data into first secret shared data;
obtaining a first inference result by applying a secure interaction protocol, according to the trajectory data, second secret shared data transmitted by a server, and a first random number set generated by a trusted third party, to interact with the server for inference of a pre-trained personnel mobility prediction model, wherein the second secret shared data is obtained by the server randomly splitting model parameters of the personnel mobility prediction model;
and receiving a second inference result obtained by the server through interactive inference on the personnel mobility prediction model according to the first secret shared data, the model parameters and a second random number set generated by the trusted third party, and obtaining a prediction result by combining the first inference result and the second inference result.
2. The method of claim 1, wherein the secure interaction protocol comprises: at least one of a multiply-to-add conversion protocol, a secure sigmoid protocol, a secure tanh protocol, a secure softmax protocol, and a secure re-add protocol.
3. The method of claim 1, wherein the trajectory data comprises current trajectory data and historical trajectory data, and wherein obtaining the first inference result by applying the secure interaction protocol to interact with the server for inference of the pre-trained personnel mobility prediction model according to the trajectory data, the second secret shared data transmitted by the server and the first random number set comprises:
applying a secure interaction protocol, according to the current trajectory data, the second secret shared data and the first random number set, to interact with the server for inference of a gated recurrent unit, obtaining a first gated output, with a second gated output obtained at the server;
applying a secure interaction protocol, according to the historical trajectory data, the first gated output, the second gated output secret-shared by the server, the second secret shared data and the first random number set, to interact with the server for inference of a historical attention module, obtaining a first normalized output, with a second normalized output obtained at the server;
and applying a secure interaction protocol, according to the first gated output, the first normalized output, the second gated output and second normalized output secret-shared by the server, the second secret shared data and the first random number set, to interact with the server for fully connected layer processing, obtaining the first inference result.
4. The method of claim 3, wherein the gated recurrent unit comprises a reset gate, an update gate, a candidate state output and an update rule, which respectively satisfy the following relations:
r_t = σ(W_ir·x_t + b_ir + W_hr·h_(t-1) + b_hr),
z_t = σ(W_iz·x_t + b_iz + W_hz·h_(t-1) + b_hz),
n_t = tanh(W_in·x_t + b_in + r_t ⊙ (W_hn·h_(t-1) + b_hn)),
h_t = (1 - z_t) ⊙ n_t + z_t ⊙ h_(t-1),
wherein W_ir, b_ir, W_hr, b_hr, W_iz, b_iz, W_hz, b_hz, W_in, b_in, W_hn, b_hn are model parameters, x_t is the input vector at the t-th time step, h_(t-1) is the information stored at the previous time step t-1, r_t is the output of the reset gate at the t-th time step, z_t is the output of the update gate at the t-th time step, n_t is the new memory content at the t-th time step, and h_t is the information saved at the t-th time step.
5. A method for safely predicting the mobility of a person, the method comprising:
obtaining model parameters of a pre-trained personnel mobility prediction model, and randomly splitting the model parameters into second secret shared data;
obtaining a second inference result by applying a secure interaction protocol, according to the model parameters, first secret shared data transmitted by a client, and a second random number set generated by a trusted third party, to interact with the client for inference of the pre-trained personnel mobility prediction model, wherein the first secret shared data is obtained by the client randomly splitting trajectory data of a user to which the client belongs;
and transmitting the second inference result to the client, so that the client obtains a prediction result by combining the first inference result obtained during the interactive inference with the second inference result.
6. The method of claim 5, wherein the first secret shared data comprises current trajectory shared data and historical trajectory shared data, and wherein obtaining the second inference result by applying the secure interaction protocol to interact with the client for inference of the pre-trained personnel mobility prediction model according to the model parameters, the first secret shared data transmitted by the client and the second random number set generated by the trusted third party comprises:
applying a secure interaction protocol, according to the current trajectory shared data, the model parameters and the second random number set, to interact with the client for inference of a gated recurrent unit, obtaining a second gated output, with a first gated output obtained at the client;
applying a secure interaction protocol, according to the historical trajectory shared data, the second gated output, the first gated output secret-shared by the client, the model parameters and the second random number set, to interact with the client for inference of a historical attention module, obtaining a second normalized output, with a first normalized output obtained at the client;
and applying a secure interaction protocol, according to the second gated output, the second normalized output, the first gated output and first normalized output secret-shared by the client, the model parameters and the second random number set, to interact with the client for fully connected layer processing, obtaining the second inference result.
7. A client device, the client device comprising:
a first data acquisition unit, configured to acquire trajectory data of a user to which the client belongs and to randomly split the trajectory data into first secret shared data;
a first prediction unit, configured to obtain a first inference result by applying a secure interaction protocol, according to the trajectory data, second secret shared data transmitted by a server, and a first random number set generated by a trusted third party, to interact with the server for inference of a pre-trained personnel mobility prediction model, wherein the second secret shared data is obtained by the server randomly splitting model parameters of the personnel mobility prediction model;
and a result acquisition unit, configured to receive a second inference result obtained by the server through interactive inference on the personnel mobility prediction model according to the first secret shared data, the model parameters and a second random number set generated by the trusted third party, and to obtain a prediction result by combining the first inference result and the second inference result.
8. A server, characterized in that the server comprises:
a second data acquisition unit, configured to obtain model parameters of a pre-trained personnel mobility prediction model and to randomly split the model parameters into second secret shared data;
a second prediction unit, configured to obtain a second inference result by applying a secure interaction protocol, according to the model parameters, first secret shared data transmitted by a client, and a second random number set generated by a trusted third party, to interact with the client for inference of the pre-trained personnel mobility prediction model, wherein the first secret shared data is obtained by the client randomly splitting trajectory data of a user to which the client belongs;
and a data transmission unit, configured to transmit the second inference result to the client, so that the client obtains a prediction result by combining the first inference result obtained during the interactive inference with the second inference result.
9. A system for safely predicting the mobility of persons, the system comprising: a trusted third party, a client device as claimed in claim 7 and a server as claimed in claim 8.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 6 when executing the program.
CN202210299560.6A 2022-03-25 2022-03-25 Safety prediction method and system for personnel mobility, client device and server Pending CN114679316A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210299560.6A CN114679316A (en) 2022-03-25 2022-03-25 Safety prediction method and system for personnel mobility, client device and server

Publications (1)

Publication Number Publication Date
CN114679316A true CN114679316A (en) 2022-06-28

Family

ID=82074578

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210299560.6A Pending CN114679316A (en) 2022-03-25 2022-03-25 Safety prediction method and system for personnel mobility, client device and server

Country Status (1)

Country Link
CN (1) CN114679316A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111314345A (en) * 2020-02-19 2020-06-19 安徽大学 Method and device for protecting sequence data privacy, computer equipment and storage medium
CN112182649A (en) * 2020-09-22 2021-01-05 上海海洋大学 Data privacy protection system based on safe two-party calculation linear regression algorithm
CN113065145A (en) * 2021-03-25 2021-07-02 上海海洋大学 Privacy protection linear regression method based on secret sharing and random disturbance

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIE FENG等: "DeepMove:Predicting Human Mobility with Attentional Recurrent Networks", 《ACM》, pages 1459 - 1468 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination