CN111325322A - Deep learning method, system, server and storage medium based on privacy protection - Google Patents


Info

Publication number
CN111325322A
Authority
CN
China
Prior art keywords
trained
deep learning
result
extraction module
dissimilarity
Prior art date
Legal status
Pending
Application number
CN202010092513.5A
Other languages
Chinese (zh)
Inventor
刘利
郭鹏程
Current Assignee
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai filed Critical OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010092513.5A priority Critical patent/CN111325322A/en
Publication of CN111325322A publication Critical patent/CN111325322A/en
Priority to PCT/CN2021/071089 priority patent/WO2021159898A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of neural networks and discloses a deep learning method, system, server and storage medium based on privacy protection. The method is applied to the server and comprises the following steps: sending the feature extraction module of a trained deep learning model to a mobile terminal, so that the mobile terminal inputs data to be processed into the feature extraction module, obtains the feature information corresponding to the data to be processed, and feeds back the feature information; and inputting the feature information fed back by the mobile terminal into the result generation module of the trained deep learning model, outputting a result, and sending the result to the mobile terminal. The method and system solve the prior-art problem that private information is leaked when deep learning is performed with a deep learning framework deployed in the cloud or on a server.

Description

Deep learning method, system, server and storage medium based on privacy protection
Technical Field
The invention relates to the technical field of neural networks, in particular to a deep learning method, a deep learning system, a deep learning server and a computer readable storage medium based on privacy protection.
Background
In artificial intelligence technology, deep learning is a machine learning technique that realizes artificial intelligence by simulating the neural networks of the human brain. Owing to its efficient data feature extraction and analysis capabilities, it is now widely used in computer vision, natural language processing, autonomous driving, smart homes and other related fields and industries, and affects people's daily lives.
Currently, the deep learning framework is generally deployed in the cloud or on a server, where both model training and inference are performed. A user sends the pictures, videos, texts or voice to be processed to the cloud/server; the cloud/server processes them according to the user's requirements and then informs the user of the processing result, thereby providing the relevant service. However, because the user must send the raw pictures, videos, texts or voice to the cloud/server, this data may involve private information, and it may be illegally intercepted in transit, so that the private information is leaked.
Disclosure of Invention
The invention mainly aims to provide a deep learning method, a deep learning system, a deep learning server and a computer-readable storage medium based on privacy protection, so as to solve the technical problem in the prior art that private information is leaked when deep learning is performed with a deep learning framework deployed in the cloud or on a server.
In order to achieve the above object, the present invention provides a deep learning method based on privacy protection, which is applied to a server, and comprises the following steps:
sending a feature extraction module in a trained deep learning model to a mobile terminal so that the mobile terminal inputs data to be processed into the feature extraction module, obtains feature information corresponding to the data to be processed, and feeds back the feature information;
and inputting the characteristic information fed back by the mobile terminal into a result generation module in the trained deep learning model, outputting a result, and sending the result to the mobile terminal.
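The two server-side steps above can be sketched as follows. This is a minimal illustration rather than the patented implementation: all function names are hypothetical, and trivial arithmetic stands in for the neural-network layers.

```python
def feature_extractor(data):
    # Stand-in for the layers from the input layer to the boundary layer;
    # this is the module that gets shipped to the mobile terminal.
    return [v * 2.0 for v in data]

def result_generator(features):
    # Stand-in for the layers after the boundary layer; stays on the server.
    return sum(features)

def server_handle_request(features_from_terminal):
    # The server only ever receives feature information, never the raw data.
    return result_generator(features_from_terminal)

# On the mobile terminal: the raw data stays local, only features travel.
raw_data = [1.0, 2.0, 3.0]
features = feature_extractor(raw_data)
result = server_handle_request(features)
```

The point of the split is visible in the call pattern: `server_handle_request` is the only network boundary, and its argument is already-transformed feature information.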
Optionally, before the step of sending the feature extraction module in the trained deep learning model to the mobile terminal, the method includes:
inputting a plurality of samples into a feature extraction module in a deep learning model to be trained, and outputting feature information corresponding to each sample, wherein each sample has a corresponding preset label;
calculating a first dissimilarity degree between every two pieces of feature information corresponding to the same preset label, and calculating a second dissimilarity degree between every two pieces of feature information corresponding to different preset labels;
if the first dissimilarity degree and the second dissimilarity degree do not accord with a preset rule, adjusting a feature extraction module according to the first dissimilarity degree and the second dissimilarity degree, and executing: inputting the plurality of samples into a feature extraction module, and outputting feature information corresponding to each sample;
if the first dissimilarity degree and the second dissimilarity degree accord with the preset rule, a trained feature extraction module is obtained;
and inputting the characteristic information of each sample output by the trained characteristic extraction module into a to-be-trained result generation module for training to obtain a trained result generation module.
Optionally, the step of calculating a first dissimilarity degree between every two pieces of feature information corresponding to the same preset tag, and calculating a second dissimilarity degree between every two pieces of feature information corresponding to different preset tags includes:
respectively calculating, according to a dissimilarity calculation formula, the first dissimilarity between every two pieces of feature information corresponding to the same preset label, and respectively calculating, according to the dissimilarity calculation formula, the second dissimilarity between every two pieces of feature information corresponding to different preset labels;
the dissimilarity degree calculation formula is as follows:
[Formula images not reproduced in this text: the expressions for the first dissimilarity L1 and the second dissimilarity L2.]
wherein margin is a predetermined hyperparameter, L1 is the first dissimilarity, L2 is the second dissimilarity, f1 and f2 are the feature information of two samples with the same preset label, and f3 and f4 are the feature information of two samples with different preset labels.
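The published formula images are not reproduced in the text, so the exact expressions are unknown. As one plausible reading consistent with the surrounding description (a distance that is small for similar feature vectors, with margin acting as a cap on the different-label term), a sketch might look like this; both functions and their specific formulas are assumptions, not the patent's actual formulas.

```python
import math

def first_dissimilarity(f1, f2):
    # Assumed form: Euclidean distance between two same-label feature vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))

def second_dissimilarity(f3, f4, margin=1.0):
    # Assumed form: Euclidean distance between two different-label feature
    # vectors, capped by the margin hyperparameter.
    return min(math.sqrt(sum((a - b) ** 2 for a, b in zip(f3, f4))), margin)
```

Under this reading, a dissimilarity of 0 means identical feature vectors, larger values mean more dissimilar vectors, and margin bounds how far apart different-label pairs need to be pushed, which matches the interpretation given later in the description.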
Optionally, if the first dissimilarity degree and the second dissimilarity degree do not meet a preset rule, the feature extraction module is adjusted according to the first dissimilarity degree and the second dissimilarity degree to execute: the step of inputting the plurality of samples to the feature extraction module and outputting the feature information corresponding to each sample includes:
judging whether all the first dissimilarity degrees are smaller than or equal to a first preset threshold value and whether all the second dissimilarity degrees are larger than or equal to a second preset threshold value, wherein the first preset threshold value is smaller than the second preset threshold value;
if at least one first dissimilarity degree is greater than a first preset threshold value and/or at least one second dissimilarity degree is less than a second preset threshold value, adjusting a feature extraction module according to the first dissimilarity degree and the second dissimilarity degree, and executing: and inputting the plurality of samples into a feature extraction module, and outputting feature information corresponding to each sample.
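The preset-rule check in the steps above can be sketched as a small predicate, assuming the dissimilarities have already been computed (the helper name is hypothetical):

```python
def meets_preset_rule(first_dissims, second_dissims,
                      first_threshold, second_threshold):
    # Rule from the text: every same-label dissimilarity must be at most the
    # first preset threshold, every different-label dissimilarity at least the
    # second preset threshold, and the first threshold is below the second.
    assert first_threshold < second_threshold
    return (all(d <= first_threshold for d in first_dissims)
            and all(d >= second_threshold for d in second_dissims))
```

A single violating pair in either list makes the predicate false, which is exactly the "at least one ... and/or at least one ..." condition that triggers another round of feature-extractor training.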
Optionally, the step of inputting the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module for training, and obtaining the trained result generation module includes:
inputting the characteristic information of each sample output by the trained characteristic extraction module into a to-be-trained result generation module, and outputting an actual result corresponding to each sample;
obtaining a loss function value according to the actual result and the preset expected result of each sample;
judging whether the loss function value is less than or equal to a third preset threshold value or not;
if not, adjusting the parameters of the result generation module by adopting a back propagation algorithm according to the loss function value, and executing: inputting the characteristic information of each sample output by the trained characteristic extraction module into a result generation module, and outputting an actual result corresponding to each sample;
and if so, stopping training and obtaining a trained result generation module.
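The training loop for the result generation module described above can be sketched with a toy one-parameter model standing in for the module; the gradient step stands in for the back propagation algorithm, and all names and numbers are illustrative assumptions.

```python
def train_result_generator(features, expected, w=0.0, lr=0.1, threshold=1e-4):
    # Toy stand-in for the result generation module: output = w * feature.
    # The loss is the mean squared error between actual and expected results.
    while True:
        actual = [w * f for f in features]
        loss = sum((a - e) ** 2 for a, e in zip(actual, expected)) / len(features)
        if loss <= threshold:
            return w  # loss at or below the third preset threshold: stop
        grad = sum(2 * (a - e) * f
                   for a, e, f in zip(actual, expected, features)) / len(features)
        w -= lr * grad  # adjust the parameter, then rerun the forward pass

trained_w = train_result_generator([1.0, 2.0], [2.0, 4.0])
```

The loop structure mirrors the steps: forward pass, loss value, threshold comparison, parameter adjustment, repeat.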
Optionally, the step of inputting the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module for training, and obtaining the trained result generation module includes:
inputting the feature information of each sample output by the trained feature extraction module into a result generation module, outputting an actual result corresponding to each sample, and updating the training cumulative number n to be n +1, wherein n is more than or equal to 0;
obtaining a loss function value according to the actual result and the preset expected result of each sample;
judging whether the loss function value is less than or equal to a third preset threshold value or not;
if the loss function value is larger than a third preset threshold value, judging whether the training accumulated times are smaller than preset times;
if the training accumulated times are less than the preset times, adjusting the parameters of the result generation module by adopting a back propagation algorithm according to the loss function value, and executing: inputting the characteristic information of each sample output by the trained characteristic extraction module into a result generation module, and outputting an actual result corresponding to each sample;
if the accumulated training times are larger than or equal to the preset times, stopping training and obtaining a trained result generation module;
and if the loss function value is smaller than or equal to a third preset threshold value, stopping training and obtaining a trained result generation module.
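The variant above, which also stops once the cumulative training count reaches a preset number, can be written generically; the two callbacks are hypothetical stand-ins for the forward pass and the back-propagation update.

```python
def train_with_round_cap(forward_and_loss, adjust, threshold, max_rounds):
    # forward_and_loss: runs one forward pass and returns the loss value.
    # adjust: one back-propagation parameter update.
    n = 0  # cumulative number of training rounds
    while True:
        loss = forward_and_loss()
        n += 1  # update the cumulative count n to n + 1
        if loss <= threshold:
            return "converged", n  # loss at or below the third threshold
        if n >= max_rounds:
            return "round-limit", n  # stop even though the loss is still high
        adjust()

losses = iter([1.0, 0.5, 0.01])
status, rounds = train_with_round_cap(lambda: next(losses), lambda: None, 0.05, 10)
```

Compared with the previous embodiment, the only change is the second stopping condition, which bounds training time when the loss never reaches the threshold.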
In addition, in order to achieve the above object, the present invention provides a deep learning method based on privacy protection, which is applied to a mobile terminal, and the deep learning method based on privacy protection includes the steps of:
receiving a feature extraction module in the trained deep learning model sent by the server;
inputting data to be processed into the feature extraction module to obtain feature information corresponding to the data to be processed;
sending the characteristic information to the server so that the server inputs the characteristic information to a result generation module in the trained deep learning model, outputs a result and feeds back the result;
and receiving the result sent by the server.
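The mobile-terminal side of the protocol can be summarized in a few lines; the network transport is abstracted into a callback, and all names are illustrative assumptions.

```python
def terminal_flow(raw_data, feature_extractor, send_to_server):
    # Mobile-terminal side: run the received feature extraction module locally,
    # transmit only the feature information, and receive the result back.
    features = feature_extractor(raw_data)  # raw data never leaves the device
    return send_to_server(features)         # the server replies with the result

extract = lambda data: [v + 1 for v in data]   # stand-in feature extractor
fake_server = lambda feats: sum(feats)         # stand-in server round trip
result = terminal_flow([1, 2, 3], extract, fake_server)
```

Note that `raw_data` is consumed only inside `feature_extractor`; nothing but `features` ever crosses the network boundary.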
In addition, to achieve the above object, the present invention provides a deep learning system based on privacy protection, including:
the transmitting module is used for sending the feature extraction module of the trained deep learning model to a mobile terminal, so that the mobile terminal inputs data to be processed into the feature extraction module, obtains the feature information corresponding to the data to be processed, and feeds back the feature information;
and the receiving module is used for receiving the feature information fed back by the mobile terminal, inputting the feature information into the result generation module of the trained deep learning model, outputting a result, and sending the result to the mobile terminal.
In addition, to achieve the above object, the present invention further provides a deep learning server based on privacy protection, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when executed by the processor, the computer program implements the steps of the deep learning method based on privacy protection as described above.
Furthermore, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the deep learning method based on privacy protection as described above.
According to the deep learning method based on privacy protection, the deep learning system based on privacy protection, the deep learning server based on privacy protection and the computer-readable storage medium, the feature extraction module of the trained neural-network-based deep learning model is sent to the mobile terminal, so that the mobile terminal inputs original data into the feature extraction module, obtains the feature information corresponding to the original data, and feeds back the feature information; the feature information fed back by the mobile terminal is then input into the result generation module of the trained model, a result is output, and the result is sent to the mobile terminal. The original data input into the feature extraction module is analyzed and calculated layer by layer and finally converted into feature information that is entirely different from the original data. Because the user's private data cannot be obtained directly from the feature information, the user can send it to the server for deep learning; even if the feature information is stolen, no privacy is leaked, which improves the security of performing deep learning with the server.
Drawings
FIG. 1 is a schematic diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a deep learning method based on privacy protection according to a first embodiment of the present invention;
FIG. 3 is a flowchart illustrating a deep learning method based on privacy protection according to a second embodiment of the present invention;
FIG. 4 is a flowchart illustrating a detailed process of step S70 in the third embodiment of the deep learning method based on privacy protection according to the present invention;
FIG. 5 is a flowchart illustrating a detailed process of step S70 in the fourth embodiment of the deep learning method based on privacy protection according to the present invention;
FIG. 6 is a schematic diagram of functional modules of the deep learning system based on privacy protection according to the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating the hardware structure of the deep learning server based on privacy protection according to various embodiments of the present invention. The deep learning server based on privacy protection comprises a communication module 01, a memory 02, a processor 03 and the like. Those skilled in the art will appreciate that the server illustrated in fig. 1 may include more or fewer components than those shown, combine certain components, or arrange the components differently. The processor 03 is connected to the memory 02 and the communication module 01 respectively, and the memory 02 stores a computer program that is executed by the processor 03.
The communication module 01 may be connected to an external device through a network. The communication module 01 may receive data sent by an external device, and may also send data, instructions, and information to the external device, where the external device may be an electronic device such as another server, a mobile phone, a tablet computer, a notebook computer, and a desktop computer.
The memory 02 may be used to store software programs and various data. The memory 02 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as sending the feature extraction module of a trained deep learning model to the mobile terminal), and the like; the data storage area may store data or information created according to the use of the deep learning server based on privacy protection. Further, the memory 02 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 03, which is a control center of the deep learning server based on privacy protection, connects various parts of the entire deep learning server based on privacy protection by using various interfaces and lines, and executes various functions and processes data of the deep learning server based on privacy protection by running or executing software programs and/or modules stored in the memory 02 and calling data stored in the memory 02, thereby performing overall monitoring of the deep learning server based on privacy protection. Processor 03 may include one or more processing units; preferably, the processor 03 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 03.
Although not shown in fig. 1, the deep learning server based on privacy protection may further include a circuit control module, where the circuit control module is configured to be connected to a mains power supply, implement power control, and ensure normal operation of other components.
Those skilled in the art will appreciate that the privacy-based deep learning server architecture shown in fig. 1 does not constitute a limitation of privacy-based deep learning servers, and may include more or fewer components than those shown, or some components in combination, or a different arrangement of components.
Various embodiments of the method of the present invention are presented in terms of the above-described hardware architecture.
Referring to fig. 2, in a first embodiment of the deep learning method based on privacy protection of the present invention, the deep learning method based on privacy protection is applied to any deep learning server based on privacy protection, and the deep learning method based on privacy protection includes the steps of:
step S10, sending a feature extraction module in the trained deep learning model to a mobile terminal, so that the mobile terminal inputs data to be processed into the feature extraction module, obtains feature information corresponding to the data to be processed, and feeds back the feature information;
in this embodiment, the neural network adopted by the deep learning model based on the neural network may be a convolutional neural network, a deep neural network, or a cyclic neural network, and is not limited herein. The server designates a certain layer in the middle of the neural network as a boundary layer, decomposes the trained deep learning model based on the neural network into a feature extraction module and a result generation module, the feature extraction module includes a plurality of layers from an input layer to the boundary layer, the input layer of the neural network is the input layer as the feature extraction module, the designated boundary layer is the output layer of the feature extraction module, and the result generation module includes a plurality of layers from the next layer of the boundary layer to the output layer, wherein the next layer of the designated boundary layer is the input layer of the result generation module, and the output layer of the neural network is the output layer of the result generation module. And when receiving the requirement of the feature extraction module sent by the mobile terminal, the server sends the feature extraction module to the mobile terminal through a wireless or wired network. After the mobile terminal receives the feature extraction module, when deep learning needs to be performed on certain original data, the data can be pictures, videos, texts, voices and the like, the mobile terminal inputs the original data to be processed into the feature extraction module, feature information corresponding to the original data is finally output by an output layer of the feature extraction module through analysis and calculation of each layer of the feature extraction module, and the mobile terminal sends the feature information corresponding to the original data to a server through a wireless or wired network.
And step S20, inputting the characteristic information fed back by the mobile terminal into a result generation module in the trained deep learning model, outputting a result, and sending the result to the mobile terminal.
The server receives, over a wireless or wired network, the feature information corresponding to the original data sent by the mobile terminal, and inputs it into the result generation module of the trained neural-network-based deep learning model as the parameters of the module's input layer. After analysis and calculation by each layer of the result generation module, its output layer finally outputs a result, and the server sends the result to the mobile terminal over a wireless or wired network.
In this embodiment, the feature extraction module of the trained neural-network-based deep learning model is sent to the mobile terminal, so that the mobile terminal inputs original data into the feature extraction module, obtains the feature information corresponding to the original data, and feeds back the feature information; the feature information fed back by the mobile terminal is then input into the result generation module of the trained model, a result is output, and the result is sent to the mobile terminal. The original data input into the feature extraction module is analyzed and calculated layer by layer and finally converted into feature information that is entirely different from the original data. Because the user's private data cannot be obtained directly from the feature information, the user can send it to the server for deep learning; even if the feature information is stolen, no privacy is leaked, which improves the security of performing deep learning with the server.
Further, referring to fig. 3, a second embodiment of the deep learning method based on privacy protection according to the present application is proposed according to the first embodiment of the deep learning method based on privacy protection of the present application, and in this embodiment, step S10 includes:
step S30, inputting a plurality of samples into a feature extraction module in the deep learning model to be trained, and outputting feature information corresponding to each sample, wherein each sample has a corresponding preset label;
in this embodiment, a plurality of sample data are obtained, before the deep learning model is trained by using the sample data, the sample data need to be labeled manually, and a corresponding label is set for each training sample, for example, before the deep learning model for gender identification is trained, a label of "male" or "female" is set for each sample data according to the actual corresponding gender of the sample.
The server inputs a plurality of samples with preset labels into a feature extraction module in the deep learning model to be trained for first training, and finally, feature information of each sample is output by an output layer of the feature extraction module through analysis and calculation of each layer in the feature extraction module in a forward propagation mode.
Step S40, calculating a first dissimilarity degree between every two pieces of feature information corresponding to the same preset label, and calculating a second dissimilarity degree between every two pieces of feature information corresponding to different preset labels;
and after the server acquires the feature information of each sample output by the feature extraction module, calculating a first dissimilarity degree of the feature information between every two samples with the same preset label and calculating a second dissimilarity degree of the feature information between every two samples with different preset labels according to the preset label of each sample.
The specific process of implementing the step of calculating the first dissimilarity degree between every two characteristic information corresponding to the same preset label and calculating the second dissimilarity degree between every two characteristic information corresponding to different preset labels is as follows:
step S41, respectively calculating first dissimilarity degrees between every two pieces of feature information corresponding to the same preset label according to a dissimilarity degree calculation formula, and respectively calculating first dissimilarity degrees between every two pieces of feature information corresponding to the same preset label according to a dissimilarity degree calculation formula;
the dissimilarity degree calculation formula is as follows:
[Formula images not reproduced in this text: the expressions for the first dissimilarity L1 and the second dissimilarity L2.]
wherein margin is a predetermined hyperparameter, L1 is the first dissimilarity, L2 is the second dissimilarity, f1 and f2 are the feature information of two samples with the same preset label, and f3 and f4 are the feature information of two samples with different preset labels.
The greater the dissimilarity between the feature information of two samples, the more dissimilar that feature information is; the smaller the dissimilarity, the more similar it is; and a dissimilarity of 0 means the feature information of the two samples is identical.
Step S50, if the first dissimilarity degree and the second dissimilarity degree do not accord with preset rules, adjusting the feature extraction module according to the first dissimilarity degree and the second dissimilarity degree, and executing the step S30;
after calculating and obtaining a first dissimilarity degree between feature information of every two samples with the same labels and a second dissimilarity degree between feature information of every two samples with different labels, the server judges whether all the obtained first dissimilarity degrees and all the obtained second dissimilarity degrees accord with a preset rule or not according to a preset rule, determines that the first dissimilarity degrees and the second dissimilarity degrees do not accord with the preset rule, and adjusts parameters of each layer in the feature extraction module according to the first dissimilarity degrees and the second dissimilarity degrees. After adjusting the parameters, the server will start the next training, i.e. re-execute step S30, input the multiple samples into the feature extraction module in the deep learning model to be trained, and output the feature information corresponding to each sample.
In an embodiment, the specific process of determining that the first dissimilarity degree and the second dissimilarity degree obtained by the server do not meet the preset rule through the preset rule may be:
step S51, determining whether all the first dissimilarity degrees are less than or equal to a first preset threshold and all the second dissimilarity degrees are greater than or equal to a second preset threshold, wherein the first preset threshold is less than the second preset threshold;
and the preset rule specifies a first preset threshold corresponding to the first dissimilarity degree and a second preset threshold corresponding to the second dissimilarity degree, where the first preset threshold is smaller than the second preset threshold. All the obtained first dissimilarity degrees are compared in turn with the first preset threshold, and all the obtained second dissimilarity degrees are compared in turn with the second preset threshold.
Step S52, if at least one first dissimilarity is greater than a first preset threshold and/or at least one second dissimilarity is less than a second preset threshold, adjusting the feature extraction module according to the first dissimilarity and the second dissimilarity, and executing the step S30.
The server traverses the results of comparing each first dissimilarity degree with the first preset threshold and each second dissimilarity degree with the second preset threshold. When all first dissimilarity degrees are less than or equal to the first preset threshold and all second dissimilarity degrees are greater than or equal to the second preset threshold, it determines that the first and second dissimilarity degrees meet the preset rule. When at least one first dissimilarity degree is greater than the first preset threshold and/or at least one second dissimilarity degree is less than the second preset threshold, it determines that they do not meet the preset rule, i.e. the feature information obtained by inputting the samples into the feature extraction module in this round of training has not reached the expected target. In that case the server adjusts the parameters of each layer in the feature extraction module, and then performs the next round of training by inputting the samples into the feature extraction module again to obtain new feature information.
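The traversal and comparison described above amount to two universal checks. A minimal sketch (the function name and threshold values are illustrative, not from the patent):

```python
def meets_preset_rule(first_dissims, second_dissims, t1, t2):
    """Preset rule from steps S51/S52: every first dissimilarity degree
    (same-label pairs) must be <= the first preset threshold t1, and
    every second dissimilarity degree (different-label pairs) must be
    >= the second preset threshold t2, with t1 < t2."""
    assert t1 < t2, "the first preset threshold must be below the second"
    return (all(d <= t1 for d in first_dissims)
            and all(d >= t2 for d in second_dissims))
```

A single violating pair on either side makes the rule fail, which is what triggers another round of parameter adjustment.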
In an embodiment, the specific process by which the server adjusts the parameters of each layer in the feature extraction module is as follows: a first loss function is constructed from the first dissimilarity degree and the first preset threshold, and the parameters of each layer, from the output layer to the input layer of the feature extraction module, are adjusted in turn by a back propagation method based on a gradient descent algorithm.
Step S60, if the first dissimilarity degree and the second dissimilarity degree accord with the preset rule, a trained feature extraction module is obtained;
when the first and second dissimilarity degrees calculated from the feature information obtained after any round of training meet the preset rule, i.e. all first dissimilarity degrees are less than or equal to the first preset threshold and all second dissimilarity degrees are greater than or equal to the second preset threshold, the server determines that the parameters of the feature extraction module have been trained, and takes the current feature extraction module as the trained feature extraction module.
Training drives the similarity among the feature information of samples with the same label extremely high, i.e. multiple samples map to (nearly) one point. Thus, in practical application, even if the feature information of some data to be processed sent by the mobile terminal to the server is illegally intercepted, the interceptor cannot deduce the unique original data from the feature information, because multiple pieces of data map to the same feature information. For example, when several different male portrait pictures are input into the trained feature extraction module, the output feature information is identical or extremely similar. Meanwhile, training drives the similarity between the feature information of samples with different labels extremely low, which improves the accuracy of subsequent result generation.
And step S70, inputting the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module for training, and obtaining the trained result generation module.
After the server finishes training the feature extraction module, it inputs the samples into the trained feature extraction module to obtain the feature information of each sample output by that module. The feature information of each sample output by the trained feature extraction module is used as the training samples of the to-be-trained result generation module, which is then trained on them; the final trained result generation module is obtained after training finishes. Because the training samples of the result generation module are the feature information output by the trained feature extraction module, in practical application the result generation module trained in this way can, from the feature information sent by the mobile terminal, produce results whose accuracy matches that of the prior art, in which the data to be processed is input directly into a complete deep learning model.
In the training process of the feature extraction module and the result generation module in the deep learning model, training makes the feature information of data of the same type extremely similar or identical. Therefore, in practical application, even if the feature information of some data to be processed sent by the mobile terminal to the server is illegally intercepted, the interceptor cannot deduce the unique original data from the feature information, because multiple pieces of data can map to one piece of feature information. This improves the confidentiality of the data to be processed and thus avoids the leakage of privacy information.
Further, referring to fig. 4, a third embodiment of the deep learning method based on privacy protection of the present application is proposed according to the first embodiment of the deep learning method based on privacy protection of the present application, and in this embodiment, the step S70 includes:
step S701, inputting the feature information of each sample output by the trained feature extraction module into a to-be-trained result generation module, and outputting an actual result corresponding to each sample;
in this embodiment, the server inputs all samples into the trained feature extraction module, which outputs the feature information corresponding to each sample, and uses this feature information as the training samples of the result generation module. During training, the feature information is input into the to-be-trained result generation module and is analyzed and calculated in turn by each layer from the input layer to the output layer, yielding the actual result corresponding to each sample output by the to-be-trained result generation module.
Step S702, obtaining a loss function value according to the actual result and the preset expected result of each sample;
during each training, the server inputs the actual result and the preset expected result of each sample into a preset loss function to obtain a loss function value of the current training, wherein the preset loss function may adopt a mean square error loss function, a root mean square error loss function, a mean absolute error loss function, a cross entropy cost loss function or other types of loss functions.
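The loss types listed above can be sketched in one helper. This is a minimal stand-alone illustration; the function name and `kind` keys are assumptions, not identifiers from the patent:

```python
import math

def loss_value(actual, expected, kind="mse"):
    """Loss function value between the actual results and the preset
    expected results, for several of the loss types the text lists."""
    n = len(actual)
    if kind == "mse":            # mean square error
        return sum((a - e) ** 2 for a, e in zip(actual, expected)) / n
    if kind == "rmse":           # root mean square error
        return math.sqrt(sum((a - e) ** 2 for a, e in zip(actual, expected)) / n)
    if kind == "mae":            # mean absolute error
        return sum(abs(a - e) for a, e in zip(actual, expected)) / n
    if kind == "cross_entropy":  # binary cross entropy on probabilities
        return -sum(e * math.log(a) + (1 - e) * math.log(1 - a)
                    for a, e in zip(actual, expected)) / n
    raise ValueError(kind)
```

Whichever loss is chosen, the value only has to be comparable against the third preset threshold of the following steps.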
Step S703 of determining whether the loss function value is equal to or less than a third preset threshold; if not, go to step S704; if yes, go to step S705;
step S704, adjusting parameters of a result generation module by adopting a back propagation algorithm according to the loss function value, and executing the step S701;
step S705, stopping training, and obtaining the trained result generation module.
After obtaining the loss function value of the current training round, the server judges whether it is less than or equal to a third preset threshold. If the loss function value is greater than the third preset threshold, the result obtained in this round has not reached the preset expectation, and the parameters of the result generation module need to be adjusted: a back propagation algorithm adjusts the parameters of each layer in turn, from the output layer to the input layer of the result generation module. The next round of training then starts, i.e. step S701 is executed again and the feature information of each sample output by the trained feature extraction module is input into the result generation module with the adjusted parameters, until the loss function value obtained after training is less than or equal to the third preset threshold. If the loss function value of the current round is less than or equal to the third preset threshold, the training of the result generation module stops: the result of each sample output by the result generation module has reached the preset expectation, its parameters no longer need to be adjusted, and the current result generation module is taken as the trained result generation module.
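Steps S701 to S705 form a loop. A minimal sketch, in which `model`, `loss_fn`, and `backprop_step` are placeholder callables standing in for the patent's modules, not its actual implementation:

```python
def train_result_generator(model, features, expected, loss_fn, backprop_step,
                           third_threshold):
    """Steps S701-S705 as a loop. `model` maps one feature vector to an
    actual result; `backprop_step` stands in for the back propagation
    algorithm and returns the adjusted model."""
    while True:
        actual = [model(f) for f in features]      # S701: forward pass
        loss = loss_fn(actual, expected)           # S702: loss function value
        if loss <= third_threshold:                # S703 -> S705: expectation met
            return model
        model = backprop_step(model, loss)         # S704: adjust parameters
```

With a toy one-parameter model whose update shrinks the remaining error each round, the loop terminates as soon as the loss drops to the threshold.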
In the embodiment, the result generation module is trained by using the feature information output by the trained feature extraction module as the training sample of the result generation module, and the parameters of the result generation module are adjusted by constructing the loss function value in the training process, so that the trained result generation module can perform deep learning according to the feature information sent by the mobile terminal.
Further, referring to fig. 5, a fourth embodiment of the deep learning method based on privacy protection of the present application is proposed according to the first embodiment of the deep learning method based on privacy protection of the present application, and in this embodiment, the step S70 includes:
step S711, inputting the feature information of each sample output by the trained feature extraction module into a result generation module, outputting an actual result corresponding to each sample, and updating the training cumulative number n to be n +1, wherein n is more than or equal to 0;
in this embodiment, the server inputs all samples into the trained feature extraction module, which outputs the feature information corresponding to each sample, and uses this feature information as the training samples of the result generation module. During training, the feature information is input into the to-be-trained result generation module and is analyzed and calculated in turn by each layer from the input layer to the output layer, yielding the actual result corresponding to each sample output by the to-be-trained result generation module; the training cumulative number is then increased by 1 and updated. Before the result generation module has been trained at all, the initial training cumulative number is 0.
Step S712, obtaining a loss function value according to the actual result and the preset expected result of each sample;
during each training, the server inputs the actual result and the preset expected result of each sample into a preset loss function to obtain a loss function value of the current training, wherein the preset loss function may adopt a mean square error loss function, a root mean square error loss function, a mean absolute error loss function, a cross entropy cost loss function or other types of loss functions.
Step S713, determining whether the loss function value is equal to or less than a third preset threshold; if not, go to step S714; if yes, go to step S717;
step S714, judging whether the training accumulated times is less than the preset times; if not, go to step S716; if yes, go to step S715;
step S715, adjusting parameters of a result generation module by adopting a back propagation algorithm according to the loss function value, and executing the step S711;
step S716, stopping training and obtaining a trained result generation module;
in step S717, the training is stopped and the trained result generating module is obtained.
The server judges, after obtaining the loss function value of each training round, whether it is less than or equal to the third preset threshold. If the loss function value is greater than the third preset threshold, the result of this round has not reached the preset expectation and the parameters of the result generation module need to be adjusted; before adjusting them, the server judges whether the training cumulative number has reached the preset number. If the training cumulative number has not reached the preset number, a back propagation algorithm adjusts the parameters of each layer in turn, from the output layer to the input layer of the result generation module, and the next round of training starts, i.e. step S711 is executed again and the feature information of each sample output by the trained feature extraction module is input into the result generation module with the adjusted parameters, until the loss function value obtained after training is less than or equal to the third preset threshold or the training cumulative number reaches the preset number. If the training cumulative number has reached the preset number, training stops even though the loss function value still exceeds the third preset threshold, and the current result generation module is taken as the trained result generation module. If the loss function value of the current round is less than or equal to the third preset threshold, the training of the result generation module stops: the result of each sample output by the result generation module has reached the preset expectation, its parameters no longer need to be adjusted, and the current result generation module is taken as the trained result generation module.
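Steps S711 to S717 add a cap on the training cumulative number to the loop. A minimal sketch, with the same kind of placeholder callables (assumed names, not the patent's implementation):

```python
def train_result_generator_capped(model, features, expected, loss_fn,
                                  backprop_step, third_threshold, preset_times):
    """Steps S711-S717: training stops either when the loss function
    value reaches the third preset threshold or when the training
    cumulative number n reaches the preset number, so a slowly
    converging result generation module cannot train forever."""
    n = 0                                          # training cumulative number
    while True:
        actual = [model(f) for f in features]      # S711: forward pass
        n += 1                                     # update n := n + 1
        loss = loss_fn(actual, expected)           # S712: loss function value
        if loss <= third_threshold:                # S713 -> S717: expectation met
            return model
        if n >= preset_times:                      # S714 -> S716: cap reached
            return model
        model = backprop_step(model, loss)         # S715: adjust parameters
```

Even an update step that makes no progress at all then triggers only a bounded number of adjustments.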
In this embodiment, the result generation module is trained with the feature information output by the trained feature extraction module as its training samples, and its parameters are adjusted during training according to the constructed loss function value, so that the trained result generation module can perform deep learning on the feature information sent by the mobile terminal. Meanwhile, whether the training of the result generation module is finished is determined both by whether the loss function reaches the preset threshold range and by whether the training count reaches the preset number, which prevents the result generation module from being trained too many times or for too long.
In a sixth embodiment of the deep learning method based on privacy protection of the present invention, the deep learning method based on privacy protection is applied to a mobile terminal, and the deep learning method based on privacy protection includes the steps of:
step S100, receiving a feature extraction module in a trained deep learning model sent by a server;
in this embodiment, the neural network used by the deep learning model may be a convolutional neural network, a deep neural network, a recurrent neural network, or the like, which is not limited herein. The server designates a certain intermediate layer of the neural network as a boundary layer and decomposes the trained neural-network-based deep learning model into a feature extraction module and a result generation module. The feature extraction module comprises the layers from the input layer to the boundary layer: the input layer of the neural network serves as the input layer of the feature extraction module, and the designated boundary layer serves as its output layer. The result generation module comprises the layers from the layer after the boundary layer to the output layer: the layer after the designated boundary layer serves as the input layer of the result generation module, and the output layer of the neural network serves as its output layer. When the server receives a request for the feature extraction module from the mobile terminal, it sends the feature extraction module to the mobile terminal through a wireless or wired network.
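The decomposition at a boundary layer can be sketched as plain function composition. The layer contents below are toy stand-ins; the point is only that running the two halves in sequence reproduces the full network:

```python
def split_model(layers, boundary):
    """Split a network, given as an ordered list of layer functions, at
    the designated boundary layer index: the feature extraction module
    keeps the input layer up to and including the boundary layer, the
    result generation module keeps everything after the boundary."""
    return layers[:boundary + 1], layers[boundary + 1:]

def forward(module, x):
    """Pass x through each layer of a module in order."""
    for layer in module:
        x = layer(x)
    return x
```

Splitting a three-layer toy network at boundary index 1 gives a two-layer extractor and a one-layer generator whose composition matches the undivided network.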
Step S200, inputting data to be processed into the feature extraction module to obtain feature information corresponding to the data to be processed;
after the mobile terminal receives the feature extraction module and needs to perform deep learning on some original data, which may be pictures, videos, texts, voices, and so on, it inputs the original data to be processed into the feature extraction module; through the analysis and calculation of each layer of the feature extraction module, the output layer finally outputs the feature information corresponding to the original data.
Step S300, the characteristic information is sent to the server, so that the server inputs the characteristic information to a result generation module in the trained deep learning model, outputs a result and feeds back the result;
step S400, receiving the result sent by the server.
The mobile terminal sends the feature information corresponding to the original data to the server through a wireless or wired network. After receiving it, the server uses the feature information as the input of the result generation module in the trained neural-network-based deep learning model; the feature information is analyzed and calculated by each layer of the result generation module, the result is finally output by its output layer, and the server sends the result to the mobile terminal through the wireless or wired network.
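The exchange in steps S200 to S400 can be sketched end to end. The extractor and generator below are toy stand-ins, not the patent's trained modules, chosen only to show that the raw data itself never crosses the network:

```python
# Toy stand-ins for the two trained halves of the model (hypothetical).
feature_extractor = lambda raw: [len(raw), raw.count("a")]
result_generator = lambda feats: "long" if feats[0] > 5 else "short"

def terminal_side(raw_data):
    """Step S200 on the mobile terminal: raw data -> feature information.
    Only this feature vector is sent over the network."""
    return feature_extractor(raw_data)

def server_side(feature_info):
    """Server side: feature information -> result, fed back (S300/S400)."""
    return result_generator(feature_info)

features = terminal_side("banana")   # what an interceptor could see
result = server_side(features)       # the result returned to the terminal
```

An interceptor who captures `features` sees only derived numbers that many different inputs could have produced.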
In this embodiment, the trained feature extraction module of the neural-network-based deep learning model is sent to the mobile terminal, so that the mobile terminal inputs the original data into the feature extraction module, obtains the feature information corresponding to the original data, and feeds the feature information back; the feature information fed back by the mobile terminal is input into the result generation module of the trained neural-network-based deep learning model, and the output result is sent to the mobile terminal. The original data input into the feature extraction module is analyzed and calculated in turn by its multiple layers and finally converted into feature information that is completely different from the original data, so the user's private data cannot be obtained directly from the feature information. The user can therefore send the feature information to the server for deep learning; even if it is stolen, no privacy is leaked, which improves the safety of performing deep learning with the help of a server.
Referring to fig. 6, the present invention further provides a deep learning system based on privacy protection, including:
the sending module 10 is configured to send a feature extraction module in the trained deep learning model to a mobile terminal, so that the mobile terminal inputs data to be processed into the feature extraction module, obtains feature information corresponding to the data to be processed, and feeds back the feature information;
and the first input module 20 is configured to input the feature information fed back by the mobile terminal to a result generation module in the trained deep learning model, output a result, and send the result to the mobile terminal.
Further, the deep learning system based on privacy protection further comprises:
the second input module 30 is configured to input a plurality of samples to a feature extraction module in the deep learning model to be trained, and output feature information corresponding to each sample, where each sample has a corresponding preset label;
the calculation module 40 is configured to calculate a first dissimilarity degree between every two pieces of feature information corresponding to the same preset tag, and calculate a second dissimilarity degree between every two pieces of feature information corresponding to different preset tags;
the adjusting module 50 is configured to adjust the feature extracting module according to the first dissimilarity degree and the second dissimilarity degree if the first dissimilarity degree and the second dissimilarity degree do not meet a preset rule, and invoke the second input module 30 to execute a corresponding operation;
an obtaining module 60, configured to obtain a trained feature extraction module if the first dissimilarity degree and the second dissimilarity degree meet the preset rule;
and a third input module 70, configured to input the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module for training, so as to obtain a trained result generation module.
Further, the calculation module 40 includes:
the calculating unit 41 is configured to calculate first dissimilarity degrees between every two pieces of feature information corresponding to the same preset label according to a dissimilarity degree calculation formula, and to calculate second dissimilarity degrees between every two pieces of feature information corresponding to different preset labels according to the dissimilarity degree calculation formula;
the dissimilarity degree calculation formula is as follows:
[Formula images in the original publication: the calculation formulas for the first dissimilarity degree L1 and the second dissimilarity degree L2]
wherein margin is a preset hyperparameter, L1 is the first dissimilarity degree, L2 is the second dissimilarity degree, f1 and f2 are respectively the feature information of two samples with the same preset label, and f3 and f4 are respectively the feature information of two samples with different preset labels.
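The actual formulas for L1 and L2 appear only as images in the published text. The sketch below is one plausible margin-based instantiation consistent with the symbol definitions (Euclidean distance for same-label pairs, distance clipped at margin for different-label pairs); it is an assumption for illustration, not the patent's formula:

```python
import math

def euclidean(f_a, f_b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(f_a, f_b)))

def first_dissimilarity(f1, f2):
    """Hypothetical L1: distance between same-label feature vectors,
    which training pushes below the first preset threshold."""
    return euclidean(f1, f2)

def second_dissimilarity(f3, f4, margin):
    """Hypothetical L2: distance between different-label feature
    vectors, clipped at the preset hyperparameter margin, which
    training pushes above the second preset threshold."""
    return min(margin, euclidean(f3, f4))
```

Clipping at margin keeps the different-label objective bounded, so already well-separated pairs stop influencing training.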
Further, the adjusting module 50 includes:
a first determining unit 51, configured to determine whether all the first dissimilarity degrees are smaller than or equal to a first preset threshold and whether all the second dissimilarity degrees are larger than or equal to a second preset threshold, where the first preset threshold is smaller than the second preset threshold;
the first adjusting unit 52 is configured to, if at least one first dissimilarity degree is greater than the first preset threshold and/or at least one second dissimilarity degree is smaller than the second preset threshold, adjust the feature extraction module according to the first dissimilarity degree and the second dissimilarity degree, and invoke the second input module 30 to execute the corresponding operation.
Further, the third input module 70 includes:
a first input unit 701, configured to input feature information of each sample output by the trained feature extraction module to a to-be-trained result generation module, and output an actual result corresponding to each sample;
a first obtaining unit 702, configured to obtain a loss function value according to an actual result and a preset expected result of each sample;
a second judging unit 703, configured to judge whether the loss function value is less than or equal to a third preset threshold;
a second adjusting unit 704, configured to, if the loss function value is greater than the third preset threshold, adjust the parameters of the result generation module by using a back propagation algorithm according to the loss function value, and call the first input unit 701 to execute the corresponding operation;
a second obtaining unit 705, configured to, if the loss function value is less than or equal to the third preset threshold, stop training and obtain the trained result generation module.
Further, the third input module 70 includes:
a second input unit 711, configured to input the feature information of each sample output by the trained feature extraction module to the result generation module, output an actual result corresponding to each sample, and update the training cumulative number n to be n +1, where n is greater than or equal to 0;
a third obtaining unit 712, configured to obtain a loss function value according to the actual result and the preset expected result of each sample;
a third determining unit 713, configured to determine whether the loss function value is less than or equal to a third preset threshold;
a fourth judging unit 714, configured to judge whether the training cumulative number is smaller than a preset number if the loss function value is larger than a third preset threshold;
a third adjusting unit 715, configured to adjust a parameter of the result generating module by using a back propagation algorithm according to the loss function value if the training cumulative number is smaller than a preset number, and call the second input unit 711 to perform a corresponding operation;
a fourth obtaining unit 716, configured to stop training and obtain a trained result generating module if the cumulative number of times of training is greater than or equal to the preset number of times;
a fifth obtaining unit 717, configured to stop the training and obtain a trained result generating module if the loss function value is less than or equal to a third preset threshold.
The invention also proposes a computer-readable storage medium on which a computer program is stored. The computer-readable storage medium may be the memory 02 in the privacy-protection-based deep learning server in fig. 1, and may also be at least one of a ROM (Read-Only Memory)/RAM (Random Access Memory), a magnetic disk, and an optical disk; the computer-readable storage medium includes several instructions for enabling a server or a television to perform the methods according to the embodiments of the present invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A deep learning method based on privacy protection is characterized by being applied to a server and comprising the following steps:
sending a feature extraction module in a trained deep learning model to a mobile terminal so that the mobile terminal inputs data to be processed into the feature extraction module, obtains feature information corresponding to the data to be processed, and feeds back the feature information;
and inputting the characteristic information fed back by the mobile terminal into a result generation module in the trained deep learning model, outputting a result, and sending the result to the mobile terminal.
2. The privacy protection-based deep learning method according to claim 1, wherein the step of sending the feature extraction module in the trained deep learning model to the mobile terminal is preceded by:
inputting a plurality of samples into a feature extraction module in a deep learning model to be trained, and outputting feature information corresponding to each sample, wherein each sample has a corresponding preset label;
calculating a first dissimilarity degree between every two pieces of feature information corresponding to the same preset label, and calculating a second dissimilarity degree between every two pieces of feature information corresponding to different preset labels;
if the first dissimilarity degree and the second dissimilarity degree do not accord with a preset rule, adjusting a feature extraction module according to the first dissimilarity degree and the second dissimilarity degree, and executing: inputting the plurality of samples into a feature extraction module, and outputting feature information corresponding to each sample;
if the first dissimilarity degree and the second dissimilarity degree accord with the preset rule, a trained feature extraction module is obtained;
and inputting the characteristic information of each sample output by the trained characteristic extraction module into a to-be-trained result generation module for training to obtain a trained result generation module.
3. The privacy protection-based deep learning method according to claim 2, wherein the step of calculating a first dissimilarity degree between every two pieces of feature information corresponding to the same preset tags and calculating a second dissimilarity degree between every two pieces of feature information corresponding to different preset tags comprises:
respectively calculating first dissimilarity degrees between every two pieces of feature information corresponding to the same preset label according to a dissimilarity degree calculation formula, and respectively calculating second dissimilarity degrees between every two pieces of feature information corresponding to different preset labels according to the dissimilarity degree calculation formula;
the dissimilarity degree calculation formula is as follows:
[Formula images in the original publication: the calculation formulas for the first dissimilarity degree L1 and the second dissimilarity degree L2]
wherein margin is a preset hyperparameter, L1 is the first dissimilarity degree, L2 is the second dissimilarity degree, f1 and f2 are respectively the feature information of two samples with the same preset label, and f3 and f4 are respectively the feature information of two samples with different preset labels.
4. The deep learning method based on privacy protection as claimed in claim 2 or 3, wherein if the first dissimilarity degree and the second dissimilarity degree do not meet a preset rule, the feature extraction module is adjusted according to the first dissimilarity degree and the second dissimilarity degree to perform: the step of inputting the plurality of samples to the feature extraction module and outputting the feature information corresponding to each sample includes:
judging whether all the first dissimilarity degrees are smaller than or equal to a first preset threshold value and whether all the second dissimilarity degrees are larger than or equal to a second preset threshold value, wherein the first preset threshold value is smaller than the second preset threshold value;
if at least one first dissimilarity degree is greater than the first preset threshold value and/or at least one second dissimilarity degree is less than the second preset threshold value, adjusting the feature extraction module according to the first dissimilarity degrees and the second dissimilarity degrees, and re-executing the step of inputting the plurality of samples into the feature extraction module and outputting the feature information corresponding to each sample.
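The stopping rule of claim 4 can be sketched directly; the function name and list inputs below are illustrative, not from the source:

```python
def extractor_converged(first_degrees, second_degrees, t1, t2):
    """Claim-4 check: every same-label dissimilarity must be <= the first
    preset threshold and every different-label dissimilarity >= the second,
    with the first threshold strictly smaller than the second."""
    assert t1 < t2, "the first preset threshold must be below the second"
    return all(d <= t1 for d in first_degrees) and all(d >= t2 for d in second_degrees)
```

If this returns False, the claim adjusts the feature extraction module and re-runs extraction on all samples before checking again.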
5. The privacy protection-based deep learning method according to claim 4, wherein the step of inputting the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module for training to obtain the trained result generation module comprises:
inputting the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module, and outputting an actual result corresponding to each sample;
obtaining a loss function value according to the actual result and the preset expected result of each sample;
judging whether the loss function value is less than or equal to a third preset threshold value;
if not, adjusting parameters of the result generation module by a back propagation algorithm according to the loss function value, and re-executing the step of inputting the feature information of each sample output by the trained feature extraction module into the result generation module and outputting the actual result corresponding to each sample;
and if so, stopping the training to obtain the trained result generation module.
6. The privacy protection-based deep learning method according to claim 4, wherein the step of inputting the feature information of each sample output by the trained feature extraction module into the to-be-trained result generation module for training to obtain the trained result generation module comprises:
inputting the feature information of each sample output by the trained feature extraction module into the result generation module, outputting an actual result corresponding to each sample, and updating the cumulative training count n to n + 1, wherein n ≥ 0;
obtaining a loss function value according to the actual result and the preset expected result of each sample;
judging whether the loss function value is less than or equal to a third preset threshold value;
if the loss function value is greater than the third preset threshold value, judging whether the cumulative training count is smaller than a preset count;
if the cumulative training count is smaller than the preset count, adjusting parameters of the result generation module by a back propagation algorithm according to the loss function value, and re-executing the step of inputting the feature information of each sample output by the trained feature extraction module into the result generation module and outputting the actual result corresponding to each sample;
if the cumulative training count is greater than or equal to the preset count, stopping the training to obtain the trained result generation module;
and if the loss function value is less than or equal to the third preset threshold value, stopping the training to obtain the trained result generation module.
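Claims 5 and 6 describe the same training loop, with claim 6 adding a cap on the cumulative iteration count. A minimal sketch, where `train_step` is a hypothetical callback that runs one forward pass plus back-propagation adjustment and returns the loss value:

```python
def train_result_generator(train_step, loss_threshold, max_steps):
    """Run training steps until the loss drops to the threshold (claim 5)
    or the cumulative step count reaches the preset limit (claim 6)."""
    n = 0  # cumulative training count, n >= 0
    while n < max_steps:
        loss = train_step()
        n += 1
        if loss <= loss_threshold:
            break  # loss small enough: stop and keep the trained module
    return n

# toy schedule of decreasing losses standing in for real training
losses = iter([0.9, 0.5, 0.2, 0.05])
steps_used = train_result_generator(lambda: next(losses), 0.1, 10)
```

With the toy schedule the loop stops after the fourth step, when the loss first falls to or below the threshold.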
7. A deep learning method based on privacy protection, characterized in that the method is applied to a mobile terminal and comprises the following steps:
receiving a feature extraction module in the trained deep learning model sent by the server;
inputting data to be processed into the feature extraction module to obtain feature information corresponding to the data to be processed;
sending the feature information to the server, so that the server inputs the feature information into a result generation module in the trained deep learning model, outputs a result and feeds the result back;
and receiving the result sent by the server.
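The split in claim 7 keeps raw data on the device: only derived feature vectors cross the network. A toy end-to-end sketch — the extractor and result generator below are illustrative stand-ins, not the patent's models:

```python
def extract_features(raw_pixels):
    """Runs on the mobile terminal; raw data never leaves the device."""
    return [p / 255.0 for p in raw_pixels]

def generate_result(features):
    """Runs on the server; sees only the transmitted feature vector."""
    return "positive" if sum(features) > 1.0 else "negative"

raw_data = [200, 150, 30]              # stays on the terminal
features = extract_features(raw_data)  # terminal-side step
result = generate_result(features)     # server-side step, fed back to the terminal
```

Because the server only ever receives `features`, an eavesdropper on the link (or the server itself) never observes `raw_data`, which is the privacy-protection point of the claim.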
8. A privacy protection based deep learning system, comprising:
the sending module is used for sending the feature extraction module in the trained deep learning model to a mobile terminal, so that the mobile terminal inputs data to be processed into the feature extraction module, obtains feature information corresponding to the data to be processed and feeds back the feature information;
and the receiving module is used for receiving the feature information fed back by the mobile terminal, inputting the feature information into the result generation module in the trained deep learning model, outputting a learning result and sending the learning result to the mobile terminal.
9. A privacy protection based deep learning server, characterized in that the privacy protection based deep learning server comprises a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the privacy protection-based deep learning method according to any one of claims 1 to 6.
10. A computer-readable storage medium, characterized in that a computer program is stored thereon, wherein the computer program, when executed by a processor, implements the steps of the privacy protection-based deep learning method according to any one of claims 1 to 6.
CN202010092513.5A 2020-02-12 2020-02-12 Deep learning method, system, server and storage medium based on privacy protection Pending CN111325322A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010092513.5A CN111325322A (en) 2020-02-12 2020-02-12 Deep learning method, system, server and storage medium based on privacy protection
PCT/CN2021/071089 WO2021159898A1 (en) 2020-02-12 2021-01-11 Privacy protection-based deep learning method, system and server, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010092513.5A CN111325322A (en) 2020-02-12 2020-02-12 Deep learning method, system, server and storage medium based on privacy protection

Publications (1)

Publication Number Publication Date
CN111325322A true CN111325322A (en) 2020-06-23

Family

ID=71167125

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010092513.5A Pending CN111325322A (en) 2020-02-12 2020-02-12 Deep learning method, system, server and storage medium based on privacy protection

Country Status (2)

Country Link
CN (1) CN111325322A (en)
WO (1) WO2021159898A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112214791A (en) * 2020-09-24 2021-01-12 广州大学 Privacy policy optimization method and system based on reinforcement learning and readable storage medium
WO2021159898A1 (en) * 2020-02-12 2021-08-19 深圳壹账通智能科技有限公司 Privacy protection-based deep learning method, system and server, and storage medium
CN113849665A (en) * 2021-09-02 2021-12-28 中科创达软件股份有限公司 Multimedia data identification method, device, equipment and storage medium
CN115098885A (en) * 2022-07-28 2022-09-23 清华大学 Data processing method and system and electronic equipment
CN115530773A (en) * 2022-10-17 2022-12-30 广州市番禺区中心医院 Cardiovascular disease evaluation and prevention system based on food intake of patient

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114254275B (en) * 2021-11-16 2024-05-28 浙江大学 Black box deep learning model copyright protection method based on antagonism sample fingerprint
CN115277203A (en) * 2022-07-28 2022-11-01 国网智能电网研究院有限公司 Execution body difference evaluation method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280443A * 2018-02-23 2018-07-13 深圳市唯特视科技有限公司 Action identification method based on a deep-feature-extraction asynchronous fusion network
CN109543829A (en) * 2018-10-15 2019-03-29 华东计算技术研究所(中国电子科技集团公司第三十二研究所) Method and system for hybrid deployment of deep learning neural network on terminal and cloud
CN109815801A (en) * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face identification method and device based on deep learning
CN110166424A (en) * 2019-04-03 2019-08-23 西安电子科技大学 Internet of things oriented services secret protection method for recognizing sound-groove and system, mobile terminal
CN110378092A (en) * 2019-07-26 2019-10-25 北京积加科技有限公司 Identification system and client, server and method
CN110399211A * 2018-04-24 2019-11-01 北京中科寒武纪科技有限公司 Distribution system, method and device for machine learning, and computer equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101373514A * 2007-08-24 2009-02-25 Li Shude Method and system for recognizing human face
CN103473563A (en) * 2013-09-23 2013-12-25 程涛 Fingernail image processing method and system, and fingernail feature analysis method and system
CN106202166B (en) * 2016-06-24 2020-08-18 北京奇虎技术服务有限公司 File cleaning method and device and corresponding client
CN107103279B (en) * 2017-03-09 2020-06-05 广东顺德中山大学卡内基梅隆大学国际联合研究院 Passenger flow counting method based on deep learning under vertical visual angle
CN109145829A * 2018-08-24 2019-01-04 中共中央办公厅电子科技学院 Safe and efficient face identification method based on deep learning and homomorphic encryption
CN109918532B (en) * 2019-03-08 2023-08-18 苏州大学 Image retrieval method, device, equipment and computer readable storage medium
CN110188303A (en) * 2019-05-10 2019-08-30 百度在线网络技术(北京)有限公司 Page fault recognition methods and device
CN111325322A (en) * 2020-02-12 2020-06-23 深圳壹账通智能科技有限公司 Deep learning method, system, server and storage medium based on privacy protection


Also Published As

Publication number Publication date
WO2021159898A1 (en) 2021-08-19

Similar Documents

Publication Publication Date Title
CN111325322A (en) Deep learning method, system, server and storage medium based on privacy protection
CN111950638B (en) Image classification method and device based on model distillation and electronic equipment
CN109815339B (en) Knowledge extraction method and device based on TextCNN, computer equipment and storage medium
US20190005399A1 (en) Learning device, generation device, learning method, generation method, and non-transitory computer readable storage medium
CN110348362B (en) Label generation method, video processing method, device, electronic equipment and storage medium
CN110991380A (en) Human body attribute identification method and device, electronic equipment and storage medium
CN112101437A (en) Fine-grained classification model processing method based on image detection and related equipment thereof
CN113157863A (en) Question and answer data processing method and device, computer equipment and storage medium
CN112232889A (en) User interest portrait extension method, device, equipment and storage medium
CN109214543B (en) Data processing method and device
CN111898735A (en) Distillation learning method, distillation learning device, computer equipment and storage medium
CN110795558B (en) Label acquisition method and device, storage medium and electronic device
CN111292262A (en) Image processing method, image processing apparatus, electronic device, and storage medium
CN113641797A (en) Data processing method, device, equipment, storage medium and computer program product
CN115510186A (en) Instant question and answer method, device, equipment and storage medium based on intention recognition
CN113128526B (en) Image recognition method and device, electronic equipment and computer-readable storage medium
US20220400164A1 (en) Method and apparatus for pushing subscription data in internet of things, device and storage medium thereof
CN112836807A (en) Data processing method and device based on neural network
CN117473249A (en) Modeling method and detection method of network flow detection model and related equipment
CN113239190B (en) Document classification method, device, storage medium and electronic equipment
CN114186039A (en) Visual question answering method and device and electronic equipment
CN115620710A (en) Speech recognition method, speech recognition device, storage medium and electronic device
CN111614697A (en) Method and system for identity recognition
CN112990046A (en) Difference information acquisition method, related device and computer program product
CN111859917A (en) Topic model construction method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination