CN110071798B - Equivalent key obtaining method and device and computer readable storage medium - Google Patents

Equivalent key obtaining method and device and computer readable storage medium

Info

Publication number
CN110071798B
Authority
CN
China
Prior art keywords
neural network
training
network model
equivalent key
data set
Prior art date
Legal status
Active
Application number
CN201910216928.6A
Other languages
Chinese (zh)
Other versions
CN110071798A (en)
Inventor
何文奇
盘水新
彭翔
韩本年
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University
Priority to CN201910216928.6A
Publication of CN110071798A
Application granted
Publication of CN110071798B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B10/00 Transmission systems employing electromagnetic waves other than radio-waves, e.g. infrared, visible or ultraviolet light, or employing corpuscular radiation, e.g. quantum communication
    • H04B10/80 Optical aspects relating to the use of optical transmission for specific applications, not provided for in groups H04B10/03 - H04B10/70, e.g. optical power feeding or optical transmission through water
    • H04B10/85 Protection from unauthorised access, e.g. eavesdrop protection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L9/0861 Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2209/00 Additional information or applications relating to cryptographic mechanisms or cryptographic arrangements for secret or secure communication H04L9/00
    • H04L2209/12 Details relating to cryptographic hardware or logic circuitry

Abstract

The embodiment of the invention discloses an equivalent key acquisition method, an equivalent key acquisition device and a computer-readable storage medium. A preset training data set is acquired, where the training data set comprises combinations of a plurality of plaintext images and their corresponding ciphertext sequences; a neural network is trained on the training data set to obtain a trained neural network model; and the neural network model is determined as an equivalent key, which is used for security analysis of the optical encryption system. Through the implementation of the invention, a series of known ciphertext-plaintext pairs is fed into a neural network for training, so that the mapping between the ciphertext and the plaintext of the optical encryption system is obtained and used as an equivalent key. This improves the efficiency and accuracy of the security analysis, is independent of whether the encryption system applies additional random scrambling or other secondary encryption to the ciphertext sequence, and thus broadens the applicable scenarios.

Description

Equivalent key obtaining method and device and computer readable storage medium
Technical Field
The present invention relates to the field of optical cryptography, and in particular, to an equivalent key obtaining method, an equivalent key obtaining device, and a computer-readable storage medium.
Background
Optical image encryption is a new type of encryption technology distinct from traditional mathematical encryption. Compared with traditional mathematical encryption, it offers multiple dimensions, large capacity, high robustness and a natural capability for parallel data processing, so it has attracted wide attention and developed continuously in recent years.
However, for any cryptographic system the primary concern should be its security, which must be verified by strict security analysis to determine whether the system is reliable. At present, when an optical encryption system is subjected to security analysis in the related art, the system key is generally acquired through a plaintext-attack mode that depends on the geometric parameters and structure of the optical encryption system. The attack efficiency and accuracy are therefore low, and the method only applies when the encryption system performs no additional random scrambling or other secondary encryption on the ciphertext sequence, which limits the applicable scenarios.
Disclosure of Invention
The embodiments of the present invention mainly aim to provide an equivalent key obtaining method, an equivalent key obtaining device, and a computer-readable storage medium, which can at least solve the problems in the related art that security analysis of an optical encryption system depends on its geometric parameters and structure, has low analysis efficiency and accuracy, and is limited in applicable scenarios.
In order to achieve the above object, a first aspect of the embodiments of the present invention provides an equivalent key obtaining method, including:
acquiring a preset training data set; the training data set comprises a combination of a plurality of plaintext images and corresponding ciphertext sequences;
training a neural network based on the training data set to obtain a trained neural network model;
determining the neural network model as an equivalent key; the equivalent key is used for security analysis of the optical encryption system.
In order to achieve the above object, a second aspect of an embodiment of the present invention provides an equivalent key obtaining apparatus, including:
the acquisition module is used for acquiring a preset training data set; the training data set comprises a combination of a plurality of plaintext images and corresponding ciphertext sequences;
the training module is used for training the neural network based on the training data set to obtain a trained neural network model;
a determining module, configured to determine the neural network model as an equivalent key; the equivalent key is used for security analysis of the optical encryption system.
To achieve the above object, a third aspect of embodiments of the present invention provides an electronic apparatus, including: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of any one of the equivalent key obtaining methods described above.
In order to achieve the above object, a fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of any one of the equivalent key obtaining methods described above.
According to the equivalent key obtaining method, the equivalent key obtaining device and the computer-readable storage medium provided by the embodiments of the invention, a preset training data set is acquired, where the training data set comprises combinations of a plurality of plaintext images and their corresponding ciphertext sequences; a neural network is trained on the training data set to obtain a trained neural network model; and the neural network model is determined as an equivalent key, which is used for security analysis of the optical encryption system. Through the implementation of the invention, a series of known ciphertext-plaintext pairs is fed into a neural network for training, so that the mapping between the ciphertext and the plaintext of the optical encryption system is obtained and used as an equivalent key. This improves the efficiency and accuracy of the security analysis, is independent of whether the encryption system applies additional random scrambling or other secondary encryption to the ciphertext sequence, and thus broadens the applicable scenarios.
Other features and corresponding effects of the present invention are set forth in later portions of the specification, and it should be understood that at least some of these effects become apparent from the description of the present invention.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic basic flow chart of an equivalent key obtaining method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of an optical encryption scheme provided in accordance with a first embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating deep neural network model training according to a first embodiment of the present invention;
FIG. 4 is a diagram illustrating a discard regularization process performed on a neural network according to a first embodiment of the present invention;
fig. 5 is a schematic structural diagram of an equivalent key obtaining apparatus according to a second embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a third embodiment of the invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The first embodiment:
In order to solve the technical problems in the related art that security analysis of an optical encryption system depends on its geometric parameters and structure, has low efficiency and accuracy, and is limited in applicable scenarios, this embodiment provides an equivalent key obtaining method. Fig. 1 is a basic flow diagram of the method, which includes the following steps:
step 101, acquiring a preset training data set; the training data set includes a combination of a plurality of plaintext images and corresponding ciphertext sequences.
Specifically, the method adopts a chosen-plaintext attack scheme based on Kerckhoffs' principle, a basic rule recognized in cryptanalysis; that is, an attacker is assumed to be able to choose a series of given plaintexts and to know their corresponding ciphertexts. It should be noted that the neural network is trained within a supervised-learning framework, so this embodiment constructs a training data set and trains the network on the training samples it contains. Each training sample of this embodiment is a plaintext-ciphertext pair, and each plaintext image corresponds to one ciphertext sequence.
Optionally, the obtaining a preset training data set includes: loading a random phase template on a spatial light modulator, and encoding a light source passing through the spatial light modulator; respectively encrypting a plurality of plaintext images to be encrypted through the coded light source to obtain a ciphertext sequence corresponding to each plaintext image; and constructing a training data set by combining all plaintext images and corresponding ciphertext sequences.
Specifically, when two parties communicate, the first communication party encrypts private information and transmits it to the second communication party. The two parties share a secret key, which is a one-dimensional sequence $\{S_i\}$ $(i = 1, \ldots, N)$ composed of N numbers. Before encryption, the first communication party uses the N elements of $\{S_i\}$ as seeds to generate N random phase templates $\varphi_i(x, y)$. The relation between $\{S_i\}$ and the random phase templates $\varphi_i(x, y)$ is assumed to be definite and known only by the two communicating parties, and the phase values are uniformly distributed in $[0, 2\pi]$; that is, each random phase template $\varphi_i(x, y)$ corresponds to one key component of $\{S_i\}$. It should be noted that, as a preferred implementation of this embodiment, the optical encryption system may be a computational ghost imaging optical encryption system. Fig. 2 is a schematic diagram of the optical encryption provided in this embodiment: a beam emitted by a Laser passes through a Spatial Light Modulator (SLM), and the N Random Phase Masks (RPM) are loaded on the SLM in sequence to encode the light source. After Fresnel diffraction over a distance Z, the encoded beam illuminates the plaintext image T(x, y) to be encrypted and is then detected by a single-pixel Bucket Detector (BD), which records the corresponding intensity value. This operation is repeated N times for the N different phase templates, and the N recorded intensity values $B_i$ form a one-dimensional sequence, the Ciphertext. Generally speaking, the larger the value of N, the better the reconstruction.
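To make the encryption pipeline concrete, the following minimal NumPy sketch simulates computational ghost imaging encryption of one plaintext image. The wavelength, diffraction distance Z, pixel pitch, the angular-spectrum form of the Fresnel propagation, and the rule deriving each phase template from a seed S_i are illustrative assumptions, not parameters fixed by the patent.

```python
import numpy as np

def fresnel_propagate(field, wavelength, z, dx):
    """Propagate a complex field over distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    h = np.exp(-1j * np.pi * wavelength * z * (fxx ** 2 + fyy ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

def encrypt(plaintext, seeds, wavelength=632.8e-9, z=0.1, dx=10e-6):
    """Return the 1-D ciphertext {B_i} for one plaintext image T(x, y)."""
    cipher = np.empty(len(seeds))
    for i, seed in enumerate(seeds):
        rng = np.random.default_rng(seed)                   # seed S_i -> template
        phase = rng.uniform(0.0, 2.0 * np.pi, plaintext.shape)
        speckle = fresnel_propagate(np.exp(1j * phase), wavelength, z, dx)
        cipher[i] = np.sum(np.abs(speckle) ** 2 * plaintext)  # bucket value B_i
    return cipher

plain = (np.random.rand(28, 28) > 0.5).astype(float)  # toy 28x28 plaintext image
cipher = encrypt(plain, seeds=range(784))             # 1x784 ciphertext sequence
```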
Step 102, training the neural network based on the training data set to obtain a trained neural network model.
Specifically, in this embodiment, a neural network model is trained on the constructed training data set using an optimization algorithm in a specific training environment; the learning rate and the number of training iterations are determined by actual requirements and are not limited here. During training, a ciphertext sequence from the training data set is input into the neural network model, the plaintext image corresponding to that ciphertext sequence serves as the output constraint of the model, all plaintext-ciphertext pairs of the training data set are traversed iteratively, and the training parameters of the model are determined, yielding the trained neural network model.
Optionally, the neural network includes any one of a deep neural network DNN, a convolutional neural network CNN, and a recurrent neural network RNN.
Specifically, in practical applications, different types of neural networks can be selected for different usage scenarios to train the neural network model used for security analysis of the optical encryption system. Because a DNN has better one-dimensional data processing capability, this embodiment preferably uses a DNN to train the neural network model.
Optionally, when the neural network is a deep neural network, the deep neural network includes an input layer, three hidden layers, an adjusting layer and an output layer, where the neurons of adjacent layers are fully connected.
Fig. 3 is a schematic diagram illustrating training of the deep neural network model provided in this embodiment. It should be noted that in this embodiment, the number of independent neurons in the Input Layer, the three Hidden Layers, the Output Layer, and the adjustment layer (Reshaping Layer) may be determined according to the length of a single ciphertext sequence; for example, if the size of the plaintext image is 28 × 28 and the corresponding ciphertext sequence is 1 × 784, the number of independent neurons is set to 784. In one training pass, a 1 × 784 ciphertext sequence is first fed into the input layer, then passes sequentially through the input layer, the three hidden layers and the output layer, and the adjusting layer finally rearranges the 1 × 784 sequence into an output image of 28 × 28 pixels, as the sketch below illustrates.
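As an illustration of this architecture, the following Keras sketch builds a 784-neuron input layer, three 784-neuron hidden layers, a 784-neuron output layer, and a reshaping layer that produces the 28 × 28 image. The sigmoid activation matches the activation function named later in this embodiment; the Adam optimizer is an assumption, and the MAE loss anticipates the loss function described below.

```python
import tensorflow as tf

# 1x784 ciphertext in, 28x28 plaintext estimate out
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                      # input layer
    tf.keras.layers.Dense(784, activation="sigmoid"),  # hidden layer 1
    tf.keras.layers.Dense(784, activation="sigmoid"),  # hidden layer 2
    tf.keras.layers.Dense(784, activation="sigmoid"),  # hidden layer 3
    tf.keras.layers.Dense(784, activation="sigmoid"),  # output layer
    tf.keras.layers.Reshape((28, 28)),                 # adjustment (reshaping) layer
])
model.compile(optimizer="adam", loss="mae")            # optimizer choice is assumed
```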
Optionally, training the neural network based on the training data set to obtain a trained neural network model includes: inputting a ciphertext sequence into the Mth deep neural network model and training it sequentially through the input layer, the three hidden layers, the output layer and the adjusting layer to obtain the actual output plaintext image of the Mth iterative training, where M is a positive integer greater than or equal to 1; comparing the actual output plaintext image with the original plaintext image in the training data set using a preset loss function; when the comparison result meets a preset convergence condition, determining the Mth deep neural network model as the trained deep neural network model; and when the comparison result does not meet the preset convergence condition, continuing with the (M+1)-th iterative training until the convergence condition is met.
Specifically, referring again to fig. 3, in this embodiment the output image of the deep neural network model is compared with the original plaintext image after each training step, and a Loss Function is used to determine whether the output image meets the requirement. If not, the parameters of the deep neural network model are further optimized with the plaintext-ciphertext pairs in the training data set until the model can convert any ciphertext in the training set into an output image that meets the requirement, that is, an output image sufficiently similar to the corresponding plaintext image. The samples in the training set are trained in sequence according to this procedure; completing the training of all samples (i.e., all plaintext-ciphertext pairs) in the training set constitutes one complete training pass.
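Continuing the sketch above, training and a simple convergence check could look as follows; the placeholder arrays stand in for real plaintext-ciphertext pairs, and the epoch count, batch size and loss threshold are illustrative assumptions.

```python
import numpy as np

# placeholder data standing in for the plaintext-ciphertext training pairs;
# `model` is the Keras network defined in the previous sketch
ciphers = np.random.rand(1000, 784).astype("float32")    # 1x784 ciphertext rows
plains = np.random.rand(1000, 28, 28).astype("float32")  # 28x28 plaintext images

history = model.fit(ciphers, plains, epochs=100, batch_size=32)

# convergence condition: final training loss below a preset threshold
if history.history["loss"][-1] < 1e-2:
    print("convergence condition met; current model is the trained model")
```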
Optionally, the loss function is the mean absolute error function, expressed as:

$\Delta = \frac{1}{n}\sum_{i=1}^{n}\left|y_i' - y_i\right|$

wherein $y_i'$ denotes the i-th element of the true value corresponding to the original plaintext image, $y_i$ denotes the i-th element of the output value corresponding to the actual output plaintext image, and $\Delta$ denotes the mean absolute error between the output value and the true value.
Specifically, in this embodiment, the Mean Absolute Error (MAE) function is selected as the loss function; because it involves no squaring operation, it is more robust to outliers than the Mean Squared Error (MSE) function.
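The two loss functions differ only in whether the per-pixel error is squared, as this short sketch shows.

```python
import numpy as np

def mae(y_true, y_pred):
    # mean absolute error: no squaring, so outlier pixels are weighted linearly
    return float(np.mean(np.abs(y_true - y_pred)))

def mse(y_true, y_pred):
    # mean squared error, shown for comparison: squaring amplifies large errors
    return float(np.mean((y_true - y_pred) ** 2))
```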
It should be noted that in the deep neural network model, each neuron applies an activation function to its input, transmits the result through weighted connections, and produces a corresponding nonlinear response as output. In this embodiment, the sigmoid function may be selected as the activation function, expressed as $y_{k+1} = \mathrm{sigmoid}(B_i W_k + b_k)$, where $k$ denotes the layer number, $B_i$ the input of a neuron of layer $k$, $y_{k+1}$ the output response of that neuron, $W_k$ its weight, and $b_k$ its bias; the weights and biases are collectively called the training parameters of the neural network.
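For illustration, the forward pass of one fully connected layer under this activation can be sketched as follows; the layer width and the weight initialization are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_forward(b_in, w_k, b_k):
    # y_{k+1} = sigmoid(B_i W_k + b_k) for one fully connected layer
    return sigmoid(b_in @ w_k + b_k)

b_in = np.random.rand(1, 784)           # input B_i to a neuron of layer k
w_k = np.random.randn(784, 784) * 0.01  # weights W_k (assumed initialization)
b_k = np.zeros(784)                     # biases b_k
y_next = layer_forward(b_in, w_k, b_k)  # nonlinear response y_{k+1}
```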
It should also be noted that, in this embodiment, whether the output image meets the requirements is determined according to the loss function in the deep neural network model, and if not, the network parameters need to be optimized.
Optionally, training the neural network based on the training data set includes: randomly discarding preset neurons in each layer of the standard neural network using a dropout regularization method, thereby reducing the number of neuron connections; and training the neural network with the discarded neurons based on the training data set.
Fig. 4 is a schematic diagram of the dropout regularization applied to the neural network in this embodiment. Specifically, during training of the neural network model, overfitting may occur when the training set is very large and the number of training iterations is high. To avoid this situation, the invention further adopts Dropout Regularization: in each layer of the Standard Neural Net, some units are dropped at random. Its main function is to temporarily and randomly remove a portion of the connections between neurons during training and learning, reducing the scale of the network and thus preventing overfitting.
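In the Keras sketch from earlier, dropout regularization amounts to inserting Dropout layers, which Keras applies only during training, matching the temporary, random removal of connections described above; the 0.2 drop rate is an illustrative assumption.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),
    tf.keras.layers.Dropout(0.2),   # randomly drop 20% of units while training
    tf.keras.layers.Dense(784, activation="sigmoid"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(784, activation="sigmoid"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(784, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])
model.compile(optimizer="adam", loss="mae")
```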
Optionally, after obtaining the trained neural network model, the method further includes: acquiring a preset test data set; the test data set comprises a combination of a plurality of plaintext images and corresponding ciphertext sequences; inputting the ciphertext sequence in the test data set into the trained neural network model to obtain a plaintext image output by test; carrying out correlation calculation on the plaintext image output by the test and the plaintext image in the test data set; and when the correlation degree is greater than a preset correlation degree threshold value, determining the trained neural network model as an effective neural network model.
Specifically, in this embodiment, after the neural network model is trained, its validity is verified with the test data set: the ciphertext sequences in the test data set are input into the trained neural network model, and the validity of each test-output plaintext image is judged by its correlation with the original plaintext image. When the correlation between the test output and the original data is greater than a preset threshold, the trained neural network model is determined to be a valid and correct model, and the valid neural network model is then determined as the equivalent key; otherwise, the construction of the trained neural network model contains an error, and the construction of the model must be restarted.
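A sketch of this validation step, assuming the Pearson correlation coefficient as the correlation measure and 0.9 as the preset threshold (both assumptions), could look as follows; `model` continues the sketches above.

```python
import numpy as np

def correlation(img_a, img_b):
    # Pearson correlation coefficient between two images
    a = img_a.ravel() - img_a.mean()
    b = img_b.ravel() - img_b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# placeholder test set standing in for held-out plaintext-ciphertext pairs
test_ciphers = np.random.rand(100, 784).astype("float32")
test_plains = np.random.rand(100, 28, 28).astype("float32")

outputs = model.predict(test_ciphers)
mean_cc = np.mean([correlation(o, p) for o, p in zip(outputs, test_plains)])
valid = mean_cc > 0.9   # model is accepted as the equivalent key if this holds
```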
Step 103, determining the neural network model as an equivalent key; the equivalent key is used for security analysis of the optical encryption system.
Specifically, the neural network model in this embodiment can be regarded as an equivalent key of the optical encryption system, and the equivalent key can be used to restore ciphertexts. It should be noted that the equivalent key obtaining method of the present invention requires neither the geometric parameters and structure of the computational optical encryption system nor knowledge of whether the encryption system applies additional random scrambling or other common secondary encryption to the ciphertext sequence. Moreover, the attack remains effective if the ciphertext is down-sampled during training (which reduces training time and improves attack efficiency), and also if a small amount of noise is introduced when a subsequent ciphertext is cracked.
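As a final sketch, applying the equivalent key to an intercepted ciphertext, together with the down-sampling variant mentioned above, might look like this; `model` and the `intercepted` array are hypothetical continuations of the earlier sketches.

```python
import numpy as np

# `intercepted` stands for a newly captured 1x784 ciphertext sequence
intercepted = np.random.rand(1, 784).astype("float32")
recovered = model.predict(intercepted)[0]   # 28x28 plaintext estimate

# down-sampled attack variant: keep every other bucket value to cut
# training time (the network input width becomes 392 instead of 784)
intercepted_ds = intercepted[:, ::2]
```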
According to the equivalent key obtaining method provided by the embodiment of the invention, a preset training data set is acquired, where the training data set comprises combinations of a plurality of plaintext images and their corresponding ciphertext sequences; a neural network is trained on the training data set to obtain a trained neural network model; and the neural network model is determined as an equivalent key, which is used for security analysis of the optical encryption system. Through the implementation of the invention, a series of known ciphertext-plaintext pairs is fed into a neural network for training, so that the mapping between the ciphertext and the plaintext of the optical encryption system is obtained and used as an equivalent key. This improves the efficiency and accuracy of the security analysis, is independent of whether the encryption system applies additional random scrambling or other secondary encryption to the ciphertext sequence, and thus broadens the applicable scenarios.
Second embodiment:
In order to solve the technical problems in the related art that security analysis of an optical encryption system depends on its geometric parameters and structure, has low efficiency and accuracy, and is limited in applicable scenarios, this embodiment presents an equivalent key obtaining apparatus. Referring to fig. 5, the equivalent key obtaining apparatus of this embodiment includes:
an obtaining module 501, configured to obtain a preset training data set; the training data set comprises a combination of a plurality of plaintext images and corresponding ciphertext sequences;
a training module 502, configured to train a neural network based on a training data set to obtain a trained neural network model;
a determining module 503, configured to determine the neural network model as an equivalent key; the equivalent key is used for security analysis of the optical encryption system.
Specifically, the training data set includes a plurality of training samples; each sample in this embodiment is a plaintext-ciphertext pair, with each plaintext image corresponding to one ciphertext sequence. During training, a ciphertext sequence from the training data set is input into the neural network model, the corresponding plaintext image serves as the output constraint of the model, all plaintext-ciphertext pairs of the training data set are traversed iteratively, and the training parameters of the model are determined, yielding the trained neural network model. The neural network model in this embodiment can be regarded as an equivalent key of the optical encryption system, and the equivalent key can be used to restore ciphertexts. It should be noted that, as a preferred implementation of this embodiment, the optical encryption system may be a computational ghost imaging optical encryption system.
In some embodiments of this embodiment, the obtaining module 501 is specifically configured to load a random phase template on the spatial light modulator, and encode a light source passing through the spatial light modulator; respectively encrypting a plurality of plaintext images to be encrypted through the coded light source to obtain a ciphertext sequence corresponding to each plaintext image; and constructing a training data set by combining all plaintext images and corresponding ciphertext sequences.
In some embodiments of this embodiment, the equivalent key obtaining apparatus further includes: the test module is used for acquiring a preset test data set; the test data set comprises a combination of a plurality of plaintext images and corresponding ciphertext sequences; inputting the ciphertext sequence in the test data set into the trained neural network model to obtain a plaintext image output by test; carrying out correlation calculation on the plaintext image output by the test and the plaintext image in the test data set; and when the correlation degree is greater than a preset correlation degree threshold value, determining the trained neural network model as an effective neural network model.
In some embodiments of the present embodiment, the neural network includes any one of a deep neural network DNN, a convolutional neural network CNN, and a recurrent neural network RNN.
Further, when the neural network is a deep neural network, the deep neural network includes: the neuron array comprises an input layer, three hidden layers, an adjusting layer and an output layer, wherein neurons between adjacent layers are connected in a full connection mode.
In some embodiments of this embodiment, the training module 502 is specifically configured to input a ciphertext sequence into the Mth deep neural network model and train it sequentially through the input layer, the three hidden layers, the output layer and the adjustment layer to obtain the actual output plaintext image of the Mth iterative training, where M is a positive integer greater than or equal to 1; compare the actual output plaintext image with the original plaintext image in the training data set using a preset loss function; when the comparison result meets a preset convergence condition, determine the Mth deep neural network model as the trained deep neural network model; and when the comparison result does not meet the preset convergence condition, continue with the (M+1)-th iterative training until the convergence condition is met.
Further, in some embodiments of this embodiment, the loss function is the mean absolute error function, expressed as:

$\Delta = \frac{1}{n}\sum_{i=1}^{n}\left|y_i' - y_i\right|$

wherein $y_i'$ represents the i-th element of the true value corresponding to the original plaintext image, $y_i$ represents the i-th element of the output value corresponding to the actual output plaintext image, and $\Delta$ represents the mean absolute error between the output value and the true value.
In some embodiments of this embodiment, the training module 502 is specifically configured to randomly discard preset neurons in each layer of the standard neural network by using a discard regularization method, so as to reduce the connection amount of the neurons; and training the neural network after discarding the neurons based on the training data set to obtain a trained neural network model.
It should be noted that the equivalent key obtaining method in the foregoing embodiment can be implemented with the equivalent key obtaining apparatus provided in this embodiment. As those skilled in the art will clearly appreciate, for convenience and brevity of description, the specific working process of the apparatus described in this embodiment may refer to the corresponding process in the foregoing method embodiment and is not repeated here.
By adopting the equivalent key acquisition device provided by this embodiment, a preset training data set is acquired, where the training data set comprises combinations of a plurality of plaintext images and their corresponding ciphertext sequences; a neural network is trained on the training data set to obtain a trained neural network model; and the neural network model is determined as an equivalent key, which is used for security analysis of the optical encryption system. Through the implementation of the invention, a series of known ciphertext-plaintext pairs is fed into a neural network for training, so that the mapping between the ciphertext and the plaintext of the optical encryption system is obtained and used as an equivalent key. This improves the efficiency and accuracy of the security analysis, is independent of whether the encryption system applies additional random scrambling or other secondary encryption to the ciphertext sequence, and thus broadens the applicable scenarios.
The third embodiment:
the present embodiment provides an electronic device, as shown in fig. 6, which includes a processor 601, a memory 602, and a communication bus 603, wherein: the communication bus 603 is used for realizing connection communication between the processor 601 and the memory 602; the processor 601 is configured to execute one or more computer programs stored in the memory 602 to implement at least one step of the equivalent key obtaining method in the first embodiment.
The present embodiments also provide a computer-readable storage medium, including volatile or non-volatile, removable or non-removable media implemented in any method or technology for storing information such as computer-readable instructions, data structures, computer program modules or other data. Computer-readable storage media include, but are not limited to, RAM (Random Access Memory), ROM (Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), flash memory or other memory technology, CD-ROM (Compact Disc Read-Only Memory), Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
The computer-readable storage medium in this embodiment may be used for storing one or more computer programs, and the stored one or more computer programs may be executed by a processor to implement at least one step of the method in the first embodiment.
The present embodiment also provides a computer program, which can be distributed on a computer readable medium and executed by a computing device to implement at least one step of the method in the first embodiment; and in some cases at least one of the steps shown or described may be performed in an order different than that described in the embodiments above.
The present embodiments also provide a computer program product comprising a computer readable means on which a computer program as shown above is stored. The computer readable means in this embodiment may include a computer readable storage medium as shown above.
It will be apparent to those skilled in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software (which may be implemented in computer program code executable by a computing device), firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit.
In addition, communication media typically embody computer-readable instructions, data structures, computer program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media known to one of ordinary skill in the art. Thus, the present invention is not limited to any specific combination of hardware and software.
The foregoing is a more detailed description of embodiments of the present invention, and the present invention is not to be considered limited to such descriptions. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (9)

1. An equivalent key obtaining method is applied to a ghost imaging optical encryption system, and comprises the following steps:
sequentially loading N random phase templates on a spatial light modulator, and encoding a light source passing through the spatial light modulator;
respectively irradiating the coded light sources to each plaintext image to be encrypted, and then acquiring N corresponding intensity values of each plaintext image through a single-pixel bucket detector to obtain a ciphertext sequence corresponding to each plaintext image;
constructing a training data set by combining all the plaintext images and the corresponding ciphertext sequences;
training a neural network based on the training data set to obtain a trained neural network model;
determining the neural network model as an equivalent key; the equivalent key is used for restoring the cipher text so as to perform security analysis on the optical encryption system.
2. The equivalent key obtaining method according to claim 1, further comprising, after obtaining the trained neural network model:
acquiring a preset test data set; the test data set comprises a combination of a plurality of plaintext images and corresponding ciphertext sequences;
inputting the ciphertext sequence in the test data set into the trained neural network model to obtain a plaintext image output by testing;
carrying out correlation calculation on the plaintext image output by the test and the plaintext image in the test data set;
when the correlation degree is larger than a preset correlation degree threshold value, determining the trained neural network model as an effective neural network model;
the determining the neural network model as an equivalent key comprises:
determining the valid neural network model as an equivalent key.
3. The equivalent key obtaining method according to claim 1, wherein when the neural network is a deep neural network, the deep neural network includes: the neuron array comprises an input layer, three hidden layers, an adjusting layer and an output layer, wherein neurons between adjacent layers are connected in a full connection mode.
4. The equivalent key obtaining method according to claim 3, wherein the training of the neural network based on the training data set to obtain the trained neural network model comprises:
inputting the ciphertext sequence into an Mth deep neural network model, and training the ciphertext sequence sequentially through the input layer, the three hidden layers, the output layer and the adjusting layer to obtain an actual output plaintext image of the Mth iterative training;
comparing the actual output plaintext image with the original plaintext image in the training data set by using a preset loss function;
when the comparison result meets a preset convergence condition, determining the Mth deep neural network model as a trained deep neural network model;
and when the comparison result does not meet the preset convergence condition, continuing with the (M+1)-th iterative training until the convergence condition is met.
5. The equivalent key obtaining method according to claim 4, wherein the loss function is a mean absolute error function expressed as:

$\Delta = \frac{1}{n}\sum_{i=1}^{n}\left|y_i' - y_i\right|$

wherein $y_i'$ represents the i-th element of the true value corresponding to the original plaintext image, $y_i$ represents the i-th element of the output value corresponding to the actual output plaintext image, and $\Delta$ represents the mean absolute error between the output value and the true value.
6. The equivalent key acquisition method of claim 1, wherein said training a neural network based on the set of training data comprises:
a preset neuron is randomly discarded in each layer of the standard neural network by adopting a discarding regularization method, so that the connection quantity of the neuron is reduced;
training the neural network after discarding neurons based on the training data set.
7. An equivalent key obtaining device, applied to a ghost imaging optical encryption system, comprising:
the system comprises an acquisition module, a spatial light modulator and a phase matching module, wherein the acquisition module is used for sequentially loading N random phase templates on the spatial light modulator at the same time and coding a light source passing through the spatial light modulator; respectively irradiating the coded light sources to each plaintext image to be encrypted, and then acquiring N corresponding intensity values of each plaintext image through a single-pixel bucket detector to obtain a ciphertext sequence corresponding to each plaintext image; constructing a training data set by combining all the plaintext images and the corresponding ciphertext sequences;
the training module is used for training the neural network based on the training data set to obtain a trained neural network model;
a determining module, configured to determine the neural network model as an equivalent key; the equivalent key is used for restoring the cipher text so as to perform security analysis on the optical encryption system.
8. An electronic device, comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the equivalent key obtaining method as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the equivalent key obtaining method according to any one of claims 1 to 6.
CN201910216928.6A (filed 2019-03-21; priority 2019-03-21) Equivalent key obtaining method and device and computer readable storage medium. Active. Publication: CN110071798B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910216928.6A | 2019-03-21 | 2019-03-21 | Equivalent key obtaining method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910216928.6A | 2019-03-21 | 2019-03-21 | Equivalent key obtaining method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110071798A CN110071798A (en) 2019-07-30
CN110071798B true CN110071798B (en) 2022-03-04

Family

Family ID: 67366428

Family Applications (1)

Application Number | Status | Publication
CN201910216928.6A | Active | CN110071798B (en)

Country Status (1)

Country Link
CN (1) CN110071798B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674941B (en) * 2019-09-25 2023-04-18 南开大学 Data encryption transmission method and system based on neural network
CN111259427B (en) * 2020-01-21 2020-11-06 北京安德医智科技有限公司 Image processing method and device based on neural network and storage medium
CN111709867B (en) * 2020-06-10 2022-11-25 四川大学 Novel full convolution network-based equal-modulus vector decomposition image encryption analysis method
CN112802145A (en) * 2021-01-27 2021-05-14 四川大学 Color calculation ghost imaging method based on deep learning
CN113726979B (en) * 2021-07-31 2024-04-26 浪潮电子信息产业股份有限公司 Picture encryption method, picture decryption method, picture encryption system and related devices
CN116032636B (en) * 2023-01-06 2023-10-20 南京通力峰达软件科技有限公司 Internet of vehicles data encryption method based on neural network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106411510A (en) * 2016-10-28 2017-02-15 深圳大学 Method and apparatus for obtaining equivalent key of random phase coding-based optical encryption system
CN107659398A (en) * 2017-09-28 2018-02-02 四川长虹电器股份有限公司 Suitable for Android symmetric encryption method
CN108921282A (en) * 2018-05-16 2018-11-30 深圳大学 A kind of construction method and device of deep neural network model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9813246B2 (en) * 2013-10-29 2017-11-07 Jory Schwach Encryption using biometric image-based key

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106411510A (en) * 2016-10-28 2017-02-15 深圳大学 Method and apparatus for obtaining equivalent key of random phase coding-based optical encryption system
CN107659398A (en) * 2017-09-28 2018-02-02 四川长虹电器股份有限公司 Suitable for Android symmetric encryption method
CN108921282A (en) * 2018-05-16 2018-11-30 深圳大学 A kind of construction method and device of deep neural network model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wen Jie, "Comparison of the effects of MSE and MAE on machine-learning performance optimization," Information & Computer (Theoretical Edition), 2018-08-15, pp. 42-43 *

Also Published As

Publication number Publication date
CN110071798A (en) 2019-07-30

Similar Documents

Publication Publication Date Title
CN110071798B (en) Equivalent key obtaining method and device and computer readable storage medium
Zhu et al. Hidden: Hiding data with deep networks
Wei et al. A framework for evaluating gradient leakage attacks in federated learning
Yin et al. See through gradients: Image batch recovery via gradinversion
Wu et al. Local Shannon entropy measure with statistical tests for image randomness
CN108683669B (en) Data verification method and secure multi-party computing system
Naveh et al. Photoproof: Cryptographic image authentication for any set of permissible transformations
Hsiao et al. Fingerprint image cryptography based on multiple chaotic systems
CN111373401B (en) Homomorphic inference device, homomorphic inference method, computer-readable storage medium, and hidden information processing system
Cogranne et al. Efficient steganography in JPEG images by minimizing performance of optimal detector
CN112862001A (en) Decentralized data modeling method under privacy protection
CN113160944B (en) Medical image sharing method based on blockchain
Liu et al. Adaptive steganography based on block complexity and matrix embedding
Zhou et al. Optical image encryption based on two-channel detection and deep learning
US20240104681A1 (en) Image steganography utilizing adversarial perturbations
Ahmed et al. Hash-based authentication of digital images in noisy channels
CN112802076A (en) Reflection image generation model and training method of reflection removal model
TİKEN et al. A comprehensive review about image encryption methods
Sharifzadeh et al. Convolutional neural network steganalysis's application to steganography
Yala et al. Syfer: Neural obfuscation for private data release
Cao et al. Using image sensor PUF as root of trust for birthmarking of perceptual image hash
CN111210378A (en) Recoverability method based on image data on industrial cloud
Nazari et al. A novel image steganography scheme based on morphological associative memory and permutation schema
Ren et al. A visually secure image encryption scheme based on compressed sensing and Chebyshev-dynamics coupled map lattices in cloud environment
CN113723604B (en) Neural network training method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant