CN116961960A - Data encryption method and device and electronic equipment - Google Patents

Data encryption method and device and electronic equipment

Info

Publication number
CN116961960A
CN116961960A
Authority
CN
China
Prior art keywords
neural network
data block
network processor
data
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211019512.3A
Other languages
Chinese (zh)
Inventor
王茂义
浦贵阳
陈进利
王亚莱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Hangzhou Information Technology Co Ltd
Priority to CN202211019512.3A
Publication of CN116961960A
Legal status: Pending


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/04: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L 63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L 9/00: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/06: Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for block-wise or stream coding, e.g. DES systems or RC4; Hash functions; Pseudorandom sequence generators
    • H04L 9/0618: Block ciphers, i.e. encrypting groups of characters of a plain text message using fixed encryption transformation
    • H04L 9/0625: Block ciphers, i.e. encrypting groups of characters of a plain text message using fixed encryption transformation with splitting of the data block into left and right halves, e.g. Feistel based algorithms, DES, FEAL, IDEA or KASUMI

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Storage Device Security (AREA)

Abstract

The application discloses a data encryption method, which comprises the following steps: obtaining a block cipher corresponding to multimedia data through a neural network processor, and recombining the block cipher to obtain a first sequence, the block cipher being obtained by a general-purpose processor extracting features of the multimedia data and converting the extraction result into a binary format; performing multiple rounds of confusion and diffusion on the first sequence through the general-purpose processor and the neural network processor to obtain a second sequence; and performing inverse combination on the second sequence through the neural network processor to obtain ciphertext data, the recombination and the inverse combination being a pair of reciprocal operations. The application also discloses a data encryption device and an electronic device.

Description

Data encryption method and device and electronic equipment
Technical Field
The present application relates to the field of information security technologies (but is not limited thereto), and in particular to a data encryption method, a data encryption device, and an electronic device.
Background
At present, with the rapid development and popularization of computer and information technology, the scale of application systems in various industries has grown rapidly, and the application data they generate shows explosive growth. The generation and transmission of large amounts of data, while promoting industry development, undoubtedly exposes more application information to the network, so that a great deal of sensitive information and property depends heavily on electronic cryptographic devices. Under this trend, security analysis methods targeting electronic cryptographic devices are increasing, and the security of electronic devices faces great challenges. To ensure the security of the generated and transmitted data, each industry studies software and hardware implementations of effective encryption and decryption modules in its own application systems. In the prior art, the Data Encryption Standard (DES) algorithm is commonly used to secure sensitive information and property in electronic devices.
However, when the volume of data to be encrypted or decrypted is relatively large, running the above method occupies precious central processing unit (CPU) resources, slows down service response, and degrades the user experience.
Disclosure of Invention
The application provides a data encryption method, a data encryption device, and electronic equipment, which can make full use of the parallel computing capability of a neural network processor (NPU) and save CPU memory resources when encrypting and decrypting big data.
The technical scheme of the application is realized as follows:
a method of data encryption, the method comprising:
obtaining a block cipher corresponding to the multimedia data through a neural network processor, and recombining the block cipher to obtain a first sequence; the block cipher is obtained by extracting the characteristics of the multimedia data by a general processor and performing binary format conversion on an extraction result;
performing multiple rounds of confusion and diffusion on the first sequence through the general processor and the neural network processor to obtain a second sequence;
performing inverse combination on the second sequence through the neural network processor to obtain ciphertext data; wherein the recombination and the inverse combination are a pair of reciprocal operations.
A data encryption device, the device comprising:
the first processing module is used for obtaining a block cipher corresponding to the multimedia data through the neural network processor and recombining the block cipher to obtain a first sequence; the block cipher is obtained by extracting the characteristics of the multimedia data by a general processor and performing binary format conversion on an extraction result;
the first processing module and a second processing module are used for performing multiple rounds of confusion and diffusion on the first sequence through the general-purpose processor and the neural network processor to obtain a second sequence;
the first processing module is further used for carrying out inverse combination on the second sequence through the neural network processor to obtain ciphertext data; wherein the recombining and the inverse combining are a pair of reciprocal operations.
An electronic device, the electronic device comprising: the system comprises a neural network processor, a general-purpose processor, a memory and a communication bus;
the communication bus is used for realizing communication connection among the neural network processor, the general processor and the memory;
the neural network processor and the general purpose processor are configured to execute a data encryption program stored in the memory to implement the steps of the data encryption method as described above.
The application provides a data encryption method, a data encryption device, and electronic equipment. The method comprises: obtaining a block cipher corresponding to multimedia data through a neural network processor, and recombining the block cipher to obtain a first sequence, the block cipher being obtained by a general-purpose processor extracting features of the multimedia data and converting the extraction result into a binary format; performing multiple rounds of confusion and diffusion on the first sequence through the general-purpose processor and the neural network processor to obtain a second sequence; and performing inverse combination on the second sequence through the neural network processor to obtain ciphertext data, the recombination and the inverse combination being a pair of reciprocal operations. In this way, feature extraction is performed on the multimedia data to obtain the corresponding block cipher, and the block cipher is then recombined to obtain the first sequence, so that the feature quantities of the multimedia data can be extracted efficiently. The first sequence then undergoes multiple rounds of confusion and diffusion to obtain the second sequence; parallel processing of big data can be realized through the NPU, so that the hardware computing power of the NPU is fully utilized and execution efficiency is improved. Finally, the second sequence is inversely combined to obtain the ciphertext data, so that efficient encryption can be realized in the data encryption process, CPU memory resources are saved, the computational pressure on the CPU is reduced, and the service processing speed is improved.
Drawings
FIG. 1 is a flow chart of a related art data encryption method;
FIG. 2 is a flow chart of an alternative data encryption method according to an embodiment of the present application;
FIG. 3 is an exemplary first scenario of an alternative data encryption method provided by an embodiment of the present application;
FIG. 4 is an example of an alternative key generation provided by an embodiment of the present application;
FIG. 5 is a second example of a scenario of an alternative data encryption method provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of an alternative data encryption device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be construed as limiting the present application, and all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict.
In the following description, the terms "first", "second", "third", and the like are merely used to distinguish similar objects and do not imply a specific ordering of the objects. It should be understood that, where permitted, "first", "second", and "third" may be interchanged in a specific order or sequence, so that the embodiments of the application described herein can be practiced in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the application only and is not intended to be limiting of the application.
At present, with the launch of the "Three Golden" projects, especially the Golden Card project, the DES algorithm has been widely applied in fields such as point-of-sale (POS) terminals, automated teller machines (ATM), magnetic cards and smart (integrated circuit, IC) cards, gas stations, and highway toll gates, to keep key data confidential: for example, the encrypted transmission of a credit-card holder's personal identification number (PIN), mutual authentication between an IC card and a POS terminal, and the message authentication code (MAC) of a financial transaction packet all use the DES algorithm. Referring to fig. 1, the DES algorithm in the prior art runs on a CPU.
An embodiment of the present application provides a data encryption method, referring to fig. 2, including the steps of:
step 201, obtaining a block cipher corresponding to the multimedia data through a neural network processor, and recombining the block cipher to obtain a first sequence.
The block cipher is obtained by extracting the characteristics of the multimedia data by a general processor and performing binary format conversion on the extraction result.
In some embodiments, the multimedia data may include: text, pictures, photographs, sounds, animations and movies, and other data generated by interactive functions provided by the computer.
In some embodiments, prior to step 201, the multimedia data may also be obtained as follows: a raw image is captured by a camera or webcam, and the captured raw image is sent to an image signal processor (ISP) for image processing.
In an alternative embodiment, feature extraction of the multimedia data may include: the ISP, after receiving the original image, performs image processing on it; the NPU then performs binarization on the image processing result and finally converts the image data into a binary-format vector. Here the image processing result may also be understood as a feature map.
It should be noted that, in the present application, the feature extraction performed by the NPU on the multimedia data is not limited to the image scene, and any scene related to the multimedia data may be suitable for the present application.
In the embodiment of the present application, the first sequence may be a 64-bit plaintext block, which can also be understood as a 64-bit vector.
It should be noted that a permutation in the DES algorithm reorders the original 64 bits, and each bit moves differently in different permutations. Each permutation involved in the DES encryption process has a corresponding permutation table; the values in the table are not data but indicate the position to which each bit moves after the exchange.
In some embodiments, recombining the block cipher to obtain the first sequence in step 201 may be implemented as follows: performing initial permutation (IP) on the block cipher through the neural network processor to obtain the first sequence.
Illustratively, the NPU implements the above IP permutation with a neural network gather operator, recombining the input and dividing it into two parts of 32 bits each. The function of the IP permutation is to recombine the input 64-bit data block bit by bit, with the permutation rules shown in the table below.
58 50 42 34 26 18 10 2
60 52 44 36 28 20 12 4
62 54 46 38 30 22 14 6
64 56 48 40 32 24 16 8
57 49 41 33 25 17 9 1
59 51 43 35 27 19 11 3
61 53 45 37 29 21 13 5
63 55 47 39 31 23 15 7
IP substitution table
The numbers in the IP permutation table are read as follows: for example, the 58th bit of the input becomes the 1st bit of the output, the 50th bit becomes the 2nd bit, and so on; the last output bit is the 7th bit of the original input.
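The recombination rule just described can be sketched in code. This is an illustrative sketch, not the patent's implementation: a 1-based table lookup applied to a 64-bit block held as a list of bits, with `permute`, `IP_TABLE`, and the stand-in block as hypothetical names.

```python
# Illustrative sketch (not the patent's implementation): IP recombination as a
# 1-based table lookup over a 64-bit block stored as a list of bits.
IP_TABLE = [
    58, 50, 42, 34, 26, 18, 10, 2,
    60, 52, 44, 36, 28, 20, 12, 4,
    62, 54, 46, 38, 30, 22, 14, 6,
    64, 56, 48, 40, 32, 24, 16, 8,
    57, 49, 41, 33, 25, 17, 9, 1,
    59, 51, 43, 35, 27, 19, 11, 3,
    61, 53, 45, 37, 29, 21, 13, 5,
    63, 55, 47, 39, 31, 23, 15, 7,
]

def permute(bits, table):
    # Entry k at output position i means: output bit i is input bit k (1-based).
    return [bits[k - 1] for k in table]

block = list(range(1, 65))                # stand-in block: bit i holds the value i
first_sequence = permute(block, IP_TABLE)
L0, R0 = first_sequence[:32], first_sequence[32:]   # the two 32-bit halves
```

Here `first_sequence[0] == 58` and `first_sequence[-1] == 7`, matching the reading of the table given above.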
In this way, the feature extraction is performed on the multimedia data to obtain the corresponding block ciphers, and then the block ciphers are recombined to obtain the first sequence, so that the efficient extraction of the feature quantity of the multimedia data can be realized.
Step 202, performing multiple rounds of confusion and diffusion on the first sequence through a general processor and a neural network processor to obtain a second sequence.
In some embodiments, the manner in which the multiple rounds of confusion and diffusion are performed may include: s-box substitution and P-box substitution.
S-box substitution compresses 48-bit data into 32-bit data. It is implemented with 8 different S boxes, each having a 6-bit input and a 4-bit output: the 48-bit input is divided into 8 groups of 6 bits, one group per S box, and the corresponding S box performs a substitution operation on each group. That is, 6 bits of data enter each S box and 4 bits come out, so the 48-bit input is compressed into a 32-bit output. The S-box substitution tables S1-S8 are as follows.
14 4 13 1 2 15 11 8 3 10 6 12 5 9 0 7
0 15 7 4 14 2 13 1 10 6 12 11 9 5 3 8
4 1 14 8 13 6 2 11 15 12 9 7 3 10 5 0
15 12 8 2 4 9 1 7 5 11 3 14 10 0 6 13
S1 substitution table
15 1 8 14 6 11 3 4 9 7 2 13 12 0 5 10
3 13 4 7 15 2 8 14 12 0 1 10 6 9 11 5
0 14 7 11 10 4 13 1 5 8 12 6 9 3 2 15
13 8 10 1 3 15 4 2 11 6 7 12 0 5 14 9
S2 substitution table
10 0 9 14 6 3 15 5 1 13 12 7 11 4 2 8
13 7 0 9 3 4 6 10 2 8 5 14 12 11 15 1
13 6 4 9 8 15 3 0 11 1 2 12 5 10 14 7
1 10 13 0 6 9 8 7 4 15 14 3 11 5 2 12
S3 substitution table
7 13 14 3 0 6 9 10 1 2 8 5 11 12 4 15
13 8 11 5 6 15 0 3 4 7 2 12 1 10 14 9
10 6 9 0 12 11 7 13 15 1 3 14 5 2 8 4
3 15 0 6 10 1 13 8 9 4 5 11 12 7 2 14
S4 substitution table
2 12 4 1 7 10 11 6 8 5 3 15 13 0 14 9
14 11 2 12 4 7 13 1 5 0 15 10 3 9 8 6
4 2 1 11 10 13 7 8 15 9 12 5 6 3 0 14
11 8 12 7 1 14 2 13 6 15 0 9 10 4 5 3
S5 substitution table
12 1 10 15 9 2 6 8 0 13 3 4 14 7 5 11
10 15 4 2 7 12 9 5 6 1 13 14 0 11 3 8
9 14 15 5 2 8 12 3 7 0 4 10 1 13 11 6
4 3 2 12 9 5 15 10 11 14 1 7 6 0 8 13
S6 substitution table
4 11 2 14 15 0 8 13 3 12 9 7 5 10 6 1
13 0 11 7 4 9 1 10 14 3 5 12 2 15 8 6
1 4 11 13 12 3 7 14 10 15 6 8 0 5 9 2
6 11 13 8 1 4 10 7 9 5 0 15 14 2 3 12
S7 substitution table
13 2 8 4 6 15 11 1 10 9 3 14 5 0 12 7
1 15 13 8 10 3 7 4 12 5 6 11 0 14 9 2
7 11 4 1 9 12 14 2 0 6 10 13 15 3 5 8
2 1 14 7 4 10 8 13 15 12 9 0 3 5 6 11
S8 substitution table
Taking the S8 box as an example, for the input 110011: the first and sixth bits (the highest and lowest bits) combine to 11 in binary, which is 3 in decimal, so the row number in the S8 box is 3. Correspondingly, the second to fifth bits are 1001 in binary, which is 9 in decimal, so the column number is 9. The entry at row 3, column 9 of the S8 box is 12, which converts to binary 1100, so 1100 is substituted for 110011. In this way, S-box substitution realizes a nonlinear operation and improves the security of the cipher.
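The worked S8 example above can be checked with a short sketch. The table rows are the standard DES S8 constants (matching the S8 substitution table above); `sbox_lookup` is a hypothetical helper, not code from the patent.

```python
# Sketch of the S8 lookup walked through above; the table rows are the standard
# DES S8 constants and sbox_lookup is a hypothetical helper.
S8 = [
    [13, 2, 8, 4, 6, 15, 11, 1, 10, 9, 3, 14, 5, 0, 12, 7],
    [1, 15, 13, 8, 10, 3, 7, 4, 12, 5, 6, 11, 0, 14, 9, 2],
    [7, 11, 4, 1, 9, 12, 14, 2, 0, 6, 10, 13, 15, 3, 5, 8],
    [2, 1, 14, 7, 4, 10, 8, 13, 15, 12, 9, 0, 3, 5, 6, 11],
]

def sbox_lookup(box, six_bits):
    # Row = outer bits (1st and 6th); column = inner four bits (2nd to 5th).
    row = (six_bits[0] << 1) | six_bits[5]
    col = (six_bits[1] << 3) | (six_bits[2] << 2) | (six_bits[3] << 1) | six_bits[4]
    return box[row][col]

out = sbox_lookup(S8, [1, 1, 0, 0, 1, 1])   # input 110011: row 3, column 9
```

The lookup yields 12, i.e. binary 1100, as in the worked example.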
Similar to the effect of IP permutation, P-box permutation shuffles 32-bit data. The rules for P-box permutation are shown below.
16 7 20 21 29 12 28 17
1 15 23 26 5 18 31 10
2 8 24 14 32 27 3 9
19 13 30 6 22 11 4 25
P-box substitution table
It should be understood that diffusion and confusion are two basic methods of designing cryptosystems, whose purpose is to resist statistical analysis of the cryptosystem. Fully exploiting diffusion and confusion in the design of a block cipher effectively prevents an adversary from inferring the plaintext or the key from the statistical characteristics of the ciphertext; diffusion and confusion are the design basis of modern block ciphers. Diffusion lets each bit of the plaintext affect many bits of the ciphertext, or lets each bit of the ciphertext be affected by many bits of the plaintext, so that the statistical properties of the plaintext are masked. Confusion makes the statistical relationship between the ciphertext and the key as complex as possible, so that even if some statistical characteristics of the ciphertext are obtained, the key cannot be deduced; a good confusion effect can be achieved with complex nonlinear substitution transformations.
In some embodiments, the multiple rounds of confusion and diffusion may also be performed through product ciphers and iteration, as well as other nonlinear substitution transformations; the application is not specifically limited herein.
In this way, by performing multiple rounds of confusion and diffusion on the first sequence and transferring the computational load of obtaining the second sequence to the NPU, parallel processing of big data can be achieved, the hardware computing power of the NPU is fully utilized, CPU memory resources are saved, and the computational pressure on the CPU is reduced.
And 203, carrying out inverse combination on the second sequence through a neural network processor to obtain ciphertext data.
Wherein the recombination and the inverse combination are a pair of reciprocal operations.
In some embodiments, the second sequence may be subjected to inverse IP permutation by the neural network processor in step 203 to obtain ciphertext data.
The inverse IP permutation is the reciprocal operation of the IP permutation and can also be written as IP⁻¹; the rules of the inverse IP permutation are as follows.
40 8 48 16 56 24 64 32
39 7 47 15 55 23 63 31
38 6 46 14 54 22 62 30
37 5 45 13 53 21 61 29
36 4 44 12 52 20 60 28
35 3 43 11 51 19 59 27
34 2 42 10 50 18 58 26
33 1 41 9 49 17 57 25
Reverse IP substitution table
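Because the recombination and the inverse combination are reciprocal operations, the inverse-IP table can be derived mechanically from the IP table and the round trip checked. A sketch, with `permute`, `IP`, and `IP_INV` as hypothetical names:

```python
# Sketch: deriving the inverse-IP table from the IP table and checking that the
# pair of reciprocal operations round-trips a block. Names are illustrative.
IP = [58, 50, 42, 34, 26, 18, 10, 2,
      60, 52, 44, 36, 28, 20, 12, 4,
      62, 54, 46, 38, 30, 22, 14, 6,
      64, 56, 48, 40, 32, 24, 16, 8,
      57, 49, 41, 33, 25, 17, 9, 1,
      59, 51, 43, 35, 27, 19, 11, 3,
      61, 53, 45, 37, 29, 21, 13, 5,
      63, 55, 47, 39, 31, 23, 15, 7]

def permute(bits, table):
    return [bits[k - 1] for k in table]

# If IP sends input bit k to output position i, the inverse sends it back.
IP_INV = [0] * 64
for i, k in enumerate(IP):
    IP_INV[k - 1] = i + 1

block = list(range(64))
assert IP_INV[:8] == [40, 8, 48, 16, 56, 24, 64, 32]   # first row of the table above
assert permute(permute(block, IP), IP_INV) == block     # reciprocal operations
```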
The embodiment of the application provides a data encryption method, a data encryption device, and electronic equipment. The method comprises: obtaining a block cipher corresponding to multimedia data through a neural network processor, and recombining the block cipher to obtain a first sequence, the block cipher being obtained by a general-purpose processor extracting features of the multimedia data and converting the extraction result into a binary format; performing multiple rounds of confusion and diffusion on the first sequence through the general-purpose processor and the neural network processor to obtain a second sequence; and performing inverse combination on the second sequence through the neural network processor to obtain ciphertext data, the recombination and the inverse combination being a pair of reciprocal operations. In this way, feature extraction is performed on the multimedia data to obtain the corresponding block cipher, and the block cipher is then recombined to obtain the first sequence, so that the feature quantities of the multimedia data can be extracted efficiently. The first sequence then undergoes multiple rounds of confusion and diffusion to obtain the second sequence; parallel processing of big data can be realized through the NPU, so that the hardware computing power of the NPU is fully utilized and execution efficiency is improved. Finally, the second sequence is inversely combined to obtain the ciphertext data, so that efficient encryption can be realized in the data encryption process, CPU memory resources are saved, the computational pressure on the CPU is reduced, and the service processing speed is improved.
In some embodiments, step 202, performing multiple rounds of confusion and diffusion on the first sequence by the general-purpose processor and the neural network processor to obtain the second sequence, includes:
Step A1: dividing, by the neural network processor, the first sequence into an R0 data block and an L0 data block;
Step A2: performing, by the neural network processor, the nth round of confusion and diffusion based on the R0 data block and the L0 data block, where 1 ≤ n ≤ N, n is a positive integer, and N is the total number of rounds of confusion and diffusion;
Step A3: judging, by the neural network processor, the relationship between n and N, and when the relationship satisfies a preset condition, obtaining the second sequence from the result of the previous n rounds of confusion and diffusion.
In the embodiment of the application, the R0 data block, the L0 data block, and the like are composed of data and can be understood as multi-byte binary data.
In an alternative embodiment, in step A1 the NPU divides the 64-bit first sequence into left and right portions, the R0 data block representing the right portion and the L0 data block the left portion, each 32 bits long.
In an alternative embodiment, step A2 includes: if n = 1 and n < N, performing the 1st round of confusion and diffusion on the R0 data block and the L0 data block; if 1 < n < N, obtaining, by the neural network processor, the R(n-1) data block and the L(n-1) data block and performing the nth round of confusion and diffusion.
In an alternative embodiment, step A3 includes: when n = N, combining, by the neural network processor, the Rn data block and the Ln data block to obtain the second sequence.
Therefore, the algorithm execution of the DES is split into two parts of logic operation executed by the CPU and parallel operation executed by the NPU, so that the memory resource of the CPU is saved, and the operation pressure of the CPU is reduced.
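The split-iterate-recombine structure of steps A1-A3 is the classic Feistel scheme and can be sketched abstractly. The round function `f` and the subkeys here are toy placeholders (real DES uses the E, S-box, and P operations described elsewhere in this document), so only the data flow and the reciprocal decryption property are shown:

```python
# Abstract Feistel sketch of steps A1-A3; f and the subkeys are toy
# placeholders, not the real DES round components.
def feistel(first_sequence, subkeys, f):
    L, R = first_sequence[:32], first_sequence[32:]          # step A1: split
    for k in subkeys:                                        # step A2: rounds 1..N
        L, R = R, [a ^ b for a, b in zip(L, f(R, k))]
    return R + L                                             # step A3: recombine

toy_f = lambda R, k: [bit ^ k for bit in R]   # placeholder round function
keys = [1, 0, 1]
block = [i % 2 for i in range(64)]
ct = feistel(block, keys, toy_f)
pt = feistel(ct, list(reversed(keys)), toy_f)  # same structure, reversed keys
```

Running the same structure with the subkeys reversed recovers the original block (`pt == block`), which is why a Feistel cipher's decryption reuses the encryption data path.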
In some embodiments, before step A2, the method further comprises:
performing a first selection permutation on an initial key through the neural network processor to obtain a permutation sequence;
dividing the permutation sequence into a C0 data block and a D0 data block through the neural network processor;
obtaining a shift parameter corresponding to the nth round through the general-purpose processor;
shifting the C0 data block according to the shift parameter through the general-purpose processor to obtain a Cn data block, and shifting the D0 data block according to the shift parameter to obtain a Dn data block;
combining the Cn data block and the Dn data block through the neural network processor, and performing a second selection permutation on the combined result to obtain the nth subkey.
In the embodiment of the application, the initial key can be obtained by the CPU converting the original 8-byte hexadecimal key into a 64-bit binary key.
Illustratively, the hexadecimal key K is: K = 133457799BBCDFF1;
written in binary form: K = 00010011 00110100 01010111 01111001 10011011 10111100 11011111 11110001.
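The key conversion above can be reproduced in a short, purely illustrative sketch:

```python
# Illustrative sketch: expanding the 16-hex-digit (8-byte) key into its 64-bit
# binary form, grouped into bytes as printed above.
K = "133457799BBCDFF1"
bits = bin(int(K, 16))[2:].zfill(64)
grouped = " ".join(bits[i:i + 8] for i in range(0, 64, 8))
```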
In an alternative embodiment, the first selection permutation may comprise a PC1 permutation, whose function is to reduce the 64-bit initial key to a 56-bit key; the permutation sequence may thus be a 56-bit key. In this way, the bit-reduction operation removes the 8 parity bits from the 64-bit initial key.
Illustratively, the rules for PC1 replacement are as follows.
57 49 41 33 25 17 9 1
58 50 42 34 26 18 10 2
59 51 43 35 27 19 11 3
60 52 44 36 63 55 47 39
31 23 15 7 62 54 46 38
30 22 14 6 61 53 45 37
29 21 13 5 28 20 12 4
PC1 substitution table
In an alternative embodiment, dividing the permutation sequence into the C0 data block and the D0 data block includes: after the 56-bit key is obtained, splitting it into two parts of 28 bits each.
In the embodiment of the present application, the shift parameter may include the number of bits by which each data block is shifted to the left; that is, according to the round number, the C0 data block and the D0 data block are each circularly shifted left by 1 or 2 bits. The number of bits shifted in each round can be obtained from the shift parameter table.
Round number: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
Bits shifted: 1 1 2 2 2 2 2 2 1 2 2 2 2 2 2 1
Shift parameter table
In an alternative embodiment, the second selection permutation may comprise a PC2 permutation.
The effect of the PC2 permutation is similar to that of the PC1 permutation: the 56-bit key is reduced to 48 bits. The rules of the PC2 permutation are as follows.
14 17 11 24 1 5 3 28
15 6 21 10 23 19 12 4
26 8 16 7 27 20 13 2
41 52 31 37 47 55 30 40
51 45 33 48 44 49 39 56
34 53 46 42 50 36 29 32
PC2 substitution table
Illustratively, before the 1st round of confusion and diffusion, the general-purpose processor obtains the corresponding shift parameter, 1, from the shift parameter table; that is, the general-purpose processor circularly shifts each bit of the C0 data block left by 1 bit to obtain the C1 data block, and circularly shifts each bit of the D0 data block left by 1 bit to obtain the D1 data block. The neural network processor then combines the C1 data block and the D1 data block and performs the second selection permutation on the combined result to obtain the 1st subkey.
Therefore, the bit reduction operation in the key derivation process is transferred to the NPU for processing, the hardware computing power of the NPU is fully utilized, the CPU is only responsible for shifting data in the key, the memory resource of the CPU is saved, and the computing pressure of the CPU is reduced.
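The key-schedule steps above (split into C and D, rotate per round, compress with PC2) can be sketched as follows. The PC2 argument here is replaced with an identity mapping of the first 48 positions purely to exercise the structure; in real DES the PC2 substitution table printed above would be used.

```python
# Sketch of the key schedule described above. An identity "PC2" (first 48
# positions) stands in for the real PC2 table, purely to show the structure.
SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

def rotl(half, n):
    # circular left shift of one 28-bit half
    return half[n:] + half[:n]

def derive_subkeys(key56, pc2):
    C, D = key56[:28], key56[28:]          # C0 and D0
    keys = []
    for s in SHIFTS:
        C, D = rotl(C, s), rotl(D, s)      # Cn and Dn for round n
        keys.append([(C + D)[k - 1] for k in pc2])   # combine, then PC2: 56 -> 48
    return keys

ks = derive_subkeys(list(range(56)), list(range(1, 49)))
```

Note that the shifts sum to 28, so after 16 rounds each 28-bit half has rotated back to its starting position.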
In some embodiments, step A2, performing the nth round of confusion and diffusion by the neural network processor based on the R0 data block and the L0 data block, includes:
Step B1: expanding, by the neural network processor, the R(n-1) data block to obtain an nth expanded data block;
Step B2: performing, by the neural network processor, an exclusive-OR operation on the nth expanded data block and the nth subkey to obtain an nth operation result;
Step B3: performing confusion on the nth operation result by the general-purpose processor, and performing diffusion on the compression result by the neural network processor to obtain an (n+1)th operation result;
Step B4: assigning the R(n-1) data block to the Ln data block by the neural network processor;
Step B5: updating n to n+1 by the neural network processor, and returning to the judgment of the relationship between n and N.
In the embodiment of the present application, if n=1, and n<N, before the step B1, the method comprises the following steps: obtaining R obtained in step A1 0 Data block and L 0 A data block.
In the embodiment of the application, if 1&lt;n&lt;N, before step B1 the method includes: obtaining the Rn-1 data block and the Ln-1 data block by the neural network processor, wherein the Rn-1 data block and the Ln-1 data block are obtained by processing the R0 data block and the L0 data block.
In an alternative embodiment, the expansion in step B1 may be performed through E permutation. The E permutation expands the 32-bit input to 48 bits according to the following rule.
32  1  2  3  4  5
 4  5  6  7  8  9
 8  9 10 11 12 13
12 13 14 15 16 17
16 17 18 19 20 21
20 21 22 23 24 25
24 25 26 27 28 29
28 29 30 31 32  1
E permutation table
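Read row by row, the table above can be applied as a simple gather over 1-based indices, which mirrors how the embodiments implement it with the neural network gather operator. The sketch below is illustrative; the function name is not from the patent.

```python
# E expansion as a gather over the table above (1-based indices,
# as is conventional for DES permutation tables).
E_TABLE = [
    32, 1, 2, 3, 4, 5,
    4, 5, 6, 7, 8, 9,
    8, 9, 10, 11, 12, 13,
    12, 13, 14, 15, 16, 17,
    16, 17, 18, 19, 20, 21,
    20, 21, 22, 23, 24, 25,
    24, 25, 26, 27, 28, 29,
    28, 29, 30, 31, 32, 1,
]

def e_expand(bits32):
    """Expand 32 input bits to 48 bits by gathering per E_TABLE."""
    assert len(bits32) == 32
    return [bits32[i - 1] for i in E_TABLE]
```

Because 16 of the 32 input positions appear twice in E_TABLE, the output is 48 bits long while still covering every input bit at least once.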
In an alternative embodiment, the confusion processing of the nth operation result by the general purpose processor in step B3 may be implemented by S-box substitution; the diffusion of the result of the confusion processing by the neural network processor may be implemented by P-box permutation.
Illustratively, performing the nth round of confusion and diffusion based on the R0 data block and the L0 data block through the neural network processor can be realized by the following steps:
in the 10th round of confusion and diffusion, the 32 bits of left data and 32 bits of right data output by the 9th round are obtained;
the NPU uses the neural network gather operator to perform E permutation, expanding the 32-bit right data into 48-bit data;
the NPU uses the neural network bitwise_xor operator to perform an exclusive OR of the 48-bit data and the 48-bit derived key;
the CPU performs the S-box substitution operation on the 48-bit data to obtain 32-bit data;
the NPU uses the neural network gather operator to perform the P-box permutation operation on the 32-bit data;
the NPU uses the neural network bitwise_xor operator to XOR the P-box permutation result with the original 32-bit left half, and the left and right halves are exchanged;
the NPU updates the round number from 10 to 11, and returns to judge the relationship between 11 and N.
In this way, operations requiring large amounts of computation, such as expansion, exclusive OR, and multiple permutations, are offloaded to the NPU. Since the NPU has strong parallel computing capability, its hardware computing power can be fully utilized, giving the DES algorithm operation high execution efficiency and data throughput, further saving CPU memory resources and reducing the CPU's computational load.
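The NPU/CPU split of one round walked through above can be sketched as follows. This is a minimal illustration in plain Python: the helper callables `e_expand`, `s_box`, and `p_permute` are hypothetical stand-ins for the gather-based permutations and the S-box tables, which this description does not enumerate.

```python
def xor_bits(a, b):
    # bitwise_xor-style elementwise XOR of two equal-length bit lists
    return [x ^ y for x, y in zip(a, b)]

def feistel_round(left, right, subkey, e_expand, s_box, p_permute):
    """One round, divided between NPU (gather/xor) and CPU (S-box)."""
    expanded = e_expand(right)            # NPU: gather, 32 -> 48 bits
    mixed = xor_bits(expanded, subkey)    # NPU: bitwise_xor with subkey
    confused = s_box(mixed)               # CPU: S-box substitution, 48 -> 32 bits
    diffused = p_permute(confused)        # NPU: gather, P-box permutation
    new_right = xor_bits(diffused, left)  # NPU: bitwise_xor with left half
    return right, new_right               # halves exchanged for the next round
```

Returning `(right, new_right)` realizes the left/right exchange at the end of the round.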
In some embodiments, the relationship between n and N satisfying the preset condition includes n=N, and step A3, obtaining the second sequence according to the result obtained by the previous n rounds of confusion and diffusion in the case that the relationship satisfies the preset condition, includes:
combining the Rn data block and the Ln data block by the neural network processor to obtain the second sequence.
Wherein the result obtained by the previous n rounds of confusion and diffusion includes the Rn data block and the Ln data block.
In the embodiment of the application, after the output of the Nth round of confusion and diffusion is obtained, the left half and the right half of the output are not exchanged; instead, the two halves are combined into one data block.
In the embodiment of the present application, N may be 16.
It will be appreciated that the DES algorithm requires multiple rounds of iterative operations, each round generating one subkey, for 16 subkeys in total. The input 64-bit plaintext string is converted into a 64-bit ciphertext string through 16 rounds of the same operation, encrypted with a 56-bit key. The key itself is 64 bits, of which 56 bits are actually used; the remaining 8 bits serve as parity bits.
Thus, the output results of the multiple rounds of iterative operations are inversely combined to obtain the ciphertext data; efficient encryption can be realized in the data encryption process, the memory resources of the CPU are saved, the operation pressure of the CPU is reduced, and the security of sensitive data is ensured.
In some embodiments, after step 203, the method further comprises: the ciphertext data is output and displayed by the general purpose processor.
In some embodiments, decryption proceeds as follows: the ciphertext data is obtained and sent to the neural network processor by the general purpose processor; the ciphertext data is decrypted by the general purpose processor and the neural network processor to obtain a feature map; and the feature map is restored to obtain the multimedia data.
It will be appreciated that decryption is the reverse of encryption: the above steps are performed in reverse order using the subkeys, again over 16 iterations, which will not be described here.
In this way, when the derived keys have been obtained, the existing keys can be reused directly for decryption, improving the convenience of decryption.
In some embodiments, the scheme provided by the present application may be further combined with the Triple DES (3DES) algorithm; that is, according to the method provided in the above embodiments, the DES algorithm is applied to each data block three times, performing 3 encryption and decryption operations. The encryption process is encrypt-decrypt-encrypt with key 1, key 2, and key 3; the decryption process performs decrypt-encrypt-decrypt in the order of key 3, key 2, and key 1.
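The EDE key ordering described above can be illustrated with a toy sketch. Note the loud assumption: `des_encrypt` and `des_decrypt` below are placeholder involutions (a simple per-byte XOR), not the real DES rounds of the earlier embodiments; they serve only to show the key ordering and the round-trip property.

```python
# Placeholder single-DES operations, modeled as a keyed XOR so that
# decrypt undoes encrypt. Real DES would be substituted here.
def des_encrypt(block, key):
    return bytes(b ^ k for b, k in zip(block, key))

def des_decrypt(block, key):
    return bytes(b ^ k for b, k in zip(block, key))

def triple_des_encrypt(block, k1, k2, k3):
    # encrypt -> decrypt -> encrypt, with key 1, key 2, key 3
    return des_encrypt(des_decrypt(des_encrypt(block, k1), k2), k3)

def triple_des_decrypt(block, k1, k2, k3):
    # decrypt -> encrypt -> decrypt, in the order key 3, key 2, key 1
    return des_decrypt(des_encrypt(des_decrypt(block, k3), k2), k1)
```

With the XOR placeholder, encryption and decryption per key are identical, so only the composition order and the round trip are demonstrated here.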
The data encryption method in one or more of the above embodiments is described below by way of example.
Fig. 3 is a first example of an alternative data encryption method provided by an embodiment of the present application.
Step 301, the CPU converts the plaintext data into a 64-bit vector.
Wherein the size of the plaintext data may be 8 bytes.
Step 302, the NPU performs IP permutation by using the neural network gather operator, and recombines the 64-bit vectors.
Wherein, after the combined 64-bit vector is obtained, the 64-bit vector is divided into two parts of 32 bits each.
Step 303, repeatedly performing 16 rounds of iteration.
Step 304, the NPU uses the neural network gather operator to carry out IP^-1 permutation, obtaining the 64-bit vector contained in the ciphertext data.
Step 305, the CPU outputs ciphertext data.
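Steps 301 and 305 (unpacking 8 plaintext bytes into a 64-bit vector and packing the vector back into ciphertext bytes) can be sketched as follows; the function names are illustrative, not from this description, and MSB-first bit order is assumed.

```python
def bytes_to_bit_vector(data):
    """Step 301: unpack 8 plaintext bytes into a 64-element bit vector."""
    assert len(data) == 8
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def bit_vector_to_bytes(bits):
    """Step 305: pack the 64-bit vector back into 8 ciphertext bytes."""
    assert len(bits) == 64
    return bytes(
        sum(bits[j * 8 + i] << (7 - i) for i in range(8)) for j in range(8)
    )
```

The two functions are inverses, so packing after the 16 rounds and the IP^-1 permutation recovers a well-formed 8-byte ciphertext block.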
It should be noted that, in this embodiment, the descriptions of the same steps and the same content as those in other embodiments may refer to the descriptions in other embodiments, and are not repeated here.
Fig. 4 is an example of an alternative key generation provided by an embodiment of the present application.
Step 401, the CPU converts the initial Key into a 64-bit vector.
Wherein the size of Key is 8 bytes.
Step 402, the NPU uses the neural network gather operator to perform PC1 substitution, and reduces the key of DES from 64 bits to 56 bits.
Step 403, the CPU performs vector shifts on the obtained 56-bit key, with different shift amounts in different rounds.
Step 404, the NPU performs PC2 permutation using the neural network gather operator, generating the different 48-bit subkeys from the 56-bit key.
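Step 402's PC1 reduction can be modeled as a gather in the same style. An assumption to note: `PC1_TABLE` below is the standard DES PC-1 table, since this description names the permutation but does not list its entries; the function name is also hypothetical.

```python
# Standard DES PC-1 table (1-based indices into the 64-bit key);
# it omits every 8th bit, i.e. the parity bits.
PC1_TABLE = [
    57, 49, 41, 33, 25, 17, 9,
    1, 58, 50, 42, 34, 26, 18,
    10, 2, 59, 51, 43, 35, 27,
    19, 11, 3, 60, 52, 44, 36,
    63, 55, 47, 39, 31, 23, 15,
    7, 62, 54, 46, 38, 30, 22,
    14, 6, 61, 53, 45, 37, 29,
    21, 13, 5, 28, 20, 12, 4,
]

def pc1(key_bits):
    """Reduce a 64-bit key vector to 56 bits by gathering per PC1_TABLE."""
    assert len(key_bits) == 64
    return [key_bits[i - 1] for i in PC1_TABLE]
```

Because the table skips positions 8, 16, …, 64, the parity bits are discarded exactly as the 64-to-56-bit reduction requires.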
It should be noted that, in this embodiment, the descriptions of the same steps and the same content as those in other embodiments may refer to the descriptions in other embodiments, and are not repeated here.
Fig. 5 is a schematic diagram of an alternative data encryption method according to an embodiment of the present application.
Step 501, the NPU performs E permutation using the neural network gather operator, expanding the right 32-bit vector into a 48-bit vector.
Step 502, the NPU uses the neural network bitwise_xor operator to perform an exclusive OR operation on the 48-bit vector and the 48-bit derived key K.
Step 503, the CPU performs the S-box substitution operation on the 48-bit data to obtain 32-bit data.
Step 504, the NPU performs P-box permutation on the 32-bit data using the neural network gather operator.
Step 505, the NPU uses the neural network bitwise_xor operator to perform an exclusive OR operation on the P-box permutation result and the original left-half 32 bits.
After step 505, the left and right halves of the exclusive OR output are also exchanged.
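Step 504's P-box diffusion can likewise be modeled as a gather. As with PC1, the `P_TABLE` entries below are the standard DES P table, assumed here because this description does not list them; the function name is illustrative.

```python
# Standard DES P table (1-based indices into the 32-bit S-box output).
P_TABLE = [
    16, 7, 20, 21, 29, 12, 28, 17,
    1, 15, 23, 26, 5, 18, 31, 10,
    2, 8, 24, 14, 32, 27, 3, 9,
    19, 13, 30, 6, 22, 11, 4, 25,
]

def p_permute(bits32):
    """Step 504: P-box diffusion as a gather over P_TABLE."""
    assert len(bits32) == 32
    return [bits32[i - 1] for i in P_TABLE]
```

Unlike the E table, P is a pure permutation: every input position appears exactly once, so the output is a reordering of the 32 input bits.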
It should be noted that, in this embodiment, the descriptions of the same steps and the same content as those in other embodiments may refer to the descriptions in other embodiments, and are not repeated here.
An embodiment of the present application provides a data encryption device, which may be applied to a data encryption method provided in the corresponding embodiment of fig. 1, referring to fig. 6, the data encryption device 6 includes:
a first processing module 601, configured to obtain, by using a neural network processor, a block cipher corresponding to the multimedia data, and recombine the block cipher to obtain a first sequence; the block cipher is obtained by extracting the characteristics of the multimedia data by a general processor and converting the binary format of the extraction result;
the first processing module 601 and the second processing module 602 are configured to perform multiple rounds of confusion and diffusion on the first sequence by using the general processor and the neural network processor, so as to obtain a second sequence;
the first processing module 601 is further configured to perform inverse combination on the second sequence through the neural network processor to obtain ciphertext data; wherein the recombination and the inverse combination are a pair of reciprocal operations.
In other embodiments of the present application, the first processing module 601 is further configured to divide the first sequence into an R0 data block and an L0 data block through the neural network processor;
the first processing module 601 and the second processing module 602 are further configured to perform the nth round of confusion and diffusion based on the R0 data block and the L0 data block; wherein n is greater than or equal to 1 and less than or equal to N, n is a positive integer, and N is the total number of rounds of confusion and diffusion;
the first processing module 601 is further configured to judge, through the neural network processor, the relationship between n and N, and obtain the second sequence according to the result obtained by the previous n rounds of confusion and diffusion if the relationship meets a preset condition.
In other embodiments of the present application, the first processing module 601 is further configured to expand the R0 data block through the neural network processor to obtain a first expanded data block, and perform an exclusive OR operation on the first expanded data block and the first subkey through the neural network processor to obtain a first operation result;
the second processing module 602 is further configured to perform confusion processing on the first operation result by using a general purpose processor;
the first processing module 601 is further configured to diffuse the result of the confusion processing through the neural network processor to obtain a second operation result; perform an exclusive OR operation on the second operation result and the L0 data block through the neural network processor to obtain an R1 data block; assign the R0 data block to the L1 data block through the neural network processor;
and update n to 2 through the neural network processor, returning to perform the judgment of the relationship between n and N.
In other embodiments of the present application, the first processing module 601 is further configured to: through god Obtaining R via a network processor n-1 Data block and L n-1 A data block; wherein R is n-1 Data block and L n-1 The data block is R 0 Data block and L 0 The data block is processed to obtain the data block; r is processed by a neural network processor n-1 Expanding the data block to obtain an nth expanded data block; performing exclusive OR operation on the nth expansion data block and the nth subkey through a neural network processor to obtain an nth operation result;
the second processing module 602 is further configured to obfuscate the nth operation result;
the first processing module 601 is further configured to: diffuse the result of the confusion processing through the neural network processor to obtain an (n+1)th operation result; perform an exclusive OR operation on the (n+1)th operation result and the Ln-1 data block through the neural network processor to obtain an Rn data block; assign the Rn-1 data block to the Ln data block through the neural network processor; and update n to n+1 through the neural network processor, returning to perform the judgment of the relationship between n and N.
In other embodiments of the present application, the first processing module 601 is further configured to:
combine the Rn data block and the Ln data block through the neural network processor to obtain the second sequence, wherein the result obtained by the previous n rounds of confusion and diffusion includes the Rn data block and the Ln data block.
In other embodiments of the present application, the first processing module 601 is further configured to: performing first selective replacement on the initial key through a neural network processor to obtain a replacement sequence; dividing the permutation sequence into C by a neural network processor 0 Data block and D 0 A data block;
the second processing module 602 is further configured to obtain, through the general purpose processor, the shift parameter corresponding to the nth round; shift the C0 data block according to the shift parameter through the general purpose processor to obtain a Cn data block, and shift the D0 data block according to the shift parameter to obtain a Dn data block;
the first processing module 601 also usesIn the process of C through the neural network processor n Data block and D n And combining the data blocks, and performing second selective replacement on the combined result to obtain an nth subkey.
In other embodiments of the present application, the first processing module 601 is further configured to perform initial IP permutation on the block cipher through the neural network processor to obtain the first sequence.
In other embodiments of the present application, the first processing module 601 is further configured to perform inverse IP permutation on the second sequence through the neural network processor to obtain the ciphertext data; wherein the inverse IP permutation and the IP permutation are a pair of reciprocal operations.
An embodiment of the present application provides an electronic device, which may be applied to the method provided in the embodiment corresponding to fig. 1, and referring to fig. 7, the electronic device 7 (the electronic device 7 in fig. 7 corresponds to the data encryption apparatus 6 in fig. 6) includes: a neural network processor 701, a general purpose processor 702, a memory 703, and a communication bus 704, wherein:
the communication bus 704 is used to implement communication connections between the neural network processor 701, the general purpose processor 702, and the memory 703.
The neural network processor 701 and the general-purpose processor 702 are configured to execute a data encryption program stored in the memory 703 to realize the steps of:
the neural network processor 701 obtains a block cipher corresponding to the multimedia data, and the block cipher is recombined to obtain a first sequence; the block cipher is obtained by extracting the characteristics of the multimedia data by a general processor and converting the binary format of the extraction result;
the neural network processor 701 and the general processor 702 perform multi-round confusion and diffusion on the first sequence to obtain a second sequence;
the neural network processor 701 performs inverse combination on the second sequence to obtain ciphertext data; wherein the recombination and the inverse combination are a pair of reciprocal operations.
In other embodiments of the present application, the neural network processor 701 and the general-purpose processor 702 are configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
the neural network processor 701 divides the first sequence into an R0 data block and an L0 data block;
the neural network processor 701 and the general purpose processor 702 perform the nth round of confusion and diffusion based on the R0 data block and the L0 data block; wherein n is greater than or equal to 1 and less than or equal to N, n is a positive integer, and N is the total number of rounds of confusion and diffusion;
the neural network processor 701 judges the relationship between n and N, and obtains the second sequence according to the result obtained by the previous n rounds of confusion and diffusion if the relationship satisfies the preset condition.
In other embodiments of the present application, the neural network processor 701 and the general-purpose processor 702 are configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
the neural network processor 701 expands the R0 data block to obtain a first expanded data block, and performs an exclusive OR operation on the first expanded data block and the first subkey to obtain a first operation result;
the general processor 702 performs confusion processing on the first operation result;
the neural network processor 701 diffuses the result of the confusion processing to obtain a second operation result;
the neural network processor 701 performs an exclusive OR operation on the second operation result and the L0 data block to obtain an R1 data block;
the neural network processor 701 assigns the R0 data block to the L1 data block;
the neural network processor 701 updates n to 2, and returns to perform the judgment of the relationship between n and N.
In other embodiments of the present application, the neural network processor 701 and the general-purpose processor 702 are configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
the neural network processor 701 obtains the Rn-1 data block and the Ln-1 data block; wherein the Rn-1 data block and the Ln-1 data block are obtained by processing the R0 data block and the L0 data block;
the neural network processor 701 expands the Rn-1 data block to obtain an nth expanded data block;
the neural network processor 701 performs an exclusive OR operation on the nth expanded data block and the nth subkey to obtain an nth operation result;
the general purpose processor 702 obfuscates the nth operation result;
the neural network processor 701 diffuses the result of the confusion processing to obtain an (n+1)th operation result;
the neural network processor 701 performs an exclusive OR operation on the (n+1)th operation result and the Ln-1 data block to obtain an Rn data block;
the neural network processor 701 assigns the Rn-1 data block to the Ln data block;
the neural network processor 701 updates n to n+1, and returns to perform the judgment of the relationship between n and N.
In other embodiments of the present application, the neural network processor 701 is configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
combining the Rn data block and the Ln data block to obtain the second sequence, wherein the result obtained by the previous n rounds of confusion and diffusion includes the Rn data block and the Ln data block.
In other embodiments of the present application, the neural network processor 701 and the general-purpose processor 702 are configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
the neural network processor 701 performs a first selective permutation on the initial key to obtain a permutation sequence;
the neural network processor 701 divides the permutation sequence into a C0 data block and a D0 data block;
the general purpose processor 702 obtains shift parameters corresponding to the nth round;
the general purpose processor 702 shifts the C0 data block according to the shift parameter to obtain a Cn data block, and shifts the D0 data block according to the shift parameter to obtain a Dn data block;
the neural network processor 701 combines the Cn data block and the Dn data block, and performs a second selective permutation on the combined result to obtain the nth subkey.
In other embodiments of the present application, the neural network processor 701 is configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
performing initial IP permutation on the block cipher to obtain the first sequence.
In other embodiments of the present application, the neural network processor 701 is configured to execute a data encryption program stored in the memory 703, so as to implement the following steps:
performing inverse IP permutation on the second sequence to obtain the ciphertext data; wherein the inverse IP permutation and the IP permutation are a pair of reciprocal operations.
By way of example, the general purpose processor may be an integrated circuit chip with signal processing capability, such as a central processing unit (CPU), which may be a microprocessor or any conventional processor, a digital signal processor (Digital Signal Processor, DSP), another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. As an example, the neural network processor may be an embedded neural network processing unit, which is characterized by a data-driven parallel computing architecture and excels at processing massive multimedia data such as video and images.
It should be noted that, in the specific implementation process of the steps executed by the processor in this embodiment, reference may be made to the implementation process in the method provided in the corresponding embodiment of fig. 1, which is not repeated herein.
According to the electronic device provided by the embodiment of the application, the block cipher corresponding to the multimedia data can be obtained through the neural network processor, and the block cipher is recombined to obtain the first sequence; the block cipher is obtained by extracting the features of the multimedia data by the general purpose processor and performing binary format conversion on the extraction result. The first sequence then undergoes multiple rounds of confusion and diffusion through the general purpose processor and the neural network processor to obtain the second sequence, and the second sequence is inversely combined through the neural network processor to obtain the ciphertext data, where the recombination and the inverse combination are a pair of reciprocal operations. In this way, feature extraction of the multimedia data yields the corresponding block cipher, and recombining the block cipher into the first sequence enables efficient extraction of the feature quantities of the multimedia data; the multiple rounds of confusion and diffusion of the first sequence allow parallel processing of big data, making full use of the NPU's hardware computing power and improving execution efficiency; and the inverse combination of the second sequence yields the ciphertext data, so that efficient encryption is realized in the data encryption process, the memory resources of the CPU are saved, and the operation pressure of the CPU is reduced.
Embodiments of the present application provide a computer readable storage medium storing one or more programs executable by one or more processors to implement a data encryption method according to the corresponding embodiment of fig. 1, which is not described herein.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present application, please refer to the description of the method embodiments of the present application.
The computer storage medium/Memory may be a Read Only Memory (ROM), a programmable Read Only Memory (Programmable Read-Only Memory, PROM), an erasable programmable Read Only Memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable Read Only Memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a magnetic random access Memory (Ferromagnetic Random Access Memory, FRAM), a Flash Memory (Flash Memory), a magnetic surface Memory, an optical disk, or a Read Only optical disk (Compact Disc Read-Only Memory, CD-ROM); but may also be various terminals such as mobile phones, computers, tablet devices, personal digital assistants, etc., that include one or any combination of the above-mentioned memories.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment of the present application" or "the foregoing embodiments" or "some implementations" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" or "an embodiment of the application" or "the foregoing embodiment" or "some embodiments" or "some implementations" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in various embodiments of the present application, the sequence numbers of the foregoing processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic thereof, and should not constitute any limitation on the implementation process of the embodiments of the present application. The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above described device embodiments are only illustrative, e.g. the division of units is only one logical function division, and there may be other divisions in actual implementation, such as: multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed. In addition, the various components shown or discussed may be coupled or directly coupled or communicatively coupled to each other via some interface, whether indirectly coupled or communicatively coupled to devices or units, whether electrically, mechanically, or otherwise.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
The methods disclosed in the method embodiments provided by the application can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the application can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the application can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the above-described integrated units of the present application may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solution of the embodiments of the present application may be essentially or part of what contributes to the related art may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
It should be noted that the drawings in the embodiments of the present application are only for illustrating schematic positions of respective devices on the terminal device, and do not represent actual positions in the terminal device, the actual positions of respective devices or respective areas may be changed or shifted according to actual situations (for example, structures of the terminal device), and proportions of different portions in the terminal device in the drawings do not represent actual proportions.
The foregoing is merely an embodiment of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method of encrypting data, the method comprising:
obtaining a block cipher corresponding to the multimedia data through a neural network processor, and recombining the block cipher to obtain a first sequence; the block cipher is obtained by extracting the characteristics of the multimedia data by a general processor and performing binary format conversion on an extraction result;
Performing multiple rounds of confusion and diffusion on the first sequence through the general processor and the neural network processor to obtain a second sequence;
inverse combination is carried out on the second sequence through a neural network processor to obtain ciphertext data; wherein the recombining and the inverse combining are a pair of reciprocal operations.
2. The method of claim 1, wherein the performing, by the general-purpose processor and the neural network processor, multiple rounds of confusion and diffusion on the first sequence to obtain the second sequence comprises:
dividing, by the neural network processor, the first sequence into an R0 data block and an L0 data block;
performing, by the general-purpose processor and the neural network processor, an n-th round of confusion and diffusion based on the R0 data block and the L0 data block; wherein n is a positive integer, 1 ≤ n ≤ N, and N is the total number of rounds of confusion and diffusion;
judging, by the neural network processor, the relation between n and N, and obtaining the second sequence according to the result of the previous n rounds of confusion and diffusion in a case that the relation satisfies a preset condition.
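The claim-2 control flow — split into halves, iterate while n ≤ N, then assemble the result — is the standard Feistel loop. A sketch under the assumption of a generic round function (the concrete confusion and diffusion of claims 3–4 is abstracted into `round_function`, and bit lists stand in for the data blocks):

```python
def feistel_rounds(first_sequence, round_function, N):
    """Split into L0/R0, run N rounds, return the second sequence."""
    half = len(first_sequence) // 2
    L, R = first_sequence[:half], first_sequence[half:]   # L0 and R0
    n = 1
    while n <= N:                     # judge the relation between n and N
        # R_n = f(R_{n-1}, n) XOR L_{n-1};  L_n = R_{n-1}
        L, R = R, [a ^ b for a, b in zip(round_function(R, n), L)]
        n += 1                        # update n and re-check the condition
    return L + R                      # combine the halves once n reaches N
```

With the trivial round function `lambda R, n: R`, two rounds of this loop simply swap and XOR the halves, which makes the structure easy to trace by hand.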
3. The method of claim 2, wherein, if the relation characterizes n=1 and n<N, the performing, by the general-purpose processor and the neural network processor, the n-th round of confusion and diffusion based on the R0 data block and the L0 data block comprises:
expanding, by the neural network processor, the R0 data block to obtain a first expanded data block;
performing, by the neural network processor, an exclusive-OR operation on the first expanded data block and a first subkey to obtain a first operation result;
performing, by the general-purpose processor, confusion processing on the first operation result, and diffusing, by the neural network processor, the result of the confusion processing to obtain a second operation result;
performing, by the neural network processor, an exclusive-OR operation on the second operation result and the L0 data block to obtain an R1 data block;
assigning, by the neural network processor, the R0 data block to an L1 data block;
updating n to 2 through the neural network processor, and returning to the step of judging the relation between n and N.
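A sketch of the claim-3 round body — expansion, XOR with the subkey, confusion, diffusion, XOR with the left half, then the assignment L1 = R0. The expansion table, S-box, and P-box below are toy illustrative values chosen only to make the data flow visible; a DES-style cipher would use its standard tables and widths:

```python
EXPAND = [3, 0, 1, 2, 1, 2, 3, 0]      # toy 4-bit -> 8-bit expansion
SBOX = {0b00: 0b10, 0b01: 0b11, 0b10: 0b00, 0b11: 0b01}  # toy 2-bit S-box
PBOX = [1, 0, 3, 2]                    # toy diffusion permutation

def round_function(R, subkey):
    expanded = [R[i] for i in EXPAND]                     # expansion
    mixed = [b ^ k for b, k in zip(expanded, subkey)]     # XOR with subkey
    confused = []                                         # confusion step
    for i in range(0, len(mixed), 2):
        v = SBOX[(mixed[i] << 1) | mixed[i + 1]]          # S-box lookup
        confused.extend([(v >> 1) & 1, v & 1])
    confused = confused[:len(PBOX)]                       # back to half width
    return [confused[i] for i in PBOX]                    # diffusion

def one_round(L, R, subkey):
    # R1 = f(R0, k1) XOR L0;  L1 = R0
    new_R = [a ^ b for a, b in zip(round_function(R, subkey), L)]
    return R, new_R
```

The returned pair `(R, new_R)` captures the claim's assignment step: the old right half becomes the new left half unchanged, while all the cryptographic work lands in the new right half.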
4. The method according to claim 2, wherein, if the relation characterizes 1<n<N, the performing, by the general-purpose processor and the neural network processor, the n-th round of confusion and diffusion based on the R0 data block and the L0 data block comprises:
obtaining, by the neural network processor, an R(n-1) data block and an L(n-1) data block; wherein the R(n-1) data block and the L(n-1) data block are obtained by processing the R0 data block and the L0 data block;
expanding, by the neural network processor, the R(n-1) data block to obtain an n-th expanded data block;
performing, by the neural network processor, an exclusive-OR operation on the n-th expanded data block and an n-th subkey to obtain an n-th operation result;
performing, by the general-purpose processor, confusion processing on the n-th operation result, and diffusing, by the neural network processor, the result of the confusion processing to obtain an (n+1)-th operation result;
performing, by the neural network processor, an exclusive-OR operation on the (n+1)-th operation result and the L(n-1) data block to obtain an Rn data block;
assigning, by the neural network processor, the R(n-1) data block to an Ln data block;
updating n to n+1 through the neural network processor, and returning to the step of judging the relation between n and N.
5. The method of claim 4, wherein the preset condition includes n=N, and the obtaining the second sequence according to the result of the previous n rounds of confusion and diffusion in the case that the relation satisfies the preset condition comprises:
combining, by the neural network processor, the Rn data block and the Ln data block to obtain the second sequence; wherein the result of the previous n rounds of confusion and diffusion comprises the Rn data block and the Ln data block.
6. The method of claim 2, wherein, before the n-th round of confusion and diffusion is performed based on the R0 data block and the L0 data block by the general-purpose processor and the neural network processor, the method further comprises:
performing, by the neural network processor, a first selective permutation on an initial key to obtain a permutation sequence;
dividing, by the neural network processor, the permutation sequence into a C0 data block and a D0 data block;
obtaining, by the general-purpose processor, a shift parameter corresponding to the n-th round;
shifting, by the general-purpose processor, the C0 data block according to the shift parameter to obtain a Cn data block, and shifting the D0 data block according to the shift parameter to obtain a Dn data block;
combining, by the neural network processor, the Cn data block and the Dn data block, and performing a second selective permutation on the combined result to obtain an n-th subkey.
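The claim-6 key schedule — first selective permutation, split into C0/D0, per-round left shifts, second selective permutation — can be sketched as below. The PC1/PC2 tables, the 8-bit key width, and the shift schedule are illustrative assumptions (a DES-style cipher would use the standard PC-1/PC-2 tables and its fixed shift schedule):

```python
PC1 = [7, 6, 5, 4, 3, 2, 1, 0]         # hypothetical first selective permutation
PC2 = [1, 3, 5, 7, 0, 2, 4, 6]         # hypothetical second selective permutation
SHIFTS = [1, 1, 2, 2]                  # hypothetical per-round shift parameters

def rotate_left(bits, k):
    # Cyclic left shift of a bit list by k positions.
    k %= len(bits)
    return bits[k:] + bits[:k]

def subkey(initial_key, n):
    permuted = [initial_key[i] for i in PC1]       # first selective permutation
    half = len(permuted) // 2
    C, D = permuted[:half], permuted[half:]        # C0 and D0
    shift = sum(SHIFTS[:n])                        # cumulative shift for round n
    Cn, Dn = rotate_left(C, shift), rotate_left(D, shift)
    combined = Cn + Dn                             # combine Cn and Dn
    return [combined[i] for i in PC2]              # second selective permutation
```

Deriving Cn and Dn from C0 and D0 with a cumulative shift, as here, matches the claim's wording that each round's halves are obtained by shifting the original C0/D0 by that round's shift parameter.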
7. The method according to any one of claims 1 to 6, wherein the recombining the block cipher by the neural network processor to obtain the first sequence comprises:
performing, by the neural network processor, an initial permutation (IP) on the block cipher to obtain the first sequence.
8. The method of claim 7, wherein the inverse combination of the second sequence by the neural network processor to obtain the ciphertext data comprises:
performing, by the neural network processor, an inverse IP permutation on the second sequence to obtain the ciphertext data; wherein the inverse IP permutation is the reciprocal operation of the IP permutation.
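Claims 7–8 hinge on the IP permutation and its inverse being reciprocal: applying IP and then inverse IP must return the original sequence unchanged, in either order. A toy check with a hypothetical 8-position table (not the patent's actual IP):

```python
IP = [5, 0, 7, 2, 6, 1, 3, 4]                       # illustrative IP table
IP_INV = [IP.index(i) for i in range(len(IP))]      # derive the inverse table

def permute(bits, table):
    return [bits[i] for i in table]

def ip(seq):
    # Initial permutation: recombine the block cipher into the first sequence.
    return permute(seq, IP)

def inverse_ip(seq):
    # Inverse IP permutation: the reciprocal operation of ip().
    return permute(seq, IP_INV)
```

Since `IP_INV` is computed directly as the positional inverse of `IP`, the reciprocity required by claim 8 holds by construction for any valid permutation table.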
9. A data encryption device, the device comprising:
a first processing module, configured to obtain, through a neural network processor, a block cipher corresponding to multimedia data, and to recombine the block cipher to obtain a first sequence; wherein the block cipher is obtained by extracting characteristics of the multimedia data by a general-purpose processor and performing binary format conversion on the extraction result;
the first processing module and a second processing module being configured to perform, through the general-purpose processor and the neural network processor, multiple rounds of confusion and diffusion on the first sequence to obtain a second sequence;
the first processing module being further configured to perform, through the neural network processor, inverse combination on the second sequence to obtain ciphertext data; wherein the recombination and the inverse combination are a pair of reciprocal operations.
10. An electronic device, comprising: a neural network processor, a general-purpose processor, a memory, and a communication bus;
wherein the communication bus is configured to realize communication connections among the neural network processor, the general-purpose processor, and the memory;
and the neural network processor and the general-purpose processor are configured to execute a data encryption program stored in the memory to implement the data encryption method according to any one of claims 1 to 8.
CN202211019512.3A 2022-08-24 2022-08-24 Data encryption method and device and electronic equipment Pending CN116961960A (en)

Publications (1)

Publication Number Publication Date
CN116961960A (en) 2023-10-27



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination