CN107358125B - Processor - Google Patents

Processor

Info

Publication number
CN107358125B
CN107358125B (application CN201710449025.3A)
Authority
CN
China
Prior art keywords
instruction
decoding
instructions
decoder
processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710449025.3A
Other languages
Chinese (zh)
Other versions
CN107358125A (en)
Inventor
刘大力
曹春春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DUOSI SCIENCE AND TECHNOLOGY I
Original Assignee
DUOSI SCIENCE AND TECHNOLOGY I
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DUOSI SCIENCE AND TECHNOLOGY I
Priority to CN201710449025.3A
Publication of CN107358125A
Application granted
Publication of CN107358125B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/71Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer to assure secure computing or processing of information
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30098Register arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Computer Security & Cryptography (AREA)
  • Executing Machine-Instructions (AREA)

Abstract

The invention discloses a processor comprising an instruction queue storage area, a configuration information storage area, a decoding control unit, an explicit decoder and an implicit decoder. The instruction queue storage area stores an instruction queue to be decoded; the configuration information storage area stores configuration information that includes decoding constraint information; and the decoding control unit distributes the instructions in the instruction queue to the explicit decoder or the implicit decoder for decoding according to the decoding constraint information in the configuration information storage area. By adding a dimension to the decoding work in the processor, the processor of the embodiments of the invention not only improves decoding efficiency but, more importantly, increases the complexity of the instruction decoding process, so that the decoding process is difficult to crack maliciously while the processor runs. This greatly improves decoding security within the processor and ensures that the processor runs safely and stably.

Description

Processor
Technical Field
The present invention relates to the field of processor technologies, and in particular, to a processor with a high security level.
Background
The importance of information security is increasing due to the rapid development of information technology. To secure information, the security of the processor must be ensured. The secure processor is a key technology in the field of information security.
Current 'secure processors' mainly achieve secure processing by running encryption algorithm software. However, such encryption algorithms have a relatively high probability of being cracked, running them significantly reduces the overall performance of the processor, and as the demand for data processing throughput keeps rising, relying on encryption algorithm software alone to guarantee processing security is no longer adequate.
Disclosure of Invention
The present invention provides a processor to at least partially solve the above problems.
The invention provides a processor comprising an instruction queue storage area, a configuration information storage area, a decoding control unit, an explicit decoder and an implicit decoder;
the instruction queue storage area is used for storing an instruction queue to be decoded;
the configuration information storage area is used for storing configuration information, and the configuration information comprises decoding constraint information;
the decoding control unit is used for distributing the instructions in the instruction queue to the explicit decoder or the implicit decoder for decoding according to the decoding constraint information in the configuration information storage area.
Optionally, the processor further comprises: a backup decoder;
the decoding control unit is also used for allocating the instructions which are not allocated to the explicit decoder or the implicit decoder in the instruction queue to the backup decoder for decoding according to the decoding constraint information in the configuration information storage area.
Optionally, the processor further comprises: an explicit instruction register, an implicit instruction register and a backup instruction register;
the decoding control unit is used for distributing the instructions in the instruction queue to an explicit instruction register, an implicit instruction register or a backup instruction register according to the decoding constraint information in the configuration information storage area;
the explicit decoder is used for decoding the instruction in the explicit instruction register;
the implicit decoder is used for decoding the instruction in the implicit instruction register;
the backup decoder is used for decoding the instructions in the backup instruction register.
Optionally, the decoding control unit is further configured to control the explicit decoder, the implicit decoder and the backup decoder to decode with respect to one another in parallel or serial timing according to the decoding constraint information in the configuration information storage area.
Optionally, the decoding control unit is further configured to, according to the decoding constraint information in the configuration information storage area, perform macro processing on multiple instructions in the explicit instruction register to obtain macro instructions, perform macro processing on multiple instructions in the implicit instruction register to obtain macro instructions, and/or perform macro processing on multiple instructions in the backup instruction register to obtain macro instructions;
the explicit decoder is used for decoding the macro instructions in the explicit instruction register;
the implicit decoder is used for decoding the macro instructions in the implicit instruction register;
the backup decoder is used for decoding the macro instructions in the backup instruction register.
Optionally, the macro-processing of the plurality of instructions in the explicit instruction register comprises: sequencing, assembling, replacing and/or delaying a plurality of instructions in the explicit instruction register;
the macro-processing of the plurality of instructions in the implicit instruction register includes: sequencing, assembling, replacing and/or delaying a plurality of instructions in the implicit instruction register;
the macro-processing of the plurality of instructions in the backup instruction register includes: ordering, assembling, replacing, and/or delaying a plurality of instructions in a backup instruction register.
Optionally, the processor further comprises: configuring an information input interface;
the configuration information input interface is used for receiving decoding constraint information input by a user;
the configuration information storage area is connected with the configuration information input interface and used for acquiring the decoding constraint information from the configuration information input interface and updating and storing the decoding constraint information.
Optionally, the instructions in the explicit instruction register include: instructions indicating a complete target in an algorithm or operation;
the instructions in the implicit instruction register include: instructions indicating macro, loop, or branch pre-processing operations in an algorithm or operation;
the instructions in the backup instruction register include: instructions indicating pre-operation or delayed operation of macro-operations, loop operations, or branch pre-processing operations in an algorithm or operation.
Optionally, the instructions in the explicit instruction register include: instructions obtained from a main memory or peripheral interface of a processor;
the instructions in the implicit instruction register include: instructions fetched from an internal instruction queue or control stack of the processor;
the instructions in the backup instruction register include: an instruction fetched from a macro process register or an arithmetic register of the processor.
Optionally, the processor further comprises: an encryption unit;
the encryption unit is used for encrypting the decoding constraint information in a preset encryption mode;
the configuration information storage area is used for storing the encrypted decoding constraint information.
As can be seen from the above description, the processor provided in the embodiments of the present invention differs from prior-art processors in that the instructions to be decoded are handled by an explicit decoder and an implicit decoder working together, rather than by a single decoder. Its operating principle is as follows: the decoding control unit in the processor distributes the instructions to be decoded in the instruction queue according to the decoding constraint information stored in the configuration information storage area, assigning each instruction either to the explicit decoder or to the implicit decoder, thereby realizing multi-dimensional decoding of the instructions to be decoded. By increasing the dimension of the decoding work, the processor not only improves decoding efficiency but also increases the complexity of the instruction decoding process, so that the decoding process is difficult to crack maliciously while the processor runs. This greatly improves decoding security within the processor and ensures that the processor runs safely and stably.
Drawings
FIG. 1 is a block diagram of a processor in accordance with one embodiment of the invention;
FIG. 2 is a block diagram of a processor according to a second embodiment of the present invention;
FIG. 3 is a block diagram of a processor according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a block diagram of a processor according to a first embodiment of the invention. As shown in fig. 1, a processor 100 according to a first embodiment of the present invention includes: an instruction queue storage area 110, a configuration information storage area 120, a decode control unit 130, and a multi-dimensional decoder 140. Among them, the multidimensional decoder 140 includes: an explicit decoder 141 and an implicit decoder 142.
The instruction queue storage area 110 is used to store an instruction queue to be decoded.
The configuration information storage area 120 is used for storing configuration information, and the configuration information includes decoding constraint information.
The decode control unit 130 is configured to assign the instruction in the instruction queue stored in the instruction queue storage area 110 to the explicit decoder 141 or the implicit decoder 142 for decoding according to the decode constraint information in the configuration information storage area 120.
It can be seen that, unlike the prior art, the processor shown in FIG. 1 decodes its instructions with an explicit decoder and an implicit decoder working together, rather than with a single decoder. Its operating principle is as follows: the decoding control unit distributes the instructions to be decoded in the instruction queue according to the decoding constraint information stored in the configuration information storage area, assigning each instruction either to the explicit decoder or to the implicit decoder, thereby realizing multi-dimensional decoding of the instructions to be decoded. By increasing the dimension of the decoding work, the processor of this embodiment not only improves decoding efficiency but also increases the complexity of the instruction decoding process, so that the decoding process is difficult to crack maliciously while the processor runs. This greatly improves decoding security and ensures that the processor runs safely and stably.
In an embodiment of the present invention, the decoding constraint information stored in the configuration information storage area 120 indicates the decoding attribution of each instruction to be decoded in the instruction queue, and the decoding control unit 130 allocates each instruction in the instruction queue to the corresponding decoder according to this information. Specifically, when the decoding control unit 130 performs decode allocation for the instructions to be decoded, the configuration information storage area 120 writes the corresponding decoding constraint information into a constraint table; the decoding control unit 130 reads the decoding constraint information from the constraint table and allocates each instruction to be decoded to the corresponding decoder (the explicit decoder 141 or the implicit decoder 142) according to the information it has read.
For example, suppose the decoding constraint information stored in the configuration information storage area 120 indicates that instructions meeting a first preset condition are suited to the explicit decoder and instructions meeting a second preset condition are suited to the implicit decoder. After the processor 100 starts to operate, the instructions to be decoded are placed in sequence into the instruction queue stored in the instruction queue storage area 110. The decoding control unit 130 fetches instructions from the queue and prepares to allocate them; the configuration information storage area 120 writes its decoding constraint information into the constraint table; the decoding control unit 130 reads the constraint table, judges whether each instruction meets the first or the second preset condition, allocates instructions meeting the first preset condition to the explicit decoder 141 for decoding, and allocates instructions meeting the second preset condition to the implicit decoder 142 for decoding. A two-dimensional decoding mode is thus realized, which improves decoding efficiency and protects decoding security.
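As an illustration only, the following Python sketch models the two-way allocation just described; the constraint-table keys first_preset_condition/second_preset_condition and the Decoder stub are assumptions made for the sketch, not the patented circuit.

```python
from collections import deque

class Decoder:
    """Stand-in decoder that just labels the instruction it receives."""
    def __init__(self, name):
        self.name = name

    def decode(self, insn):
        return f"{self.name} control signals for {insn:#x}"

def dispatch(instruction_queue, constraint_table, explicit_dec, implicit_dec):
    """Route each queued instruction according to the constraint-table predicates."""
    results = []
    while instruction_queue:
        insn = instruction_queue.popleft()
        if constraint_table["first_preset_condition"](insn):
            results.append(explicit_dec.decode(insn))
        elif constraint_table["second_preset_condition"](insn):
            results.append(implicit_dec.decode(insn))
    return results

# Usage: even opcodes go to the explicit decoder, odd ones to the implicit decoder.
queue = deque([0x10, 0x11, 0x12])
table = {"first_preset_condition": lambda i: i % 2 == 0,
         "second_preset_condition": lambda i: i % 2 == 1}
print(dispatch(queue, table, Decoder("explicit"), Decoder("implicit")))
```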
Further, in an embodiment of the present invention, the decoding constraint information stored in the configuration information storage area 120 further indicates decoding timing relationship between different decoders (explicit decoder 141 and implicit decoder 142), and the decoding control unit 130 is further configured to control decoding between the explicit decoder 141 and the implicit decoder 142 in parallel or serial timing according to the decoding constraint information in the configuration information storage area 120 after the instruction in the instruction queue is allocated to the explicit decoder 141 or implicit decoder 142.
Following the above example, consider two cases. Case 1: if there is no dependency between the execution of the instructions meeting the first preset condition and that of the instructions meeting the second preset condition, the two groups can be decoded simultaneously, and the decoding constraint information stored in the configuration information storage area 120 further indicates that the explicit decoder 141 and the implicit decoder 142 decode in parallel timing. Accordingly, after allocating the instructions meeting the first preset condition to the explicit decoder 141 and the instructions meeting the second preset condition to the implicit decoder 142, the decoding control unit 130 controls the two decoders to decode in parallel, that is, at the same time, which effectively speeds up decoding. Case 2: if an instruction meeting the first preset condition is associated with the execution of an instruction meeting the second preset condition, so that the latter may only be decoded after the former has been decoded, the decoding constraint information further indicates that the explicit decoder 141 and the implicit decoder 142 decode in serial timing. Accordingly, after performing the same allocation, the decoding control unit 130 controls the explicit decoder 141 to decode first and the implicit decoder 142 to decode afterwards, which satisfies the dependency between the instructions and ensures that decoding proceeds normally.
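A minimal sketch of the timing control in the two cases above, modelling "parallel" decoding with threads and "serial" decoding as ordered calls; the timing argument and the stand-in jobs are assumptions, not a description of the hardware.

```python
import threading

def run_decoders(timing, explicit_job, implicit_job):
    """Run the two decode jobs in parallel or in serial order per the constraint."""
    if timing == "parallel":      # case 1: no dependency between the two instruction groups
        t1 = threading.Thread(target=explicit_job)
        t2 = threading.Thread(target=implicit_job)
        t1.start(); t2.start()
        t1.join(); t2.join()
    else:                         # case 2: implicit decoding only after explicit decoding
        explicit_job()
        implicit_job()

# Usage with trivial stand-in jobs:
run_decoders("parallel", lambda: print("explicit decode"), lambda: print("implicit decode"))
run_decoders("serial",   lambda: print("explicit decode"), lambda: print("implicit decode"))
```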
Fig. 2 is a block diagram of a processor according to a second embodiment of the present invention. As shown in fig. 2, the processor 100 according to the second embodiment of the present invention includes: an instruction queue storage area 110, a configuration information storage area 120, a decode control unit 130, a multidimensional decoder 140, an explicit instruction register 150, an implicit instruction register 160, a backup instruction register 170, a configuration information input interface 180, and an encryption unit 190. Among them, the multidimensional decoder 140 includes: an explicit decoder 141, an implicit decoder 142, and a backup decoder 143.
The functions of the instruction queue storage area 110, the configuration information storage area 120, the decoding control unit 130, the explicit decoder 141, and the implicit decoder 142 have been described above, and repeated parts are not described herein again.
In this embodiment, the instruction queue storage area 110 is a memory with a queue structure that stores the instruction queue; after the processor 100 starts to operate, the instructions to be decoded are placed into this queue in sequence. The decoding control unit 130 is further configured to allocate, according to the decoding constraint information in the configuration information storage area 120, the instructions in the instruction queue that are not allocated to the explicit decoder 141 or the implicit decoder 142 to the backup decoder 143 for decoding. In other words, the backup decoder 143 serves as a supplementary decoding component for the explicit decoder 141 and the implicit decoder 142, so that every instruction to be decoded in the instruction queue is successfully decoded and then executed, thereby ensuring the normal operation of the processor 100.
Specifically, as shown in FIG. 2, processor 100 also includes an explicit instruction register 150, an implicit instruction register 160, and a backup instruction register 170; the decoding constraint information in the configuration information storage area 120 may indicate decoding affiliation of each instruction to be decoded in the instruction queue, and the decoding control unit 130 is configured to allocate the instruction in the instruction queue to the explicit instruction register 150, the implicit instruction register 160, or the backup instruction register 170 according to the decoding constraint information in the configuration information storage area 120; the explicit decoder 141 is used for decoding the instruction in the explicit instruction register 150; the implicit decoder 142 is used for decoding the instruction in the implicit instruction register 160; backup decoder 143 is used to decode instructions in backup instruction register 170.
For example, suppose the decoding constraint information stored in the configuration information storage area 120 indicates that instructions meeting the first preset condition are suited to the explicit decoder, instructions meeting the second preset condition are suited to the implicit decoder, and instructions meeting neither condition are suited to the backup decoder. After the processor 100 starts to operate, the instructions to be decoded are placed in sequence into the instruction queue stored in the instruction queue storage area 110. The decoding control unit 130 fetches instructions from the queue and prepares to allocate them; the configuration information storage area 120 writes its decoding constraint information into the constraint table; the decoding control unit 130 reads the constraint table, judges whether each instruction meets the first or the second preset condition, allocates instructions meeting the first preset condition to the explicit instruction register 150, allocates instructions meeting the second preset condition to the implicit instruction register 160, and allocates instructions meeting neither condition to the backup instruction register 170. The explicit decoder 141 then decodes the instructions in the explicit instruction register 150 (those meeting the first preset condition), the implicit decoder 142 decodes the instructions in the implicit instruction register 160 (those meeting the second preset condition), and the backup decoder 143 decodes the instructions in the backup instruction register 170 (those meeting neither condition). A three-dimensional decoding mode during the operation of the processor 100 is thus realized, which improves decoding efficiency and protects decoding security.
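The three-way allocation with the fall-through path to the backup instruction register could be sketched as below; the predicate names and the opcode ranges in the usage example are invented for illustration.

```python
def allocate(instruction_queue, constraints, explicit_reg, implicit_reg, backup_reg):
    """Every instruction lands in exactly one register, so nothing is left undecoded."""
    for insn in instruction_queue:
        if constraints["first_preset_condition"](insn):
            explicit_reg.append(insn)
        elif constraints["second_preset_condition"](insn):
            implicit_reg.append(insn)
        else:
            backup_reg.append(insn)     # fall-through path handled by the backup decoder

# Usage
explicit_reg, implicit_reg, backup_reg = [], [], []
constraints = {"first_preset_condition": lambda i: i < 0x20,
               "second_preset_condition": lambda i: 0x20 <= i < 0x40}
allocate([0x05, 0x25, 0x55], constraints, explicit_reg, implicit_reg, backup_reg)
print(explicit_reg, implicit_reg, backup_reg)   # [5] [37] [85]
```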
Further, in an embodiment of the present invention, the decoding constraint information stored in the configuration information storage area 120 further indicates decoding timing relationships among different decoders (the explicit decoder 141, the implicit decoder 142, and the backup decoder 143), and the decoding control unit 130 is further configured to control each of the explicit decoder 141, the implicit decoder 142, and the backup decoder 143 to decode at a parallel or serial timing therebetween according to the decoding constraint information in the configuration information storage area 120.
Following the above example, suppose there is no dependency between the instructions meeting the first preset condition and those meeting the second preset condition, so the two groups can be decoded simultaneously, while the instructions meeting neither condition depend on the instructions meeting the second preset condition and may only be decoded after the latter have been decoded. The decoding constraint information stored in the configuration information storage area 120 then further indicates that the explicit decoder 141 and the implicit decoder 142 decode in parallel timing, while the implicit decoder 142 and the backup decoder 143 decode in serial timing, the backup decoder 143 starting only after the implicit decoder 142 has finished. Accordingly, after the decoding control unit 130 allocates the instructions meeting the first preset condition to the explicit instruction register 150, the instructions meeting the second preset condition to the implicit instruction register 160, and the remaining instructions to the backup instruction register 170 according to the decoding constraint information, it controls the decoding timing among the three decoders so that the explicit decoder 141 decodes the instructions in the explicit instruction register 150 in parallel with the implicit decoder 142 decoding the instructions in the implicit instruction register 160, and the backup decoder 143 begins decoding the instructions in the backup instruction register 170 only after the implicit decoder 142 has finished.
In a specific embodiment, the decoding constraint information stored in the configuration information storage area 120 configures the decoding constraints according to the meaning of the instructions to be decoded. For example, the constraints indicate that instructions describing a complete target in an algorithm or operation are suited to the explicit decoder, instructions describing a macro operation, loop operation or branch preprocessing operation in the algorithm or operation are suited to the implicit decoder, and instructions describing a pre-operation or delayed operation on such macro, loop or branch preprocessing operations are suited to the backup decoder. Following these constraints, the decoding control unit 130 allocates the instructions in the instruction queue that indicate a complete target in an algorithm or operation to the explicit instruction register 150, the instructions that indicate a macro operation, loop operation or branch preprocessing operation to the implicit instruction register 160, and the instructions that indicate a pre-operation or delayed operation on those operations to the backup instruction register 170. The explicit decoder 141 then decodes the instructions in the explicit instruction register 150, the implicit decoder 142 decodes the instructions in the implicit instruction register 160, and the backup decoder 143 decodes the instructions in the backup instruction register 170, producing the control signals output by the three decoders (the explicit decoder 141, the implicit decoder 142 and the backup decoder 143); the other components of the processor 100 then perform the corresponding operations according to these control signals.
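A hedged sketch of routing by instruction meaning; the kind attribute and the category strings are placeholders chosen for the sketch, since the patent does not specify an encoding.

```python
from collections import namedtuple

Insn = namedtuple("Insn", ["name", "kind"])   # "kind" is an invented attribute

COMPLETE_TARGET   = {"complete_target"}
MACRO_LOOP_BRANCH = {"macro_op", "loop_op", "branch_preprocess"}
PRE_OR_DELAYED    = {"pre_operation", "delayed_operation"}

def route_by_meaning(insn, explicit_reg, implicit_reg, backup_reg):
    if insn.kind in COMPLETE_TARGET:
        explicit_reg.append(insn)      # decoded by the explicit decoder
    elif insn.kind in MACRO_LOOP_BRANCH:
        implicit_reg.append(insn)      # decoded by the implicit decoder
    elif insn.kind in PRE_OR_DELAYED:
        backup_reg.append(insn)        # decoded by the backup decoder

# Usage
regs = ([], [], [])
route_by_meaning(Insn("target_op", "complete_target"), *regs)
route_by_meaning(Insn("loop_body", "loop_op"), *regs)
print(regs)
```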
In another specific embodiment, the decoding constraint information stored in the configuration information storage area 120 configures the decoding constraints according to the source of the instructions to be decoded. For example, the constraints indicate that instructions obtained from the main memory or a peripheral interface of the processor are suited to the explicit decoder, instructions obtained from the internal instruction queue or control stack of the processor are suited to the implicit decoder, and instructions obtained from a macro processing register or operation register of the processor are suited to the backup decoder. Each instruction in the instruction queue carries a source identifier. The decoding control unit 130 fetches an instruction to be decoded from the instruction queue, determines its source from the source identifier, and then, following the decoding constraint information, allocates instructions obtained from the main memory or peripheral interface of the processor 100 to the explicit instruction register 150, instructions obtained from the internal instruction queue or control stack of the processor 100 to the implicit instruction register 160, and instructions obtained from a macro processing register or operation register of the processor 100 to the backup instruction register 170. The explicit decoder 141, the implicit decoder 142 and the backup decoder 143 then decode the instructions in their respective registers, producing the control signals output by the three decoders; the other components of the processor 100 then perform the corresponding operations according to these control signals.
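Similarly, routing by source identifier might look like the following sketch; the identifier strings and the Insn tuple are assumptions, not the processor's actual encoding.

```python
from collections import namedtuple

Insn = namedtuple("Insn", ["name", "source_id"])   # source_id is the carried source identifier

SOURCE_TO_REGISTER = {
    "main_memory": "explicit",            "peripheral_interface": "explicit",
    "internal_instruction_queue": "implicit", "control_stack": "implicit",
    "macro_processing_register": "backup",    "operation_register": "backup",
}

def route_by_source(insn, registers):
    # registers maps "explicit" / "implicit" / "backup" to the instruction registers
    registers[SOURCE_TO_REGISTER[insn.source_id]].append(insn)

# Usage
regs = {"explicit": [], "implicit": [], "backup": []}
route_by_source(Insn("ld", "main_memory"), regs)
route_by_source(Insn("br", "control_stack"), regs)
print(regs)
```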
In an embodiment of the present invention, besides allocating the instructions to be decoded in the instruction queue to different registers according to the decoding constraints in the configuration information storage area 120 so that different decoders decode them separately, the decoding control unit 130 is further configured to perform macro processing, according to the decoding constraint information, on multiple instructions in the explicit instruction register 150 to obtain a macro instruction, on multiple instructions in the implicit instruction register 160 to obtain a macro instruction, and/or on multiple instructions in the backup instruction register 170 to obtain a macro instruction. The explicit decoder 141 decodes the macro instructions in the explicit instruction register 150, the implicit decoder 142 decodes the macro instructions in the implicit instruction register 160, and the backup decoder 143 decodes the macro instructions in the backup instruction register 170. The macro processing performed by the decoding control unit 130 on the instructions in each of these registers consists of ordering, assembling, replacing and/or delaying those instructions so that the instructions that can be merged are dynamically merged into a macro instruction.
For example, according to the decoding constraint information in the configuration information storage area 120, the decoding control unit 130 allocates the instructions in the instruction queue that meet the first preset condition to the explicit instruction register 150, those that meet the second preset condition to the implicit instruction register 160, and those that meet a third preset condition to the backup instruction register 170; the decoding constraint information further indicates that the instructions meeting the first preset condition can be merged and that the instructions meeting the second preset condition can be merged. The decoding control unit 130 therefore performs macro processing on the instructions in the explicit instruction register 150 to obtain a macro instruction, which remains in the explicit instruction register 150, and on the instructions in the implicit instruction register 160 to obtain a macro instruction, which remains in the implicit instruction register 160. During decoding, the explicit decoder 141 decodes the macro instruction in the explicit instruction register 150, the implicit decoder 142 decodes the macro instruction in the implicit instruction register 160, and the backup decoder 143 decodes the instructions in the backup instruction register 170. It can be seen that, before decoding, the decoding control unit merges several mergeable instructions into one macro instruction through macro processing, converting the decoding of several instructions into the decoding of one macro instruction; this greatly simplifies the decoding process, improves decoding efficiency and the operating efficiency of the processor, and meets user requirements.
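A rough sketch of the macro-processing step: the mergeable instructions of one register are folded into a single macro instruction before decoding. The can_merge predicate and the MacroInstruction wrapper are assumptions made for the sketch, not the patented mechanism.

```python
from dataclasses import dataclass, field

@dataclass
class MacroInstruction:
    parts: list = field(default_factory=list)   # the original instructions folded in

def macro_process(register, can_merge):
    """Fold the mergeable instructions of one register into a single macro instruction."""
    mergeable, kept = [], []
    for insn in register:                        # ordering / assembling pass
        (mergeable if can_merge(insn) else kept).append(insn)
    if mergeable:
        kept.append(MacroInstruction(mergeable)) # several instructions -> one macro instruction
    return kept

# Usage: merge all "add"-type instructions held in a register
print(macro_process(["add1", "mul1", "add2"], lambda i: i.startswith("add")))
```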
In an embodiment of the present invention, the configuration information stored in the configuration information storage area 120 includes static configuration information and dynamic configuration information. The static configuration information sets the initial decoding ranges and the basic macro processing mechanism after the processor 100 is powered on: the processor 100 reads it at power-on and initializes itself according to its content, and it contains the decoding constraint information preset in the processor 100. The dynamic configuration information allows the decoding constraint information to be modified while the processor 100 is running, further refining and constraining the decoding logic; it contains user-defined decoding constraint information entered while the processor 100 is running. As shown in fig. 2, the processor 100 also includes a configuration information input interface 180. The configuration information input interface 180 receives user-defined decoding constraint information entered by a user; the configuration information storage area 120 is connected to the configuration information input interface 180, obtains the decoding constraint information from it, and updates and stores it. For example, the processor 100 presents a user interaction interface through which a user enters configuration information containing user-defined decoding constraint information; the configuration information input interface 180 receives it and the configuration information storage area 120 obtains and stores it. It should be noted that the decoding constraint information in the static configuration information and that in the dynamic configuration information constrain the decoding process together: before decoding, the configuration information storage area 120 writes both into the constraint table, and the decoding control unit 130 reads the constraint table and performs decode allocation and related operations on the instructions to be decoded according to the information it has read. If the static configuration information is reset, the new setting takes effect only after the processor is powered on again, whereas modified dynamic configuration information takes effect immediately.
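The interplay between static and dynamic configuration information could be sketched as follows, assuming a simple dictionary layout in which dynamic entries refine or override static ones when the constraint table is written; this layout is an illustration, not the patent's data format.

```python
class ConfigurationStore:
    """Static entries load at power-on; dynamic entries take effect immediately."""
    def __init__(self, static_config):
        self.static_config = dict(static_config)   # re-read only after a power cycle
        self.dynamic_config = {}                   # updated through the input interface

    def update_dynamic(self, user_constraints):
        self.dynamic_config.update(user_constraints)

    def write_constraint_table(self):
        table = dict(self.static_config)
        table.update(self.dynamic_config)          # dynamic entries refine/override static ones
        return table

# Usage
store = ConfigurationStore({"first_preset_condition": "opcode < 0x20"})
store.update_dynamic({"second_preset_condition": "opcode >= 0x20"})
print(store.write_constraint_table())
```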
In one embodiment of the invention, as shown in FIG. 2, the processor 100 further includes an encryption unit 190; the encryption unit 190 is configured to encrypt the decoding constraint information in a predetermined encryption manner, and the configuration information storage area 120 stores the encrypted decoding constraint information. Because the decoding constraint information is stored encrypted in the configuration information storage area 120, it is not easy to crack maliciously, which makes the decoding process of the running processor 100 difficult to crack and effectively protects decoding security during the operation of the processor 100.
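Since the patent leaves the "preset encryption mode" unspecified, the sketch below uses a simple XOR stream purely as a placeholder to show that the configuration information storage area would hold only ciphertext; the key and constraint string are invented for the example.

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Symmetric stream-style transform; applying it twice restores the data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

constraints = b"first_preset_condition: opcode < 0x20"
key = b"\x5a\xc3\x91"
stored = xor_encrypt(constraints, key)        # what the storage area would hold
assert xor_encrypt(stored, key) == constraints
```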
Fig. 3 is a block diagram of a processor according to a third embodiment of the present invention. As shown in fig. 3, the processor 100 in the third embodiment of the present invention includes: an instruction queue storage area 110, a configuration information storage area 120, a decoding control unit 130, a multidimensional decoder 140, an explicit instruction register 150, an implicit instruction register 160, a backup instruction register 170, and a reassembly control unit 200. Among them, the multidimensional decoder 140 includes: an explicit decoder 141, an implicit decoder 142, and a backup decoder 143.
The decoding process of the multi-dimensional decoder 140 has been described in detail above and is not repeated here. The instruction queue storage area 110 may be part of a non-volatile storage area in the processor 100, and the explicit instruction register 150, the implicit instruction register 160 and the backup instruction register 170 may be three portions of a random queue memory of the processor 100. In addition to the decoding constraint information, the configuration information in the configuration information storage area 120 includes reassembly rules. The reassembly control unit 200 controls the connection relationships between the logic devices in the processor 100: it receives the three decoding results of the multi-dimensional decoder 140 and, according to the reassembly rules stored in the configuration information storage area 120, selects the corresponding logic devices to form a reassembly circuit for executing those decoding results, thereby implementing the execution of the three decoding results. The logic devices in the processor 100 are basic logic units such as AND gates, OR gates, NOT gates, NAND gates and NOR gates, which implement basic and complex logic operations. Preferably, the basic logic devices in the processor 100 may also include flip-flops, adders, shift registers, multipliers and other higher-level logic devices.
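A toy model of the reassembly idea: a reassembly rule selects which basic logic functions are chained together to act on the decode results. The gate table and the rule format are illustrative only, not the patented reassembly circuit.

```python
LOGIC_DEVICES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "XOR":  lambda a, b: a ^ b,
    "NAND": lambda a, b: 1 - (a & b),
}

def build_reassembly_circuit(reassembly_rule):
    """Chain the gates named by the rule into one combinational function of (a, b)."""
    gates = [LOGIC_DEVICES[name] for name in reassembly_rule]
    def circuit(a, b):
        value = a
        for gate in gates:
            value = gate(value, b)   # feed each selected gate in turn
        return value
    return circuit

# Usage: a rule selecting XOR followed by NAND
circuit = build_reassembly_circuit(["XOR", "NAND"])
print(circuit(1, 0))   # XOR(1,0)=1, NAND(1,0)=1
```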
In summary, the processor provided by the present invention differs from prior-art processors in that the instructions to be decoded are handled by an explicit decoder and an implicit decoder working together, rather than by a single decoder. Its operating principle is as follows: the decoding control unit in the processor distributes the instructions to be decoded in the instruction queue according to the decoding constraint information stored in the configuration information storage area, assigning each instruction either to the explicit decoder or to the implicit decoder, thereby realizing multi-dimensional decoding of the instructions to be decoded. By increasing the dimension of the decoding work, the processor not only improves decoding efficiency but also increases the complexity of the instruction decoding process, so that the decoding process is difficult to crack maliciously while the processor runs. This greatly improves decoding security within the processor and ensures that the processor runs safely and stably.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (6)

1. A processor, comprising: the device comprises an instruction queue storage area, a configuration information storage area, a decoding control unit, an explicit decoder, an implicit decoder and a backup decoder;
the instruction queue storage area is used for storing an instruction queue to be decoded;
the configuration information storage area is used for storing configuration information, and the configuration information comprises decoding constraint information;
the decoding control unit is used for distributing the instructions in the instruction queue to the explicit decoder or the implicit decoder for decoding according to the decoding constraint information in the configuration information storage area;
the decoding control unit is further used for allocating the instruction which is not allocated to the explicit decoder or the implicit decoder in the instruction queue to the backup decoder for decoding according to the decoding constraint information in the configuration information storage area;
the processor further comprises: an explicit instruction register, an implicit instruction register and a backup instruction register;
the decoding control unit is used for distributing the instructions in the instruction queue to the explicit instruction register, the implicit instruction register or the backup instruction register according to the decoding constraint information in the configuration information storage area;
the explicit decoder is used for decoding the instruction in the explicit instruction register;
the implicit decoder is used for decoding the instruction in the implicit instruction register;
the backup decoder is used for decoding the instructions in the backup instruction register;
the decoding control unit is further configured to perform macro processing on the multiple instructions in the explicit instruction register to obtain macro instructions, perform macro processing on the multiple instructions in the implicit instruction register to obtain macro instructions, and/or perform macro processing on the multiple instructions in the backup instruction register to obtain macro instructions according to decoding constraint information in the configuration information storage area;
the explicit decoder is to decode macro instructions in the explicit instruction register;
the implicit decoder is used for decoding macro instructions in the implicit instruction register;
the backup decoder is used for decoding the macroinstruction in the backup instruction register;
the decoding control unit allocates an instruction indicating an algorithm or a complete target in operation in the instructions to be decoded in the instruction queue to an explicit instruction register according to the instruction of the decoding constraint information stored in the configuration information storage area, allocates an instruction indicating a macro operation, a loop operation or a branch preprocessing operation in the algorithm or operation to an implicit instruction register, and allocates an instruction indicating a pre-operation or a delay operation on the macro operation, the loop operation or the branch preprocessing operation in the algorithm or operation to a backup instruction register; the explicit decoder decodes an instruction indicating an algorithm or a complete target in operation in an explicit instruction register, the implicit decoder decodes an instruction indicating the algorithm or macro operation, cycle operation or branch preprocessing operation in the implicit instruction register, the backup decoder decodes an instruction indicating the macro operation, cycle operation or branch preprocessing operation in algorithm or operation in advance or delayed operation in a backup instruction register to obtain control signals output after the explicit decoder, the implicit decoder and the backup decoder are decoded respectively, and the processor executes corresponding operation according to the control signals output by decoding;
and the macro-processing of the plurality of instructions in the explicit instruction register includes: sequencing, assembling, replacing and/or delaying the plurality of instructions so that the plurality of instructions which can be combined are dynamically combined into a macro instruction;
macro-processing of the plurality of instructions in the implicit instruction register includes: sequencing, assembling, replacing and/or delaying the plurality of instructions so that the plurality of instructions which can be combined are dynamically combined into a macro instruction;
macro-processing the plurality of instructions in the backup instruction register includes: sequencing, assembling, replacing and/or delaying the plurality of instructions so that the instructions which can be combined are dynamically combined into a macro instruction.
2. The processor of claim 1,
the decoding control unit is further used for controlling the explicit decoder, the implicit decoder and the backup decoder to decode in parallel or serial time sequence according to the decoding constraint information in the configuration information storage area.
3. The processor of claim 1, wherein the processor further comprises: configuring an information input interface;
the configuration information input interface is used for receiving decoding constraint information input by a user;
the configuration information storage area is connected with the configuration information input interface and used for acquiring decoding constraint information from the configuration information input interface and updating and storing the decoding constraint information.
4. The processor of claim 1,
instructions in the explicit instruction register include: instructions indicating a complete target in an algorithm or operation;
instructions in the implicit instruction register include: instructions indicating macro, loop, or branch pre-processing operations in an algorithm or operation;
the instructions in the backup instruction register include: instructions indicating pre-operation or delayed operation of macro-operations, loop operations, or branch pre-processing operations in an algorithm or operation.
5. The processor of claim 1,
instructions in the explicit instruction register include: instructions retrieved from a main memory or peripheral interface of the processor;
instructions in the implicit instruction register include: instructions retrieved from an internal instruction queue or control stack of the processor;
the instructions in the backup instruction register include: instructions fetched from a macro process register or an arithmetic register of the processor.
6. The processor of claim 1, wherein the processor further comprises: an encryption unit;
the encryption unit is used for encrypting the decoding constraint information in a preset encryption mode;
the configuration information storage area is used for storing encrypted decoding constraint information.
CN201710449025.3A 2017-06-14 2017-06-14 Processor Active CN107358125B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710449025.3A CN107358125B (en) 2017-06-14 2017-06-14 Processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710449025.3A CN107358125B (en) 2017-06-14 2017-06-14 Processor

Publications (2)

Publication Number Publication Date
CN107358125A CN107358125A (en) 2017-11-17
CN107358125B true CN107358125B (en) 2020-12-08

Family

ID=60273866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710449025.3A Active CN107358125B (en) 2017-06-14 2017-06-14 Processor

Country Status (1)

Country Link
CN (1) CN107358125B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112631724A (en) * 2020-12-24 2021-04-09 北京握奇数据股份有限公司 Byte code instruction set simplifying method and system
CN115525343B (en) * 2022-10-31 2023-07-25 海光信息技术股份有限公司 Parallel decoding method, processor, chip and electronic equipment
CN115525344B (en) * 2022-10-31 2023-06-27 海光信息技术股份有限公司 Decoding method, processor, chip and electronic equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1431588A (en) * 2002-01-08 2003-07-23 北京南思达科技发展有限公司 Logic reorganizable circuit
CN101996155A (en) * 2009-08-10 2011-03-30 北京多思科技发展有限公司 Processor supporting a plurality of command systems
CN107340994A (en) * 2017-06-14 2017-11-10 北京天宏绎网络技术有限公司 A kind of processor

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8904151B2 (en) * 2006-05-02 2014-12-02 International Business Machines Corporation Method and apparatus for the dynamic identification and merging of instructions for execution on a wide datapath
CN101996154B (en) * 2009-08-10 2012-09-26 北京多思科技发展有限公司 General processor supporting reconfigurable safety design
CN102436781B (en) * 2011-11-04 2014-02-12 杭州中天微***有限公司 Microprocessor order split device based on implicit relevance and implicit bypass
US9424045B2 (en) * 2013-01-29 2016-08-23 Arm Limited Data processing apparatus and method for controlling use of an issue queue to represent an instruction suitable for execution by a wide operand execution unit
EP3087470B1 (en) * 2013-12-28 2020-03-25 Intel Corporation Rsa algorithm acceleration processors, methods, systems, and instructions
US9635378B2 (en) * 2015-03-20 2017-04-25 Digimarc Corporation Sparse modulation for robust signaling and synchronization
US20170046153A1 (en) * 2015-08-14 2017-02-16 Qualcomm Incorporated Simd multiply and horizontal reduce operations

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1431588A (en) * 2002-01-08 2003-07-23 北京南思达科技发展有限公司 Logic reorganizable circuit
CN101996155A (en) * 2009-08-10 2011-03-30 北京多思科技发展有限公司 Processor supporting a plurality of command systems
CN107340994A (en) * 2017-06-14 2017-11-10 北京天宏绎网络技术有限公司 A kind of processor

Also Published As

Publication number Publication date
CN107358125A (en) 2017-11-17

Similar Documents

Publication Publication Date Title
US3530438A (en) Task control
US11061710B2 (en) Virtual machine exit support by a virtual machine function
US10026145B2 (en) Resource sharing on shader processor of GPU
US9304813B2 (en) CPU independent graphics scheduler for performing scheduling operations for graphics hardware
CN107358125B (en) Processor
CN106569891B (en) Method and device for scheduling and executing tasks in storage system
US10467052B2 (en) Cluster topology aware container scheduling for efficient data transfer
CN110308982B (en) Shared memory multiplexing method and device
CN105579967A (en) GPU divergence barrier
CN115988218B (en) Virtualized video encoding and decoding system, electronic equipment and storage medium
US20160321079A1 (en) System and method to clear and rebuild dependencies
CN114168271B (en) Task scheduling method, electronic device and storage medium
US11175919B1 (en) Synchronization of concurrent computation engines
CN116861470B (en) Encryption and decryption method, encryption and decryption device, computer readable storage medium and server
US20120204014A1 (en) Systems and Methods for Improving Divergent Conditional Branches
CN107340994B (en) Processor
US9898348B2 (en) Resource mapping in multi-threaded central processor units
US8930681B2 (en) Enhancing performance by instruction interleaving and/or concurrent processing of multiple buffers
US9344115B2 (en) Method of compressing and restoring configuration data
CN107506623B (en) Application program reinforcing method and device, computing equipment and computer storage medium
Mukherjee et al. Exploring the features of OpenCL 2.0
US10146736B2 (en) Presenting pipelines of multicore processors as separate processor cores to a programming framework
KR20130021637A (en) Method and apparatus for interrupt allocation of multi-core system
US11171881B2 (en) Multiplexed resource allocation architecture
CN107729772B (en) Processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant