CN111260070B - Operation method, device and related product


Info

Publication number: CN111260070B
Authority: CN (China)
Prior art keywords: data, instruction, input data, machine learning, executed
Legal status: Active
Application number: CN201811455827.6A
Other languages: Chinese (zh)
Other versions: CN111260070A (en)
Inventor: not disclosed
Current Assignee: Shanghai Cambricon Information Technology Co Ltd
Original Assignee: Shanghai Cambricon Information Technology Co Ltd
Application filed by Shanghai Cambricon Information Technology Co Ltd
Priority to CN201811455827.6A
Publication of CN111260070A
Application granted
Publication of CN111260070B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/08 Learning methods


Abstract

The disclosure relates to an operation method, an operation device, and related products. The machine learning operation device includes one or more instruction processing devices; it acquires data to be operated on and control information from other processing devices, executes the specified machine learning operation, and transmits the execution result to other processing devices through an I/O interface. When the machine learning operation device includes a plurality of instruction processing devices, the instruction processing devices can be connected to each other in a specific configuration to transfer data: they are interconnected, and transmit data, through a Peripheral Component Interconnect Express (PCIE) bus; they share the same control system or have their own control systems; they share a memory or have their own memories; and their interconnection may be any interconnection topology. The operation method, device, and related products provided by the embodiments of the disclosure reduce the data access amount and the calculation amount while ensuring that the calculation precision is not harmed.

Description

Operation method, device and related product
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data processing apparatus and method, and a related product.
Background
With the continuous development of science and technology, machine learning, and neural network algorithms in particular, is used more and more widely, and performs well in fields such as image recognition, speech recognition, and natural language processing. However, as the complexity of neural network algorithms grows, the types and number of data operations involved keep increasing, as do the demands on hardware such as the number of arithmetic units. In the related art, the hardware requirements are lowered by cutting the data access amount and the calculation amount, but calculation accuracy is lost as a result. Reducing the data access amount and the calculation amount while guaranteeing precision is therefore an urgent problem to be solved.
Disclosure of Invention
In view of this, the present disclosure provides a data processing apparatus and method, and related products, to solve the problem that guaranteeing calculation accuracy and reducing the data access amount and calculation amount cannot both be satisfied at the same time.
According to a first aspect of the present disclosure, there is provided a data processing apparatus for performing machine learning computations, the apparatus comprising:
the control module is used for acquiring input data and a calculation instruction;
the processing module is used for performing machine learning calculation on the input data according to the calculation instruction to obtain a calculation result of the calculation instruction,
wherein the data type of the input data includes at least one of an exponent type and a dynamic fixed-point type; exponent-type input data includes exponent bits for storing the exponent of the exponent-type input data; dynamic fixed-point-type input data includes integer bits for storing the integer part data and fractional part data of the dynamic fixed-point-type input data, and decimal point bits whose stored data marks the position of the decimal point within the data stored in the integer bits,
and performing machine learning calculation on the input data includes: performing a shift operation on the input data.
According to a second aspect of the present disclosure, there is provided a machine learning operation apparatus including:
one or more data processing apparatuses according to the first aspect, configured to acquire input data and control information from another processing apparatus, execute a specified machine learning operation, and transmit an execution result to the other processing apparatus through an I/O interface;
when the machine learning operation device comprises a plurality of data processing devices, the plurality of data processing devices can be connected through a specific structure to transmit data;
the data processing devices are interconnected through a PCIE bus which is a bus for interconnecting fast external equipment and transmit data so as to support operation of machine learning in a larger scale; a plurality of the data processing devices share the same control system or own respective control systems; the data processing devices share the memory or own the memory; the interconnection mode of the plurality of data processing devices is any interconnection topology.
According to a third aspect of the present disclosure, there is provided a combined processing apparatus, the apparatus comprising:
the machine learning operation device according to the second aspect, a universal interconnection interface, and other processing devices;
and the machine learning operation device interacts with the other processing devices to jointly complete the calculation operation specified by the user.
According to a fourth aspect of the present disclosure, there is provided a machine learning chip including the machine learning operation device of the second aspect or the combined processing device of the third aspect.
According to a fifth aspect of the present disclosure, there is provided a machine learning chip package structure, which includes the machine learning chip of the fourth aspect.
According to a sixth aspect of the present disclosure, a board card is provided, which includes the machine learning chip packaging structure of the fifth aspect.
According to a seventh aspect of the present disclosure, there is provided an electronic device, which includes the machine learning chip of the fourth aspect or the board of the sixth aspect.
According to an eighth aspect of the present disclosure, there is provided a data processing method applied to a data processing apparatus for performing machine learning calculation, the method including:
acquiring input data and a calculation instruction;
performing machine learning calculation on the input data according to the calculation instruction to obtain a calculation result of the calculation instruction,
wherein the data type of the input data includes at least one of an exponent type and a dynamic fixed-point type; exponent-type input data includes exponent bits for storing the exponent of the exponent-type input data; dynamic fixed-point-type input data includes integer bits for storing the integer part data and fractional part data of the dynamic fixed-point-type input data, and decimal point bits whose stored data marks the position of the decimal point within the data stored in the integer bits,
and performing machine learning calculation on the input data includes: performing a shift operation on the input data.
In the data processing device, method, and related products provided by the embodiments of the present disclosure, the device includes a control module for acquiring input data and a calculation instruction, and a processing module for performing machine learning calculation on the input data according to the calculation instruction to obtain the calculation result of the calculation instruction. The data type of the input data can include at least one of an exponent type and a dynamic fixed-point type: exponent-type input data includes exponent bits used to store the exponent of the exponent-type input data; dynamic fixed-point-type input data includes decimal point bits and integer bits, where the integer bits store the integer part data and fractional part data of the dynamic fixed-point-type input data and the data stored in the decimal point bits marks the position of the decimal point within the data stored in the integer bits. Performing machine learning calculation on the input data includes performing a shift operation on the input data. The data processing device, method, and related products provided by the embodiments of the disclosure reduce the data access amount and the calculation amount while keeping the calculation precision lossless, and can effectively improve the data processing speed.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Fig. 2 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure.
Figs. 3a to 3d show block diagrams of processing modules in a data processing apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of an application scenario of a data processing apparatus according to an embodiment of the present disclosure.
Figs. 5a and 5b show block diagrams of a combined processing device according to an embodiment of the present disclosure.
Fig. 6 shows a schematic structural diagram of a board card according to an embodiment of the present disclosure.
FIG. 7 shows a flow diagram of a data processing method according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure. The apparatus is for performing machine learning computations. As shown in fig. 1, the apparatus includes a control module 11 and a processing module 12.
The control module 11 is used for acquiring input data and calculating instructions. The processing module 12 is configured to perform machine learning calculation on the input data according to the calculation instruction to obtain a calculation result of the calculation instruction.
The data type of the input data can comprise at least one of an exponential type and a dynamic fixed point type, the exponential type input data comprises exponent bits, the exponent bits are used for storing exponents of the exponential type input data, the dynamic fixed point type input data comprises decimal point bits and integer bits, the integer bits are used for storing integer part data and fractional part data of the dynamic fixed point type input data, and data stored in the decimal point bits are used for marking positions of the decimal points in the data stored in the integer bits. Performing machine learning calculations on the input data may include: the shift operation is performed on the input data.
In the present embodiment, the carry system of the input data may be binary, decimal, hexadecimal, etc., which is not limited by the present disclosure.
In this embodiment, when the machine learning calculation is another operation such as addition or subtraction, the operation may be performed in the corresponding manner, which is not limited by the present disclosure. When it is determined from the calculation instruction that the machine learning calculation to be performed on the input data is a multiplication, the multiplication can be realized by shifting the input data. For example, when two input data of different data types are to be multiplied, the product can be obtained by shifting the dynamic fixed-point-type input data. The shift operation may determine the number of bits to shift and the shift direction from the exponent-type input data, and then shift the decimal point position of the dynamic fixed-point-type input data accordingly, the direction and size of the shift being expressed by changing the value stored in the decimal point bits, thereby determining the calculation result. That is, the value stored in the exponent bits of the exponent-type input data is added to the value stored in the decimal point bits of the dynamic fixed-point-type input data, and the data stored in the decimal point bits of the dynamic fixed-point-type input data is replaced by the sum; this yields the calculation result of multiplying the exponent-type input data by the dynamic fixed-point-type input data.
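As an illustration only (not part of the claimed apparatus), the multiplication by shifting described above can be modeled in a few lines of Python. The 10-bit integer field, the convention that the decimal point position counts the bits to its left, and the helper names are assumptions chosen for this sketch:

    # Sketch of multiplying a dynamic fixed-point value by an exponent-type
    # value via a shift: only the decimal point field changes.

    def fixed_to_float(integer_bits: int, point_pos: int, width: int = 10) -> float:
        # point_pos counts how many of the `width` stored bits lie to the
        # left of the decimal point, so value = bits / 2**(width - point_pos).
        return integer_bits / 2 ** (width - point_pos)

    def multiply_by_exponent(exponent: int, integer_bits: int, point_pos: int):
        # Adding the exponent to the point position moves the decimal point
        # right by `exponent` bits, i.e. multiplies the value by 2**exponent;
        # the integer bits themselves are never touched.
        return integer_bits, point_pos + exponent

    bits, pos = 0b0110001000, 6                     # 011000.1000 in binary = 24.5
    print(fixed_to_float(bits, pos))                # 24.5
    bits, pos = multiply_by_exponent(2, bits, pos)  # multiply by 2**2
    print(fixed_to_float(bits, pos))                # 98.0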
In this embodiment, the data type of the input data is an exponential type or a dynamic fixed point type, which can reduce the amount of data access and calculation and increase the processing speed.
In this embodiment, the apparatus may include one or more control modules and one or more processing modules, and the number of the control modules and the number of the processing modules may be set according to actual needs, which is not limited in this disclosure.
The data processing device provided by the embodiments of the present disclosure includes a control module for acquiring input data and a calculation instruction, and a processing module for performing machine learning calculation on the input data according to the calculation instruction to obtain the calculation result of the calculation instruction. The data type of the input data can include at least one of an exponent type and a dynamic fixed-point type: exponent-type input data includes exponent bits used to store the exponent of the exponent-type input data; dynamic fixed-point-type input data includes decimal point bits and integer bits, where the integer bits store the integer part data and fractional part data of the dynamic fixed-point-type input data and the data stored in the decimal point bits marks the position of the decimal point within the data stored in the integer bits. Performing machine learning calculation on the input data includes performing a shift operation on the input data. The data processing device provided by the embodiments of the disclosure reduces the data access amount and the calculation amount while keeping the calculation precision lossless, and can effectively improve the data processing speed.
In one possible implementation, the machine learning calculations may include artificial neural network operations and the input data may include input neuron data and weight data. The calculation result is output neuron data, and the data types of the input neuron data and the weight data are different.
For example, the input neuron data may be exponent-type data and the weight data dynamic fixed-point-type data, or the input neuron data may be dynamic fixed-point-type data and the weight data exponent-type data. In this way, it is ensured that the shift operation can be performed on the input data. The types of the input neuron data and the weight data may be set by those skilled in the art according to actual needs, which is not limited by the present disclosure.
Fig. 2 shows a block diagram of a data processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 2, the apparatus may further include a first type conversion module 13.
The first type conversion module 13 is configured to convert the received data to be processed into first data with a base number of a specified value, and generate exponential type input data according to an exponent of the first data. The exponent bits of the exponent-type input data are used to store the exponent.
In this implementation, the exponent of the first data converted from the to-be-processed data received by the first type conversion module needs to be an integer to ensure that the shift operation can be performed on the input data. The number of bits occupied by the exponent bits may be set according to actual needs, for example, 5 bits, which is not limited by this disclosure.
In one possible implementation, the input data of the exponent form may further include a specified value bit for marking or storing a specified value of the input data of the exponent form.
In this implementation, when there is no designated value bit in the input data, or the designated value bit is empty, the designated value may be determined according to a carry system of the input data, or a preset default designated value may be determined as the designated value of the exponential type input data.
In one possible implementation, the exponent-type input data further includes a sign bit for indicating whether the data stored in the exponent bits is positive or negative. For example, the exponent-type input data may be set to occupy 5 bits, with the 1st bit being the sign bit and the 2nd to 5th bits being the exponent bits. It may be set so that the data stored in the exponent bits is positive when the sign bit stores 0, and negative when the sign bit stores 1.
For example, assume that the received data to be processed is 1024, the specified value is set to 2, and the input data is binary. The first type conversion module may convert the data to be processed "1024" into the first data "2^10" with the specified value 2 as the base, and generate the exponent-type binary input data "01010" from the exponent "10" of the first data "2^10". Likewise, assume the received data to be processed is 0.5, the specified value is set to 2, and the input data is binary. The first type conversion module may convert the data to be processed "0.5" into the first data "2^-1" with the specified value 2 as the base, and generate the exponent-type binary input data "10001" from the exponent "-1" of the first data "2^-1".
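A minimal sketch of how the first type conversion could be modeled, assuming the 5-bit layout of the example above (1 sign bit for the exponent followed by 4 exponent magnitude bits); the function name is hypothetical:

    import math

    def to_exponent_type(x: float, base: int = 2, exp_bits: int = 4) -> str:
        # The exponent must be an integer, so x must be an exact power of
        # the base (the constraint stated above).
        e = round(math.log(abs(x), base))
        assert base ** e == x, "value is not an exact power of the base"
        sign = 0 if e >= 0 else 1               # sign bit of the exponent
        return f"{sign}{abs(e):0{exp_bits}b}"   # sign bit + exponent bits

    print(to_exponent_type(1024))   # '01010'  (exponent +10)
    print(to_exponent_type(0.5))    # '10001'  (exponent -1)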
In one possible implementation, the specified value corresponding to the exponent-type input data matches the carry system (radix) of the input data. In this way, it is guaranteed that the shift operation can be performed on the input data.
For example, assuming the specified value is 2, the input data needs to be binary data.
In one possible implementation, as shown in fig. 2, the apparatus may further include a second type conversion module 14.
The second type conversion module 14 is configured to convert the received to-be-processed data to obtain second data representing a numerical value of an integer part of the to-be-processed data and third data representing a numerical value of a decimal part of the to-be-processed data, and generate dynamic fixed-point input data according to the second data, the third data, and a decimal point position of the to-be-processed data. The integer bits of the dynamic fixed-point input data are used for storing second data and third data, and the data stored in the decimal point positions of the dynamic fixed-point input data are used for marking the positions of the decimal points of the data to be processed in the data stored in the integer bits.
In this implementation, the data to be processed received by the second type conversion module may be a decimal number or an integer. E.g., 12 (decimal), 123.4 (decimal), etc. The total number of bits occupied by the dynamic fixed-point input data and the number of bits occupied by the integer bits and the decimal points respectively can be set according to the calculation requirement. For example, it may be set that the dynamic fixed point type input data occupies 12 bits in total, wherein an integer bit occupies 8 bits and a decimal bit occupies 4 bits. Those skilled in the art can set the total number of bits occupied by the dynamic fixed-point input data and the number of bits occupied by the integer bits and the decimal points, respectively, according to the actual requirement, which is not limited by the present disclosure.
For example, assume that the received data to be processed is 24.5, the input data is binary, the integer bits occupy 10 bits, and the decimal point bits occupy 4 bits. The second type conversion module may convert the integer part "24" of the data to be processed into binary second data "11000" and the fractional part "0.5" into binary third data "0.1000". The integer bits of the dynamic fixed-point input data therefore store "0110001000", and since the decimal point falls after the sixth bit of "0110001000", its position can be represented by "0110". The dynamic fixed-point input data finally generated by the second type conversion module from the data to be processed "24.5" is thus "0110001000, 0110". In this notation the "," merely separates the integer bits from the decimal point bits and need not be stored in actual use; the "," in dynamic fixed-point input data hereinafter has the same meaning and will not be explained again.
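A companion sketch of the second type conversion under the same illustrative assumptions as the example above (10 integer bits, of which 4 hold the fractional part, and 4 decimal point bits):

    def to_dynamic_fixed_point(x: float, int_bits: int = 10,
                               point_bits: int = 4, frac_bits: int = 4):
        # The point position counts the stored bits left of the decimal
        # point; frac_bits of the stored field hold the fractional part.
        point_pos = int_bits - frac_bits
        stored = round(x * 2 ** frac_bits)   # integer and fraction packed together
        assert 0 <= stored < 2 ** int_bits, "value does not fit in the integer bits"
        return f"{stored:0{int_bits}b}", f"{point_pos:0{point_bits}b}"

    print(to_dynamic_fixed_point(24.5))   # ('0110001000', '0110')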
In one possible implementation, as shown in fig. 2, the apparatus may further include a storage module 15. The storage module 15 is used for storing the input data.
In this implementation, the storage module may include one or more of a memory, a cache, and a register, and the cache may include a scratchpad cache. The input data may be stored in the memory, cache, and/or register of the storage module as needed, which is not limited by the present disclosure.
In a possible implementation manner, the apparatus may further include a direct memory access module for reading or storing data from the storage module.
In one possible implementation, as shown in fig. 2, the control module 11 may include an instruction storage sub-module 111, an instruction processing sub-module 112, and a queue storage sub-module 113.
The instruction storage sub-module 111 is used to store the calculation instruction.
The instruction processing sub-module 112 is configured to parse the calculation instruction to obtain the operation code and operation domain of the calculation instruction.
The queue storage submodule 113 is configured to store an instruction queue, where the instruction queue includes a plurality of instructions to be executed arranged in execution order. The plurality of instructions to be executed may include the calculation instruction and may also include other calculation instructions related to it.
In this implementation manner, the execution order of the multiple instructions to be executed may be arranged according to the receiving time, the priority level, and the like of the instructions to be executed to obtain an instruction queue, so that the multiple instructions to be executed are sequentially executed according to the instruction queue.
In one possible implementation, as shown in fig. 2, the control module 11 may further include a dependency processing sub-module 114.
The dependency relationship processing submodule 114 is configured to, when it is determined that a first instruction to be executed among the plurality of instructions to be executed has an association relationship with a zeroth instruction to be executed that precedes it, cache the first instruction to be executed in the instruction storage submodule 111 and, after the zeroth instruction to be executed has finished executing, extract the first instruction to be executed from the instruction storage submodule 111 and send it to the processing module 12. The first instruction to be executed and the zeroth instruction to be executed are both among the plurality of instructions to be executed.
The first instruction to be executed has an association relationship with the zeroth instruction to be executed that precedes it when the first storage address interval storing the data required by the first instruction to be executed overlaps the zeroth storage address interval storing the data required by the zeroth instruction to be executed. Conversely, the two instructions have no association relationship when the first storage address interval and the zeroth storage address interval have no overlapping region.
In this way, a later instruction to be executed is executed only after the earlier instruction it depends on has finished, according to the dependency relationships among the instructions to be executed, which ensures the accuracy of the calculation result.
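A sketch of the overlap test the dependency relationship processing submodule could apply; representing a storage address interval as a half-open (start, end) pair is an assumption of this sketch:

    def has_association(first_interval, zeroth_interval) -> bool:
        # Two instructions are associated exactly when the address ranges
        # holding their required data overlap, so the later instruction
        # must wait for the earlier one to finish.
        (s1, e1), (s0, e0) = first_interval, zeroth_interval
        return s1 < e0 and s0 < e1

    # The first instruction uses 0x100-0x1FF, the zeroth uses 0x180-0x27F:
    print(has_association((0x100, 0x200), (0x180, 0x280)))  # True -> must wait
    print(has_association((0x100, 0x200), (0x200, 0x280)))  # False -> may proceed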
In one possible implementation, as shown in fig. 2, the processing module 12 may include a master processing sub-module 121 and a plurality of slave processing sub-modules 122.
The control module 11 is further configured to analyze the calculation instruction to obtain a plurality of operation instructions, and send the input data and the plurality of operation instructions to the main processing sub-module 121.
The main processing sub-module 121 is configured to perform preamble processing on the input data and to exchange data and operation instructions with the plurality of slave processing sub-modules 122.
The slave processing sub-modules 122 are configured to perform intermediate operations in parallel according to the data and operation instructions transmitted from the main processing sub-module 121 to obtain a plurality of intermediate results, and to transmit the plurality of intermediate results to the main processing sub-module 121.
In this implementation, the intermediate operation may be an arithmetic, logical, or the like operation on the data. When the input data includes input neuron data and weight data, and the input neuron data and the weight data correspond to different data types, if it is determined according to the operation instruction that the executed intermediate operation is a multiplication of the input neuron data and the weight data, a shift operation may be performed on the input neuron data or the weight data to obtain an intermediate result.
The main processing sub-module 121 is further configured to perform subsequent processing on the plurality of intermediate results to obtain a calculation result, and store the calculation result in the target address.
In this implementation, the subsequent processing performed on the plurality of intermediate results may be arithmetic operations such as addition and multiplication, or logical operations. When a plurality of received intermediate results need to be accumulated, they can be combined by a cyclic accumulation operation to obtain the calculation result. In cyclic accumulation, the intermediate result of the "current operation period" is added into an accumulation result; when the intermediate result of a "later operation period" arrives, it is added to that accumulation result to obtain a new accumulation result, until a calculation result accumulating all intermediate results is finally obtained. The "later operation period" may be the first, second, third, or any subsequent operation period after the "current operation period", and how many periods later it falls may be set according to timing requirements such as the computing capability of the apparatus, which is not limited by the present disclosure.
In a possible implementation manner, the cyclic accumulation process may be set according to actual needs such as the computing capability of the apparatus; examples of a first manner and a second manner of the cyclic accumulation process are given below. It should be noted that the cyclic accumulation process may be set by those skilled in the art according to actual needs, and the disclosure is not limited thereto.
In a possible implementation manner, for the first manner, performing the cyclic accumulation operation on the plurality of intermediate results may include the following steps (a sketch in code follows the list):
in a first operation period of receiving the intermediate result, adding the intermediate result and first intermediate data of the first operation period to obtain a first accumulation result;
storing the first accumulation result as first intermediate data of a next operation period;
in a second operation period in which no intermediate result is received, determining the first intermediate data of the second operation period as the calculation result,
wherein the value of the first intermediate data of the initial operation period is zero.
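A minimal sketch of this first manner, assuming at most one intermediate result arrives per operation period and using None to mark a period with no arrival:

    def accumulate_mode_one(periods):
        acc = 0                      # first intermediate data of the initial period
        for result in periods:
            if result is not None:   # a "first operation period": accumulate
                acc = acc + result   # first accumulation result, kept for next period
            else:                    # a "second operation period": no arrival, so
                return acc           # the running value is the calculation result
        return acc

    print(accumulate_mode_one([3, 5, None]))   # 8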
In a possible implementation manner, for the second manner, performing the cyclic accumulation operation on the plurality of intermediate results may include the following steps (a sketch in code follows the list):
in a third operation period of receiving the intermediate result, adding the intermediate result and third intermediate data of the third operation period to obtain a second accumulation result;
storing the second intermediate data of the third operation period as third intermediate data of a next operation period, and storing the second accumulation result as second intermediate data of the next operation period;
and in a fourth operation period in which no intermediate result is received, adding the second intermediate data of the fourth operation period and the third intermediate data of the fourth operation period to obtain the calculation result,
wherein the values of the second intermediate data and the third intermediate data of the initial operation period are zero.
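The second manner keeps two registers, so the result of an addition is not reused until one period later; a sketch under the same assumptions as the first manner:

    def accumulate_mode_two(periods):
        second, third = 0, 0         # both intermediate data start at zero
        for result in periods:
            if result is not None:   # a "third operation period":
                # new second = result + current third (the second accumulation
                # result); current second becomes the next period's third data.
                second, third = result + third, second
            else:                    # a "fourth operation period": combine both
                return second + third
        return second + third

    print(accumulate_mode_two([3, 5, None]))   # 8, matching the first manner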
It should be noted that, a person skilled in the art may set a connection manner between the master processing sub-module and the multiple slave processing sub-modules according to actual needs to implement configuration setting of the processing module, for example, the configuration of the processing module may be an "H" type configuration, an array type configuration, a tree type configuration, and the like, which is not limited by the present disclosure.
Fig. 3a shows a block diagram of processing modules in a data processing device according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 3a, the processing module 12 may further include one or more branch processing sub-modules 123, where the branch processing sub-module 123 is configured to forward data and/or operation instructions between the master processing sub-module 121 and the slave processing sub-module 122. Wherein, the main processing sub-module 121 is connected with one or more branch processing sub-modules 123. Therefore, the main processing sub-module, the branch processing sub-module and the auxiliary processing sub-module in the processing module are connected by adopting an H-shaped structure, and data and/or operation instructions are forwarded by the branch processing sub-module, so that the resource occupation of the main processing sub-module is saved, and the instruction processing speed is further improved.
Fig. 3b shows a block diagram of processing modules in a data processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 3b, a plurality of slave processing sub-modules 122 are distributed in an array.
Each slave processing sub-module 122 is connected with other adjacent slave processing sub-modules 122, the master processing sub-module 121 is connected with k slave processing sub-modules 122 of the plurality of slave processing sub-modules 122, and the k slave processing sub-modules 122 are: the n slave processing sub-modules 122 of row 1, the n slave processing sub-modules 122 of row m, and the m slave processing sub-modules 122 of column 1.
As shown in fig. 3b, the k slave processing sub-modules only include the n slave processing sub-modules in the 1 st row, the n slave processing sub-modules in the m th row, and the m slave processing sub-modules in the 1 st column, that is, the k slave processing sub-modules are slave processing sub-modules directly connected to the master processing sub-module from among the multiple slave processing sub-modules. And the k slave processing sub-modules are used for forwarding data and instructions between the main processing sub-module and the plurality of slave processing sub-modules. Therefore, the plurality of slave processing sub-modules are distributed in an array, the speed of sending data and/or operation instructions from the main processing sub-module to the slave processing sub-modules can be increased, and the instruction processing speed is further increased.
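For concreteness, a sketch enumerating the positions wired directly to the master in an m x n array; treating the two corner modules shared by a border row and column 1 as counted once is an assumption of this sketch:

    def directly_connected(m: int, n: int):
        # Row 1, row m and column 1 (1-indexed), per the description above.
        k = {(1, c) for c in range(1, n + 1)}    # n modules in row 1
        k |= {(m, c) for c in range(1, n + 1)}   # n modules in row m
        k |= {(r, 1) for r in range(1, m + 1)}   # m modules in column 1
        return sorted(k)

    print(len(directly_connected(4, 5)))   # 12 distinct positions in a 4 x 5 array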
Fig. 3c shows a block diagram of processing modules in a data processing apparatus according to an embodiment of the present disclosure. In one possible implementation, as shown in fig. 3c, the processing module may further include a tree sub-module 124. The tree submodule 124 includes a root port 401 and a plurality of branch ports 402. The root port 401 is connected to the master processing submodule 121, and the plurality of branch ports 402 are connected to the plurality of slave processing submodules 122, respectively. The tree sub-module 124 has a transceiving function, and is configured to forward data and/or operation instructions between the master processing sub-module 121 and the slave processing sub-module 122. Therefore, the processing modules are connected in a tree-shaped structure under the action of the tree-shaped sub-modules, and the speed of sending data and/or operation instructions to the slave processing sub-modules by the main processing sub-modules can be increased by utilizing the forwarding function of the tree-shaped sub-modules, so that the processing speed of the instructions is increased.
In one possible implementation, the tree sub-module 124 may be an optional component of the apparatus and may include at least one level of nodes. Each node is a wiring structure with a forwarding function and has no operation function of its own. The lowest level of nodes is connected to the slave processing submodules to forward data and/or operation instructions between the master processing submodule 121 and the slave processing submodules 122. In particular, if the tree sub-module has zero levels of nodes, the apparatus does not need the tree sub-module.
In one possible implementation, the tree submodule 124 may include a plurality of nodes of an n-ary tree structure, and the plurality of nodes of the n-ary tree structure may have a plurality of layers.
For example, fig. 3d shows a block diagram of a processing module in a data processing device according to an embodiment of the present disclosure. As shown in FIG. 3d, the n-ary tree structure may be a binary tree structure with tree-type sub-modules including 2 levels of nodes 01. The lowest node 01 is connected to the slave processing submodule 122 to forward data and/or operation instructions between the master processing submodule 121 and the slave processing submodule 122.
In this implementation, the n-ary tree structure may also be a ternary tree structure or the like, where n is a positive integer greater than or equal to 2. The number of n in the n-ary tree structure and the number of layers of nodes in the n-ary tree structure may be set by those skilled in the art as needed, and the disclosure is not limited thereto.
It should be noted that, although the data processing apparatus is described above by taking the above-described embodiment as an example, those skilled in the art will understand that the present disclosure should not be limited thereto. In fact, the user can flexibly set each module according to personal preference and/or actual application scene, as long as the technical scheme of the disclosure is met.
Application examples
An application example according to the embodiment of the present disclosure is given below in conjunction with "operate on two input data" as an exemplary application scenario to facilitate understanding of the working principle and process of the data processing apparatus. It is to be understood by those skilled in the art that the following application examples are provided only for the purpose of facilitating understanding of the embodiments of the present disclosure, and should not be construed as limiting the embodiments of the present disclosure.
Fig. 4 shows a schematic diagram of an application scenario of a data processing apparatus according to an embodiment of the present disclosure. As shown in fig. 4, the two input data are exponent-type weight data and dynamic fixed-point-type input neuron data. Assume the exponent-type weight data is binary "00001" (the corresponding decimal number is 2^1 = 2), and the dynamic fixed-point-type input neuron data is binary "11001000, 0100" (the corresponding decimal number is 12.5), where the first 8 bits are the integer bits and the last 4 bits are the decimal point bits. The control module obtains the two input data and the calculation instruction. When it is determined from the calculation instruction that the operation to be performed on the exponent-type weight data "00001" and the dynamic fixed-point-type input neuron data "11001000, 0100" is a multiplication, the processing module may determine from the exponent-type weight data "00001" that the shift operation to be performed on the input neuron data is "shift the decimal point position right by 1 bit". That is, the new data "0101" to be stored in the decimal point bits is obtained by adding the data "0100" in the decimal point bits to the weight data "00001"; storing "0101" in the decimal point bits of the input neuron data yields the calculation result "11001000, 0101" (the corresponding decimal number is 25) of multiplying the exponent-type weight data "00001" by the dynamic fixed-point-type input neuron data "11001000, 0100".
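The bit patterns of this application example can be checked directly, using the same decoding convention as the sketches above:

    neuron_bits = 0b11001000    # integer bits of the input neuron data
    point_pos = 0b0100          # decimal point after the 4th of the 8 bits
    exponent = 0b00001          # exponent-type weight data, i.e. 2**1

    print(neuron_bits / 2 ** (8 - point_pos))       # 12.5
    new_point_pos = point_pos + exponent            # 0b0101: point moved right 1 bit
    print(neuron_bits / 2 ** (8 - new_point_pos))   # 25.0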
Therefore, the data processing method can reduce the data access amount and the calculation amount and speed up data processing while ensuring the precision of the data calculation.
The present disclosure provides a machine learning operation device, which may include one or more of the above data processing devices, for acquiring input data and control information from other processing devices and performing specified machine learning operations. The machine learning operation device can obtain a calculation instruction from another machine learning operation device or a non-machine-learning operation device, and transmit the execution result to peripheral devices (also referred to as other processing devices) through an I/O interface. Peripheral devices include, for example, cameras, displays, mice, keyboards, network cards, WiFi interfaces, and servers. When more than one data processing device is included, the data processing devices can be connected through a specific structure and transmit data, for example interconnected through a PCIE bus, to support larger-scale neural network operations. In that case the devices may share the same control system or have independent control systems, and may share a memory or have their own memories. Moreover, the interconnection may be any interconnection topology.
The machine learning operation device has high compatibility and can be connected with various types of servers through a PCIE interface.
Fig. 5a shows a block diagram of a combined processing device according to an embodiment of the present disclosure. As shown in fig. 5a, the combined processing device includes the above machine learning operation device, a universal interconnection interface, and other processing devices. The machine learning operation device interacts with the other processing devices to jointly complete the operation specified by the user.
The other processing devices include one or more types of general-purpose or special-purpose processors such as a central processing unit (CPU), a graphics processing unit (GPU), or a neural network processor; the number of processors included is not limited. The other processing devices serve as the interface between the machine learning operation device and external data and control, performing basic control such as data transfer and the starting and stopping of the machine learning operation device; the other processing devices may also cooperate with the machine learning operation device to complete computing tasks.
The universal interconnection interface is used for transmitting data and control instructions between the machine learning operation device and the other processing devices. The machine learning operation device obtains the required input data from the other processing devices and writes it into the storage device on the machine learning operation device chip; it can obtain control instructions from the other processing devices and write them into a control cache on the machine learning operation device chip; it can also read the data in the storage module of the machine learning operation device and transmit it to the other processing devices.
Fig. 5b shows a block diagram of a combined processing device according to an embodiment of the present disclosure. In a possible implementation manner, as shown in fig. 5b, the combined processing device may further include a storage device connected to the machine learning operation device and the other processing devices respectively. The storage device is used for storing data of the machine learning operation device and the other processing devices, and is particularly suitable for data to be calculated that cannot be entirely held in the internal storage of the machine learning operation device or the other processing devices.
The combined processing device can serve as the SOC (system on chip) of equipment such as mobile phones, robots, unmanned aerial vehicles, and video monitoring equipment, effectively reducing the core area of the control part, increasing the processing speed, and reducing the overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as a camera, display, mouse, keyboard, network card, or WiFi interface.
The present disclosure provides a machine learning chip, which includes the above machine learning operation device or combined processing device.
The present disclosure provides a machine learning chip package structure, which includes the above machine learning chip.
Fig. 6 shows a schematic structural diagram of a board card according to an embodiment of the present disclosure. As shown in fig. 6, the board card includes the above machine learning chip package structure or the above machine learning chip. In addition to the machine learning chip 389, the board card may include other supporting components, including but not limited to: a memory device 390, an interface device 391, and a control device 392.
The memory device 390 is connected to the machine learning chip 389 (or the machine learning chip within the machine learning chip package structure) via a bus and is used for storing data. The memory device 390 may include multiple groups of memory cells 393, each group connected to the machine learning chip 389 via a bus. It is understood that each group of memory cells 393 may be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
DDR doubles the speed of SDRAM without increasing the clock frequency: data is read out on both the rising and falling edges of the clock pulse, so DDR is twice as fast as standard SDRAM.
In one embodiment, the memory device 390 may include 4 groups of memory cells 393, and each group of memory cells 393 may include a plurality of DDR4 chips. In one embodiment, the machine learning chip 389 may internally include four 72-bit DDR4 controllers, of which 64 bits are used for data transmission and 8 bits for ECC checking. It is understood that when DDR4-3200 chips are used in each group of memory cells 393, the theoretical bandwidth of data transfer can reach 25600 MB/s.
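The 25600 MB/s figure follows from the stated numbers; a back-of-the-envelope check (ECC bits excluded):

    transfers_per_second = 3200e6   # DDR4-3200: 3200 mega-transfers per second
    bytes_per_transfer = 64 / 8     # 64 data bits of the 72-bit controller
    print(transfers_per_second * bytes_per_transfer / 1e6)   # 25600.0 MB/s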
In one embodiment, each group of memory cells 393 comprises a plurality of double data rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice in one clock cycle. A controller for controlling the DDR is provided in the machine learning chip 389 to control the data transfer and data storage of each group of memory cells 393.
The interface device 391 is electrically connected to the machine learning chip 389 (or the machine learning chip within the machine learning chip package structure). The interface device 391 is used to implement data transfer between the machine learning chip 389 and an external device (e.g., a server or a computer). In one embodiment, the interface device 391 may be a standard PCIE interface; for example, the data to be processed is transmitted by the server to the machine learning chip 389 through the standard PCIE interface to implement data transfer. Preferably, when a PCIE 3.0 x16 interface is used for transmission, the theoretical bandwidth can reach 16000 MB/s. In another embodiment, the interface device 391 may be another interface, and the present disclosure does not limit the specific form of that other interface as long as it can implement the transfer function. In addition, the calculation result of the machine learning chip is still transmitted back to the external device (e.g., a server) by the interface device.
The control device 392 is electrically connected to the machine learning chip 389 and is used to monitor the state of the machine learning chip 389. Specifically, the machine learning chip 389 and the control device 392 may be electrically connected through an SPI interface. The control device 392 may include a single-chip microcomputer (MCU). The machine learning chip 389 may comprise a plurality of processing chips, processing cores, or processing circuits and may drive a plurality of loads, so it can be in different working states such as heavy load and light load. The control device can regulate the working states of the processing chips, processing cores, and/or processing circuits in the machine learning chip.
The present disclosure provides an electronic device, which includes the above machine learning chip or board card.
The electronic device may include a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a tachograph, a navigator, a sensor, a camera, a server, a cloud server, a camera, a video camera, a projector, a watch, an earphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle may include an aircraft, a ship, and/or a vehicle. The household appliances may include televisions, air conditioners, microwave ovens, refrigerators, electric rice cookers, humidifiers, washing machines, electric lamps, gas cookers, and range hoods. The medical device may include a nuclear magnetic resonance apparatus, a B-mode ultrasound apparatus and/or an electrocardiograph.
FIG. 7 shows a flow diagram of a data processing method according to an embodiment of the present disclosure. As shown in fig. 7, the method is applied to the above-described data processing apparatus for performing machine learning calculations. The method comprises step S51 and step S52.
In step S51, input data and a calculation instruction are acquired.
In step S52, machine learning calculation is performed on the input data according to the calculation instruction, and a calculation result of the calculation instruction is obtained.
The data type of the input data may include at least one of an exponent type and a dynamic fixed point type, the exponent type input data includes exponent bits, the exponent bits are used for storing exponents of the exponent type input data, the dynamic fixed point type input data may include decimal point bits and integer bits, the integer bits are used for storing integer part data and fractional part data of the dynamic fixed point type input data, and data stored in the decimal point bits are used for marking positions of the decimal points in data stored in the integer bits. Performing machine learning calculations on input data includes: the shift operation is performed on the input data.
In one possible implementation, the method may further include: and converting the received data to be processed into first data with a specified value as a base number, and generating exponential type input data according to the exponent of the first data. The exponent bits of the exponent-type input data are used to store the exponent.
In one possible implementation, the specified value corresponding to the exponent type of input data is the same as the carry system of the input data.
In one possible implementation, the method may further include: the method comprises the steps of converting received data to be processed to obtain second data representing numerical values of an integer part of the data to be processed and third data representing numerical values of a decimal part of the data to be processed respectively, and generating dynamic fixed-point input data according to the second data, the third data and the decimal point position of the data to be processed. The integer bits of the dynamic fixed-point input data are used for storing second data and third data, and the data stored in the decimal point positions of the dynamic fixed-point input data are used for marking the positions of decimal points of data to be processed in the data stored in the integer bits.
In one possible implementation, the machine learning computation may include: the artificial neural network operation, the input data may include: inputting neuron data and weight data; the calculation result is output neuron data, and the data types of the input neuron data and the weight data are different.
In one possible implementation, step S51 may include:
analyzing the calculation instruction to obtain a plurality of operation instructions;
the method for performing machine learning calculation on input data according to a calculation instruction to obtain a calculation result of the calculation instruction comprises the following steps:
performing preamble processing on the input data, and transmitting data and operation instructions;
executing intermediate operation in parallel according to the transmitted data and the operation instruction to obtain a plurality of intermediate results;
and performing subsequent processing on the plurality of intermediate results to obtain a calculation result of the calculation instruction.
In one possible implementation, the method may further include: the input data is stored.
In one possible implementation, step S51 may include:
storing the calculation instruction;
analyzing the calculation instruction to obtain a plurality of operation instructions of the calculation instruction;
and storing an instruction queue, wherein the instruction queue comprises a plurality of instructions to be executed which are sequentially arranged according to an execution sequence, and the plurality of instructions to be executed comprise a plurality of operation instructions.
In one possible implementation, step S51 may further include:
when it is determined that a first instruction to be executed, among the plurality of instructions to be executed, has an association with a zeroth instruction to be executed that precedes it, caching the first instruction to be executed, and controlling execution of the first instruction to be executed only after it is determined that the zeroth instruction to be executed has finished executing,
wherein the first instruction to be executed has an association with the preceding zeroth instruction to be executed when:
the first storage address interval storing data required by the first instruction to be executed and the zeroth storage address interval storing data required by the zeroth instruction to be executed have an overlapping region.
It should be noted that, although the data processing method has been described above by way of the foregoing embodiments, those skilled in the art will understand that the present disclosure is not limited thereto. In fact, each step may be configured flexibly according to personal preference and/or the actual application scenario, provided the technical solution of the present disclosure is satisfied.
The data processing method provided by the embodiments of the present disclosure can reduce the amount of data access and computation while keeping calculation precision lossless, and can effectively improve data processing speed.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts; however, those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may be performed in other orders or simultaneously. Further, those skilled in the art will appreciate that the embodiments described in the specification are exemplary, and that the acts and modules involved are not necessarily required by the disclosure.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
In the embodiments provided in the present disclosure, it should be understood that the disclosed system and apparatus may be implemented in other ways. For example, the embodiments of the system and apparatus described above are merely illustrative. The division into apparatuses, devices, and modules is only one kind of logical functional division; in actual implementation there may be other divisions: multiple modules may be combined or integrated into another system or apparatus, or some features may be omitted or not executed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be implemented through certain interfaces, and the indirect couplings or communication connections between devices, apparatuses, or modules may be electrical or take other forms.
Modules described as separate parts may or may not be physically separate, and parts shown as modules may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present disclosure may be integrated into one processing unit, each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software program module.
The integrated module, if implemented in the form of a software program module and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present disclosure may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present disclosure. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, and magnetic or optical disks.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by related hardware instructed by a program, and the program may be stored in a computer-readable memory, which may include a flash memory disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description; it is not intended to be exhaustive or to limit the disclosure to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or technical improvements over technologies in the marketplace, and to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (17)

1. A data processing apparatus for performing machine learning computations, the apparatus comprising:
a control module configured to acquire input data and a calculation instruction;
a processing module configured to perform a machine learning calculation on the input data according to the calculation instruction to obtain a calculation result of the calculation instruction,
wherein the data type of the input data comprises at least one of an exponent type and a dynamic fixed-point type; the exponent-type input data comprises an exponent bit and a specified-value bit, the specified-value bit being used to mark or store the base of the exponent-type input data and the exponent bit being used to store the exponent of the exponent-type input data; the dynamic fixed-point input data comprises a decimal-point bit and an integer bit, the integer bit being used to store the integer-part data and the fractional-part data of the dynamic fixed-point input data, and the data stored in the decimal-point bit being used to mark the position of the decimal point within the data stored in the integer bit; and the specified value corresponding to the exponent-type input data corresponds to the radix of the input data;
wherein performing the machine learning calculation on the input data comprises: performing a shift operation on the input data when it is determined, according to the calculation instruction, that the machine learning calculation to be performed on the input data is a multiplication, wherein the input data are of different data types.
2. The apparatus of claim 1, further comprising:
a first type conversion module configured to convert the received data to be processed into first data expressed with a specified value as its base, and to generate the exponent-type input data from the exponent of the first data.
3. The apparatus of claim 1, further comprising:
a second type conversion module configured to convert the received data to be processed to obtain second data representing the numerical value of the integer part of the data to be processed and third data representing the numerical value of its fractional part, and to generate the dynamic fixed-point input data from the second data, the third data, and the decimal point position of the data to be processed,
wherein the integer bits of the dynamic fixed-point input data are used to store the second data and the third data.
4. The apparatus of claim 1, wherein the machine learning calculation comprises an artificial neural network operation, the input data comprises input neuron data and weight data, and the calculation result is output neuron data.
5. The apparatus of claim 1, wherein the processing module comprises a master processing submodule and a plurality of slave processing submodules,
the control module is further configured to parse the calculation instruction to obtain a plurality of operation instructions and to send the input data and the plurality of operation instructions to the master processing submodule;
the master processing submodule is configured to perform pre-processing on the input data and to transmit data and operation instructions with the plurality of slave processing submodules;
the plurality of slave processing submodules are configured to execute intermediate operations in parallel according to the data and operation instructions transmitted from the master processing submodule to obtain a plurality of intermediate results, and to transmit the plurality of intermediate results to the master processing submodule;
and the master processing submodule is further configured to perform subsequent processing on the plurality of intermediate results to obtain the calculation result of the calculation instruction.
6. The apparatus of claim 1,
wherein the apparatus further comprises: a storage module configured to store the input data;
wherein the control module comprises:
an instruction storage submodule configured to store the calculation instruction;
an instruction processing submodule configured to parse the calculation instruction to obtain a plurality of operation instructions of the calculation instruction;
and a queue storage submodule configured to store an instruction queue, wherein the instruction queue comprises a plurality of instructions to be executed arranged in execution order, and the plurality of instructions to be executed comprise the plurality of operation instructions;
wherein the control module further comprises:
a dependency processing submodule configured to cache a first instruction to be executed in the instruction storage submodule when it is determined that the first instruction to be executed, among the plurality of instructions to be executed, has an association with a zeroth instruction to be executed that precedes it, and, after the zeroth instruction to be executed has finished executing, to extract the first instruction to be executed from the instruction storage submodule and send it to the processing module,
wherein the association between the first instruction to be executed and the zeroth instruction to be executed that precedes it comprises:
a first storage address interval storing data required by the first instruction to be executed and a zeroth storage address interval storing data required by the zeroth instruction to be executed have an overlapping region.
7. A machine learning arithmetic device, the device comprising:
one or more data processing apparatuses as claimed in any one of claims 1 to 6, configured to obtain input data and control information from other processing apparatuses, to execute specified machine learning operations, and to transmit execution results to the other processing apparatuses through an I/O interface;
when the machine learning arithmetic device comprises a plurality of data processing apparatuses, the plurality of data processing apparatuses may be connected through a specific structure and transmit data;
wherein the plurality of data processing apparatuses are interconnected and transmit data through a Peripheral Component Interconnect Express (PCIE) bus, so as to support larger-scale machine learning operations; the plurality of data processing apparatuses share the same control system or have their own control systems; the plurality of data processing apparatuses share a memory or have their own memories; and the interconnection mode of the plurality of data processing apparatuses is an arbitrary interconnection topology.
8. A combined processing apparatus, characterized in that the combined processing apparatus comprises:
the machine learning arithmetic device of claim 7, a universal interconnect interface, and another processing device;
the machine learning arithmetic device interacts with the other processing device to jointly complete a calculation operation specified by a user,
wherein the combined processing apparatus further comprises: a storage device connected to the machine learning arithmetic device and the other processing device, respectively, and configured to store data of the machine learning arithmetic device and the other processing device.
9. A machine learning chip, the machine learning chip comprising:
the machine learning arithmetic device of claim 7 or the combined processing apparatus of claim 8.
10. An electronic device, characterized in that the electronic device comprises:
the machine learning chip of claim 9.
11. A board card, characterized in that the board card comprises: a storage device, an interface apparatus, a control device, and the machine learning chip of claim 9;
wherein the machine learning chip is connected to the storage device, the control device, and the interface apparatus, respectively;
the storage device is configured to store data;
the interface apparatus is configured to implement data transmission between the machine learning chip and an external device;
and the control device is configured to monitor the state of the machine learning chip.
12. A data processing method applied to a data processing apparatus for performing machine learning computation, the method comprising:
acquiring input data and a calculation instruction;
performing a machine learning calculation on the input data according to the calculation instruction to obtain a calculation result of the calculation instruction,
wherein the data type of the input data comprises at least one of an exponent type and a dynamic fixed-point type; the exponent-type input data comprises an exponent bit and a specified-value bit, the specified-value bit being used to mark or store the base of the exponent-type input data and the exponent bit being used to store the exponent of the exponent-type input data; the dynamic fixed-point input data comprises a decimal-point bit and an integer bit, the integer bit being used to store the integer-part data and the fractional-part data of the dynamic fixed-point input data, and the data stored in the decimal-point bit being used to mark the position of the decimal point within the data stored in the integer bit; and the specified value corresponding to the exponent-type input data corresponds to the radix of the input data;
wherein performing the machine learning calculation on the input data comprises: performing a shift operation on the input data when it is determined, according to the calculation instruction, that the machine learning calculation to be performed on the input data is a multiplication, wherein the input data are of different data types.
13. The method of claim 12, further comprising:
converting the received data to be processed into first data expressed with a specified value as its base, and generating the exponent-type input data from the exponent of the first data.
14. The method of claim 12, further comprising:
converting the received data to be processed to obtain second data representing the numerical value of the integer part of the data to be processed and third data representing the numerical value of its fractional part, and generating the dynamic fixed-point input data from the second data, the third data, and the decimal point position of the data to be processed,
wherein the integer bits of the dynamic fixed-point input data are used to store the second data and the third data.
15. The method of claim 12, wherein the machine learning calculation comprises an artificial neural network operation, the input data comprises input neuron data and weight data, and the calculation result is output neuron data.
16. The method of claim 12, wherein obtaining the input data and the calculation instruction comprises:
parsing the calculation instruction to obtain a plurality of operation instructions;
and wherein performing the machine learning calculation on the input data according to the calculation instruction to obtain the calculation result of the calculation instruction comprises:
performing pre-processing on the input data, and transmitting the data and the operation instructions;
executing intermediate operations in parallel according to the transmitted data and operation instructions to obtain a plurality of intermediate results;
and performing subsequent processing on the plurality of intermediate results to obtain the calculation result of the calculation instruction.
17. The method of claim 12,
wherein the method further comprises: storing the input data;
wherein obtaining the input data and the calculation instruction comprises:
storing the calculation instruction;
parsing the calculation instruction to obtain a plurality of operation instructions of the calculation instruction;
and storing an instruction queue, wherein the instruction queue comprises a plurality of instructions to be executed arranged in execution order, and the plurality of instructions to be executed comprise the plurality of operation instructions;
and wherein obtaining the input data and the calculation instruction further comprises:
caching a first instruction to be executed when it is determined that the first instruction to be executed, among the plurality of instructions to be executed, has an association with a zeroth instruction to be executed that precedes it, and controlling execution of the first instruction to be executed after it is determined that the zeroth instruction to be executed has finished executing,
wherein the association between the first instruction to be executed and the zeroth instruction to be executed that precedes it comprises:
a first storage address interval storing data required by the first instruction to be executed and a zeroth storage address interval storing data required by the zeroth instruction to be executed have an overlapping region.
CN201811455827.6A 2018-11-30 2018-11-30 Operation method, device and related product Active CN111260070B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811455827.6A CN111260070B (en) 2018-11-30 2018-11-30 Operation method, device and related product

Publications (2)

Publication Number Publication Date
CN111260070A CN111260070A (en) 2020-06-09
CN111260070B true CN111260070B (en) 2022-11-29

Family

ID=70950281

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811455827.6A Active CN111260070B (en) 2018-11-30 2018-11-30 Operation method, device and related product

Country Status (1)

Country Link
CN (1) CN111260070B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115086232B (en) * 2022-06-13 2023-07-21 清华大学 Task processing and data stream generating method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7912890B2 (en) * 2006-05-11 2011-03-22 Intel Corporation Method and apparatus for decimal number multiplication using hardware for binary number operations

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN108205700A (en) * 2016-12-20 2018-06-26 上海寒武纪信息科技有限公司 Neural network computing device and method
CN107992329A (en) * 2017-07-20 2018-05-04 上海寒武纪信息科技有限公司 A kind of computational methods and Related product

Non-Patent Citations (1)

Title
Fixed- and Floating-Point Data Arithmetic and Its Optimization; Shi Xuelin et al.; Computer Science; 2005-06-23; Vol. 32, No. 6 (2005); full text *

Similar Documents

Publication Publication Date Title
CN109284823B (en) Arithmetic device and related product
CN109522052B (en) Computing device and board card
CN111381871B (en) Operation method, device and related product
CN111260070B (en) Operation method, device and related product
CN111258641B (en) Operation method, device and related product
CN111258541B (en) Multiplier, data processing method, chip and electronic equipment
CN111260046B (en) Operation method, device and related product
CN111047005A (en) Operation method, operation device, computer equipment and storage medium
CN112395003A (en) Operation method, device and related product
CN111047030A (en) Operation method, operation device, computer equipment and storage medium
CN109582277A (en) Data processing method, device and Related product
CN111061507A (en) Operation method, operation device, computer equipment and storage medium
CN111401536A (en) Operation method, device and related product
CN111382850A (en) Operation method, device and related product
CN112396186B (en) Execution method, execution device and related product
CN111026440B (en) Operation method, operation device, computer equipment and storage medium
CN113033788B (en) Data processor, method, device and chip
CN111382390B (en) Operation method, device and related product
CN111339060B (en) Operation method, device, computer equipment and storage medium
CN112394990A (en) Floating point to half precision floating point instruction processing device and method and related products
CN111045729A (en) Operation method, device and related product
CN111047027A (en) Operation method, device and related product
CN112394991A (en) Floating point to half precision floating point instruction processing device and method and related products
CN111047028A (en) Operation method, device and related product
CN112346781A (en) Instruction processing method and device and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant