CN109165721A - Data processing method, data processing equipment and electronic equipment - Google Patents

Data processing method, data processing equipment and electronic equipment Download PDF

Info

Publication number
CN109165721A
CN109165721A (application CN201810708160.XA)
Authority
CN
China
Prior art keywords
data
sequence
storage space
LSTM
input sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810708160.XA
Other languages
Chinese (zh)
Other versions
CN109165721B (en
Inventor
刘洪�
王俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing suneng Technology Co.,Ltd.
Original Assignee
Feng Feng Technology (Beijing) Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Feng Feng Technology (Beijing) Co Ltd
Priority to CN201810708160.XA
Publication of CN109165721A
Application granted
Publication of CN109165721B
Legal status: Active
Anticipated expiration

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The application provides a data processing method, a data processing device, and an electronic device. The data processing method includes: obtaining the N input data in an input sequence set; feeding the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, to obtain 2N intermediate data; distributing the 2N intermediate data to N storage spaces; and performing operations on the 2N intermediate data within the N storage spaces to obtain N output data, where N is the number of input data in the input sequence set.

Description

Data processing method, data processing equipment and electronic equipment
Technical field
This application relates to the field of data processing, and in particular to a data processing method, a data processing device, and an electronic device.
Background art
LSTM (Long Short-Term Memory), i.e. the long short-term memory network, is a network structure in deep learning that learns sequence features and is well suited to processing and predicting time series in which important events are separated by relatively long intervals and delays. LSTM already has many applications in science and technology: LSTM-based systems can learn tasks such as language translation, robot control, image analysis, document summarization, speech recognition, image recognition, handwriting recognition, and chatbot control.
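As context for the recurrence just described, the per-step update of a single LSTM unit can be sketched in pure Python. This is a toy, scalar-state illustration only: sharing one set of weights `w`, `u`, `b` across all gates is a simplifying assumption, whereas a real LSTM layer uses separate weight matrices and vector-valued states per gate.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w=0.5, u=0.5, b=0.0):
    # One LSTM time step with shared scalar weights for every gate
    # (a toy configuration; real cells use separate weight matrices).
    i = sigmoid(w * x + u * h + b)    # input gate
    f = sigmoid(w * x + u * h + b)    # forget gate
    o = sigmoid(w * x + u * h + b)    # output gate
    g = math.tanh(w * x + u * h + b)  # candidate cell state
    c = f * c + i * g                 # new cell state
    h = o * math.tanh(c)              # new hidden state
    return h, c

def lstm_run(xs):
    # Process a sequence, carrying hidden and cell state forward.
    h, c, hs = 0.0, 0.0, []
    for x in xs:
        h, c = lstm_step(x, h, c)
        hs.append(h)
    return hs

hs = lstm_run([1.0, -1.0, 0.5])
print(len(hs))  # 3 hidden states, one per input
```

Each call to `lstm_step` consumes one sequence element while the cell state carries information forward, which is what lets the network capture dependencies across long intervals.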
However, when data are fed into an existing bidirectional LSTM network, the input sequence generally has to be reversed, and additional storage space is needed to hold the reversed sequence. Moreover, the two output results produced for each input data must be stored separately, which wastes considerable storage space and reduces computation speed.
Summary of the invention
To solve the above problems, according to one aspect of the application, a data processing method, a data processing device, and an electronic device are proposed. The data processing method is applied to an embedded neural network system and includes: obtaining the N input data in an input sequence set; feeding the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, to obtain 2N intermediate data; distributing the 2N intermediate data to N storage spaces; and performing operations on the 2N intermediate data within the N storage spaces to obtain N output data, where N is the number of input data in the input sequence set.
In some embodiments, feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively comprises: feeding the N input data into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one data item is fed into the first LSTM system in the first order, one data item is fed into the second LSTM system in the second order, alternating one item at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, distributing the 2N intermediate data to the N storage spaces comprises: allocating the 2N intermediate data alternately to the corresponding N storage spaces in the order in which they are generated, so that each storage space holds two intermediate data.
In some embodiments, performing operations on the 2N intermediate data within the N storage spaces to obtain the N output data comprises: for each of the N input data in the input sequence set, performing an operation in the corresponding storage space on the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
According to another aspect of the application, a data processing device is proposed. The data processing device includes: a data acquisition unit configured to obtain the N input data in an input sequence set; a data processing unit configured to feed the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, and obtain 2N intermediate data; a data allocation unit configured to distribute the 2N intermediate data to N storage spaces; and a data computation unit configured to perform operations on the 2N intermediate data within the N storage spaces to obtain N output data, where N is the number of input data in the input sequence set.
In some embodiments, feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively comprises: feeding the N input data into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one data item is fed into the first LSTM system in the first order, one data item is fed into the second LSTM system in the second order, alternating one item at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, distributing the 2N intermediate data to the N storage spaces comprises: allocating the 2N intermediate data alternately to the corresponding N storage spaces in the order in which they are generated, so that each storage space holds two intermediate data.
In some embodiments, performing operations on the 2N intermediate data within the N storage spaces to obtain the N output data comprises: for each of the N input data in the input sequence set, performing an operation in the corresponding storage space on the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
According to yet another aspect of the application, an electronic device is proposed. The electronic device includes: at least one embedded neural network processor; and a memory communicatively connected to the at least one embedded neural network processor, where the memory stores instructions executable by the at least one embedded neural network processor which, when executed, cause the at least one embedded neural network processor to perform the data processing method described above.
With the data processing method, data processing device, and electronic device of this application, the storage space needed during computation can be reduced and the reversal operation on the input sequence can be eliminated, saving computation space and time and accelerating the computation.
Specific implementations of the application are disclosed in detail with reference to the following description and drawings, indicating the ways in which the principles of the application may be employed. It should be understood that the embodiments of the application are not thereby limited in scope; within the spirit and terms of the appended claims, the embodiments of the application include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features of other embodiments, or substituted for features of other embodiments.
It should be emphasized that the term "comprises/comprising" when used herein refers to the presence of a feature, integer, step, or component, but does not exclude the presence or addition of one or more other features, integers, steps, or components.
Brief description of the drawings
To explain the technical solutions in the embodiments of the application or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the application; those skilled in the art can obtain other drawings from them without creative effort.
One or more embodiments are illustrated by way of the corresponding drawings; these illustrations and drawings do not limit the embodiments. Elements with the same reference numerals in the drawings represent similar elements, and the drawings are not limiting, wherein:
Fig. 1 is a schematic diagram of a typical network structure in which an LSTM learns sequence features;
Fig. 2 is a schematic diagram of the overall structure of feeding an input sequence into one LSTM system;
Fig. 3 is a schematic diagram of the reversal operation performed on an input sequence;
Fig. 4 is a schematic diagram of the overall structure of feeding the reversed input sequence into another LSTM system;
Fig. 5 is a schematic diagram of the storage-space data structure used to store data in the traditional LSTM method;
Fig. 6 is an overall flowchart of the data processing method provided according to an embodiment of the application;
Fig. 7 is a schematic diagram of the concrete structure of the data processing method provided according to an embodiment of the application;
Fig. 8 is a schematic diagram of the storage-space data structure for data storage in the data processing method according to an embodiment of the application;
Fig. 9 is a schematic diagram of the overall structure of the data processing device provided according to an embodiment of the application; and
Fig. 10 is a schematic diagram of the overall structure of the electronic device provided according to an embodiment of the application.
Detailed description of the embodiments
To better understand the features and technical content of the embodiments of the disclosure, the implementation of the disclosed embodiments is described in detail below with reference to the drawings. The attached drawings are for reference and illustration only and are not intended to limit the disclosed embodiments. In the following technical description, numerous details are provided for ease of explanation so that the disclosed embodiments can be fully understood; however, one or more embodiments may still be implemented without these details. In other cases, to simplify the drawings, well-known structures and devices are shown in simplified form.
The principles and spirit of the application are explained in detail below with reference to several representative embodiments of the application.
Fig. 1 is a schematic diagram of a typical network structure in which an LSTM learns sequence features. As shown in Fig. 1, the input nodes X_{t-1}, X_t, X_{t+1} form the input sequence X_{t-1} X_t X_{t+1}, which is fed into the LSTM network system; after a series of computations, the corresponding output nodes h_{t-1}, h_t, h_{t+1} form the output sequence h_{t-1} h_t h_{t+1}.
Fig. 2 to Fig. 4 are schematic diagrams of the traditional bidirectional LSTM implementation.
In practical applications of the traditional bidirectional LSTM implementation, as shown in Figs. 2 to 4, the same input sequence is usually first fed in the forward direction into the LSTM1 system to obtain one intermediate result sequence; the input sequence is then reversed, and the reversed sequence is fed into the LSTM2 system to obtain another intermediate result sequence; finally, corresponding data in the two intermediate result sequences are added to obtain the output sequence, which captures sequence features better. As shown in Fig. 2, suppose the input sequence is X0 X1 X2 X3 X4 X5 X6 (each letter represents a feature vector). In the forward direction, X0 X1 X2 X3 X4 X5 X6 is fed directly into the LSTM1 network, yielding the intermediate result sequence h0 h1 h2 h3 h4 h5 h6. In the reverse direction, the input sequence must first be reversed; as shown in Fig. 3, this yields the reversed sequence X6 X5 X4 X3 X2 X1 X0, which is then fed into the LSTM2 network, as shown in Fig. 4, yielding another intermediate result sequence g0 g1 g2 g3 g4 g5 g6. Finally, the two intermediate result sequences are added element by element to obtain the output sequence [h0+g6] [h1+g5] [h2+g4] [h3+g3] [h4+g2] [h5+g1] [h6+g0].
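The conventional flow just described can be sketched as follows. Here `fwd_step` and `bwd_step` are hypothetical stand-ins for the LSTM1 and LSTM2 cell updates (simple arithmetic recurrences, not real LSTM cells, chosen only to make the data movement concrete): the reversed copy of the input and both full intermediate sequences are all materialized before the final element-wise addition.

```python
def fwd_step(x, state):   # stand-in for the LSTM1 cell update (assumed)
    return x + state, x + state

def bwd_step(x, state):   # stand-in for the LSTM2 cell update (assumed)
    return 2 * x + state, 2 * x + state

def bidirectional_baseline(xs):
    reversed_xs = list(reversed(xs))   # extra stored copy of the input
    h, state = [], 0
    for x in xs:                       # forward pass through LSTM1
        out, state = fwd_step(x, state)
        h.append(out)
    g, state = [], 0
    for x in reversed_xs:              # backward pass through LSTM2
        out, state = bwd_step(x, state)
        g.append(out)
    # h[i] pairs with g[N-1-i]: both stem from position i of the input.
    return [hi + g[len(xs) - 1 - i] for i, hi in enumerate(h)]

print(bidirectional_baseline([1, 2, 3]))  # → [13, 13, 12]
```

Note that `reversed_xs`, `h`, and `g` are all kept in full, which is exactly the storage overhead the method of this application removes.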
However, in the above computation the input sequence must be reversed, and additional storage space is needed to hold the reversed sequence X6 X5 X4 X3 X2 X1 X0; that is, the input sequence must be stored twice, once forward and once reversed. As shown in Fig. 5, the forward input sequence X0 X1 X2 X3 X4 X5 X6 must be stored, and the reversed sequence X6 X5 X4 X3 X2 X1 X0 obtained after reversing the forward input sequence must also be stored. Moreover, the two corresponding intermediate result sequences h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 are stored separately, which wastes considerable storage space and reduces computation speed.
To solve the above problems, embodiments of the application provide a data processing method applied to an embedded neural network system, which includes: obtaining the N input data in an input sequence set; feeding the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, to obtain 2N intermediate data; distributing the 2N intermediate data to N storage spaces; and performing operations on the 2N intermediate data within the N storage spaces to obtain N output data, where N is the number of input data in the input sequence set.
In some embodiments, feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively comprises: feeding the N input data into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one data item is fed into the first LSTM system in the first order, one data item is fed into the second LSTM system in the second order, alternating one item at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, distributing the 2N intermediate data to the N storage spaces comprises: allocating the 2N intermediate data alternately to the corresponding N storage spaces in the order in which they are generated, so that each storage space holds two intermediate data.
In some embodiments, performing operations on the 2N intermediate data within the N storage spaces to obtain the N output data comprises: for each of the N input data in the input sequence set, performing an operation in the corresponding storage space on the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
With the above data processing method, the storage space needed during computation can be reduced and the reversal of the input sequence can be eliminated, saving computation space and time and accelerating the computation.
The data processing method of the embodiments of the application is described in detail below with reference to Figs. 6 to 8.
Fig. 6 is an overall flowchart of the data processing method provided according to an embodiment of the application. As shown in the flowchart of Fig. 6, step S61 is performed first: obtaining the N input data in the input sequence set, where N is the number of input data in the input sequence set and is an integer greater than or equal to 1. In a preferred embodiment of the application, each input data may be a feature vector.
After the N input data in the input sequence set are obtained, as shown in Fig. 6, step S62 is performed: feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system, respectively, to obtain 2N intermediate data. Specifically, the N input data in the input sequence set are fed into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one data item is fed into the first LSTM system in the first order, one data item is fed into the second LSTM system in the second order, alternating one item at a time until all N input data have been fed in both orders.
Fig. 7 is a schematic diagram of the concrete structure of the data processing method provided according to an embodiment of the application. As shown in Fig. 7, the input data in the input sequence set are X0 X1 X2 X3 X4 X5 X6, so in this embodiment N is 7; this value of the quantity N of input data in the input sequence set is only illustrative, and those skilled in the art may set N to other values according to the actual situation. After the input data X0 X1 X2 X3 X4 X5 X6 are obtained, they are fed into the first LSTM system LSTM1 and the second LSTM system LSTM2 as follows: first, X0 is fed into LSTM1 in the first order, producing the corresponding output h0; then X6 is fed into LSTM2 in the second order, producing g0; then X1 into LSTM1, producing h1; then X5 into LSTM2, producing g1; then X2 into LSTM1, producing h2; then X4 into LSTM2, producing g2; then X3 into LSTM1, producing h3; then X3 into LSTM2, producing g3; then X4 into LSTM1, producing h4; then X2 into LSTM2, producing g4; then X5 into LSTM1, producing h5; then X1 into LSTM2, producing g5; then X6 into LSTM1, producing h6; and finally X0 is fed into LSTM2 in the second order, producing the corresponding output g6. In this embodiment, the first order feeds the data X0 X1 X2 X3 X4 X5 X6 of the input sequence set from left to right, and the second order feeds them from right to left; the second order is the reverse of the first order. The definitions of the first order and the second order in this application are not limited thereto, however: those skilled in the art may define them differently according to the situation, for example defining the first order as feeding the input sequence set from right to left and the second order as feeding it from left to right.
With the above scheme for feeding the input data of the input sequence set, the N input data are fed into the first LSTM system in the first order and into the second LSTM system in the second order, the second order being the reverse of the first, alternating one data item at a time until all N input data have been fed in both orders. This feeding scheme in the embodiments of the application eliminates the reversal of the input data in the input sequence set, removing the reversal step of Fig. 3, which reduces computation time and improves computational efficiency.
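Under the assumption that "first order" means left to right and "second order" means right to left (as in the embodiment above), the alternating feed order can be sketched as a small helper that emits (target system, input index) pairs without ever building a reversed copy of the sequence:

```python
# Interleaved feeding: alternate single elements between LSTM1 (index
# ascending) and LSTM2 (index descending); only index arithmetic is
# needed, never a materialized reversed sequence.
def interleaved_feed_order(n):
    order = []
    for i in range(n):
        order.append(("LSTM1", i))           # first order: X_i
        order.append(("LSTM2", n - 1 - i))   # second order: X_{n-1-i}
    return order

for target, idx in interleaved_feed_order(3):
    print(target, idx)
# LSTM1 0, LSTM2 2, LSTM1 1, LSTM2 1, LSTM1 2, LSTM2 0
```

For n = 7 this reproduces the X0, X6, X1, X5, ... feeding sequence of the embodiment above.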
As shown in Fig. 7, after the 7 (that is, N = 7) input data X0 X1 X2 X3 X4 X5 X6 in the input sequence set are fed into LSTM1 and LSTM2 in the manner of the above embodiment, 2N intermediate data are obtained; in this embodiment these are the 14 intermediate data h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6.
Step S63 is then performed: distributing the 2N intermediate data to the N storage spaces. Distributing the 2N intermediate data to the N storage spaces comprises allocating them alternately to the corresponding N storage spaces in the order in which they are generated, so that each storage space holds two intermediate data.
The data storage scheme of the data processing method in the embodiments of the application is described in detail below with reference to Fig. 8, a schematic diagram of the storage-space data structure for data storage in the data processing method according to an embodiment of the application. As shown in Fig. 8, with the data processing method of this embodiment the input data in the input sequence set need to be stored only once: only the forward input sequence X0 X1 X2 X3 X4 X5 X6 is stored, in storage space 1 through storage space 7. Because the input data need not be reversed, the storage space for the reversed sequence X6 X5 X4 X3 X2 X1 X0 of the reversed input data is eliminated. Meanwhile, according to the data scheme of this embodiment, after the first intermediate data h0 is generated in the first order, it goes directly into its corresponding storage space 8; after the second intermediate data g0 is generated in the second order, it goes directly into its corresponding storage space 14, as shown in Fig. 8; and so on, the 14 intermediate data h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 are allocated alternately to the corresponding 7 storage spaces in the order in which they are generated. That is, as shown in Fig. 8, in the embodiment of the application the 14 intermediate data are assigned to storage spaces 8 through 14, with each storage space holding two intermediate data.
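Numbering the N intermediate storage spaces 0 through N-1 for illustration (Fig. 8 labels them 8 through 14), the allocation rule just described can be sketched as an index computation. The formula is an inference from the stated examples (h0 into space 8, g0 into space 14): the k-th generated intermediate is h_{k//2} for even k and g_{k//2} for odd k, and h_i lands in slot i while g_i lands in slot N-1-i, so each slot ends up holding exactly the pair h_i, g_{N-1-i} that is later combined into output i.

```python
# Slot index for the k-th intermediate in generation order
# (k = 0 .. 2N-1): even k -> h_{k//2} -> slot k//2,
# odd k -> g_{k//2} -> slot N-1-k//2.
def slot_for(k, n):
    i = k // 2
    return i if k % 2 == 0 else n - 1 - i

n = 7
slots = {}
for k in range(2 * n):
    slots.setdefault(slot_for(k, n), []).append(k)

# Every one of the N slots receives exactly two intermediates.
print(all(len(v) == 2 for v in slots.values()))  # True
```

With the figure's labeling, slot 0 corresponds to storage space 8; the mapping itself is unchanged.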
Finally, step S64 is performed: performing operations on the 2N intermediate data within the N storage spaces to obtain the N output data. Specifically, for each of the N input data in the input sequence set, the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system are combined by an operation in the corresponding storage space, yielding the N output data.
As shown in Fig. 8, in the embodiments of the application, after the 14 intermediate data h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 enter the 7 storage spaces in the order in which they are generated, the two intermediate results obtained for each of the 7 input data X0 X1 X2 X3 X4 X5 X6 in the input sequence set after being fed into the LSTM1 system and the LSTM2 system are added in the corresponding storage space, yielding the N output data. As shown in Fig. 8: for X0, the intermediate data obtained by feeding it into LSTM1 in the first order is h0 and the intermediate data obtained by feeding it into LSTM2 in the second order is g6; h0 and g6 are added in storage space 8 to give the first output data [h0+g6]. For X1 the intermediate data are h1 and g5, added in storage space 9 to give the second output data [h1+g5]. For X2 they are h2 and g4, added in storage space 10 to give the third output data [h2+g4]. For X3 they are h3 and g3, added in storage space 11 to give the fourth output data [h3+g3]. For X4 they are h4 and g2, added in storage space 12 to give the fifth output data [h4+g2]. For X5 they are h5 and g1, added in storage space 13 to give the sixth output data [h5+g1]. For X6 they are h6 and g0, added in storage space 14 to give the seventh output data [h6+g0]. The operation performed in the storage spaces in this application is not limited to addition, however; those skilled in the art may perform operations other than addition on the intermediate data in the storage spaces according to the actual situation. With the storage scheme for intermediate data in the embodiments of the application, and by performing the operation on the intermediate data directly in the storage spaces, there is no need, as in Fig. 5, to store in memory the two intermediate result sequences h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 obtained by passing the input sequence through the two different LSTM systems LSTM1 and LSTM2, which further saves storage space.
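Putting steps S62 through S64 together, a minimal end-to-end sketch follows, again using hypothetical arithmetic stand-ins for the two LSTM cell updates (the patent does not specify the cell internals). Each intermediate result is accumulated into its slot as soon as it is produced, so neither the reversed input nor the two full intermediate sequences are ever stored:

```python
def fwd_step(x, state):   # stand-in for the LSTM1 cell update (assumed)
    return x + state, x + state

def bwd_step(x, state):   # stand-in for the LSTM2 cell update (assumed)
    return 2 * x + state, 2 * x + state

def bidirectional_interleaved(xs):
    n = len(xs)
    slots = [0] * n      # N storage spaces, accumulated in place
    s1 = s2 = 0          # recurrent state of LSTM1 / LSTM2
    for i in range(n):
        out, s1 = fwd_step(xs[i], s1)          # h_i -> slot i
        slots[i] += out
        out, s2 = bwd_step(xs[n - 1 - i], s2)  # g_i -> slot n-1-i
        slots[n - 1 - i] += out
    return slots         # slot i holds h_i + g_{n-1-i}

print(bidirectional_interleaved([1, 2, 3]))  # → [13, 13, 12]
```

With the same stand-in cells, this produces exactly the output of the conventional reverse-and-add flow while keeping only the input once plus N accumulation slots.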
According to the data processing method disclosed in the application above, the storage space needed during computation can be reduced and the reversal of the input sequence can be eliminated, saving computation space and time and accelerating the computation.
It should be noted that although the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in that particular order, or that all of the operations shown must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Having described the data processing method of the embodiments of the application, the data processing device of the exemplary embodiments of the invention is introduced next with reference to Fig. 9. The implementation of the device may refer to the implementation of the method above; repeated details are not described again. The terms "module" and "unit" used below may refer to software and/or hardware that implements a predetermined function. Although the modules described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a schematic diagram of the overall structure of the data processing device provided according to an embodiment of the application. As shown in Fig. 9, the data processing device 900 in the embodiments of the application includes: a data acquisition unit 901 configured to obtain the N input data in an input sequence set; a data processing unit 902 configured to feed the N input data in the input sequence set into the first LSTM system and the second LSTM system, respectively, and obtain 2N intermediate data; a data allocation unit 903 configured to distribute the 2N intermediate data to N storage spaces; and a data computation unit 904 configured to perform operations on the 2N intermediate data within the N storage spaces to obtain N output data, where N is the number of input data in the input sequence set.
In some embodiments, the data processing unit 902 being configured to feed the N input data in the input sequence set into the first LSTM system and the second LSTM system, respectively, includes: feeding the N input data in the input sequence set into the first LSTM system according to a first order, and into the second LSTM system according to a second order, where the second order is the reverse of the first order; and after each piece of data is fed into the first LSTM system according to the first order, one piece of data is fed into the second LSTM system according to the second order, alternating one piece at a time, until all N input data have been fed in according to the first order and the second order, respectively.
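The alternating feed order described in this embodiment can be illustrated with a minimal sketch. The functions `lstm1_step` and `lstm2_step` below are hypothetical stand-ins for one step of the first (forward-order) and second (reverse-order) LSTM systems; they are not part of the patent's actual NPU implementation.

```python
def alternating_feed(inputs, lstm1_step, lstm2_step):
    """Feed inputs to LSTM 1 in the first order and LSTM 2 in the reverse
    order, alternating one piece of data at a time, and collect the 2N
    intermediate data in their order of generation."""
    n = len(inputs)
    intermediates = []
    for i in range(n):
        intermediates.append(lstm1_step(inputs[i]))          # first order
        intermediates.append(lstm2_step(inputs[n - 1 - i]))  # reverse order
    return intermediates
```

For N inputs this produces exactly 2N intermediate data, interleaved as forward, backward, forward, backward, and so on.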
In some embodiments, the data allocation unit 903 being configured to allocate the 2N intermediate data to N memory spaces includes: alternately allocating the 2N intermediate data to the corresponding N memory spaces according to their order of generation, where two intermediate data are stored in each memory space.
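One way this alternating allocation can be read, sketched below as an assumption rather than the patent's definitive scheme: the k-th generated intermediate datum belongs to input i = k // 2 when it comes from the first (forward) LSTM and to input n - 1 - i when it comes from the second (reverse) LSTM, so the two results for the same input land in the same memory space.

```python
def allocate(intermediates, n):
    """Allocate 2N intermediate data (in generation order) to N memory
    spaces, two per space, so that each space ends up holding the forward
    and backward results for the same input datum."""
    spaces = [[] for _ in range(n)]
    for k, item in enumerate(intermediates):
        i = k // 2
        if k % 2 == 0:
            spaces[i].append(item)          # forward result for input i
        else:
            spaces[n - 1 - i].append(item)  # backward result for input n-1-i
    return spaces
```

After allocation, every one of the N spaces contains exactly two intermediate data, as the embodiment requires.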
In some embodiments, the data computation unit 904 being configured to perform operations on the 2N intermediate data in the N memory spaces to obtain N output data includes: for each of the N input data in the input sequence set, performing an operation in the corresponding memory space on the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system, to obtain the N output data.
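Putting the three steps together, an end-to-end sketch under stated assumptions: `lstm1_step`, `lstm2_step`, and the per-space combining operation `combine` are placeholders, since the text does not specify the concrete operation (elementwise sum and concatenation are both common choices for bidirectional LSTMs).

```python
def bidirectional_pipeline(inputs, lstm1_step, lstm2_step, combine):
    """N inputs -> 2N intermediate data -> N memory spaces (two entries
    each) -> N output data, without materializing a reversed copy of the
    input sequence."""
    n = len(inputs)
    spaces = [[None, None] for _ in range(n)]  # N spaces, two slots each
    for i in range(n):
        x_fwd = inputs[i]            # first order
        x_bwd = inputs[n - 1 - i]    # reverse order, read in place
        spaces[i][0] = lstm1_step(x_fwd)
        spaces[n - 1 - i][1] = lstm2_step(x_bwd)
    return [combine(f, b) for f, b in spaces]
```

Note that the reverse-order pass indexes the original sequence directly (`inputs[n - 1 - i]`), which reflects the stated advantage of omitting an explicit reversal of the input sequence.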
According to the data processing apparatus disclosed above in the present application, the memory space used in the computation process can be reduced, and the reversal of the input sequence can be omitted, thereby saving computation space and time and accelerating the computation.
An embodiment of the present disclosure further provides an electronic device, whose structure is shown in Fig. 10. The electronic device includes: at least one embedded neural network processor (NPU) 300 (Fig. 10 takes one NPU 300 as an example); and a memory 301; it may further include a communication interface 302 and a bus 303. The NPU 300, the communication interface 302, and the memory 301 can communicate with one another via the bus 303. The communication interface 302 may be used for information transmission. The NPU 300 may invoke logic instructions in the memory 301 to execute the data processing method of the above embodiments.
In addition, the logic instructions in the memory 301 described above may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 301 may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By running the software programs, instructions, and modules stored in the memory 301, the NPU 300 executes functional applications and performs data processing, that is, implements the data processing method in the above method embodiments.
The memory 301 may include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 301 may include high-speed random access memory and may also include non-volatile memory.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including: a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or various other media capable of storing program code; it may also be a transitory storage medium.

As used in the present application, although the terms "first", "second", and so on may be used to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, without changing the meaning of the description, a first element could be called a second element, and likewise a second element could be called a first element, provided that all occurrences of "first element" are consistently renamed and all occurrences of "second element" are consistently renamed. The first element and the second element are both elements, but they may not be the same element.
The words used herein are used only to describe the embodiments and are not used to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this specification refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, when used in this application, the terms "comprise(s)" and/or "comprising" specify the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The aspects, embodiments, implementations, or features of the described embodiments can be used alone or in any combination. The aspects of the described embodiments can be implemented by software, hardware, or a combination of software and hardware. The described embodiments can also be embodied by a computer-readable medium storing computer-readable code, which includes instructions executable by at least one computing device. The computer-readable medium can be associated with any data storage device capable of storing data that can be read by a computer system. Examples of the computer-readable medium may include read-only memory, random access memory, CD-ROM, HDD, DVD, magnetic tape, optical data storage devices, and the like. The computer-readable medium can also be distributed over network-coupled computer systems, so that the computer-readable code is stored and executed in a distributed fashion.
The above technical description may refer to the accompanying drawings, which form a part of this application and show, by way of illustration, implementations in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable those skilled in the art to implement them, these embodiments are non-limiting; other embodiments may be used, and changes may be made without departing from the scope of the described embodiments. For example, the order of operations described in a flowchart is non-limiting, and thus the order of two or more operations illustrated in and described in accordance with a flowchart may be changed according to several embodiments. As another example, in several embodiments, one or more operations illustrated in and described in accordance with a flowchart are optional or may be deleted. In addition, certain steps or functions may be added to the disclosed embodiments, or the order of two or more steps may be permuted. All such variations are considered to be encompassed by the disclosed embodiments and the claims.
In addition, terminology is used in the above technical description to provide a thorough understanding of the described embodiments. However, unnecessarily detailed particulars are not required in order to implement the described embodiments. Accordingly, the foregoing description of the embodiments is presented for purposes of illustration and description. The embodiments presented in the foregoing description, and the examples disclosed in accordance with these embodiments, are provided to add context and to aid in understanding the described embodiments. The foregoing description is not intended to be exhaustive or to limit the described embodiments to the precise forms disclosed. Several modifications, alternative applications, and variations are possible in light of the above teachings. In some cases, well-known processing steps have not been described in detail so as not to unnecessarily obscure the described embodiments.

Claims (9)

1. A data processing method, applied to an embedded neural network system, wherein the data processing method comprises:
obtaining N input data in an input sequence set;
feeding the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, and obtaining 2N intermediate data;
allocating the 2N intermediate data to N memory spaces; and
performing operations on the 2N intermediate data in the N memory spaces to obtain N output data; wherein
N is the number of the input data in the input sequence set.
2. The data processing method according to claim 1, wherein feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system, respectively, comprises:
feeding the N input data in the input sequence set into the first LSTM system according to a first order, and into the second LSTM system according to a second order, wherein the second order is the reverse of the first order, and after each piece of data is fed into the first LSTM system according to the first order, one piece of data is fed into the second LSTM system according to the second order, alternating one piece at a time, until all N input data have been fed in according to the first order and the second order, respectively.
3. The data processing method according to claim 1, wherein allocating the 2N intermediate data to N memory spaces comprises:
alternately allocating the 2N intermediate data to the corresponding N memory spaces according to their order of generation; wherein
two intermediate data are stored in each memory space.
4. The data processing method according to claim 1, wherein performing operations on the 2N intermediate data in the N memory spaces to obtain N output data comprises:
for each of the N input data in the input sequence set, performing an operation in the corresponding memory space on the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system, to obtain the N output data.
5. A data processing apparatus, wherein the data processing apparatus comprises:
a data acquisition unit, configured to obtain N input data in an input sequence set;
a data processing unit, configured to feed the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, and obtain 2N intermediate data;
a data allocation unit, configured to allocate the 2N intermediate data to N memory spaces; and
a data computation unit, configured to perform operations on the 2N intermediate data in the N memory spaces to obtain N output data; wherein
N is the number of the input data in the input sequence set.
6. The data processing apparatus according to claim 5, wherein feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system, respectively, comprises:
feeding the N input data in the input sequence set into the first LSTM system according to a first order, and into the second LSTM system according to a second order, wherein the second order is the reverse of the first order, and after each piece of data is fed into the first LSTM system according to the first order, one piece of data is fed into the second LSTM system according to the second order, alternating one piece at a time, until all N input data have been fed in according to the first order and the second order, respectively.
7. The data processing apparatus according to claim 5, wherein allocating the 2N intermediate data to N memory spaces comprises:
alternately allocating the 2N intermediate data to the corresponding N memory spaces according to their order of generation; wherein
two intermediate data are stored in each memory space.
8. The data processing apparatus according to claim 5, wherein performing operations on the 2N intermediate data in the N memory spaces to obtain N output data comprises:
for each of the N input data in the input sequence set, performing an operation in the corresponding memory space on the two intermediate results obtained after that input data is fed into the first LSTM system and the second LSTM system, to obtain the N output data.
9. An electronic device, comprising:
at least one embedded neural network processor; and
a memory communicatively connected to the at least one embedded neural network processor; wherein
the memory stores instructions executable by the at least one embedded neural network processor, and when the instructions are executed by the at least one embedded neural network processor, the at least one embedded neural network processor executes the data processing method of any one of claims 1-4.
CN201810708160.XA 2018-07-02 2018-07-02 Data processing method, data processing device and electronic equipment Active CN109165721B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810708160.XA CN109165721B (en) 2018-07-02 2018-07-02 Data processing method, data processing device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810708160.XA CN109165721B (en) 2018-07-02 2018-07-02 Data processing method, data processing device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109165721A true CN109165721A (en) 2019-01-08
CN109165721B CN109165721B (en) 2022-03-01

Family

ID=64897532

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810708160.XA Active CN109165721B (en) 2018-07-02 2018-07-02 Data processing method, data processing device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109165721B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150356075A1 (en) * 2014-06-06 2015-12-10 Google Inc. Generating representations of input sequences using neural networks
US20170186420A1 (en) * 2013-12-10 2017-06-29 Google Inc. Processing acoustic sequences using long short-term memory (lstm) neural networks that include recurrent projection layers
CN107832476A (en) * 2017-12-01 2018-03-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Query sequence understanding method, apparatus, device and storage medium
US20180174576A1 (en) * 2016-12-21 2018-06-21 Google Llc Acoustic-to-word neural network speech recognizer

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170186420A1 (en) * 2013-12-10 2017-06-29 Google Inc. Processing acoustic sequences using long short-term memory (lstm) neural networks that include recurrent projection layers
US20150356075A1 (en) * 2014-06-06 2015-12-10 Google Inc. Generating representations of input sequences using neural networks
US20180174576A1 (en) * 2016-12-21 2018-06-21 Google Llc Acoustic-to-word neural network speech recognizer
CN107832476A (en) * 2017-12-01 2018-03-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Query sequence understanding method, apparatus, device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIKE SCHUSTER ET AL.: "Bidirectional Recurrent Neural Networks", IEEE TRANSACTIONS ON SIGNAL PROCESSING *

Also Published As

Publication number Publication date
CN109165721B (en) 2022-03-01

Similar Documents

Publication Publication Date Title
CN109919315B (en) Forward reasoning method, device, equipment and storage medium of neural network
Zhang et al. Finite-time observers for multi-agent systems without velocity measurements and with input saturations
DE112016002298T5 (en) PREVIEW OF WEIGHTS FOR USE IN A NEURONAL NETWORK PROCESSOR
DE112016005536T5 (en) DETERMINING THE ORDER OF A CONSTRUCTION OF A NEURONAL NETWORK
DE112016002296T5 (en) VECTOR CONTROL UNIT IN A NEURONAL NETWORK PROCESSOR
DE112020003128T5 (en) DILATED CONVOLUTION WITH SYSTOLIC ARRAY
CN104317749B (en) Information write-in method and device
EP4209902A1 (en) Memory allocation method, related device, and computer readable storage medium
Ma et al. Synchronization of continuous-time Markovian jumping singular complex networks with mixed mode-dependent time delays
US20160198000A1 (en) Methods and apparatus for iterative nonspecific distributed runtime architecture and its application to cloud intelligence
CN108304925B (en) Pooling computing device and method
CN108304926B (en) Pooling computing device and method suitable for neural network
CN110430444A (en) A kind of video stream processing method and system
CN106339802A (en) Task allocation method, task allocation device and electronic equipment
CN106326339A (en) Task allocating method and device
CN115511086B (en) Distributed reasoning deployment system for oversized model
CN107832151A (en) A kind of cpu resource distribution method, device and equipment
EP3502974A1 (en) Method for realizing a neural network
CN112799852B (en) Multi-dimensional SBP distributed signature decision system and method for logic node
Wu et al. Pinning adaptive synchronization of general time-varying delayed and multi-linked networks with variable structures
CN109214511A (en) Data processing method, data processing equipment and electronic equipment
CN109165721A (en) Data processing method, data processing equipment and electronic equipment
CN109783155A (en) Service Component management method, device, electronic equipment and storage medium
DE112022000723T5 (en) BRANCHING PROCESS FOR A CIRCUIT OF A NEURONAL PROCESSOR
Bahalkeh et al. Efficient system matrix calculation for manufacturing systems

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20190422

Address after: 100192 2nd Floor, Building 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing

Applicant after: BEIJING BITMAIN TECHNOLOGY CO., LTD.

Address before: 100192 Building 3, Building 3, No. 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing

Applicant before: Feng Feng Technology (Beijing) Co., Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210816

Address after: 100192 Building No. 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing, No. 301

Applicant after: SUANFENG TECHNOLOGY (BEIJING) Co.,Ltd.

Address before: 100192 2nd Floor, Building 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing

Applicant before: BITMAIN TECHNOLOGIES Inc.

GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220225

Address after: 100176 901, floor 9, building 8, courtyard 8, KEGU 1st Street, Beijing Economic and Technological Development Zone, Daxing District, Beijing (Yizhuang group, high-end industrial area of Beijing Pilot Free Trade Zone)

Patentee after: Beijing suneng Technology Co.,Ltd.

Address before: 100192 Building No. 25, No. 1 Hospital, Baosheng South Road, Haidian District, Beijing, No. 301

Patentee before: SUANFENG TECHNOLOGY (BEIJING) CO.,LTD.