Summary of the invention
To solve the above-mentioned problems, one aspect of the present application proposes a data processing method, a data processing apparatus, and an electronic device. The data processing method is applied to an embedded neural network system and comprises: obtaining N input data in an input sequence set; feeding the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, to obtain 2N intermediate data; allocating the 2N intermediate data to N storage spaces; and performing an operation on the 2N intermediate data in the N storage spaces to obtain N output data; where N is the number of input data in the input sequence set.
In some embodiments, feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively comprises: feeding the N input data in the input sequence set into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one datum is fed into the first LSTM system in the first order, one datum is fed into the second LSTM system in the second order, alternating one datum at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, allocating the 2N intermediate data to the N storage spaces comprises: alternately allocating the 2N intermediate data to the corresponding N storage spaces in the order in which they are generated, such that each storage space holds two intermediate data.
In some embodiments, performing an operation on the 2N intermediate data in the N storage spaces to obtain N output data comprises: for each of the N input data in the input sequence set, performing an operation, in the corresponding storage space, on the two intermediate results obtained after that input datum is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
According to another aspect of the present application, a data processing apparatus is proposed. The data processing apparatus comprises: a data acquisition unit configured to obtain N input data in an input sequence set; a data processing unit configured to feed the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, and obtain 2N intermediate data; a data allocation unit configured to allocate the 2N intermediate data to N storage spaces; and a data computation unit configured to perform an operation on the 2N intermediate data in the N storage spaces to obtain N output data; where N is the number of input data in the input sequence set.
In some embodiments, feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively comprises: feeding the N input data in the input sequence set into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one datum is fed into the first LSTM system in the first order, one datum is fed into the second LSTM system in the second order, alternating one datum at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, allocating the 2N intermediate data to the N storage spaces comprises: alternately allocating the 2N intermediate data to the corresponding N storage spaces in the order in which they are generated, such that each storage space holds two intermediate data.
In some embodiments, performing an operation on the 2N intermediate data in the N storage spaces to obtain N output data comprises: for each of the N input data in the input sequence set, performing an operation, in the corresponding storage space, on the two intermediate results obtained after that input datum is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
According to yet another aspect of the present application, an electronic device is proposed. The electronic device comprises: at least one embedded neural network processor; and a memory communicatively connected to the at least one embedded neural network processor; where the memory stores instructions executable by the at least one embedded neural network processor, and the instructions, when executed by the at least one embedded neural network processor, cause the at least one embedded neural network processor to perform the data processing method described above.
With the data processing method, data processing apparatus, and electronic device of the present application, the storage space required during computation can be reduced and the reversal operation on the input sequence can be eliminated, thereby saving computation space and time and accelerating computation.
Specific implementations of the present application are described in detail with reference to the following description and drawings, which make clear the manner in which the principles of the application may be employed. It should be understood that the embodiments of the present application are not thereby limited in scope; within the spirit and scope of the appended claims, the embodiments of the present application include many changes, modifications, and equivalents.
Features described and/or illustrated for one embodiment may be used in the same or a similar way in one or more other embodiments, combined with features of other embodiments, or substituted for features of other embodiments.
It should be emphasized that the term "comprises/comprising", when used herein, refers to the presence of features, integers, steps, or components, but does not preclude the presence or addition of one or more other features, integers, steps, or components.
Specific embodiments
In order to more fully understand the features and technical content of the embodiments of the present disclosure, implementations of the embodiments are described in detail below with reference to the accompanying drawings, which are provided for reference and illustration only and are not intended to limit the embodiments of the present disclosure. In the following technical description, for ease of explanation, numerous details are provided so that the disclosed embodiments may be fully understood; however, one or more embodiments may still be implemented without these details. In other cases, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
The principles and spirit of the present application are explained in detail below with reference to several representative embodiments of the application.
Fig. 1 is a schematic diagram of a typical network architecture in which an LSTM learns sequence features. As shown in Fig. 1, input nodes X_{t-1}, X_t, X_{t+1} form an input sequence X_{t-1} X_t X_{t+1} that is fed into the LSTM network system; after a series of computations, the corresponding output nodes h_{t-1}, h_t, h_{t+1} form the output sequence h_{t-1} h_t h_{t+1}.
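The sequence-in, sequence-out flow of Fig. 1 can be illustrated with a minimal Python sketch. The `lstm_step` function here is a hypothetical toy stand-in for a real LSTM cell (it implements none of the gate equations); its only purpose is to make the recurrence over the hidden state visible.

```python
# Minimal sketch of unrolling an LSTM over a sequence (cf. Fig. 1).
# `lstm_step` is a toy stand-in for a real LSTM cell: it just mixes
# the input with the running state so the recurrence is visible.

def lstm_step(x, state):
    new_state = 0.5 * state + x      # toy recurrence, not real gate math
    return new_state, new_state      # (output h_t, next state)

def run_lstm(sequence):
    state, outputs = 0.0, []
    for x in sequence:               # X_{t-1}, X_t, X_{t+1}, ...
        h, state = lstm_step(x, state)
        outputs.append(h)            # h_{t-1}, h_t, h_{t+1}, ...
    return outputs

print(run_lstm([1.0, 2.0, 3.0]))     # → [1.0, 2.5, 4.25]
```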
Fig. 2 to Fig. 4 are schematic diagrams of a conventional bidirectional LSTM implementation.
In practical applications, the conventional bidirectional LSTM implementation, as shown in Figs. 2 to 4, usually first feeds the input sequence into the LSTM1 system in the forward direction to obtain one intermediate result sequence, then reverses the input sequence and feeds the reversed sequence into the LSTM2 system to obtain another intermediate result sequence, and finally adds the corresponding data of the two intermediate result sequences to obtain an output sequence that captures the sequence features better. As shown in Fig. 2, assume the input sequence is X0 X1 X2 X3 X4 X5 X6 (each symbol representing a feature vector). In the forward direction, X0 X1 X2 X3 X4 X5 X6 is fed directly into the LSTM1 network, yielding the intermediate result sequence h0 h1 h2 h3 h4 h5 h6. In the reverse direction, the input sequence must first be reversed; as shown in Fig. 3, this yields the reversed sequence X6 X5 X4 X3 X2 X1 X0, which is then fed into the LSTM2 network, as shown in Fig. 4, yielding the other intermediate result sequence g0 g1 g2 g3 g4 g5 g6. Finally, the two intermediate result sequences are added to obtain the output sequence [h0+g6] [h1+g5] [h2+g4] [h3+g3] [h4+g2] [h5+g1] [h6+g0].
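The conventional flow of Figs. 2 to 4 can be sketched as follows, using the same toy stand-in for an LSTM cell (a hypothetical recurrence, not real gate math). Note the explicit reversed copy of the input and the two fully stored intermediate sequences; these are exactly the costs the method of the present application removes.

```python
# Sketch of the conventional bidirectional LSTM flow (cf. Figs. 2-4):
# the input sequence is explicitly reversed, each direction is run
# separately, and both intermediate sequences are stored in full
# before being added element-wise.

def run_lstm(seq):
    state, out = 0.0, []
    for x in seq:                      # toy recurrence standing in for LSTM1/LSTM2
        state = 0.5 * state + x
        out.append(state)
    return out

def bidirectional_conventional(xs):
    reversed_xs = list(reversed(xs))   # extra copy: X6 X5 ... X0 must be stored
    h = run_lstm(xs)                   # forward intermediates h0..h6, stored in full
    g = run_lstm(reversed_xs)          # backward intermediates g0..g6, stored in full
    # k-th output is h_k + g_{N-1-k}, i.e. [h0+g6] [h1+g5] ... [h6+g0]
    return [h[k] + g[len(xs) - 1 - k] for k in range(len(xs))]

print(bidirectional_conventional([1.0, 2.0, 3.0]))  # → [3.75, 6.0, 7.25]
```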
However, the above computation requires a reversal operation on the input sequence, as well as additional storage space for the reversed sequence X6 X5 X4 X3 X2 X1 X0. That is, the input sequence must be stored twice, once forward and once reversed: as shown in Fig. 5, the forward input sequence X0 X1 X2 X3 X4 X5 X6 must be stored, and the reversed sequence X6 X5 X4 X3 X2 X1 X0 produced from it must also be stored. On top of this, the two corresponding intermediate result sequences h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 must each be stored, which greatly wastes storage space and reduces computation speed.
To solve the above problems, an embodiment of the present application provides a data processing method applied to an embedded neural network system, comprising: obtaining N input data in an input sequence set; feeding the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, to obtain 2N intermediate data; allocating the 2N intermediate data to N storage spaces; and performing an operation on the 2N intermediate data in the N storage spaces to obtain N output data; where N is the number of input data in the input sequence set.
In some embodiments, feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively comprises: feeding the N input data in the input sequence set into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one datum is fed into the first LSTM system in the first order, one datum is fed into the second LSTM system in the second order, alternating one datum at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, allocating the 2N intermediate data to the N storage spaces comprises: alternately allocating the 2N intermediate data to the corresponding N storage spaces in the order in which they are generated, such that each storage space holds two intermediate data.
In some embodiments, performing an operation on the 2N intermediate data in the N storage spaces to obtain N output data comprises: for each of the N input data in the input sequence set, performing an operation, in the corresponding storage space, on the two intermediate results obtained after that input datum is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
With the above data processing method, the storage space required during computation can be reduced and the reversal operation on the input sequence can be eliminated, thereby saving computation space and time and accelerating computation.
The data processing method of the embodiments of the present application is described in detail below with reference to Figs. 6 to 8.
Fig. 6 is an overall flowchart of the data processing method provided according to an embodiment of the present application. As shown in the flowchart of Fig. 6, step S61 is performed first: obtaining N input data in an input sequence set, where N is the number of input data in the input sequence set and is an integer greater than or equal to 1. In a preferred embodiment of the present application, each input datum may be a feature vector.
After the N input data in the input sequence set are obtained, as shown in Fig. 6, step S62 is performed: feeding the N input data in the input sequence set into the first LSTM system and the second LSTM system, respectively, to obtain 2N intermediate data. Specifically, the N input data in the input sequence set are fed into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one datum is fed into the first LSTM system in the first order, one datum is fed into the second LSTM system in the second order, alternating one datum at a time until all N input data have been fed in both the first order and the second order.
Fig. 7 is a schematic structural diagram of the data processing method provided according to an embodiment of the present application. As shown in Fig. 7, the input data in the input sequence set are X0 X1 X2 X3 X4 X5 X6, so in this embodiment N is 7. This value of N is merely illustrative; those skilled in the art may choose different values of N according to the actual situation. After the input data X0 X1 X2 X3 X4 X5 X6 are obtained, they are fed into the first LSTM system LSTM1 and the second LSTM system LSTM2 as follows: first, X0 is fed into LSTM1 in the first order, yielding the corresponding output h0; then X6 is fed into LSTM2 in the second order, yielding the corresponding output g0; then X1 is fed into LSTM1 in the first order, yielding h1; then X5 is fed into LSTM2 in the second order, yielding g1; then X2 is fed into LSTM1, yielding h2; then X4 is fed into LSTM2, yielding g2; then X3 is fed into LSTM1, yielding h3; then X3 is fed into LSTM2, yielding g3; then X4 is fed into LSTM1, yielding h4; then X2 is fed into LSTM2, yielding g4; then X5 is fed into LSTM1, yielding h5; then X1 is fed into LSTM2, yielding g5; then X6 is fed into LSTM1, yielding h6; and finally X0 is fed into LSTM2 in the second order, yielding g6. In this embodiment, the first order is the left-to-right order of the data X0 X1 X2 X3 X4 X5 X6 in the input sequence set, and the second order is the right-to-left order of the same data, i.e., the reverse of the first order. However, the definitions of the first order and the second order in the present application are not limited thereto; those skilled in the art may define them differently according to different situations, for example defining the first order as the right-to-left order of the input sequence set and the second order as the left-to-right order.
With the above feeding scheme, in which the N input data in the input sequence set are fed into the first LSTM system in the first order and into the second LSTM system in the second order, the second order being the reverse of the first order, and one datum is fed into the second LSTM system in the second order each time one datum is fed into the first LSTM system in the first order, alternating one datum at a time until all N input data have been fed in both orders, the embodiments of the present application eliminate the reversal of the input data in the input sequence set, i.e., the reversal step of Fig. 3, thereby reducing computation time and improving computation efficiency.
As shown in Fig. 7, after the 7 (here N = 7) input data X0 X1 X2 X3 X4 X5 X6 in the input sequence set are fed into LSTM1 and LSTM2 in the manner of the above embodiment, 2N intermediate data are obtained, which in this embodiment are the 14 intermediate data h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6.
Step S63 is then performed: allocating the 2N intermediate data to the N storage spaces. Specifically, the 2N intermediate data are alternately allocated to the corresponding N storage spaces in the order in which they are generated, such that each storage space holds two intermediate data.
The manner of data storage in the data processing method of the embodiments of the present application is described in detail below with reference to Fig. 8. Fig. 8 is a schematic diagram of the data structure of the storage spaces used for data storage in the data processing method according to an embodiment of the present application. As shown in Fig. 8, with the data processing method of this embodiment the input data in the input sequence set need to be stored only once: only the forward input sequence X0 X1 X2 X3 X4 X5 X6 needs to be stored, in storage spaces 1 to 7. Since the input data need not be reversed, the storage space for the reversed sequence X6 X5 X4 X3 X2 X1 X0 is eliminated. Meanwhile, according to the above data method of this embodiment, after the first intermediate datum h0 is generated in the first order, h0 goes directly into its corresponding storage space 8; after the second intermediate datum g0 is generated in the second order, g0 goes directly into its corresponding storage space 14, as shown in Fig. 8; and so on, the 14 intermediate data h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 are alternately allocated to the corresponding 7 storage spaces in the order in which they are generated. That is, as shown in Fig. 8, in this embodiment of the present application the 14 intermediate data are allocated to storage spaces 8 to 14, with each storage space holding two intermediate data.
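The slot assignment of Fig. 8 can be sketched as follows, under the assumption (consistent with the figure, where h0 lands in storage space 8 and g0 in storage space 14) that h_k, generated first at step k, is routed to slot k, while g_k, generated second, is routed to slot N-1-k; the 2N intermediates thus occupy only N spaces, two per space.

```python
# Sketch of step S63's slot assignment: as each intermediate datum is
# produced (h_k from LSTM1, then g_k from LSTM2, alternately), it is
# routed straight to its destination slot — h_k to slot k, g_k to slot
# N-1-k — so 2N intermediates occupy only N storage spaces.

def allocate_slots(n):
    slots = [[] for _ in range(n)]          # the N storage spaces
    for k in range(n):
        slots[k].append(f"h{k}")            # h_k generated first, slot k
        slots[n - 1 - k].append(f"g{k}")    # g_k generated second, slot N-1-k
    return slots

print(allocate_slots(3))  # → [['h0', 'g2'], ['h1', 'g1'], ['g0', 'h2']]
```

Each slot ends up pairing h_k with g_{N-1-k}, which is exactly the pair that must be combined to form the k-th output datum.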
Finally, step S64 is performed: performing an operation on the 2N intermediate data in the N storage spaces to obtain N output data. Specifically, for each of the N input data in the input sequence set, the two intermediate results obtained after that input datum is fed into the first LSTM system and the second LSTM system are operated on in the corresponding storage space, yielding the N output data.
As shown in Fig. 8, in this embodiment of the present application, after the 14 intermediate data h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 sequentially enter the 7 storage spaces in the order in which they are generated, for each of the 7 input data X0 X1 X2 X3 X4 X5 X6 in the input sequence set, the two intermediate results obtained after that datum is fed into the LSTM1 system and the LSTM2 system are added in the corresponding storage space, yielding the N output data. As shown in Fig. 8: X0 fed into LSTM1 in the first order yields the intermediate datum h0, and fed into LSTM2 in the second order yields g6; h0 and g6 are added in storage space 8 to give the first output datum [h0+g6]. X1 yields h1 from LSTM1 and g5 from LSTM2, which are added in storage space 9 to give the second output datum [h1+g5]. X2 yields h2 and g4, added in storage space 10 to give the third output datum [h2+g4]. X3 yields h3 and g3, added in storage space 11 to give the fourth output datum [h3+g3]. X4 yields h4 and g2, added in storage space 12 to give the fifth output datum [h4+g2]. X5 yields h5 and g1, added in storage space 13 to give the sixth output datum [h5+g1]. X6 yields h6 and g0, added in storage space 14 to give the seventh output datum [h6+g0]. However, the operation performed in the storage spaces in the present application is not limited thereto; those skilled in the art may, according to the actual situation, perform operations other than addition on the intermediate data in the storage spaces. Through this storage scheme for the intermediate data and the direct operation on the intermediate data within the storage spaces, it is unnecessary, as in Fig. 5, to store in memory the two intermediate result sequences h0 h1 h2 h3 h4 h5 h6 and g0 g1 g2 g3 g4 g5 g6 obtained by passing the input sequence through the two different LSTM systems LSTM1 and LSTM2, which further saves storage space.
According to the above data processing method disclosed in the present application, the storage space required during computation can be reduced and the reversal operation on the input sequence can be eliminated, thereby saving computation space and time and accelerating computation.
It should be noted that although the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the operations shown must be performed, to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
Having described the data processing method of the embodiments of the present application, the data processing apparatus of exemplary embodiments of the present invention is next introduced with reference to Fig. 9. The implementation of the apparatus may refer to the implementation of the above method, and repeated content is not described again. The terms "module" and "unit" used below may refer to software and/or hardware that implements a predetermined function. Although the modules described in the following embodiments are preferably implemented in software, implementation in hardware, or in a combination of software and hardware, is also possible and contemplated.
Fig. 9 is a schematic diagram of the overall structure of the data processing apparatus provided according to an embodiment of the present application. As shown in Fig. 9, the data processing apparatus 900 in the embodiment of the present application comprises: a data acquisition unit 901 configured to obtain N input data in an input sequence set; a data processing unit 902 configured to feed the N input data in the input sequence set into a first LSTM system and a second LSTM system, respectively, and obtain 2N intermediate data; a data allocation unit 903 configured to allocate the 2N intermediate data to N storage spaces; and a data computation unit 904 configured to perform an operation on the 2N intermediate data in the N storage spaces to obtain N output data; where N is the number of input data in the input sequence set.
In some embodiments, the data processing unit 902 is configured to feed the N input data in the input sequence set into the first LSTM system and the second LSTM system respectively by: feeding the N input data in the input sequence set into the first LSTM system in a first order and into the second LSTM system in a second order, where the second order is the reverse of the first order; each time one datum is fed into the first LSTM system in the first order, one datum is fed into the second LSTM system in the second order, alternating one datum at a time until all N input data have been fed in both the first order and the second order.
In some embodiments, the data allocation unit 903 is configured to allocate the 2N intermediate data to the N storage spaces by: alternately allocating the 2N intermediate data to the corresponding N storage spaces in the order in which they are generated, such that each storage space holds two intermediate data.
In some embodiments, the data computation unit 904 is configured to perform an operation on the 2N intermediate data in the N storage spaces to obtain N output data by: for each of the N input data in the input sequence set, performing an operation, in the corresponding storage space, on the two intermediate results obtained after that input datum is fed into the first LSTM system and the second LSTM system, thereby obtaining the N output data.
According to the above data processing apparatus disclosed in the present application, the storage space required during computation can be reduced and the reversal operation on the input sequence can be eliminated, thereby saving computation space and time and accelerating computation.
The embodiment of the present disclosure further provides an electronic device, the structure of which is shown in Fig. 10. The electronic device comprises: at least one embedded neural network processor (NPU) 300 (Fig. 10 takes one NPU 300 as an example); and a memory 301; and may further comprise a communication interface 302 and a bus 303, where the NPU 300, the communication interface 302, and the memory 301 may communicate with one another via the bus 303. The communication interface 302 may be used for information transmission. The NPU 300 may invoke logical instructions in the memory 301 to perform the data processing method of the above embodiments.
In addition, the logical instructions in the above memory 301 may be implemented in the form of software functional units and, when sold or used as an independent product, may be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 301 may be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the methods in the embodiments of the present disclosure. By running the software programs, instructions, and modules stored in the memory 301, the NPU 300 executes functional applications and performs data processing, i.e., implements the data processing method in the above method embodiments.
The memory 301 may include a program storage area and a data storage area, where the program storage area may store an operating system and an application program required for at least one function, and the data storage area may store data created according to the use of the terminal device, and the like. In addition, the memory 301 may include a high-speed random access memory and may also include a non-volatile memory.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product stored in a storage medium and including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any of various other media that can store program code, and may also be a transitory storage medium. As used in the present application, although the terms "first", "second", etc. may be used to describe various elements, these elements should not be limited by these terms, which serve only to distinguish one element from another. For example, without changing the meaning of the description, a first element could be called a second element, and likewise a second element could be called a first element, provided that all occurrences of "first element" are renamed consistently and all occurrences of "second element" are renamed consistently. The first element and the second element are both elements, but they may not be the same element.
The terms used herein are used only to describe the embodiments and are not intended to limit the claims. As used in the description of the embodiments and the claims, unless the context clearly indicates otherwise, the singular forms "a", "an", and "the" are intended to include the plural forms as well. Similarly, the term "and/or" as used in this specification refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, when used in the present application, the term "comprise" and its variants "comprises" and/or "comprising" refer to the presence of the stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groupings thereof.
The various aspects, embodiments, implementations, or features of the described embodiments may be used alone or in any combination. The various aspects of the described embodiments may be implemented in software, hardware, or a combination of software and hardware. The described embodiments may also be embodied by a computer-readable medium storing computer-readable code, the computer-readable code including instructions executable by at least one computing device. The computer-readable medium may be associated with any data storage device capable of storing data that can be read by a computer system. Examples of the computer-readable medium may include read-only memory, random access memory, CD-ROM, HDD, DVD, magnetic tape, optical data storage devices, and the like. The computer-readable medium may also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
The above technical description may refer to the accompanying drawings, which form a part of the present application and show, by way of illustration, implementations in accordance with the described embodiments. Although these embodiments are described in sufficient detail to enable those skilled in the art to implement them, these embodiments are non-limiting; other embodiments may be used, and changes may be made without departing from the scope of the described embodiments. For example, the order of operations described in the flowcharts is non-limiting, and thus the order of two or more operations illustrated in, and described in accordance with, the flowcharts may be changed according to several embodiments. As another example, in several embodiments, one or more of the operations illustrated in, and described in accordance with, the flowcharts are optional or may be deleted. In addition, certain steps or functions may be added to the disclosed embodiments, or the order of two or more steps may be exchanged. All such variations are considered to be encompassed by the disclosed embodiments and the claims.
In addition, terminology is used in the above technical description to provide a thorough understanding of the described embodiments. However, excessive detail is not required to implement the described embodiments. Accordingly, the foregoing description of the embodiments is presented for purposes of illustration and description. The embodiments, and the examples disclosed in accordance with them, are presented in the foregoing description to add context and aid in the understanding of the described embodiments. The above description is not intended to be exhaustive or to limit the described embodiments to the precise forms of the disclosure. Several modifications, alternatives, and variations are possible in light of the above teachings. In some cases, well-known processing steps have not been described in detail so as not to unnecessarily obscure the described embodiments.