CN115713097A - Time calculation method of electron microscope based on seq2seq algorithm - Google Patents
- Publication number
- CN115713097A (application CN202310017145.1A)
- Authority
- CN
- China
- Prior art keywords
- current
- state
- time
- sequence
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Complex Calculations (AREA)
Abstract
The invention discloses an electron microscope machine-time calculation method based on the seq2seq algorithm. Current data of typical scanning and transmission electron microscopes are first screened as a sample set, and a model is trained with the seq2seq algorithm structure: an Encoder-Decoder structure converts the feature sequence obtained by CNN processing into a sequence of state probability distributions, an optimal model is obtained after multiple training iterations, and the resulting model file is stored on a cloud server. Subsequently collected electron microscope current data need only undergo the same data segmentation and be fed into the model to quickly obtain the machine state corresponding to each current value, from which the machine-time data are derived. The invention integrates the current information of scientific instruments of the same type, trains a general model for that instrument type from this information, and predicts the machine time of such instruments more accurately, giving the method considerable engineering value.
Description
Technical Field
The invention belongs to the technical field of data analysis, and particularly relates to an electron microscope time calculation method based on a seq2seq algorithm.
Background
Society is now entering the big-data era: as data acquisition technology keeps improving and acquisition products multiply, the amount of data available for analysis and processing grows continuously. Meanwhile, with the further development of scientific research in colleges and universities, instruments keep increasing in number and variety while experimental technicians remain limited, so large instruments commonly suffer from high idle rates and unclear usage time. The machine-time state of an instrument (shutdown, standby, or working) clearly reflects its working condition and plays an important role in instrument management. Collecting the current of large instruments through sensors and extracting their machine-time information with big-data analysis and mining algorithms is therefore an urgent need.
For such problems, it is common to extract characteristic values of an instrument and classify the data points with clustering or classification algorithms from machine learning. For example, [Hu Feng, Zhu Chengzhi, Wang Shihua. Research on power load classification based on an improved K-means algorithm. Electronic Measurement Technology, 2018] proposes a power-load classification method based on an improved K-means algorithm that clusters the characteristics of power loads; however, its clustering effect deteriorates when the classes are unbalanced, e.g. under severe data-volume imbalance or differing class variances. Likewise, [Liu Yongmei et al. Residual current classification method. Electronic Technology Application, 2018] proposes a residual-current classification method based on AdaBoost: characteristic components of different residual-current types are first obtained experimentally, the component characteristics are mapped into the AdaBoost algorithm, and AdaBoost then detects the type of electric-shock current component in the total residual current.
Both methods ignore the sequential character of current data and cannot mine the regularity within the sequence. Current data are the result of sampling the instrument current at a specified rate over equally spaced time intervals; they feature strong continuity, densely packed time points, and many related variables, and the change of each variable depends not only on its own historical values but also on the influence of the other related variables. Currents of instruments of the same type are therefore likely to follow the same pattern, and a general model for an instrument type can be obtained by training on the currents of instruments of that type.
Disclosure of Invention
In view of the above, the invention provides an electron microscope machine-time calculation method based on the seq2seq algorithm. It integrates the current information of scientific instruments of the same type, trains a general model for that instrument type from this information, and predicts the machine time of such instruments more accurately, giving the method considerable engineering value.
A machine-time calculation method for an electron microscope based on the seq2seq algorithm comprises the following steps:
(1) Establishing a data set about the operating current of an electron microscope scientific instrument;
(2) Preprocessing a data set, and dividing the whole data set into a training set and a testing set;
(3) Constructing a state prediction model based on a seq2seq algorithm, wherein the state prediction model is formed by sequentially connecting an encoder, an attention mechanism layer and a decoder, the encoder is used for carrying out characteristic encoding on input current data, the attention mechanism layer is used for giving different weights to different hidden layer states in characteristic information, and the decoder is used for decoding the characteristic information to obtain state probability distribution of an instrument at each moment;
(4) Training the model by using current data of a training set;
(5) Inputting the current data of the test set into the trained model to predict the state of the instrument at each moment, from which the duration of the instrument's working operation state is counted.
Further, step (1) is implemented as follows: first, several groups of operating current sequences of scanning and transmission electron microscopes are screened from the database of the instrument background management system; each sequence covers a complete current cycle (standby, working, shutdown) and contains the current value at every moment of the cycle. Each current value in an operating current sequence is then labeled to obtain the corresponding label sequence: for the current value at any moment, the label is assigned 2 if the instrument is in the working operation state at that moment, 1 if it is in the standby state, and 0 if it is shut down. Each operating current sequence is combined with its corresponding label sequence to form one group of current data, and several such groups are obtained to construct the data set.
Further, preprocessing the data set in step (2) is implemented as follows: the operating current sequences in the data set are first normalized, and each sequence is then converted and divided into current input vectors for every moment. For the current value i_t at any moment t of an operating current sequence, i_t and the w preceding current values are taken according to a predetermined window size w to compose the current input vector [i_{t-w}, i_{t-w+1}, …, i_{t-1}, i_t] for moment t; if fewer than w+1 current values are available, i_t is copied to complete the vector. The current values at all moments of the operating current sequence are traversed to obtain the current input vectors for all moments.
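As a concrete illustration, the windowing described above can be sketched in Python (the function name and the position of the padding are illustrative assumptions; the patent only states that i_t is copied to complete a short vector):

```python
def window_inputs(currents, w):
    """Build the current input vector for every time step t.

    Each vector holds the current value at t plus the w preceding
    values; when fewer than w earlier values exist, i_t is copied
    to pad the vector to length w + 1 (padding at the front is an
    assumption -- the patent does not specify the position).
    """
    vectors = []
    for t, i_t in enumerate(currents):
        start = max(0, t - w)
        vec = currents[start:t + 1]             # i_{t-w} .. i_t (may be short)
        vec = [i_t] * (w + 1 - len(vec)) + vec  # pad by copying i_t
        vectors.append(vec)
    return vectors
```

Traversing a sequence of length m yields m vectors of identical length w + 1, which is what lets the encoder consume a fixed-size input at every step.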
Further, the encoder adopts a BiLSTM network comprising a forward LSTM unit and a backward LSTM unit; the specific calculation expressions are:

$$\overrightarrow{h_t} = \mathrm{BiLSTM}^{+}(x_t, \overrightarrow{h_{t-1}}), \qquad \overleftarrow{h_t} = \mathrm{BiLSTM}^{-}(x_t, \overleftarrow{h_{t+1}}), \qquad h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}], \qquad H = [h_1, h_2, \ldots, h_m]$$

wherein: H is the encoded feature vector output by the encoder; h_t is the hidden state of the BiLSTM network at moment t; t is a natural number with 1 ≤ t ≤ m, m being the time length of the operating current sequence; x_t is the current input vector at moment t; $\overrightarrow{h_t}$ and $\overrightarrow{h_{t-1}}$ are the hidden states of the forward LSTM unit at moments t and t−1 respectively; $\overleftarrow{h_t}$ and $\overleftarrow{h_{t+1}}$ are the hidden states of the backward LSTM unit at moments t and t+1 respectively; BiLSTM^+() and BiLSTM^−() denote the internal computation functions of the forward and backward LSTM units respectively.
Further, the computational expressions of the attention mechanism layer are:

$$v_t = V_e^{\mathrm{T}}\tanh(W_e[h_t; s_{t-1}]), \qquad \alpha_t = \frac{\exp(v_t)}{\sum_{k=1}^{m}\exp(v_k)}, \qquad C_t = \sum_{k=1}^{m}\alpha_k h_k$$

wherein: C_t is the output of the attention mechanism layer, i.e. the timing vector at moment t; α_t is the weight of the hidden state h_t; v_t is the attention score of the hidden state h_t; exp() is the exponential function with the natural constant e as base; tanh() is the hyperbolic tangent function; V_e and W_e are the weight matrices to be learned, and T denotes transposition; s_{t−1} is the hidden state of the decoder at moment t−1.
Further, the decoder uses a unidirectional LSTM network to calculate the hidden state s_t; s_t, the attention mechanism layer output C_t, and the state probability density distribution vector p_{t−1} of the instrument at moment t−1 are then spliced and reduced in dimension through a fully connected layer to obtain the state probability density distribution vector p_t of the instrument at moment t, which is the model output.
Further, the hidden state s_t is calculated as:

$$s_t = \mathrm{LSTM}([C_t; p_{t-1}], s_{t-1})$$

wherein: s_t is the hidden state of the decoder at moment t; p_{t−1} is the state probability density distribution vector of the instrument at moment t−1; LSTM() denotes the internal calculation function of the unidirectional LSTM network.
Further, the initial hidden state s_0 is obtained by reducing the dimension of the hidden state h_m through a fully connected layer.
Further, the specific process of training the model in step (4) is as follows:
4.1) Initializing the model parameters, including the bias vector and weight matrix of each layer, the learning rate, and the optimizer;
4.2) Inputting the current input vector of each moment of the training-set current data into the model, obtaining the corresponding prediction result (the state probability density distribution vector) through forward propagation, and calculating the loss function L between the prediction result and the corresponding label sequence;
4.3) Continuously and iteratively updating the model parameters with the optimizer by gradient descent according to the loss function L, until L converges and training is complete.
Further, the expression of the loss function L is:

$$L = -\frac{1}{m}\sum_{t=1}^{m}\log p_t[y_t]$$

wherein: p_t is the state probability density distribution vector of the instrument at moment t in the prediction result output by the model; y_t is the label value at moment t in the corresponding label sequence; t is a natural number with 1 ≤ t ≤ m, m being the time length of the operating current sequence; p_t[y_t] denotes the probability that p_t assigns to state y_t.
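A minimal sketch of this loss, assuming the standard cross-entropy form averaged over the sequence length m (the original formula image is not reproduced in this text, so the averaging is an assumption):

```python
import math

def sequence_cross_entropy(p_seq, y_seq):
    """Cross-entropy loss over a predicted state sequence.

    p_seq: list of per-moment probability vectors p_t
    y_seq: list of integer labels y_t (0 = shutdown, 1 = standby, 2 = working)
    Returns the mean negative log-probability of the true labels.
    """
    m = len(y_seq)
    return -sum(math.log(p_t[y_t]) for p_t, y_t in zip(p_seq, y_seq)) / m
```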
The method first screens the current data of typical scanning and transmission electron microscopes as sample sets and trains a model with the seq2seq algorithm structure. The core idea of the seq2seq network used in the invention is that the feature sequence obtained by CNN processing is converted into a state probability distribution sequence through an Encoder-Decoder structure; an optimal training model is obtained after multiple iterations, and the resulting model file is stored on a cloud server. Subsequently collected electron microscope current data need only undergo the same data segmentation and be fed into the model to quickly obtain the machine state corresponding to each current value, from which the machine-time data are derived.
In addition, the invention integrates the current information of scientific instruments of the same type, trains a general model for that instrument type from this information, and predicts the machine time of such instruments more accurately, giving the method considerable engineering value.
Drawings
FIG. 1 is a schematic flow chart of the electron microscope machine-time calculation method of the present invention.
Fig. 2 is an exemplary graph of current waveforms for a typical scanning electron microscope and transmission electron microscope.
FIG. 3 is a schematic diagram of the basic structure of the seq2seq algorithm model of the present invention.
FIG. 4 is a diagram illustrating simulation results of the seq2seq algorithm model of the present invention.
FIG. 5 is a diagram showing the comparison between the simulation results of seq2seq algorithm model with and without the addition of the Attention mechanism.
Detailed Description
In order to describe the present invention more specifically, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
Based on the instrument background management system, the invention screens the current data of typical scanning and transmission electron microscopes and provides a machine-time calculation method for electron microscope instruments; the overall flow is shown in FIG. 1. The model must first be trained, with typical electron microscope operating currents selected for training. The specific implementation process is as follows:
(1) Establishing the data set, including data labeling and data preprocessing.
1.1) Electron microscope current sequences with a complete current cycle (working, standby, shutdown) are screened from the instrument background management system database, as shown in FIG. 2.
1.2) The current sequences are normalized and labeled, and the data are divided into a training set and a test set.
The current sequence is first normalized, with the max–min normalization formula:

$$i' = \frac{i - i_{\min}}{i_{\max} - i_{\min}}$$

wherein: i_min represents the minimum and i_max the maximum of the current values in the sequence.
The current data of each electron microscope is labeled according to actual conditions. Suppose the current data of a certain electron microscope is S = {0.04, 0.04, 0.05, 0.04, 0.05, 0.04, 0.04, 1.34, 1.37, 1.33, 1.35, 1.31, 0.34, 0.34, 0.35, 0.36, 0.35}, and the normalized current data is D = {0.03, 0.03, 0.04, 0.03, 0.04, 0.03, 0.03, 0.98, 1, 0.98, 0.99, 0.96, 0.25, 0.25, 0.26, 0.26}; after labeling, the corresponding label set is C = {0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 2, 1, 1, 1, 1, 1}, where 0 represents the power-off state, 1 the standby state, and 2 the working state.
Finally, each obtained operating current sequence is combined with its corresponding label sequence to form one group of samples; several such groups form the data set, which is divided into a training set and a test set in the ratio of 8:2.
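A minimal sketch of the max–min normalization step (the function name is illustrative; note it implements the formula as stated, while the worked numbers in the description appear to be rounded):

```python
def min_max_normalize(seq):
    """Max-min normalization as stated in the description:
    i' = (i - i_min) / (i_max - i_min), mapping the sequence to [0, 1]."""
    i_min, i_max = min(seq), max(seq)
    return [(i - i_min) / (i_max - i_min) for i in seq]
```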
(2) Constructing the seq2seq algorithm model.
2.1) The Encoder structure uses a BiLSTM network.
As shown in FIG. 3, the encoder is responsible for processing the incoming current sequence, compressing all information about the input sequence into a fixed-length vector. The Encoder structure uses a BiLSTM network instead of a unidirectional LSTM because the BiLSTM network exploits the input sequence information more fully. The formulas of the Encoder structure are:

$$\overrightarrow{h_t} = \mathrm{BiLSTM}^{+}(x_t, \overrightarrow{h_{t-1}}), \qquad \overleftarrow{h_t} = \mathrm{BiLSTM}^{-}(x_t, \overleftarrow{h_{t+1}}), \qquad h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$$

wherein: $\overrightarrow{h_t}$ is the hidden state of the forward LSTM unit at moment t and $\overrightarrow{h_{t-1}}$ its hidden state at moment t−1; $\overleftarrow{h_t}$ is the hidden state of the backward LSTM unit at moment t and $\overleftarrow{h_{t+1}}$ its hidden state at moment t+1; x_t is the current input vector at moment t; BiLSTM^+() and BiLSTM^−() denote the computation functions of the forward and backward LSTM units respectively; h_t is the hidden state of the BiLSTM network at moment t. The initial values of $\overrightarrow{h_0}$ and $\overleftarrow{h_{m+1}}$ are the hidden codes corresponding to a window interval whose current values are all 0.
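As a rough illustration of the bidirectional encoding, the sketch below substitutes a plain tanh RNN cell for the patent's LSTM cells (for brevity only; the weights Wf, Wb and the zero-window initial state h0 are illustrative, not from the patent):

```python
import numpy as np

def birnn_encode(x, Wf, Wb, h0):
    """Bidirectional encoder sketch: run one recurrent pass forward
    and one backward over the input, then concatenate the two hidden
    states at each moment, as the BiLSTM formulas above do.

    x: (m, d) input sequence; returns H of shape (m, 2*k), where each
    row h_t = [forward state ; backward state].
    """
    m = len(x)
    fwd, bwd = [None] * m, [None] * m
    h = h0
    for t in range(m):                    # forward pass: t = 1 .. m
        h = np.tanh(Wf @ np.concatenate([x[t], h]))
        fwd[t] = h
    h = h0
    for t in reversed(range(m)):          # backward pass: t = m .. 1
        h = np.tanh(Wb @ np.concatenate([x[t], h]))
        bwd[t] = h
    return np.stack([np.concatenate([f, b]) for f, b in zip(fwd, bwd)])
```

The concatenation is why the last encoder state has twice the decoder's hidden dimension, motivating the dimension-reducing fully connected layer described in step 2.3.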
2.2 An Attention mechanism was introduced.
The Attention mechanism is introduced into the seq2seq algorithm: before the output vector enters the Decoder structure, the coded feature vectors of all moments output by the Encoder, H = [h_1, h_2, …, h_m], are read first, where m is the number of hidden states (i.e. the time length of the operating current sequence). The Attention mechanism assigns different weights to different variables so that the Decoder structure can focus on the effective feature information. The formula of the Decoder structure is:

$$s_t = \mathrm{LSTM}([C_t; p_{t-1}], s_{t-1})$$

wherein: s_t and s_{t−1} are the hidden states of the Decoder structure at moments t and t−1; C_t is the timing vector at moment t; p_{t−1} is the state probability density distribution vector of the instrument at moment t−1; LSTM() denotes the internal calculation function of the unidirectional LSTM network. The dynamically variable timing vector C_t stores the complete effective information input to the model at the current moment; its calculation can be divided into the following 3 steps:
step 1: the attention mechanism score was calculated over 2 fully connected layers as shown below:
wherein:V e 、W e as a weight matrix, the weight matrix is,v t representing hidden statesh t The score of attention of (a) is,tanh() Representing a hyperbolic tangent function.
Step 2: the scores v_t obtained above are passed through the softmax function to calculate the weight of each hidden state:

$$\alpha_t = \frac{\exp(v_t)}{\sum_{k=1}^{m}\exp(v_k)}$$

wherein: exp() is the exponential function with the natural constant e as base; α_t is the weight corresponding to the hidden state h_t.
Step 3: the weights α_t obtained above and their corresponding hidden states h_t are weighted and summed to obtain the timing vector C_t:

$$C_t = \sum_{k=1}^{m}\alpha_k h_k$$
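The three steps above can be sketched together as follows (the matrices We and Ve and all shapes are illustrative; the softmax and weighted sum run over the m encoder hidden states):

```python
import numpy as np

def attention_context(H, s_prev, We, Ve):
    """Score, softmax-weight, and sum the encoder hidden states.

    H:      (m, k)  encoder hidden states h_1 .. h_m
    s_prev: (k,)    decoder hidden state s_{t-1}
    Returns the timing vector C_t and the weights alpha.
    """
    # Step 1: v = Ve^T tanh(We [h ; s_{t-1}]) for every hidden state
    scores = np.array([Ve @ np.tanh(We @ np.concatenate([h, s_prev]))
                       for h in H])
    # Step 2: softmax over the m scores (max-shifted for stability)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Step 3: timing vector C_t as the weighted sum of hidden states
    return weights @ H, weights
```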
2.3 The Decoder structure is used to obtain the final decomposition result.
The Decoder structure converts the timing feature sequence obtained in step 2.2 into a machine-time state sequence. In contrast to the Encoder part, the Decoder must generate the state encoding of the current sequence sequentially, so a unidirectional LSTM network must be used. The last hidden state h_m of the BiLSTM network in the Encoder corresponds to the initial state s_0 of the LSTM layer in the Decoder structure; since h_m is formed by splicing a forward and a backward hidden state, its dimension does not match s_0, so h_m must be reduced in dimension through a fully connected layer before being passed into the Decoder.
The Decoder splices the hidden state s_t at moment t, the timing vector C_t at moment t, and the instrument's probability density distribution vector p_{t−1} at moment t−1, and passes the result through a fully connected layer to obtain the instrument's probability density distribution vector p_t at moment t:

$$p_t = \mathrm{FC}([s_t; C_t; p_{t-1}])$$

wherein: FC() denotes the fully connected layer; the initial value p_0 is divided equally according to the number of states corresponding to the instrument's current values, i.e. {0.5, 0.5} or {0.33, 0.33, 0.33}.
Because the current values of some instruments correspond to two states and those of others to three, different instruments require different fully connected layers when predicting the probability density distribution vector p_t: the number of instrument states corresponds to the number of neurons in the fully connected layer.
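A sketch of the decoder output head described above, under the assumption that the fully connected layer is followed by a softmax so that p_t is a valid probability vector (the patent does not name the final activation; all weights and shapes are illustrative):

```python
import numpy as np

def decoder_output(s_t, C_t, p_prev, W_fc, b_fc):
    """Splice [s_t ; C_t ; p_{t-1}], reduce through a fully connected
    layer whose neuron count equals the number of instrument states,
    and normalize with softmax (an assumption) to obtain p_t.
    """
    z = W_fc @ np.concatenate([s_t, C_t, p_prev]) + b_fc
    e = np.exp(z - z.max())   # max-shifted softmax for stability
    return e / e.sum()
```

With a three-state instrument, W_fc would have three rows; with a two-state instrument, two, matching the remark about different fully connected layers per state count.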
2.4 A loss function and an optimization function are set.
Through the clustering and coding operations, the model of the invention converts the decomposition problem into a multi-classification problem over the instrument states, so the cross-entropy function is used as the loss function of model training:

$$L = -\frac{1}{m}\sum_{t=1}^{m}\log p_t[y_t]$$

wherein: y_t is the label value at moment t in the corresponding label sequence.
The optimizer adopts SGD with Momentum, which introduces a first-order momentum on the basis of SGD:

$$V_{dW} = \beta V_{dW} + (1-\beta)\,dW, \qquad V_{db} = \beta V_{db} + (1-\beta)\,db, \qquad W = W - \alpha V_{dW}, \qquad b = b - \alpha V_{db}$$

wherein: β is a hyper-parameter that can be set freely; α is the learning rate; W and b are the weights and biases; dW and db are the partial derivatives of the loss with respect to W and b. From the viewpoint of momentum, taking the weight W as an example, V_{dW} can be understood as velocity and dW as acceleration; the exponentially weighted average actually calculates the current velocity, which is affected by both the previous velocity and the current acceleration. Since β is less than 1, it limits the velocity V_{dW} from growing too large; this keeps the gradient descent smooth and accurate, reduces oscillation, and reaches the minimum faster.
(3) The trained model is used to predict the state type of an electron microscope scientific instrument's current data, mainly comprising the following steps:
3.1) The current sequence to be calculated is acquired.
3.2) The obtained current sequence is converted into a tensor and predicted through the seq2seq algorithm model, finally yielding the machine-time state sequence mapped to the current sequence.
3.3) Machine-time counting is performed according to the prediction result.
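Step 3.3 can be sketched as a simple tally over the predicted state sequence (the sampling interval is an assumption, set by whoever configured the current sensor; the patent does not state one):

```python
def machine_hours(states, sample_interval_s):
    """Tally hours per state from a predicted state sequence.

    Each predicted label is assumed to cover one sampling interval:
    0 = shutdown, 1 = standby, 2 = working.
    """
    hours = {0: 0.0, 1: 0.0, 2: 0.0}
    for s in states:
        hours[s] += sample_interval_s / 3600.0
    return hours
```

The working-state total (key 2) is the machine-time figure the method sets out to compute.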
From the seq2seq simulation result shown in FIG. 4, it can be seen that the model trained by the invention analyzes the machine time of electron microscope instruments accurately. Generally, the scanning electron microscope has a small current peak and small fluctuation, while the transmission electron microscope has a large current peak and more obvious fluctuation. The algorithm model accurately identifies the operating conditions of same-type electron microscopes with different operating conditions and waveforms: it identifies power-on operation under large or strongly fluctuating current, identifies the standby state under non-working or stably fluctuating current, and identifies the remaining conditions as shutdown, essentially achieving the aim of the invention.
To verify the effect of the algorithm model of the invention, it is compared with a seq2seq algorithm without the Attention mechanism. The comparison of the simulation results shown in FIG. 5 shows that the Attention mechanism lets the Decoder structure focus more on the effective feature information: the machine-time prediction is better in current intervals with continuous fluctuation, and machine-time misprediction in such intervals rarely occurs.
The foregoing description of the embodiments is provided to enable one of ordinary skill in the art to make and use the invention. Various modifications of these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without inventive effort. Therefore, the present invention is not limited to the above embodiments; improvements and modifications made by those skilled in the art based on this disclosure fall within the protection scope of the present invention.
Claims (10)
1. A time calculation method of an electron microscope based on seq2seq algorithm comprises the following steps:
(1) Aiming at scientific instruments such as an electron microscope, establishing a data set about the operating current of the scientific instruments;
(2) Preprocessing a data set, and dividing the whole data set into a training set and a testing set;
(3) Constructing a state prediction model based on a seq2seq algorithm, wherein the state prediction model is formed by sequentially connecting an encoder, an attention mechanism layer and a decoder, the encoder is used for carrying out characteristic encoding on input current data, the attention mechanism layer is used for endowing different hidden layer states in characteristic information with different weights, and the decoder is used for decoding the characteristic information to obtain state probability distribution of an instrument at each moment;
(4) Training the state prediction model by using the current data of the training set;
(5) Inputting the current data of the test set into the trained model to predict the state of the instrument at each moment, from which the duration of the instrument's working operation state is counted.
2. The electron microscope machine-time calculation method according to claim 1, characterized in that step (1) is implemented as follows: first, several groups of operating current sequences of scanning and transmission electron microscopes are screened from the database of the instrument background management system, each sequence having a complete current cycle and containing the current value at every moment of the cycle; each current value in an operating current sequence is then labeled to obtain the corresponding label sequence, namely, for the current value at any moment, the label is assigned 2 if the instrument is in the working operation state at that moment, 1 if it is in the standby state, and 0 if it is shut down; each operating current sequence is combined with its corresponding label sequence to form one group of current data, and several groups of current data are obtained to form the data set.
3. The electron microscope machine-time calculation method according to claim 2, characterized in that preprocessing the data set in step (2) is implemented as follows: the operating current sequences in the data set are first normalized, and each sequence is then converted and divided into current input vectors for every moment; for the current value i_t at any moment t of an operating current sequence, i_t and the w preceding current values are taken according to a predetermined window size w to compose the current input vector [i_{t-w}, i_{t-w+1}, …, i_{t-1}, i_t] for moment t, and if fewer than w+1 current values are available, i_t is copied to complete the vector; the current values at all moments of the operating current sequence are traversed to obtain the current input vectors for all moments.
4. The electron microscope machine-time calculation method according to claim 3, characterized in that the encoder adopts a BiLSTM network comprising a forward LSTM unit and a backward LSTM unit, with the specific calculation expressions:

$$\overrightarrow{h_t} = \mathrm{BiLSTM}^{+}(x_t, \overrightarrow{h_{t-1}}), \qquad \overleftarrow{h_t} = \mathrm{BiLSTM}^{-}(x_t, \overleftarrow{h_{t+1}}), \qquad h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}], \qquad H = [h_1, h_2, \ldots, h_m]$$

wherein: H is the encoded feature vector output by the encoder; h_t is the hidden state of the BiLSTM network at moment t; t is a natural number with 1 ≤ t ≤ m, m being the time length of the operating current sequence; x_t is the current input vector at moment t; $\overrightarrow{h_t}$ and $\overrightarrow{h_{t-1}}$ are the hidden states of the forward LSTM unit at moments t and t−1 respectively; $\overleftarrow{h_t}$ and $\overleftarrow{h_{t+1}}$ are the hidden states of the backward LSTM unit at moments t and t+1 respectively; BiLSTM^+() and BiLSTM^−() denote the internal computation functions of the forward and backward LSTM units respectively.
5. The electron microscope machine-time calculation method according to claim 4, characterized in that the computational expressions of the attention mechanism layer are:

$$v_t = V_e^{\mathrm{T}}\tanh(W_e[h_t; s_{t-1}]), \qquad \alpha_t = \frac{\exp(v_t)}{\sum_{k=1}^{m}\exp(v_k)}, \qquad C_t = \sum_{k=1}^{m}\alpha_k h_k$$

wherein: C_t is the output of the attention mechanism layer, i.e. the timing vector at moment t; α_t is the weight of the hidden state h_t; v_t is the attention score of the hidden state h_t; exp() is the exponential function with the natural constant e as base; tanh() is the hyperbolic tangent function; V_e and W_e are the weight matrices to be learned, and T denotes transposition; s_{t−1} is the hidden state of the decoder at moment t−1.
6. The electron microscope machine-time calculation method according to claim 5, characterized in that the decoder uses a unidirectional LSTM network to calculate the hidden state s_t; s_t, the attention mechanism layer output C_t, and the state probability density distribution vector p_{t−1} of the instrument at moment t−1 are then spliced and reduced in dimension through a fully connected layer to obtain the state probability density distribution vector p_t of the instrument at moment t, which is the model output.
7. The electron microscope machine-time calculation method according to claim 6, characterized in that the hidden state s_t is calculated as:

$$s_t = \mathrm{LSTM}([C_t; p_{t-1}], s_{t-1})$$

wherein: s_t is the hidden state of the decoder at moment t; p_{t−1} is the state probability density distribution vector of the instrument at moment t−1; LSTM() denotes the internal calculation function of the unidirectional LSTM network.
8. The electron microscope machine-time calculation method according to claim 7, characterized in that the initial hidden state s_0 is obtained by reducing the dimension of the hidden state h_m through a fully connected layer.
9. The electron microscope machine-time calculation method according to claim 2, characterized in that the specific process of training the model in step (4) is as follows:
4.1) Initializing the model parameters, including the bias vector and weight matrix of each layer, the learning rate, and the optimizer;
4.2) Inputting the current input vector of each moment of the training-set current data into the model, obtaining the corresponding prediction result (the state probability density distribution vector) through forward propagation, and calculating the loss function L between the prediction result and the corresponding label sequence;
4.3) Continuously and iteratively updating the model parameters with the optimizer by gradient descent according to the loss function L, until L converges and training is complete.
10. The electron microscope machine-hour calculation method according to claim 9, characterized in that: the expression of the loss function L is as follows:

L = -(1/m) Σ_{t=1}^{m} log p_t(y_t)

wherein: p_t is the state probability density distribution vector of the instrument at time t in the prediction result output by the model, p_t(y_t) is its component for the labelled state, y_t is the label value at time t in the corresponding label sequence, t is a natural number with 1 ≤ t ≤ m, and m is the time length of the current sequence.
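A per-sequence cross-entropy of this form (the original equation image is not recoverable, so this concrete form is an assumption consistent with the definitions of p_t, y_t and m) can be computed directly; the probability matrix and labels below are illustrative values only.

```python
import numpy as np

def machine_hour_loss(P, y):
    """Mean cross-entropy between predicted distributions p_t and labels y_t.

    P: (m, K) array, row t is the state probability vector p_t
    y: (m,) integer labels y_t with 0 <= y_t < K
    """
    m = len(y)
    return -np.log(P[np.arange(m), y]).mean()

P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
y = np.array([0, 1, 2])
L = machine_hour_loss(P, y)  # averages -log of the probability assigned to each label
```

Sharper predictions on the labelled states drive L toward zero, which is the convergence target of claim 9.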
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310017145.1A CN115713097A (en) | 2023-01-06 | 2023-01-06 | Time calculation method of electron microscope based on seq2seq algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115713097A true CN115713097A (en) | 2023-02-24 |
Family
ID=85236139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310017145.1A Pending CN115713097A (en) | 2023-01-06 | 2023-01-06 | Time calculation method of electron microscope based on seq2seq algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115713097A (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180329884A1 (en) * | 2017-05-12 | 2018-11-15 | Rsvp Technologies Inc. | Neural contextual conversation learning |
CN110442707A (en) * | 2019-06-21 | 2019-11-12 | 电子科技大学 | A kind of multi-tag file classification method based on seq2seq |
CN115510964A (en) * | 2022-09-21 | 2022-12-23 | 浙江省科技项目管理服务中心 | On-machine computing method for liquid chromatograph scientific instruments |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110399850B (en) | Continuous sign language recognition method based on deep neural network | |
CN113806494B (en) | Named entity recognition method based on pre-training language model | |
CN111597340A (en) | Text classification method and device and readable storage medium | |
CN113435208A (en) | Student model training method and device and electronic equipment | |
CN117037427B (en) | Geological disaster networking monitoring and early warning system | |
CN115130591A (en) | Cross supervision-based multi-mode data classification method and device | |
CN114019370A (en) | Motor fault detection method based on gray level image and lightweight CNN-SVM model | |
CN115859142A (en) | Small sample rolling bearing fault diagnosis method based on convolution transformer generation countermeasure network | |
CN116703642A (en) | Intelligent management system of product manufacturing production line based on digital twin technology | |
CN117332206A (en) | RCNN-FA-BiGRU escalator bearing fault diagnosis method based on attention mechanism | |
CN115659254A (en) | Power quality disturbance analysis method for power distribution network with bimodal feature fusion | |
CN117217277A (en) | Pre-training method, device, equipment, storage medium and product of language model | |
CN111783464A (en) | Electric power-oriented domain entity identification method, system and storage medium | |
CN117350898A (en) | Intelligent early warning system and method for annual patent fee | |
CN117034123B (en) | Fault monitoring system and method for fitness equipment | |
CN117333146A (en) | Manpower resource management system and method based on artificial intelligence | |
CN115713097A (en) | Time calculation method of electron microscope based on seq2seq algorithm | |
CN114036947B (en) | Small sample text classification method and system for semi-supervised learning | |
CN115994204A (en) | National defense science and technology text structured semantic analysis method suitable for few sample scenes | |
CN116521863A (en) | Tag anti-noise text classification method based on semi-supervised learning | |
CN114706054A (en) | Method for identifying human body motion micro Doppler signal | |
CN114357166A (en) | Text classification method based on deep learning | |
CN110825851A (en) | Sentence pair relation discrimination method based on median conversion model | |
CN111158640B (en) | One-to-many demand analysis and identification method based on deep learning | |
CN113836942B (en) | Text matching method based on hidden keywords |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||