CN117668549A - Data extraction method, device and storage medium - Google Patents
- Publication number
- CN117668549A CN117668549A CN202311668220.7A CN202311668220A CN117668549A CN 117668549 A CN117668549 A CN 117668549A CN 202311668220 A CN202311668220 A CN 202311668220A CN 117668549 A CN117668549 A CN 117668549A
- Authority
- CN
- China
- Prior art keywords
- training
- data
- batch
- measurement
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/10—Pre-processing; Data cleansing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0985—Hyperparameter optimisation; Meta-learning; Learning-to-learn
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a data extraction method, a device and a storage medium. The method comprises the following steps: before training an end-to-end deep network model for correcting random and systematic errors in the sensor measurement data of an aerospace measurement system, acquiring the sensor measurement data of the system to form a measurement data set; determining the training data set to be trained based on the measurement data set; extracting a training sample of one training batch from the current training data set by an out-of-order extraction method; and, based on the extracted training batch, training a preset network model to obtain the required end-to-end deep network model for correcting the random and systematic errors of the sensor measurement data. By extracting the measurement data in an out-of-order manner, the scheme improves the training effect of the deep network model and thereby the reliability of the error correction.
Description
Technical Field
The invention belongs to the technical field of aerospace measurement systems, and particularly relates to a data extraction method, a device and a storage medium, in particular to a method, a device and a storage medium for extracting time-series data.
Background
In an aerospace measurement system composed of multiple sensors (including long-range tracking radars, optical telescope measurement systems and the like), the measurement data obtained by each sensor contain both a systematic error and a random error. In the related scheme, the systematic errors are calibrated and corrected by periodically measuring fixed targets; this correction rests on model assumptions drawn from various prior experiences, after which the measurement data are fitted (generally by least-squares fitting) to obtain the assumed parameters used for error correction, so its reliability is low.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The invention aims to provide a data extraction method, a device and a storage medium, to address the problems of the related scheme, in which the systematic errors of an aerospace measurement system are calibrated and corrected by periodically measuring a fixed target, the correction rests on model assumptions drawn from various prior experiences, and the measurement data are then fitted to solve for the assumed parameters used for error correction, so that the reliability is low.
The invention provides a data extraction method applied to training an end-to-end deep network model for correcting random and systematic errors in the sensor measurement data of an aerospace measurement system. The data extraction method comprises the following steps: before training the end-to-end deep network model, acquiring the sensor measurement data of the aerospace measurement system to form a measurement data set; determining the training data set to be trained based on the measurement data set; extracting a training sample of one training batch from the current training data set by an out-of-order extraction method; and, based on the extracted training batch, training a preset network model to obtain the required end-to-end deep network model for correcting the random and systematic errors of the sensor measurement data.
In some embodiments, determining the training data set to be trained based on the measurement data set comprises: extracting a partial batch of measurement data from the measurement data set to form a training data set D; rewriting each measurement batch B in the training data set D as a set of single measurement points; and concatenating the measurement-point sets of every measurement batch B in the training data set D to form a new training set D' serving as the training data set to be trained.
In some embodiments, the new training set D' is: D' = {x_1^[1], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}, where i = 1, 2, …, n denotes the i-th measurement sequence, m_i the number of data points obtained in the i-th measurement, and n and the m_i are positive integers.
In some embodiments, extracting a training sample of a training batch from the training data set by the out-of-order extraction method includes: shuffling the data in the current training data set and recording it as the current out-of-order training data; and randomly extracting one batch of data from the current out-of-order training data to form a training sample of one training batch.
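The shuffle-then-sample extraction just described can be sketched as follows. This is a minimal illustration, not the patented implementation: the function name, the 12-column point layout, and the toy data are assumptions.

```python
import numpy as np

def extract_training_batch(d_prime, batch_size, seed=None):
    """Out-of-order extraction: shuffle the flattened point set D' and
    randomly draw one training batch of measurement points."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(d_prime)                # shuffle ("disturb") D'
    idx = rng.choice(len(shuffled), size=batch_size, replace=False)
    return shuffled[idx]                               # one training batch

# toy flattened training set D': 30 points, 12 features per point
d_prime = np.arange(30 * 12, dtype=float).reshape(30, 12)
batch = extract_training_batch(d_prime, batch_size=8, seed=0)
```

Because the whole point set is shuffled before a batch is drawn, a batch mixes points from different measurement sequences rather than coming from one sequence.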
In some embodiments, training a preset network model on the training sample of the extracted training batch to obtain the required end-to-end deep network model includes: training the preset network model on the extracted training batch to obtain a model gradient; re-extracting a training sample of another training batch from the training data set by the out-of-order extraction method; training the preset network model on the extracted further batch to obtain another model gradient; and cycling in this way until the obtained model gradient no longer decreases or the number of training iterations reaches the set maximum, then saving the model parameters obtained at that point to yield the required end-to-end deep network model.
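The training cycle above (draw an out-of-order batch, compute a gradient, stop when the gradient stalls or the step budget is exhausted) can be sketched with a linear model standing in for the deep network. Everything here is an assumption for illustration: the stand-in model, the gradient-norm threshold as a proxy for "the gradient no longer decreases", and the synthetic data.

```python
import numpy as np

def train_out_of_order(points, targets, batch_size=32, lr=0.05,
                       max_steps=2000, tol=1e-8, seed=0):
    """Sketch of the training cycle: repeatedly draw a random
    out-of-order batch, compute the model gradient, take a step, and
    stop when the gradient norm falls below tol or max_steps is hit."""
    rng = np.random.default_rng(seed)
    w = np.zeros(points.shape[1])
    for step in range(max_steps):
        idx = rng.choice(len(points), size=batch_size, replace=False)
        x, y = points[idx], targets[idx]
        grad = 2.0 * x.T @ (x @ w - y) / batch_size   # mean-squared-error gradient
        if np.linalg.norm(grad) < tol:                # stopping criterion
            break
        w -= lr * grad                                # gradient step
    return w

# noiseless toy problem: targets generated by a known weight vector
rng = np.random.default_rng(1)
X = rng.normal(size=(256, 12))
true_w = rng.normal(size=12)
w = train_out_of_order(X, X @ true_w)
```

On this noiseless problem the batched gradient steps recover the generating weights, which is the behaviour the stopping criterion is meant to detect.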
In some embodiments, re-extracting a training sample of another training batch from the training data set by the out-of-order extraction method includes: in the new extraction process, no longer extracting the data by measurement batch, so that the features x_6 to x_11 keep changing from sample to sample; where x_6 is the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal-axis difference of the measuring device, x_10 the photoelectric vertical-axis difference of the measuring device, and x_11 the large-disc non-levelness of the measuring device.
In some embodiments, training the preset network model on the training sample of an extracted training batch to obtain a model gradient includes: based on the extracted training batch, calculating the value of the loss function with the preset network model and performing a gradient correction to obtain the model gradient.
In accordance with the foregoing method, another aspect of the present invention provides a data extraction apparatus for training an end-to-end deep network model for correcting random and systematic errors in the sensor measurement data of an aerospace measurement system. The data extraction apparatus includes: an acquisition unit configured to acquire the sensor measurement data of the aerospace measurement system to form a measurement data set before the end-to-end deep network model is trained; and a control unit configured to determine the training data set to be trained based on the measurement data set. The control unit is further configured to extract a training sample of one training batch from the current training data set by an out-of-order extraction method, and to train a preset network model on the extracted training batch to obtain the required end-to-end deep network model for correcting the random and systematic errors of the sensor measurement data.
In some embodiments, the control unit determines the training data set to be trained based on the measurement data set by: extracting a partial batch of measurement data from the measurement data set to form a training data set D; rewriting each measurement batch B in the training data set D as a set of single measurement points; and concatenating the measurement-point sets of every measurement batch B in the training data set D to form a new training set D' serving as the training data set to be trained.
In some embodiments, the new training set D' is: D' = {x_1^[1], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}, where i = 1, 2, …, n denotes the i-th measurement sequence, m_i the number of data points obtained in the i-th measurement, and n and the m_i are positive integers.
In some embodiments, the control unit extracts a training sample of a training batch from the training data set by the out-of-order extraction method, including: shuffling the data in the current training data set and recording it as the current out-of-order training data; and randomly extracting one batch of data from the current out-of-order training data to form a training sample of one training batch.
In some embodiments, the control unit trains a preset network model on the training sample of the extracted training batch to obtain the required end-to-end deep network model by: training the preset network model on the extracted training batch to obtain a model gradient; re-extracting a training sample of another training batch from the training data set by the out-of-order extraction method; training the preset network model on the extracted further batch to obtain another model gradient; and cycling in this way until the obtained model gradient no longer decreases or the number of training iterations reaches the set maximum, then saving the model parameters obtained at that point to yield the required end-to-end deep network model.
In some embodiments, the control unit re-extracts a training sample of another training batch from the training data set by the out-of-order extraction method, including: in the new extraction process, no longer extracting the data by measurement batch, so that the features x_6 to x_11 keep changing from sample to sample; where x_6 is the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal-axis difference of the measuring device, x_10 the photoelectric vertical-axis difference of the measuring device, and x_11 the large-disc non-levelness of the measuring device.
In some embodiments, the control unit trains the preset network model on the training sample of an extracted training batch to obtain a model gradient by: based on the extracted training batch, calculating the value of the loss function with the preset network model and performing a gradient correction to obtain the model gradient.
In accordance with another aspect of the present invention, there is provided a terminal comprising: the data extraction device described above.
In accordance with the above method, a further aspect of the present invention provides a storage medium, where the storage medium includes a stored program, where the program, when executed, controls a device in which the storage medium is located to perform the above data extraction method.
In summary, before training the end-to-end deep network model for correcting random and systematic errors in the sensor measurement data of the aerospace measurement system, the scheme of the invention acquires the sensor measurement data to form a measurement data set; determines the training data set to be trained based on the measurement data set; extracts a training sample of one training batch from the current training data set by an out-of-order extraction method; and trains a preset network model on the extracted training batch to obtain the required end-to-end deep network model for correcting the random and systematic errors. Because the measurement data are extracted out of order, the training effect of the deep network model is improved, and the reliability of the error correction is improved.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
FIG. 1 is a flow chart of a data extraction method according to an embodiment of the invention;
FIG. 2 is a flow chart of an embodiment of the training data set to be trained currently in the method of the present invention;
FIG. 3 is a flow chart of an embodiment of the method of the present invention for extracting training samples of a training lot by out-of-order extraction;
FIG. 4 is a flow chart of an embodiment of training a predetermined network model to obtain a desired end-to-end deep network model in the method of the present invention;
FIG. 5 is a schematic diagram of a data extraction device according to an embodiment of the invention;
FIG. 6 is a schematic diagram of the overall structure of an end-to-end deep network model;
FIG. 7 is a schematic diagram of the structure of an embedded layer;
FIG. 8 is a schematic diagram of the structure of a Transformer layer;
FIG. 9 is a schematic diagram of the structure of a fully connected layer;
fig. 10 is a flow chart of a method for extracting out-of-order data for time series data.
In the embodiment of the present invention, reference numerals are as follows, in combination with the accompanying drawings:
102-an acquisition unit; 104-a control unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to specific embodiments of the present invention and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The systematic error correction method in the related scheme generally sets up a fixed ground target before measurement, or measures the fixed ground target periodically. Since the target position is accurately known, systematic errors such as the large-disc levelling error and the misalignment of the photoelectric axes can be back-calculated from measurements of the target. The main problems of this method are: 1) the assumption that the systematic error remains unchanged throughout the measurement process does not hold; in practice the systematic error changes while the device operates; 2) a fixed target can only be built on the ground, so only low-elevation data can be obtained, whereas the systematic error of the measuring equipment varies with elevation angle. The methods for correcting random errors, in turn, require various prior knowledge and hypothesis models, which depend on the knowledge and experience of the data-processing personnel; since the model assumptions rest on personal experience they are unreliable in practice, different people reach different conclusions, and the final result carries large uncertainty.
Therefore, there is a need for an end-to-end deep network model for correcting random and systematic errors in sensor measurement data; some schemes accordingly propose a deep-network-based method for correcting random and systematic errors in sensor measurement data, as illustrated by the examples shown in Figs. 6 to 9.
Fig. 6 is a schematic diagram of the overall structure of the end-to-end deep network model. The input X is a measurement sequence, X = {x^[1], x^[2], …, x^[m]}, where each measurement point is:
x^[i] = {x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_10, x_11}, i = 1, 2, …, m, where i and m are positive integers.
A total of m measurement points are obtained in each measurement. Here x_0 is the target distance R, x_1 the target azimuth angle A, x_2 the target pitch angle E, x_3 the control level (AGC) of the measuring device, x_4 the target RCS, x_5 the measurement time T, x_6 the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal-axis difference of the measuring device, x_10 the photoelectric vertical-axis difference of the measuring device, and x_11 the large-disc non-levelness of the measuring device. Each input data item (measurement point) thus consists of 12 features.
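The 12-feature layout of one input measurement point can be written down explicitly; the field names below are illustrative, chosen only to mirror the list above.

```python
from dataclasses import dataclass, astuple

@dataclass
class MeasurementPoint:
    """One input measurement point x^[i] with its 12 features
    (field names are assumptions, not from the patent)."""
    R: float            # x0  target distance
    A: float            # x1  target azimuth angle
    E: float            # x2  target pitch angle
    AGC: float          # x3  control level of the measuring device
    RCS: float          # x4  target radar cross-section
    T: float            # x5  measurement time
    temperature: float  # x6  atmospheric temperature
    humidity: float     # x7  atmospheric humidity
    pressure: float     # x8  atmospheric pressure
    h_axis_diff: float  # x9  photoelectric horizontal-axis difference
    v_axis_diff: float  # x10 photoelectric vertical-axis difference
    disc_tilt: float    # x11 large-disc non-levelness

point = MeasurementPoint(1200.0, 45.0, 30.0, 0.8, 2.5, 0.0,
                         15.0, 0.6, 101.3, 0.01, 0.02, 0.003)
```

Note that x_6 through x_11 are per-measurement calibration values, which is what makes them constant within one measurement sequence.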
The output Y is the error-corrected data, Y = {y^[1], y^[2], …, y^[N]}, where y^[i] = {y_0, y_1, y_2, y_3}, i = 1, 2, …, N, for the N measured values; i and N are positive integers. Here y_0 is the target distance R, y_1 the target azimuth A, y_2 the target pitch E, and y_3 the target RCS. Each output data item consists of 4 features.
Fig. 7 is a schematic structural diagram of the embedding layer shown in Fig. 6. The embedding layer is a prediction-based self-encoder. Its input data dimension is 1×12 and its output data dimension is 12×m, where m is the embedding-layer depth. The output of the l-th layer is O_l = σ(W_l O_{l−1} + b_l), where l = 1, 2, …, m; W_l is the weight of layer l, b_l the bias of layer l, O_{l−1} the output of the previous layer, and O_0 the input layer; W_l and b_l are the parameters to be learned. σ is the ReLU function, whose expression is f(x) = max(0, x). The output of the embedding layer is O = [O_1, O_2, …, O_m]^T, where T denotes the transpose.
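The forward pass O_l = ReLU(W_l O_{l−1} + b_l) can be sketched as below. One assumption is made for concreteness: each per-layer output O_l is taken to be 12-dimensional (so every W_l is 12×12), which is one reading of the "1×12 input, 12×m output" description.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)   # sigma(x) = max(0, x)

def embedding_layer(x, weights, biases):
    """Forward pass O_l = ReLU(W_l O_{l-1} + b_l) for l = 1..m, with
    O_0 = x; the m per-layer outputs are stacked into the 12 x m
    embedding O = [O_1, ..., O_m]^T."""
    outputs, o = [], x
    for W, b in zip(weights, biases):
        o = relu(W @ o + b)
        outputs.append(o)
    return np.stack(outputs)    # shape (m, 12)

rng = np.random.default_rng(2)
m = 4
weights = [rng.normal(scale=0.1, size=(12, 12)) for _ in range(m)]
biases = [np.zeros(12)] * m
O = embedding_layer(rng.normal(size=12), weights, biases)
```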
Fig. 8 is a schematic diagram of the structure of the Transformer layer; its specific structure and model are as described in the paper Attention Is All You Need. The embedding-layer output O = [O_1, O_2, …, O_m]^T serves as the input to the Transformer layer; the output O_T of the Transformer layer, defined as in the above paper, is in turn fed to the fully connected layer, whose structure is shown in Fig. 9.
Fig. 9 is a schematic structural diagram of the fully connected layer. In Fig. 9, the leftmost part is the output of the Transformer layer, the middle is the fully connected layer, and the rightmost is the output layer. The output layer produces the final predicted value Ŷ = W_F O_T + b_F, where W_F is the fully-connected-layer weight, b_F the fully-connected-layer bias, and O_T the fully-connected-layer input, i.e. the Transformer-layer output.
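The final affine map Ŷ = W_F O_T + b_F can be checked with easily verified numbers. The shapes are assumptions for the sake of a concrete example: here O_T is flattened to a 6-vector and W_F maps it to the 4 corrected outputs.

```python
import numpy as np

def output_layer(o_t, W_f, b_f):
    """Final prediction: Y_hat = W_F @ O_T + b_F, mapping the
    Transformer-layer output to the 4 corrected outputs
    (distance R, azimuth A, pitch E, RCS)."""
    return W_f @ o_t + b_f

o_t = np.arange(6.0)            # toy O_T: [0, 1, 2, 3, 4, 5]
W_f = np.eye(4, 6)              # picks the first 4 entries
b_f = np.ones(4)
y_hat = output_layer(o_t, W_f, b_f)
```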
The specific process of training and utilizing the end-to-end depth network model comprises the following steps:
step 11, error function and loss function definition.
The error function Err is defined by:

Err(X, Y, Ŷ) = sqrt( (1/m) · Σ_i (ŷ_i − y_i)² )

where Ŷ is the predicted value for input X, Y is the true value corresponding to X, i denotes the i-th component, and m is the total number of components. y_i denotes the i-th component of the measurement result: y_0 the target distance R, y_1 the target azimuth A, y_2 the target pitch E, and y_3 the target RCS. Err(X, Y, Ŷ) thus denotes the error for input X, true value Y and predicted value Ŷ.
The loss function during batch training, Loss, is defined by:

Loss = (1/n) · Σ_{i=1}^{n} Err_i

where i denotes the i-th sample and n the number of samples per batch.
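The per-sample error and batch loss can be sketched in a few lines. The root-mean-square form of Err is an assumption here, since the exact formula is not fully recoverable from the text.

```python
import numpy as np

def err(y_hat, y):
    """Per-sample error Err(X, Y, Y_hat): root-mean-square deviation
    over the output components (an assumed reconstruction)."""
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(y)) ** 2))

def batch_loss(Y_hat, Y):
    """Batch loss: mean of the per-sample errors over the n samples."""
    return float(np.mean([err(p, t) for p, t in zip(Y_hat, Y)]))

# two samples with 4 output components each
loss = batch_loss([[2.0, 2.0, 2.0, 2.0], [0.0, 0.0, 0.0, 0.0]],
                  [[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]])
```

The first sample has error 2 and the second error 0, so the batch loss averages to 1.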
Step 12, training steps are as follows:
step 121, initializing each parameter of the model by using the random number.
Step 122, prepare the measurement data and the corresponding true value.
Step 123, extracting 1 batch of data from the prepared dataset.
Step 124, input the data to the embedding layer, calculate the predicted value with the model, and calculate the loss with the loss function.
Step 125, optimize the model parameters according to the loss value using stochastic gradient descent.
Step 126, repeat steps 123 to 125 until the loss function no longer decreases or the predetermined number of training iterations is reached.
Step 127, save the trained model parameters.
Step 13, perform error correction using the model, specifically as follows:
step 131, creating the depth network model defined in fig. 6;
step 132, loading the model parameters saved in step 127 to the network model created in step 131;
step 133, inputting the original measurement data to the model, and outputting the result of the model as the measurement value after error correction.
In the model training of some of the above schemes, multiple batches of training data are generally used, with each batch of training data from a single measurement process. Let the training dataset be:
D={B 1 ,B 2 ,…,B n }。
where n indicates that a total of n measurement sequences make up the data set D, and B_i (i = 1, 2, …, n) is the time-series data generated by the i-th measurement process.
According to the description of some of the above schemes, B_i = X = {x^[1], x^[2], …, x^[m]}, where m represents the total number of time-series data points obtained in one measurement. Each measurement point is:
x^[i] = {x_0, x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_10, x_11}, i = 1, 2, …, m.
In each measurement point x^[i], x_0 is the target distance R, x_1 the target azimuth angle A, x_2 the target pitch angle E, x_3 the control level (AGC) of the measuring device, x_4 the target RCS, x_5 the measurement time T, x_6 the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal-axis difference of the measuring device, x_10 the photoelectric vertical-axis difference of the measuring device, and x_11 the large-disc non-levelness of the measuring device. In each measurement, x_6 to x_11 are obtained by calibration before the actual measurement starts and therefore remain unchanged throughout that measurement. During model training in the related scheme, some data are selected from the data set D to form a batch, for example {B_1, B_3, B_4, B_6, B_8, B_9, B_10}. The problem with this selection method is that, within any one selected batch, the features x_6 to x_11 take a single constant value, namely the one calibrated when that batch was measured, so the model has difficulty fitting them correctly and easily overfits them to those particular values.
Specifically, model training calculates predicted values from the input values, calculates errors and loss values from the predicted and true values, and then adjusts the model parameters by gradient descent through the error back-propagation mechanism. If some input features remain constant, the network parameters that process those features quickly suffer vanishing gradients and overfit to the current values. When training switches from one batch of measurement data to the next, the network again quickly fits to the values of the new batch, so throughout the training process this part of the network is always overfitted to the values of one batch or another. Overall, this part of the network does not fit the actual situation well, but easily overfits to the values of the current batch.
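The claim above is easy to verify numerically: within a batch drawn from a single measurement sequence the features x_6 to x_11 have zero variance, whereas an out-of-order batch mixes calibration values from several sequences. All data below are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# 5 synthetic measurement batches of 20 points x 12 features;
# features 6..11 are calibrated once per batch, hence constant within it
batches = []
for _ in range(5):
    pts = rng.normal(size=(20, 12))
    pts[:, 6:12] = rng.normal(size=6)     # one calibration per batch
    batches.append(pts)

batchwise = batches[0]                    # batch-wise selection
out_of_order = rng.permutation(np.concatenate(batches))[:20]

var_batchwise = batchwise[:, 6:12].var(axis=0)       # all zeros
var_out_of_order = out_of_order[:, 6:12].var(axis=0) # all positive
```

Zero within-batch variance means the corresponding weights receive no useful gradient signal, which is exactly the failure mode the out-of-order extraction is designed to remove.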
To solve the above problems, the present invention proposes a data extraction method, in particular an out-of-order extraction method for such time-series data, which improves the training effect of the deep network model, prevents the model from overfitting to certain specific parameters (in particular parameters that remain constant within a batch, such as x_6 to x_11), and improves the reliability of the error correction.
According to an embodiment of the present invention, a data extraction method is provided; Fig. 1 shows a flowchart of an embodiment of the method. The data extraction method is applied to training an end-to-end deep network model for correcting random and systematic errors in the sensor measurement data of an aerospace measurement system, and comprises steps S110 to S140.
At step S110, before the end-to-end deep network model for random and systematic error correction is trained, the sensor measurement data of the aerospace measurement system are acquired to form a measurement data set.
At step S120, a training dataset to be currently trained is determined based on the measurement dataset.
In some embodiments, the specific procedure of determining, in step S120, the training data set to be trained based on the measurement data set is described in the following example.
The following, with reference to the flowchart of Fig. 2, further describes the specific process of determining the training data set to be trained in step S120, which includes steps S210 to S230.
Step S210, extracting a partial batch of measurement data from the measurement data set to form a training data set D.
Step S220, writing a set of single measurement point combinations for each measurement lot B in the training data set D.
Step S230, concatenating the measurement-point sets of every measurement batch B in the training data set D to form a new training set D' serving as the training data set to be trained.
Specifically, fig. 10 is a flow chart of the out-of-order data extraction method for time series data. As shown in fig. 10, the out-of-order data extraction method for time series data includes: step 21, extracting a partial batch of measurement data from the data set to form a training data set D, and rewriting each measurement batch B in the training data set D as a set of single measurement points. The measurement point sets of each measurement batch B in the training data set D are concatenated to form a new training set D', D' being a set of a series of measurement points.
Wherein the new training set D' is:

D' = {x_1^[1], x_1^[2], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}

where i = 1, 2, …, n denotes the i-th measurement sequence, m_i denotes the number of data points obtained by the i-th measurement, and n and m_i are positive integers.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: in step 21, for the training data set D = {B_1, B_2, …, B_n}, each B_i = X = {x^[1], x^[2], …, x^[m]} therein is rewritten in the following form:

B_i = {x_i^[1], x_i^[2], …, x_i^[m_i]}

where i = 1, 2, …, n denotes the i-th measurement sequence, and m_i denotes the number of data points obtained by the i-th measurement. Thus, the training data set D may be rewritten as:

D = {x_1^[1], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}
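The batch-flattening described above can be sketched as follows (a minimal illustration; the function name and the list-of-lists representation of the batches are assumptions for the example, not part of the invention):

```python
# Flatten a set of measurement batches D = {B_1, ..., B_n} into a single
# training set D' consisting of individual measurement points.

def flatten_batches(batches):
    """batches: list of lists, where batches[i] holds the m_i points of B_i."""
    d_prime = []
    for batch in batches:       # iterate over measurement batches B_i
        d_prime.extend(batch)   # splice the points of B_i onto D'
    return d_prime

# Example: three measurement batches with different numbers of points.
D = [[1.0, 1.1, 1.2], [2.0, 2.1], [3.0, 3.1, 3.2, 3.3]]
D_prime = flatten_batches(D)
print(len(D_prime))  # 9: sum of m_1 + m_2 + m_3
```

The batches may have different lengths m_i; the flattened set simply concatenates them in order, after which the shuffling step below removes any batch structure.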
at step S130, an out-of-order extraction method is adopted to extract a training sample of a training batch from the training dataset to be trained.
In some embodiments, the specific process in step S130 of extracting training samples of a training batch from the training dataset currently to be trained by the out-of-order extraction method is described in the following exemplary description.
The specific process of extracting training samples of a training batch by the out-of-order extraction method in step S130 is further described below in connection with fig. 3, and includes: step S310 to step S320.
Step S310, the data in the training data set currently to be trained are shuffled and recorded as the current out-of-order training data to be trained.
Step S320, a batch of data is randomly extracted from the current out-of-order training data to be trained to form a training sample of one training batch.
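Steps S310 and S320 can be sketched together as follows (a minimal illustration; the function name, batch size, and seed handling are assumptions for the example):

```python
import random

def draw_training_batch(d_prime, batch_size, seed=None):
    """Shuffle the flattened set D' (step S310) and draw one batch T (step S320)."""
    rng = random.Random(seed)
    shuffled = list(d_prime)   # copy so the original data set is untouched
    rng.shuffle(shuffled)      # step S310: scramble the data
    # step S320: take a batch of data from a random position in the shuffled set
    start = rng.randrange(0, max(1, len(shuffled) - batch_size + 1))
    return shuffled[start:start + batch_size]

batch = draw_training_batch(list(range(100)), batch_size=10, seed=0)
print(len(batch))  # 10
```

Because the set is shuffled first, the contiguous slice drawn afterwards contains points from many different measurement batches.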
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, training a model, and when the model starts to train, executing the following steps:
First, the data in the training data set D' are shuffled.
Wherein the scrambling of ordered data may be achieved by randomization; common methods include shuffling algorithms and random sampling. For example, given an ordered list of data [1, 2, 3, 4, 5], a shuffling algorithm may be used to randomize the order of the list. The shuffling algorithm is as follows:
(1) First, the data list is copied to obtain a new list.
(2) Then, an element is randomly selected from the new list and swapped with the element at the corresponding position in the original list.
(3) The above steps are repeated until all elements have been processed.
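The copy-and-swap procedure above is, in effect, a Fisher-Yates shuffle; a minimal sketch (the function name and seeding are assumptions for the example):

```python
import random

def fisher_yates_shuffle(data, seed=None):
    """Return a randomly permuted copy of `data` (the original is not modified)."""
    rng = random.Random(seed)
    out = list(data)                      # step (1): copy the list
    for i in range(len(out) - 1, 0, -1):
        j = rng.randint(0, i)             # step (2): pick a random element
        out[i], out[j] = out[j], out[i]   # ... and swap it into place
    return out                            # step (3): repeat until all processed

print(sorted(fisher_yates_shuffle([1, 2, 3, 4, 5], seed=42)))  # [1, 2, 3, 4, 5]
```

Every element of the original list survives the shuffle; only the order changes, which is exactly what randomizing the training data requires.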
By this method, the original ordered data list is shuffled into a new, randomly arranged list. In deep learning, randomization of training data serves several important purposes. Avoiding overfitting: randomizing the training data reduces the model's dependence on a specific sequence and its memorization of the training data, thereby reducing the risk of overfitting. Improving generalization: randomized training data helps the model generalize to new data sets, because the model encounters different data arrangements and distributions during training and thus adapts better to different data conditions. Preventing gradient descent from getting stuck in a local optimum: randomizing the training data helps the model avoid locally optimal solutions during gradient descent, because different data arrangements lead to different gradient descent paths, increasing the likelihood that the model finds a globally optimal solution.
Therefore, randomization of the deep network training data can help improve the generalization ability of the model, reduce the risk of overfitting, and increase the likelihood that the model will find a globally optimal solution.
In a second step, a batch of data, e.g., 1000 data points, is randomly extracted from the shuffled training data set D' to form a training batch T. Here, random extraction means extracting a batch of data from a random position in the shuffled set.
At step S140, training is performed by using a preset network model based on the extracted training samples of one training batch to obtain a required end-to-end depth network model for correcting the random sensor measurement data and the systematic errors of the aerospace measurement system.
According to this scheme, before training the end-to-end deep network model for random sensor measurement data and systematic error correction of the aerospace measurement system, sensor measurement data of the aerospace measurement system are acquired to form a measurement data set; a training data set currently to be trained is determined based on the measurement data set; a training sample of one training batch is extracted from that training data set by the out-of-order extraction method; and training is performed with a preset network model based on the extracted training batch to obtain the required end-to-end deep network model for correcting random sensor measurement data and systematic errors of the aerospace measurement system. Extracting the measurement data out of order before training thus improves the training effect of the deep network model and the reliability of the error correction.
In some embodiments, the specific process in step S140 of training with a preset network model, based on the extracted training samples of one training batch, to obtain the required end-to-end deep network model is described in the following exemplary description.
The specific process of training with a preset network model to obtain the required end-to-end deep network model in step S140 is further described below in connection with fig. 4, a flowchart of an embodiment of this training in the method of the present invention, and includes: step S410 to step S440.
Step S410, training is performed by using a preset network model based on the extracted training samples of a training batch, so as to obtain a model gradient.
Step S420, extracting training samples of another training batch from the training data set to be trained again by adopting an out-of-order extraction method.
Step S430, training by using a preset network model based on the extracted training samples of the other training batch to obtain another model gradient.
Step S440, the above steps are repeated in a loop until the obtained model gradient no longer decreases or the number of training iterations reaches the set maximum, at which point the model parameters obtained by training are saved to obtain the required end-to-end deep network model.
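The loop of steps S410 to S440 can be sketched as follows (a hypothetical skeleton; the callback names, tolerance, and stopping bookkeeping are assumptions for the example):

```python
def train_until_converged(draw_batch, train_one_batch, max_steps=1000, tol=1e-6):
    """Repeat steps S410-S430 until the gradient no longer decreases (step S440).

    draw_batch()        -- returns a fresh out-of-order training batch
    train_one_batch(b)  -- runs one forward/backward pass on batch b and
                           returns the resulting gradient norm
    """
    best = float("inf")
    for step in range(max_steps):
        grad_norm = train_one_batch(draw_batch())
        if grad_norm >= best - tol:   # gradient no longer decreasing: stop
            break
        best = grad_norm
    return step + 1                   # number of training iterations performed

# Demo with a stand-in trainer whose gradient norm plateaus after 3 batches.
norms = iter([1.0, 0.5, 0.25, 0.25])
print(train_until_converged(lambda: None, lambda b: next(norms)))  # 4
```

In practice the model parameters would be saved once the loop exits, as step S440 specifies.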
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, performing model training, and when the model starts training, further performing the following steps:
In the third step, the extracted batch of data T is input into the model for calculation and gradient correction; specifically, the value of the loss function is calculated and gradient correction is performed.
The step of calculating the value of the loss function and performing gradient correction generally comprises the following steps:
(1) Forward propagation (Forward Propagation): the input data are fed into the neural network model and, through the computation of several layers and the action of activation functions, the output of the model is obtained;
(2) Loss computation (Compute Loss): the output of the model is compared with the true label and the value of the loss function is calculated;
(3) Backpropagation (Backward Propagation): based on the value of the loss function, the gradient of the loss with respect to each parameter is calculated by the backpropagation algorithm, which computes the gradients layer by layer via the chain rule, after which the parameters are updated according to the gradient descent rule;
(4) Parameter update (Gradient Descent): according to the gradient descent algorithm, the model parameters are updated using the calculated gradients so that the loss function gradually decreases;
(5) Iteration: the above steps are repeated until the loss function converges to a satisfactory level or a preset number of iterations is reached.
Fourth, repeating the second and third steps until the model gradient no longer drops or the maximum training times are reached.
Fifth, the trained model parameters are saved for error correction.
Wherein the model parameters in each network layer include: Weights: weights are the parameters of the connections between neurons in the neural network and adjust the strength of the input signals. Biases: a bias is a per-neuron parameter that adjusts the neuron's activation threshold; biases help neurons fit the data better and improve the expressive power of the model.
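Weight and bias initialization for one fully connected layer can be sketched as follows (the function name and the Gaussian initialization scale are assumptions for the example):

```python
import random

def init_layer(n_in, n_out, seed=0):
    """Initialize the weights and biases of one fully connected layer."""
    rng = random.Random(seed)
    # Weights: one row of n_in connection strengths per output neuron.
    weights = [[rng.gauss(0.0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
    # Biases: one activation-threshold shift per output neuron.
    biases = [0.0] * n_out
    return weights, biases

W, b = init_layer(n_in=2, n_out=3)
print(len(W), len(W[0]), len(b))  # 3 2 3
```

These are exactly the quantities saved after training and reused later for error correction.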
In some embodiments, training with a preset network model based on the extracted training samples of a training batch in step S410, to obtain a model gradient includes: based on the extracted training samples of a training batch, calculating the value of the loss function by using a preset network model, and carrying out gradient correction to obtain a model gradient.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, performing model training, and when the model starts training, further performing the following steps: in the third step, the extracted batch of data T is input into the model for calculation and gradient correction, specifically, the value of the loss function is calculated and gradient correction is performed.
In deep learning, the step of calculating the value of the loss function and performing gradient correction generally includes the following steps:
(1) Forward propagation (Forward Propagation): the input data are fed into the neural network model and, through the computation of several layers and the action of activation functions, the output of the model is obtained. For example, consider a simple neural network model with an input layer, one hidden layer, and an output layer: the input layer has 2 features, the hidden layer has 3 neurons, and the output layer has 1 neuron. During forward propagation, the input data undergo a linear transformation through the weight matrix and bias vector, followed by the activation function, finally yielding the output.
(2) Loss computation (Compute Loss): the output of the model is compared with the true label and the value of the loss function is calculated. Common loss functions include mean squared error (Mean Squared Error) and cross-entropy loss (Cross Entropy Loss). For example, with mean squared error as the loss function, the gap between the predicted value and the true label is computed.
(3) Backpropagation (Backward Propagation): based on the value of the loss function, the gradient of the loss with respect to each parameter is calculated by the backpropagation algorithm, which computes the gradients layer by layer via the chain rule, after which the parameters are updated according to the gradient descent rule. For example, the gradients of the weights and biases of the hidden and output layers are calculated from the gradient of the loss function using the chain rule.
(4) Parameter update (Gradient Descent): according to the gradient descent algorithm, the model parameters are updated using the calculated gradients so that the loss function gradually decreases. For example, the weights and biases of the hidden and output layers are updated by gradient descent, reducing the value of the loss function.
(5) Iteration: the above steps are repeated until the loss function converges to a satisfactory level or a preset number of iterations is reached.
These are the basic steps of calculating the value of the loss function and performing gradient correction in deep learning; through them, the model continuously learns and optimizes, improving its predictive performance.
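The five steps above can be sketched for the 2-3-1 example network in NumPy (the sigmoid activation, initialization, learning rate, and sample values are assumptions for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-3-1 network as in the example: 2 inputs, 3 hidden neurons, 1 output.
W1, b1 = rng.normal(0, 0.5, (3, 2)), np.zeros(3)
W2, b2 = rng.normal(0, 0.5, (1, 3)), np.zeros(1)
lr = 0.1

def train_step(x, y):
    global W1, b1, W2, b2
    # (1) forward propagation with a sigmoid activation on the hidden layer
    z1 = W1 @ x + b1
    h = 1.0 / (1.0 + np.exp(-z1))
    y_hat = W2 @ h + b2
    # (2) mean squared error loss against the true label y
    loss = float((y_hat - y) ** 2)
    # (3) backpropagation via the chain rule, layer by layer
    d_yhat = 2.0 * (y_hat - y)
    dW2, db2 = np.outer(d_yhat, h), d_yhat
    dz1 = (W2.T @ d_yhat) * h * (1.0 - h)
    dW1, db1 = np.outer(dz1, x), dz1
    # (4) gradient-descent parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return loss

# (5) repeated iteration on one sample until the loss is small
losses = [train_step(np.array([0.5, -0.2]), np.array([1.0])) for _ in range(200)]
print(losses[-1] < losses[0])  # True: the loss decreases over the iterations
```

Repeating the step on a fixed sample drives the mean squared error toward zero, illustrating how the gradient correction reduces the loss.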
In some embodiments, in step S420, extracting training samples of another training batch from the training data set currently to be trained by the out-of-order extraction method includes: in each new data extraction, the data are no longer extracted by measurement batch, so the features x_6~x_11 keep changing; wherein x_6 is the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal difference of the measurement equipment, x_10 the photoelectric vertical difference of the measurement equipment, and x_11 the levelness of the large disc of the measurement equipment.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, training the model; when the model starts training and performs the first to fifth steps, the data in each new extraction are no longer drawn by measurement batch, so x_6~x_11 essentially keep changing rather than remaining unchanged for long stretches, which improves model training efficiency and avoids overfitting of the model to specific values of these parameters.
According to an embodiment of the present invention, there is also provided a data extraction apparatus corresponding to the data extraction method. Referring to fig. 5, a schematic diagram of an embodiment of the apparatus of the present invention is shown. The data extraction device is applied to training an end-to-end depth network model for random sensor measurement data and systematic error correction of the aerospace measurement system; the data extraction device includes: an acquisition unit 102 and a control unit 104.
Wherein the acquiring unit 102 is configured to acquire sensor measurement data of the aerospace measurement system to form a measurement data set before training an end-to-end depth network model for random sensor measurement data and systematic error correction of the aerospace measurement system. The specific function and process of the acquisition unit 102 refer to step S110.
The control unit 104 is configured to determine a training data set currently to be trained based on the measurement data set. The specific function and process of the control unit 104 refer to step S120.
In some embodiments, the control unit 104 determines a training data set to be currently trained based on the measurement data set, including:
the control unit 104 is in particular further configured to extract a partial batch of measurement data from the measurement data set, forming a training data set D. The specific function and process of the control unit 104 also refer to step S210.
The control unit 104 is in particular further configured to rewrite each measurement batch B in the training data set D as a set of single measurement points. The specific function and process of the control unit 104 also refer to step S220.
The control unit 104 is specifically further configured to concatenate the measurement point sets of each measurement batch B in the training data set D to form a new training set D' as the training data set currently to be trained. The specific function and process of the control unit 104 also refer to step S230.
Specifically, fig. 10 is a flow chart of the out-of-order data extraction method for time series data. As shown in fig. 10, the out-of-order data extraction method for time series data includes: step 21, extracting a partial batch of measurement data from the data set to form a training data set D, and rewriting each measurement batch B in the training data set D as a set of single measurement points. The measurement point sets of each measurement batch B in the training data set D are concatenated to form a new training set D', D' being a set of a series of measurement points.
Wherein the new training set D' is:

D' = {x_1^[1], x_1^[2], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}

where i = 1, 2, …, n denotes the i-th measurement sequence, m_i denotes the number of data points obtained by the i-th measurement, and n and m_i are positive integers.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: in step 21, for the training data set D = {B_1, B_2, …, B_n}, each B_i = X = {x^[1], x^[2], …, x^[m]} therein is rewritten in the following form:

B_i = {x_i^[1], x_i^[2], …, x_i^[m_i]}

where i = 1, 2, …, n denotes the i-th measurement sequence, and m_i denotes the number of data points obtained by the i-th measurement. Thus, the training data set D may be rewritten as:

D = {x_1^[1], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}
the control unit 104 is further configured to extract a training sample of a training batch from the training dataset to be trained currently by using an out-of-order extraction method. The specific function and processing of the control unit 104 is also referred to in step S130.
In some embodiments, the control unit 104 extracts a training sample of a training batch from the training dataset to be trained by using an out-of-order extraction method, including:
the control unit 104 is specifically further configured to scramble the data in the training data set to be trained, and record the scrambled training data as current training data to be trained. The specific function and process of the control unit 104 also refer to step S310.
The control unit 104 is specifically further configured to randomly extract a batch of data from the current out-of-order training data to be trained, so as to form a training sample of a training batch. The specific function and process of the control unit 104 also refer to step S320.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, training a model, and when the model starts to train, executing the following steps:
First, the data in the training data set D' are shuffled.
In a second step, a batch of data, e.g., 1000 pieces of data, is randomly extracted from the scrambled training data set D' to form a training batch T.
The control unit 104 is further configured to perform training by using a preset network model based on the extracted training samples of one training batch, so as to obtain a required end-to-end depth network model, so as to be used for correcting the random sensor measurement data and the systematic error of the aerospace measurement system. The specific function and process of the control unit 104 also refer to step S140.
According to this scheme, before training the end-to-end deep network model for random sensor measurement data and systematic error correction of the aerospace measurement system, sensor measurement data of the aerospace measurement system are acquired to form a measurement data set; a training data set currently to be trained is determined based on the measurement data set; a training sample of one training batch is extracted from that training data set by the out-of-order extraction method; and training is performed with a preset network model based on the extracted training batch to obtain the required end-to-end deep network model for correcting random sensor measurement data and systematic errors of the aerospace measurement system. Extracting the measurement data out of order before training thus improves the training effect of the deep network model and the reliability of the error correction.
In some embodiments, the control unit 104 performs training with a preset network model based on the extracted training samples of one training batch to obtain a required end-to-end depth network model, including:
the control unit 104 is specifically further configured to perform training by using a preset network model based on the extracted training samples of a training batch, so as to obtain a model gradient. The specific function and process of the control unit 104 also refer to step S410.
The control unit 104 is specifically further configured to extract training samples of another training batch from the training dataset to be trained again by using an out-of-order extraction method. The specific function and process of the control unit 104 also refer to step S420.
The control unit 104 is specifically further configured to perform training by using a preset network model based on the extracted training samples of the other training batch, so as to obtain another model gradient. The specific function and process of the control unit 104 also refer to step S430.
The control unit 104 is specifically further configured to repeat the above steps in a loop until the obtained model gradient no longer decreases or the number of training iterations reaches the set maximum, and to save the model parameters obtained by training at that point to obtain the required end-to-end deep network model. The specific function and processing of the control unit 104 also refer to step S440.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, performing model training, and when the model starts training, further performing the following steps:
In the third step, the extracted batch of data T is input into the model for calculation and gradient correction; specifically, the value of the loss function is calculated and gradient correction is performed.
Fourth, repeating the second and third steps until the model gradient no longer drops or the maximum training times are reached.
Fifth, the trained model parameters are saved for error correction.
In some embodiments, the control unit 104 performs training with a preset network model based on the extracted training samples of a training batch to obtain a model gradient, including: the control unit 104 is specifically further configured to calculate a value of the loss function and perform gradient correction by using a preset network model based on the extracted training samples of one training lot, so as to obtain a model gradient.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, performing model training, and when the model starts training, further performing the following steps: in the third step, the extracted batch of data T is input into the model for calculation and gradient correction, specifically, the value of the loss function is calculated and gradient correction is performed.
In some embodiments, the control unit 104 extracting training samples of another training batch from the training data set currently to be trained by the out-of-order extraction method includes: the control unit 104 is in particular further configured, in each new data extraction, to no longer extract data by measurement batch, so that the features x_6~x_11 keep changing; wherein x_6 is the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal difference of the measurement equipment, x_10 the photoelectric vertical difference of the measurement equipment, and x_11 the levelness of the large disc of the measurement equipment.
Specifically, as shown in fig. 10, the out-of-order data extraction method for time series data further includes: step 22, training the model; when the model starts training and performs the first to fifth steps, the data in each new extraction are no longer drawn by measurement batch, so x_6~x_11 essentially keep changing rather than remaining unchanged for long stretches, which improves model training efficiency and avoids overfitting of the model to specific values of these parameters.
Since the processes and functions implemented by the apparatus of the present embodiment substantially correspond to the embodiments, principles and examples of the foregoing methods, the descriptions of the embodiments are not exhaustive, and reference may be made to the descriptions of the foregoing embodiments and their descriptions are omitted herein.
According to an embodiment of the present invention, there is also provided a storage medium corresponding to the data extraction method. The storage medium includes a stored program, wherein, when the program runs, it controls the device in which the storage medium is located to execute the data extraction method described above.
Since the processes and functions implemented by the storage medium of the present embodiment substantially correspond to the embodiments, principles and examples of the foregoing methods, the descriptions of the present embodiment are not exhaustive, and reference may be made to the related descriptions of the foregoing embodiments, which are not repeated herein.
In summary, it is readily understood by those skilled in the art that the above-described advantageous ways can be freely combined and superimposed without conflict.
The above description is only an example of the present invention and is not intended to limit the present invention, but various modifications and variations can be made to the present invention by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.
Claims (10)
1. A data extraction method, characterized by being applied to training an end-to-end deep network model for random sensor measurement data and systematic error correction of an aerospace measurement system; the data extraction method comprises the following steps:
Before training an end-to-end depth network model for random sensor measurement data and system error correction of the aerospace measurement system, acquiring the sensor measurement data of the aerospace measurement system to form a measurement data set;
determining a training data set to be trained currently based on the measurement data set;
extracting a training sample of a training batch from the training data set to be trained currently by adopting an out-of-order extraction method;
based on the extracted training samples of one training batch, training is carried out by utilizing a preset network model to obtain a required end-to-end depth network model, so as to be used for correcting random sensor measurement data and systematic errors of the aerospace measurement system.
2. The data extraction method according to claim 1, wherein determining a training data set currently to be trained based on the measurement data set comprises:
extracting partial batches of measurement data from the measurement data set to form a training data set D;
rewriting each measurement batch B in the training dataset D as a set of single measurement points;
and splicing the measurement point sets of each measurement batch B in the training data set D to form a new training set D' serving as the training data set to be trained currently.
3. The data extraction method according to claim 2, wherein the new training set D' is:

D' = {x_1^[1], x_1^[2], …, x_1^[m_1], x_2^[1], …, x_2^[m_2], …, x_n^[1], …, x_n^[m_n]}

where i = 1, 2, …, n denotes the i-th measurement sequence, m_i denotes the number of data points obtained by the i-th measurement, and n and m_i are positive integers.
4. A method of data extraction according to any one of claims 1 to 3, wherein extracting training samples of a training batch from the current training dataset to be trained by out-of-order extraction comprises:
the data in the training data set to be trained currently is disturbed and recorded as out-of-order training data to be trained currently;
randomly extracting a batch of data from the current disorder training data to be trained to form a training sample of a training batch.
5. A data extraction method according to any one of claims 1 to 3, wherein training with a preset network model based on training samples of one training batch extracted, results in a desired end-to-end depth network model, comprising:
training by using a preset network model based on a training sample of the extracted training batch to obtain a model gradient;
Extracting training samples of another training batch from the training data set to be trained again by adopting an out-of-order extraction method;
training by using a preset network model based on the extracted training samples of another training batch to obtain another model gradient;
and circulating until the obtained model gradient is not reduced any more or the training times reach the set maximum training times, and storing the model parameters obtained by training at the moment to obtain the required end-to-end depth network model.
6. The method of claim 5, wherein extracting training samples of another training batch from the current training dataset to be trained using out-of-order extraction, comprises:
in each new data extraction, the data are no longer extracted by measurement batch, so the features x_6~x_11 keep changing;
wherein x_6 is the atmospheric temperature, x_7 the atmospheric humidity, x_8 the atmospheric pressure, x_9 the photoelectric horizontal difference of the measurement equipment, x_10 the photoelectric vertical difference of the measurement equipment, and x_11 the levelness of the large disc of the measurement equipment.
7. The method of claim 5, wherein training with a predetermined network model based on training samples of a training batch extracted to obtain a model gradient comprises:
Based on the extracted training samples of a training batch, calculating the value of the loss function by using a preset network model, and carrying out gradient correction to obtain a model gradient.
8. A data extraction device, characterized by being applied to training an end-to-end deep network model for random sensor measurement data and systematic error correction of an aerospace measurement system; the data extraction device includes:
an acquisition unit configured to acquire sensor measurement data of the aerospace measurement system to form a measurement data set prior to training an end-to-end depth network model for random sensor measurement data and systematic error correction of the aerospace measurement system;
a control unit configured to determine a training data set currently to be trained based on the measurement data set;
the control unit is further configured to extract a training sample of a training batch from the training data set to be trained currently by adopting an out-of-order extraction method;
the control unit is further configured to perform training by using a preset network model based on the extracted training samples of one training batch to obtain a required end-to-end depth network model for correcting random sensor measurement data and systematic errors of the aerospace measurement system.
9. A terminal, comprising: the data extraction device of claim 8.
10. A storage medium comprising a stored program, wherein the program, when run, controls a device in which the storage medium is located to perform the data extraction method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311668220.7A CN117668549A (en) | 2023-12-07 | 2023-12-07 | Data extraction method, device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117668549A true CN117668549A (en) | 2024-03-08 |
Family
ID=90072983
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311668220.7A Pending CN117668549A (en) | 2023-12-07 | 2023-12-07 | Data extraction method, device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117668549A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109506650A (en) * | 2018-09-12 | 2019-03-22 | 广东嘉腾机器人自动化有限公司 | AGV navigation deviation of stroke modification method based on BP network |
CN110334741A (en) * | 2019-06-06 | 2019-10-15 | 西安电子科技大学 | Radar range profile recognition method based on recurrent neural network |
CN110609229A (en) * | 2019-09-17 | 2019-12-24 | 电子科技大学 | Wind driven generator blade imbalance fault detection method based on deep learning |
CN110704664A (en) * | 2019-08-28 | 2020-01-17 | 宁波大学 | Hash retrieval method |
CN111913175A (en) * | 2020-07-02 | 2020-11-10 | 哈尔滨工程大学 | Water surface target tracking method with compensation mechanism under transient failure of sensor |
CN112836820A (en) * | 2021-01-31 | 2021-05-25 | 云知声智能科技股份有限公司 | Deep convolutional network training method, device and system for image classification task |
CN114969990A (en) * | 2022-08-02 | 2022-08-30 | 中国电子科技集团公司第十研究所 | Multi-model fused avionic product health assessment method |
CN115356965A (en) * | 2022-08-29 | 2022-11-18 | 中国兵器科学研究院 | Loose coupling actual installation data acquisition device and data processing method |
CN115437924A (en) * | 2022-08-17 | 2022-12-06 | 电子科技大学 | Uncertainty estimation method of end-to-end automatic driving decision algorithm |
CN116341377A (en) * | 2023-03-20 | 2023-06-27 | 北京理工大学长三角研究院(嘉兴) | Lower casting type detection component track prediction method based on LSTM neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6599294B2 (en) | Abnormality detection device, learning device, abnormality detection method, learning method, abnormality detection program, and learning program | |
JP5832644B2 (en) | A computer-aided method for forming data-driven models of technical systems, in particular gas turbines or wind turbines | |
CN109523013B (en) | Air particulate matter pollution degree estimation method based on shallow convolutional neural network | |
US20110288835A1 (en) | Data processing device, data processing method and program | |
CN115018021A (en) | Machine room abnormity detection method and device based on graph structure and abnormity attention mechanism | |
CN110717525B (en) | Channel adaptive optimization anti-attack defense method and device | |
CN116627027B (en) | Optimal robustness control method based on improved PID | |
CN107832789B (en) | Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation | |
CN114881090B (en) | Satellite telemetry data feature selection method, device and medium based on improved particle swarm optimization | |
CN112734012A (en) | Impulse neural network training method, data processing method, electronic device, and medium | |
CN112733273A (en) | Method for determining Bayesian network parameters based on genetic algorithm and maximum likelihood estimation | |
CN114842343A (en) | ViT-based aerial image identification method | |
CN112149825A (en) | Neural network model training method and device, electronic equipment and storage medium | |
CN115972211A (en) | Control strategy offline training method based on model uncertainty and behavior prior | |
CN117668549A (en) | Data extraction method, device and storage medium | |
CN114048811A (en) | Wireless sensor node fault diagnosis method and device based on deep learning | |
CN115938104A (en) | Dynamic short-time road network traffic state prediction model and prediction method | |
CN116089844B (en) | non-Gaussian feature verification method for pose data of unmanned aerial vehicle | |
CN116880191A (en) | Intelligent control method of process industrial production system based on time sequence prediction | |
CN110708469B (en) | Method and device for adapting exposure parameters and corresponding camera exposure system | |
CN109709624B (en) | Method for determining flash elements of infrared detector based on LSTM model | |
JP2024003643A (en) | Method of learning neural network, computer program, and remaining life prediction system | |
CN115643104A (en) | Network intrusion detection method based on deep supervision discrete hash | |
CN113807505A (en) | Method for improving cyclic variation learning rate through neural network | |
CN104134091B (en) | Neural network training method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||