CN116090352A - Full waveform inversion method based on gate cycle unit and attention mechanism - Google Patents

Full waveform inversion method based on gate cycle unit and attention mechanism

Info

Publication number
CN116090352A
CN116090352A (application number CN202310115909.0A)
Authority
CN
China
Prior art keywords
neural network
attention
model
data
network model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310115909.0A
Other languages
Chinese (zh)
Inventor
马飞
朱红达
史浩
***
郝骞
张克刚
陈钢
徐恩博
Current Assignee
Beijing Huadian Lituo Energy Technology Co ltd
Original Assignee
Beijing Huadian Lituo Energy Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Huadian Lituo Energy Technology Co ltd filed Critical Beijing Huadian Lituo Energy Technology Co ltd
Priority to CN202310115909.0A
Publication of CN116090352A
Legal status: Pending

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00: Computer-aided design [CAD]
    • G06F 30/20: Design optimisation, verification or simulation
    • G06F 30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00: Administration; Management
    • G06Q 10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q 10/063: Operations research, analysis or management
    • G06Q 10/0639: Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q 10/06393: Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2111/00: Details relating to CAD techniques
    • G06F 2111/08: Probabilistic or stochastic CAD

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Strategic Management (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Educational Administration (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Computer Hardware Design (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

The invention discloses a full waveform inversion method based on a gated recurrent unit (GRU) and an attention mechanism, comprising the following steps: acquiring actual seismic data; preprocessing the data and dividing it into data sets; constructing a neural network model and setting its hyperparameters; training the neural network model to fit the forward modeling process; and loading the optimal model parameters and performing inversion mapping on unknown seismic data. The method reduces computation time while significantly improving the accuracy and the generalization capability of full waveform inversion. The full waveform inversion method based on the gated recurrent unit and the attention mechanism can be widely applied to deep reservoir exploration tasks, predicting a geological velocity model more quickly and clearly and enabling accurate geological interpretation.

Description

Full waveform inversion method based on gate cycle unit and attention mechanism
Technical Field
The invention relates to the technical fields of artificial intelligence and geophysics, and in particular to a full waveform inversion method based on a gated recurrent unit and an attention mechanism.
Background
Full waveform inversion (FWI) determines subsurface physical properties from seismic data and images them with high accuracy and resolution, playing an important role in subsurface characterization in the earth sciences. Mathematically, FWI is posed as a nonlinear inverse problem whose numerical implementation can be carried out in either the time domain or the frequency domain. However, solving FWI is not only computationally intensive; because of the nonlinearity of the inverse problem, it also tends to converge to local minima, which reduces the applicability and robustness of the algorithm. To alleviate these problems, many conventional methods have been proposed and extended, such as regularization-based techniques, methods based on prior information, multi-scale inversion methods and preprocessing methods.
In recent years, with the great increase in computing power and the rise of deep neural networks, many deep learning methods have been applied to geological exploration data processing and inversion problems. Deep-learning-based full waveform inversion methods fall mainly into three types: those based on deep neural networks, on convolutional neural networks, and on recurrent neural networks. 1) Methods based on deep neural networks: Araya-Polo et al. proposed an accurate deep neural network model trained directly on raw data; the trained model maps the relation between the data space and the velocity model, and its final output differs only slightly from experimental data obtained from the real subsurface, demonstrating the research value of deep neural networks for full waveform inversion (Araya-Polo M, Jennings J, Adler A, et al. Deep-learning tomography [J]. The Leading Edge, 2018, 37(1): 58-66.). 2) Methods based on convolutional neural networks: Luan et al. used a finite-difference method to simulate actual observed data and construct a training dataset, which was used to train a fully convolutional neural network to predict an initial velocity model. The output of this method approximates the details of the real observed data, and after accurate inversion by a full waveform inversion method, high-precision, high-resolution physical imaging of the subsurface medium can be obtained (Luan C, Peterson N S, Erick G S N. Estimating Initial Velocity Models for the FWI Using Deep Learning [C]. International Congress of the Brazilian Geophysical Society & Expogef, 2019.). 3) Methods based on recurrent neural networks: Sun et al. proposed a recurrent neural network capable of modeling one-dimensional or multidimensional scalar acoustic seismic forward propagation, and showed that training the network and updating its weights with observed data is equivalent to solving the inverse problem of geophysical exploration, namely gradient-based seismic full waveform inversion (Sun J, Niu Z, Kristopher A I, et al. A theory-guided deep-learning formulation and optimization of seismic waveform inversion [J]. Geophysics, 2020, 85(2): R87-R99.). Wang et al. modeled multiparameter full waveform inversion in an isotropic elastic-wave medium with a recurrent neural network, and demonstrated the equivalence of full-batch automatic differentiation and inversion by the traditional adjoint-state method in an elastic isotropic medium (Wang W, McMechan G A, Ma J. Elastic isotropic and anisotropic full waveform inversions using automatic differentiation for gradient calculations in a framework of recurrent neural networks [J]. Geophysics, 2021, 86(6): R795-R810.). However, most existing deep-learning-based full waveform inversion methods can only image the physical properties of the subsurface medium in two dimensions, characterize structures inaccurately, and suffer from weak generalization, poor robustness, sensitivity to the initial model, and low computational efficiency.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a full waveform inversion method based on a gated recurrent unit and an attention mechanism, which not only improves the nonlinear fitting capacity of the conventional recurrent-neural-network-based full waveform inversion method, but also increases the attention paid to important features in the input information, thereby accelerating the whole full waveform inversion process.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a full waveform inversion method based on a gate cycle unit and an attention mechanism, comprising the steps of:
Step 1: acquire actual working-face seismic observation data, preprocess it, take the corresponding geological velocity model as the label set, and divide the actual working-face seismic observation data into a training set and a test set by random sampling;
Step 2: construct a neural network model of the forward modeling process in full waveform inversion mapping based on a gated recurrent unit and an attention mechanism; input the training-set data obtained in step 1 and the corresponding geological velocity model label set into the neural network model, and set the hyperparameters required to train the neural network model, including the maximum number of iterations, the learning rate and the network depth; initialize the neural network model parameters and train; at preset iteration checkpoints, test the performance of the neural network model on the test-set data and save the model parameters at that point;
Step 3: select the best of the neural network model outputs saved at each iteration checkpoint according to a preset evaluation index, load the corresponding model parameters into the neural network model to obtain the optimal neural network model, take unknown seismic observation data as the input of the optimal neural network model, perform the inversion calculation with fixed parameters, and obtain the predicted velocity model from the output, i.e., the inversion-mapped physical-property imaging result of the subsurface medium.
Further, in step 1, the specific process of acquiring the actual working-face seismic observation data and preprocessing it is as follows:
a placement scheme for the seismic geophones is designed according to the actual conditions of the working face, and the actual working-face seismic observation data is acquired; the acquired data is preprocessed, where the preprocessing includes removing bad traces from the signals, removing random impulse noise and suppressing power-frequency interference.
Further, in step 2, the structure of the neural network model comprises two Att-GRU blocks connected in series and a regression layer for feature decoding;
the forward propagation of an Att-GRU block is as follows:
the input data is encoded by an attention encoding layer, which increases the attention paid to important features in the current information, and can be formulated as:
y = (x_e * W_encoder + b_e) ∘ W_atte    (1)
where x_e and y are the input and output of the attention encoding layer, W_encoder is the weight learned by the attention encoding layer, W_atte is the attention weight of the attention encoding layer obtained through the attention mechanism, and b_e is a bias vector; * is the matrix multiplication operator and ∘ is the Hadamard product operator;
the GRU neural network takes the features output by the attention encoding layer as input; the GRU neural network controls the flow of input information through a reset gate and an update gate; its mathematical expression is:
r_t = σ(W_r · [h_{t-1}, x_t] + b_r)
u_t = σ(W_u · [h_{t-1}, x_t] + b_u)
h̃_t = tanh(W_h · [r_t ∘ h_{t-1}, x_t] + b_h)
h_t = (1 - u_t) ∘ h_{t-1} + u_t ∘ h̃_t
y_t = σ(W_o · h_t + b_o)    (2)
where r_t is the gating signal of the reset gate, controlling whether the previous hidden state is ignored; u_t is the gating signal of the update gate, controlling whether the candidate hidden state variable is updated; x_t, y_t, h_{t-1} and h_t are the input and output at the current time, the state at the previous time and the state at the current time, respectively; h̃_t is the candidate state at the current time; W_u, W_r, W_h and W_o are weight matrices, and b_u, b_r, b_h and b_o are bias vectors; σ(·) is the sigmoid function; ∘ is the Hadamard product operator;
the attention decoding layer decodes the features carrying long- and short-term memory and increases the attention paid to important information in the global features; its mathematical expression is:
y = (x_d * W_decoder + b_d) ∘ W_attd    (3)
where x_d and y are the input and output of the attention decoding layer, W_decoder is the weight learned by the attention decoding layer, W_attd is the attention weight of the attention decoding layer obtained through the attention mechanism, and b_d is a bias vector; * is the matrix multiplication operator and ∘ is the Hadamard product operator;
after feature extraction by the Att-GRU blocks, the data is normalized by a BN layer and activated with the ReLU activation function:
y = ReLU(BN(x))    (4)
x̂ = (x - μ_B) / sqrt(σ_B² + ε)    (5)
BN(x) = γ · x̂ + β    (6)
where x and y are the input data and the output data, respectively; γ and β are two hyperparameters; μ_B and σ_B² are the mean and the variance computed over all values of the same feature map within a mini-batch; ε is a small constant added for numerical stability;
the activated feature map is passed through the regression layer to obtain the velocity model corresponding to the final full waveform inversion.
Further, the specific process of step 3 is as follows:
the output results of the neural network model saved at each iteration checkpoint are evaluated with the preset evaluation index obtained in step 2 and the best one is selected, and the corresponding model parameters are loaded into the neural network model to obtain the optimal neural network model; the updated model parameters are then frozen and fixed, and new unknown seismic observation data is taken as input to predict the corresponding geological velocity model end to end.
Further, the preset evaluation index is the mean square error between the predicted velocity model and the real velocity model corresponding to the label set.
The beneficial effects of the invention are: the invention models the forward process in full waveform inversion with a GRU neural network that has temporal correlation and efficient nonlinear mapping capability, learns the inversion process of end-to-end geological full waveform inversion through the backpropagation of the neural network, and emphasizes important features in the input information by introducing an attention mechanism into the GRU neural network. The method of the invention significantly improves the accuracy and the generalization capability of full waveform inversion while reducing computation time. The full waveform inversion method based on the gated recurrent unit and the attention mechanism can be widely applied to deep reservoir exploration tasks, predicting a geological velocity model more quickly and clearly and enabling accurate geological interpretation.
Drawings
FIG. 1 is a flow chart of full waveform inversion based on an attention mechanism and a GRU neural network in accordance with an embodiment of the present invention;
FIG. 2 is a schematic diagram of a neural network model according to an embodiment of the present invention;
FIG. 3 is a flowchart of the training and testing steps of a model in an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings. It should be noted that, although this embodiment provides a detailed implementation and a specific operation process based on the technical solution, the protection scope of the present invention is not limited to this embodiment.
As shown in FIG. 1, this embodiment provides a full waveform inversion method based on a gated recurrent unit and an attention mechanism; the specific implementation process is as follows:
1. data acquisition, preprocessing and data set partitioning
A placement scheme for the seismic geophones is designed according to the actual conditions of the working face, and actual seismic observation data is acquired; 3100 records of actual seismic observation data were collected. The acquired data is preprocessed to obtain high-quality, high-signal-to-noise-ratio seismic data reflecting the actual geology. The preprocessing includes removing bad traces from the signals, removing random impulse noise, suppressing power-frequency interference, shot-gather equalization and geophone-point gather equalization. The corresponding geological velocity model is used as the label set. The preprocessed actual seismic data is divided by random sampling at a ratio of 5:1 to construct the training set and the test set, i.e., a training set of 2600 records and a test set of 500 records.
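A minimal sketch of the random-sampling split described above (3100 records into a 2600-record training set and a 500-record test set); the record and label strings are placeholders for the actual seismic gathers and velocity models:

```python
import random

def split_dataset(records, labels, n_train, seed=0):
    """Randomly sample indices to divide (record, label) pairs
    into a training set and a test set."""
    rng = random.Random(seed)
    idx = list(range(len(records)))
    rng.shuffle(idx)
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    train = [(records[i], labels[i]) for i in train_idx]
    test = [(records[i], labels[i]) for i in test_idx]
    return train, test

# 3100 dummy records and velocity-model labels (placeholders)
records = [f"shot_gather_{i}" for i in range(3100)]
labels = [f"velocity_model_{i}" for i in range(3100)]
train_set, test_set = split_dataset(records, labels, n_train=2600)
```

Random sampling before the split keeps the training and test sets statistically similar, which matters for the checkpoint-based model selection described later.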
2. Building the neural network model structure and training:
A neural network model of the forward modeling process in full waveform inversion mapping is constructed based on the attention mechanism and a GRU neural network, and the relevant hyperparameters are set, specifically the maximum number of iterations, the learning rate, the batch size, the network depth and the regularization decay factor.
The parameters of the neural network model are initialized with the Kaiming initialization method, and the data in the training set and the corresponding labels are fed into the neural network model as input to train the network parameters. As shown in FIG. 2(b), the neural network model structure comprises two Att-GRU blocks connected in series and one regression layer for feature decoding.
Specifically, for the Att-GRU block, as shown in FIG. 2(a), the forward propagation process is: the input data is encoded by an attention encoding layer, which increases the attention paid to important features in the current information. This can be formulated as:
y = (x_e * W_encoder + b_e) ∘ W_atte    (1)
where x_e and y are the input and output of the attention encoding layer, W_encoder is the weight learned by the layer, W_atte is the attention weight of the layer obtained through the attention mechanism, and b_e is a bias vector. * is the matrix multiplication operator and ∘ is the Hadamard product operator.
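As an illustration of Eq. (1), a minimal NumPy sketch of the attention encoding layer follows; the shapes and the constant attention weights are arbitrary placeholders, not values from the patent:

```python
import numpy as np

def attention_encode(x_e, W_encoder, b_e, W_atte):
    """Attention encoding layer of Eq. (1):
    y = (x_e * W_encoder + b_e) o W_atte,
    where * is matrix multiplication and o the Hadamard product."""
    return (x_e @ W_encoder + b_e) * W_atte

rng = np.random.default_rng(0)
x_e = rng.standard_normal((4, 8))        # batch of 4 feature vectors
W_encoder = rng.standard_normal((8, 8))  # learned encoding weights
b_e = np.zeros(8)                        # bias vector
W_atte = np.full(8, 0.5)                 # illustrative attention weights
y = attention_encode(x_e, W_encoder, b_e, W_atte)
```

The Hadamard product with W_atte rescales each encoded feature, so features with larger attention weights dominate the representation passed to the GRU.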
The GRU neural network takes the attention-encoded features as input. The GRU is an efficient recurrent neural network that has long-term memory capability and alleviates the vanishing-gradient problem in backpropagation; it controls the flow of input information through a reset gate and an update gate. Its mathematical expression is:
r_t = σ(W_r · [h_{t-1}, x_t] + b_r)
u_t = σ(W_u · [h_{t-1}, x_t] + b_u)
h̃_t = tanh(W_h · [r_t ∘ h_{t-1}, x_t] + b_h)
h_t = (1 - u_t) ∘ h_{t-1} + u_t ∘ h̃_t
y_t = σ(W_o · h_t + b_o)    (2)
where r_t is the gating signal of the reset gate, controlling whether the previous hidden state is ignored; u_t is the gating signal of the update gate, controlling whether the candidate hidden state variable is updated. x_t, y_t, h_{t-1} and h_t are the input and output at the current time, the state at the previous time and the state at the current time, respectively. h̃_t is the candidate state at the current time. W_r, W_u, W_h and W_o are weight matrices, and b_r, b_u, b_h and b_o are bias vectors. σ(·) is the sigmoid function. ∘ is the Hadamard product operator.
Finally, the attention decoding layer decodes the features carrying long- and short-term memory and increases the attention paid to important information in the global features. Its mathematical expression is:
y = (x_d * W_decoder + b_d) ∘ W_attd    (3)
where x_d and y are the input and output of the attention decoding layer, W_decoder is the weight learned by the layer, W_attd is the attention weight of the layer obtained through the attention mechanism, and b_d is a bias vector. * is the matrix multiplication operator and ∘ is the Hadamard product operator.
After feature extraction by the Att-GRU blocks, the data is normalized by a BN layer and activated with the ReLU activation function:
y = ReLU(BN(x))    (4)
x̂ = (x - μ_B) / sqrt(σ_B² + ε)    (5)
BN(x) = γ · x̂ + β    (6)
where x and y are the input data and the output data, respectively. γ and β are two hyperparameters; μ_B and σ_B² are the mean and the variance computed over all values of the same feature map within a mini-batch. ε is a very small constant added for numerical stability. The activated feature map is passed through the feature decoding layer to obtain the velocity model corresponding to the final full waveform inversion, as shown in the training part of FIG. 3.
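Eqs. (4) to (6) can be sketched in NumPy as follows; the tiny mini-batch and the unit γ and zero β are placeholders, not trained values:

```python
import numpy as np

def bn_relu(x, gamma, beta, eps=1e-5):
    """Eqs. (4)-(6): per-feature batch normalization over the mini-batch
    (mean and variance along axis 0), scale/shift by gamma and beta,
    then the ReLU activation."""
    mu = x.mean(axis=0)                     # mu_B per feature
    var = x.var(axis=0)                     # sigma_B^2 per feature
    x_hat = (x - mu) / np.sqrt(var + eps)   # Eq. (5)
    return np.maximum(gamma * x_hat + beta, 0.0)  # Eqs. (6) and (4)

x = np.array([[1.0, -2.0],
              [3.0,  0.0],
              [5.0,  2.0]])                 # mini-batch of 3, 2 features
y = bn_relu(x, gamma=np.ones(2), beta=np.zeros(2))
```

With γ = 1 and β = 0 the output is simply the normalized features with negative values clipped to zero.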
The parameters of the neural network model are trained with the Adam optimizer, and the chosen loss function is the mean square error:
L = (1/n) Σ_{i=1}^{n} (y_i - ŷ_i)²    (7)
where y and ŷ are the predicted result and the real label, respectively, L is the loss function of the neural network model, and n is the number of samples. During training, the loss function is continuously minimized, and the loss gradients with respect to the model parameters are computed by error backpropagation to guide the direction of the parameter updates. During the iterative updates, the performance of the network is tested with the test-set data at the preset iteration checkpoints and the model parameters are saved, until the training stage reaches the maximum number of iterations.
3. Predicting the inversion geological velocity model corresponding to unknown actual seismic observation data
The output results of the neural network model obtained in step 2 are evaluated with the preset evaluation index and the best one is selected, and the corresponding model parameters are loaded into the neural network model. The updated model parameters are then frozen and fixed, and new unknown seismic observation data is taken as input to predict the corresponding geological velocity model end to end, as shown in the prediction part of FIG. 3.
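A sketch of this selection-and-freezing step follows; the checkpoint tuples and the linear predict function are hypothetical stand-ins for the saved Att-GRU parameters and the end-to-end network:

```python
import numpy as np

# Illustrative saved checkpoints: (iteration, test MSE, model parameters)
checkpoints = [
    (50, 0.031, np.array([0.90, -1.70, 0.40])),
    (100, 0.012, np.array([0.98, -1.95, 0.49])),
    (150, 0.019, np.array([1.05, -2.10, 0.52])),
]

# Select by the preset evaluation index: lowest mean square error
best_it, best_mse, best_params = min(checkpoints, key=lambda c: c[1])
frozen = best_params.copy()  # parameters are fixed (frozen) for inference

def predict(seismic_features):
    """End-to-end prediction with the frozen optimal parameters."""
    return seismic_features @ frozen

new_data = np.array([[1.0, 0.0, 2.0]])  # stand-in for unknown observation
velocity = predict(new_data)
```

Note that the best checkpoint is not necessarily the last one: here the test MSE rises again after iteration 100, so the intermediate parameters are the ones loaded.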
Various modifications and variations of the present invention will be apparent to those skilled in the art in light of the foregoing teachings and are intended to be included within the scope of the following claims.

Claims (5)

1. A full waveform inversion method based on a gated recurrent unit and an attention mechanism, comprising the following steps:
Step 1: acquire actual working-face seismic observation data, preprocess it, take the corresponding geological velocity model as the label set, and divide the actual working-face seismic observation data into a training set and a test set by random sampling;
Step 2: construct a neural network model of the forward modeling process in full waveform inversion mapping based on a gated recurrent unit and an attention mechanism; input the training-set data obtained in step 1 and the corresponding geological velocity model label set into the neural network model, and set the hyperparameters required to train the neural network model, including the maximum number of iterations, the learning rate and the network depth; initialize the neural network model parameters and train; at preset iteration checkpoints, test the performance of the neural network model on the test-set data and save the model parameters at that point;
Step 3: select the best of the neural network model outputs saved at each iteration checkpoint according to a preset evaluation index, load the corresponding model parameters into the neural network model to obtain the optimal neural network model, take unknown seismic observation data as the input of the optimal neural network model, perform the inversion calculation with fixed parameters, and obtain the predicted velocity model from the output, i.e., the inversion-mapped physical-property imaging result of the subsurface medium.
2. The method according to claim 1, wherein in step 1, the specific process of acquiring the actual working-face seismic observation data and preprocessing it is as follows:
a placement scheme for the seismic geophones is designed according to the actual conditions of the working face, and the actual working-face seismic observation data is acquired; the acquired data is preprocessed, where the preprocessing includes removing bad traces from the signals, removing random impulse noise and suppressing power-frequency interference.
3. The method according to claim 1, wherein in step 2, the structure of the neural network model comprises two Att-GRU blocks connected in series and a regression layer for feature decoding;
the forward propagation of an Att-GRU block is as follows:
the input data is encoded by an attention encoding layer, which increases the attention paid to important features in the current information, and can be formulated as:
y = (x_e * W_encoder + b_e) ∘ W_atte    (1)
where x_e and y are the input and output of the attention encoding layer, W_encoder is the weight learned by the attention encoding layer, W_atte is the attention weight of the attention encoding layer obtained through the attention mechanism, and b_e is a bias vector; * is the matrix multiplication operator and ∘ is the Hadamard product operator;
the GRU neural network takes the features output by the attention encoding layer as input; the GRU neural network controls the flow of input information through a reset gate and an update gate; its mathematical expression is:
r_t = σ(W_r · [h_{t-1}, x_t] + b_r)
u_t = σ(W_u · [h_{t-1}, x_t] + b_u)
h̃_t = tanh(W_h · [r_t ∘ h_{t-1}, x_t] + b_h)
h_t = (1 - u_t) ∘ h_{t-1} + u_t ∘ h̃_t
y_t = σ(W_o · h_t + b_o)    (2)
where r_t is the gating signal of the reset gate, controlling whether the previous hidden state is ignored; u_t is the gating signal of the update gate, controlling whether the candidate hidden state variable is updated; x_t, y_t, h_{t-1} and h_t are the input and output at the current time, the state at the previous time and the state at the current time, respectively; h̃_t is the candidate state at the current time; W_u, W_r, W_h and W_o are weight matrices, and b_u, b_r, b_h and b_o are bias vectors; σ(·) is the sigmoid function; ∘ is the Hadamard product operator;
the attention decoding layer decodes the features carrying long- and short-term memory and increases the attention paid to important information in the global features; its mathematical expression is:
y = (x_d * W_decoder + b_d) ∘ W_attd    (3)
where x_d and y are the input and output of the attention decoding layer, W_decoder is the weight learned by the attention decoding layer, W_attd is the attention weight of the attention decoding layer obtained through the attention mechanism, and b_d is a bias vector; * is the matrix multiplication operator and ∘ is the Hadamard product operator;
after feature extraction by the Att-GRU blocks, the data is normalized by a BN layer and activated with the ReLU activation function:
y = ReLU(BN(x))    (4)
x̂ = (x - μ_B) / sqrt(σ_B² + ε)    (5)
BN(x) = γ · x̂ + β    (6)
where x and y are the input data and the output data, respectively; γ and β are two hyperparameters; μ_B and σ_B² are the mean and the variance computed over all values of the same feature map within a mini-batch; ε is a small constant added for numerical stability;
the activated feature map is passed through the regression layer to obtain the velocity model corresponding to the final full waveform inversion.
4. The method according to claim 1, wherein the specific process of step 3 is as follows:
the output results of the neural network model saved at each iteration checkpoint are evaluated with the preset evaluation index obtained in step 2 and the best one is selected, and the corresponding model parameters are loaded into the neural network model to obtain the optimal neural network model; the updated model parameters are then frozen and fixed, and new unknown seismic observation data is taken as input to predict the corresponding geological velocity model end to end.
5. The method according to claim 1 or 4, wherein the preset evaluation index is the mean square error between the predicted velocity model and the real velocity model corresponding to the label set.
CN202310115909.0A 2023-02-01 2023-02-01 Full waveform inversion method based on gate cycle unit and attention mechanism Pending CN116090352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310115909.0A CN116090352A (en) 2023-02-01 2023-02-01 Full waveform inversion method based on gate cycle unit and attention mechanism

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310115909.0A CN116090352A (en) 2023-02-01 2023-02-01 Full waveform inversion method based on gate cycle unit and attention mechanism

Publications (1)

Publication Number Publication Date
CN116090352A true CN116090352A (en) 2023-05-09

Family

ID=86208219

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310115909.0A Pending CN116090352A (en) 2023-02-01 2023-02-01 Full waveform inversion method based on gate cycle unit and attention mechanism

Country Status (1)

Country Link
CN (1) CN116090352A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117592381A (en) * 2024-01-18 2024-02-23 中国船舶集团有限公司第七〇七研究所 Atmospheric waveguide parameter inversion model training method, device, equipment and medium
CN117592381B (en) * 2024-01-18 2024-05-17 中国船舶集团有限公司第七〇七研究所 Atmospheric waveguide parameter inversion model training method, device, equipment and medium

Similar Documents

Publication Publication Date Title
Kaur et al. Seismic data interpolation using deep learning with generative adversarial networks
US11693139B2 (en) Automated seismic interpretation-guided inversion
CN107589448B (en) A kind of multitrace seismogram reflection coefficient sequence Simultaneous Inversion method
Aleardi et al. 1D elastic full‐waveform inversion and uncertainty estimation by means of a hybrid genetic algorithm–Gibbs sampler approach
WO2020123084A1 (en) Machine learning-augmented geophysical inversion
CN111596366B (en) Wave impedance inversion method based on seismic signal optimization processing
US11181653B2 (en) Reservoir characterization utilizing ReSampled seismic data
CN109407151A (en) Time-domain full waveform inversion method based on wave field local correlation time shift
CN108897042A (en) Content of organic matter earthquake prediction method and device
Wang et al. Seismic velocity inversion transformer
Mousavi et al. Applications of deep neural networks in exploration seismology: A technical survey
CN105467442A (en) A globally optimized time-varying sparse deconvolution method and apparatus
CN116090352A (en) Full waveform inversion method based on gate cycle unit and attention mechanism
CN116047583A (en) Adaptive wave impedance inversion method and system based on depth convolution neural network
CN110146923B (en) High-efficiency high-precision depth domain seismic wavelet extraction method
Gao et al. Global optimization with deep-learning-based acceleration surrogate for large-scale seismic acoustic-impedance inversion
Zhang et al. Extracting Q anomalies from marine reflection seismic data using deep learning
Gou et al. Bayesian physics-informed neural networks for the subsurface tomography based on the eikonal equation
Zhang et al. Parameter estimation of acoustic wave equations using hidden physics models
CN116011338A (en) Full waveform inversion method based on self-encoder and deep neural network
Jin et al. CycleFCN: A physics-informed data-driven seismic waveform inversion method
CN116224265A (en) Ground penetrating radar data inversion method and device, computer equipment and storage medium
CN113722893B (en) Seismic record inversion method, device, equipment and storage medium
Zu et al. Robust local slope estimation by deep learning
CN113253350B (en) Porosity inversion method based on joint dictionary

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination