CN105631554B - A kind of oil well oil liquid moisture content multi-model prediction technique based on time series - Google Patents

A kind of oil well oil liquid moisture content multi-model prediction technique based on time series Download PDF

Info

Publication number
CN105631554B
CN105631554B
Authority
CN
China
Prior art keywords
data
value
output
itenum
firefly
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610094429.0A
Other languages
Chinese (zh)
Other versions
CN105631554A (en)
Inventor
李琨
韩莹
魏泽飞
佘东生
杨一柳
于震
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bohai University
Original Assignee
Bohai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bohai University filed Critical Bohai University
Priority to CN201610094429.0A priority Critical patent/CN105631554B/en
Publication of CN105631554A publication Critical patent/CN105631554A/en
Application granted granted Critical
Publication of CN105631554B publication Critical patent/CN105631554B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24147Distances to closest patterns, e.g. nearest neighbour classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Human Resources & Organizations (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Agronomy & Crop Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Animal Husbandry (AREA)
  • Primary Health Care (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Development Economics (AREA)
  • Game Theory and Decision Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention relates to a time-series-based multi-model method for predicting the water content of oil well fluid, comprising the steps of: 1) establishing an oil well water content data set {x_i, i = 1, 2, ..., N} from historical data; 2) preprocessing the data in {x_i, i = 1, 2, ..., N} with a wavelet analysis method; 3) classifying the wavelet-decomposed set {x_i}_Wave with the affinity propagation clustering algorithm; 4) representing the data in each cluster in the time-series form given below; 5) building a time-series model for each cluster with the extreme learning machine algorithm and obtaining the predicted value from these models. The method solves the problems that manual sampling of the oil well water content is time-consuming and labor-intensive and impairs the real-time availability of production monitoring and oil recovery data.

Description

Oil well fluid water content multi-model prediction method based on time series
Technical Field
The invention relates to the field of petroleum production, and in particular to a time-series-based multi-model prediction method for the water content of oil well fluid.
Background
The water content of oil well fluid is an important index in oilfield production; it is related not only to the development life of the oil well but also to the economic benefit of the enterprise. Measuring it accurately is therefore extremely important for gauging oil well yield, evaluating the exploitation value and degree of depletion of a reservoir, formulating an exploitation scheme, and so on. At present, the water content of oil well fluid is still commonly measured by manual sampling and distillation: workers periodically sample the well and send the fluid back to a technical department for laboratory analysis. This approach wastes time and labor, impairs the real-time availability of production monitoring and production data, and makes it impossible to formulate or adjust a reasonable exploitation scheme in time according to the actual production condition of the well.
Disclosure of Invention
The invention aims to provide a time-series-based multi-model prediction method for the water content of oil well fluid, which solves the problems that manual sampling of the oil well water content is time-consuming and labor-intensive and impairs the real-time availability of production monitoring and oil recovery data.
The technical scheme of the invention is as follows:
A time-series-based multi-model prediction method for the oil water content of an oil well is characterized by comprising the following steps:
1) An oil well water content data set {x_i, i = 1, 2, ..., N} is established from historical data; the data in the data set are ordered by the sequence of time points, the unit of a time point being a day or a month, and the ordering number of each datum in the data set is recorded as {index_i, i = 1, 2, ..., N};
2) The data in the oil well water content data set {x_i, i = 1, 2, ..., N} are preprocessed with a wavelet analysis method: the Mallat algorithm performs a three-layer wavelet decomposition of the data in {x_i, i = 1, 2, ..., N}, giving the wavelet decomposition sequence (a3_i, d3_i, d2_i, d1_i), where a3_i denotes the third-layer low-frequency component obtained by wavelet decomposition of the i-th datum and d3_i, d2_i and d1_i denote the third-, second- and first-layer high-frequency components respectively; after wavelet decomposition, each datum in {x_i, i = 1, 2, ..., N} is replaced by its corresponding vector (a3_i, d3_i, d2_i, d1_i), and the wavelet-decomposed data set is denoted {x_i}_Wave;
3) The affinity propagation (AP) clustering algorithm is used to classify the data in {x_i}_Wave; according to the ordering numbers index_i, i = 1, 2, ..., N, carried by the data in {x_i}_Wave, the data in the oil well water content data set are divided into corresponding categories, forming K clusters, K ≥ 1;
4) The data in each cluster are arranged according to their original ordering numbers {index_i, i = 1, 2, ..., N}, and the data in each cluster are then represented in the following time-series form:
X_t^α = [x_t^α, x_{t+τ}^α, ..., x_{t+(m−1)τ}^α]
where α = 1, 2, ..., K, t = 1, 2, ..., M, m is the embedding dimension, τ is the delay time, and M = N − (m−1)τ; the output of the time series of each cluster is represented as Y_t^α;
5) A time-series model of each cluster is established with the extreme learning machine (ELM) algorithm, the input and output of the model being X_t^α and Y_t^α respectively. For the K clusters there are K ELM models, so K output values Y_t^α are obtained from the inputs. These K output values are regarded as the next values predicted from the data of each cluster, and their ordering numbers are recorded as {index^α_max + 1}, where index^α_max denotes the largest ordering number in the α-th cluster. The K−1 output values Y_t^d (d = 1, 2, ..., K−1) whose ordering numbers {index^d_max + 1} satisfy index^d_max ≠ N are rearranged from small to large by ordering number to form a new time-series data set {Δ_d} (d = 1, 2, ..., K−1), which is then represented as the input of a new time series Γ_t = [Δ_t, Δ_{t+τ}, ..., Δ_{t+(m−1)τ}], where t = 1, 2, ..., M and M = N − (m−1)τ, and the output of the new time series is Λ_t = Δ_{t+1+(m−1)τ}. With Γ_t and Λ_t as the input and output of the new time series, a new time-series model is built again with the extreme learning machine method, and its output value Λ_t is computed. The output value Λ_t of the new time series and the output value Y_t^α whose ordering number is {index^α_max + 1} with index^α_max = N are averaged to finally obtain the predicted value.
In the above time-series-based multi-model prediction method, step 2) is as follows:
2.1 First, the data set {x_i, i = 1, 2, ..., N} is decomposed according to formula (1):
c^{j+1} = H c^j,  d^{j+1} = G c^j   (1)
where j = 0, 1, 2, c^0 is the original signal, and H and G are the decomposition low-pass filter and decomposition high-pass filter respectively;
2.2 The reconstruction is then performed according to formula (2):
c^j = H* c^{j+1} + G* d^{j+1}   (2)
where H* and G* are the dual operators of H and G respectively;
Then each datum x_i in the oil well water content data set, after three-layer wavelet decomposition, is expressed as x_i → (a3_i, d3_i, d2_i, d1_i), where i = 1, 2, ..., N; the data set formed by these vectors is denoted {x_i}_Wave.
In the above method, the specific process by which step 3) classifies {x_i}_Wave with the affinity propagation clustering algorithm is as follows:
3.1 A similarity matrix S is computed from the wavelet-decomposed data set {x_i}_Wave. The similarity s_jh between two data x_j^Wave and x_h^Wave in {x_i}_Wave is calculated by formula (3):
s_jh = −Σ_l (x_{j,l}^Wave − x_{h,l}^Wave)²   (3)
where x_{j,l}^Wave and x_{h,l}^Wave denote the l-th dimensional elements of x_j^Wave and x_h^Wave respectively;
The similarity matrix S is then expressed as the N × N matrix whose off-diagonal elements are s_jh and whose diagonal elements are the deviation (preference) parameter p, the initial value of p being the mean of all initial similarity values;
3.2 The attraction matrix R = [r_jh] and the attribution matrix A = [a_jh] are set up, with all r_jh and a_jh initialized to 0;
3.3 The maximum number of iterations MaxLoop is set, and the attraction matrix R and the attribution matrix A are iterated according to the following formulas:
r_jh = s_jh − max_{h′≠h} { a_jh′ + s_jh′ }   (7)
r_kk = p_kk − max_{h≠k} { a_kh + s_kh }   (9)
where k = 1, 2, ..., N is the subscript of a datum x_k^Wave in {x_i}_Wave representing a candidate clustering center, and j ≠ h ≠ k;
3.4 When the maximum number of iterations MaxLoop is reached, the iteration process terminates; the sums of the corresponding diagonal elements of the two matrices R and A are examined, and if r_kk + a_kk > 0 then x_k^Wave is selected as a clustering center;
3.5 After the K clustering centers are determined, the similarity (formula (3)) between each datum in {x_i}_Wave other than the cluster centers and each of the K cluster centers is computed, and each such datum is assigned to the class of the cluster center nearest to it, completing the classification of the data set {x_i}_Wave.
In the above method, the principle on which step 5) establishes a time-series model with the extreme learning machine algorithm is as follows:
Suppose there are W training samples (u_q, v_q), q = 1, 2, ..., W, where u_q is the input vector and v_q the output vector. Let the network have L hidden-layer neurons, activation function f(·), and training output Q = [c_1, c_2, ..., c_W]^T. The ELM model is then described by the following system of equations:
Σ_{l=1}^{L} β_l f(ω_l · u_q + b_l) = c_q,  q = 1, 2, ..., W   (11)
where β_lq is the connection weight between the l-th hidden-layer neuron and the q-th output neuron, ω_l is the connection weight vector between the l-th hidden-layer neuron and the input neurons, and b_l is the bias of the l-th hidden-layer neuron;
If the trained model can approximate the W training samples with zero error, then equation (11) holds exactly, and
the mathematical description of the ELM model can be rewritten in matrix form as
Hβ = V   (13)
where, in formula (13),
H = [f(ω_l · u_q + b_l)]_{q=1..W, l=1..L},  β = [β_1, β_2, ..., β_L]^T,  V = [v_1, v_2, ..., v_W]^T   (14)
H is the hidden-layer output matrix, and ω and b are given randomly at initialization; the training of the ELM model can then be transformed into the problem of solving the minimum of the nonlinear equation, namely
min_β ‖Hβ − V‖   (15)
The output weight matrix β* can be obtained from
β* = H⁺V   (16)
where H⁺ is the Moore–Penrose generalized inverse of the hidden-layer output matrix H;
The training process of the ELM can then be generalized as the following optimization problem, in which g(·) denotes a function determined by ω and b and g(ω, b) denotes the output value of the function when ω and b take different values; the objective of ELM training is to find the optimal β* that minimizes the error between the training output c_q of the model and the true value v_q;
For the activation function f(·), a combination of a Gaussian function and a Sigmoid function is adopted and f(·) is defined accordingly, where u denotes the input vector, λ ∈ [0,1] is the weight, and σ² is the width parameter of the Gaussian function;
Based on the above, the calculation steps for establishing the time-series model are as follows: initialize by randomly generating the hidden-layer input weights ω, the hidden-layer neuron biases b, the width parameter σ² of the Gaussian function and the weight λ; compute the hidden-layer output matrix H according to formula (14); compute the output weight matrix β* according to formula (16); compute the function output value according to formula (17).
In the process of establishing the time-series model, the improved firefly algorithm (IFA) is used to select optimal values of m, τ, ω, b, σ² and λ; m, τ, ω, b, σ² and λ are treated as one solution vector, i.e. each "firefly" is a 1 × 6 vector whose six variables are m, τ, ω, b, σ² and λ. The calculation steps are as follows:
First, an initial "firefly" population is generated, denoted {F_1, F_2, ..., F_Npop}, where N_pop is the number of "fireflies" in the initial "firefly" population;
Second, the brightness of each "firefly" is calculated, denoted {I_1, I_2, ..., I_Npop};
Third, each "firefly" moves toward the "fireflies" brighter than itself and updates its position according to formulas (22) to (24),
Φ1(Itenum+1) = Φ(Itenum) + η(ε)·(Ψ_best(Itenum) − Φ(Itenum)) + ξ·(rand − 0.5)   (22)
Φ2(Itenum+1) = Φ(Itenum) + η(ε)·(Ψ_secbest(Itenum) − Φ(Itenum)) + ξ·(rand − 0.5)   (23)
where ξ denotes the step size, ξ ∈ [0,1]; rand is a random number uniformly distributed in [0,1]; Ψ_best is the "firefly" with the largest brightness and Ψ_secbest is the "firefly" whose brightness is second only to Ψ_best; Itenum denotes the iteration count of the IFA algorithm; Φ(Itenum) denotes the position of a "firefly" at the Itenum-th iteration and Φ(Itenum+1) its position at the (Itenum+1)-th iteration; Φ1(Itenum+1) denotes the position to which the "firefly" is attracted by the brightest "firefly" at the (Itenum+1)-th iteration, and Φ2(Itenum+1) the position to which it is attracted by the "firefly" whose brightness is second to Ψ_best at the (Itenum+1)-th iteration;
Fourth, a fitness function fitness(m, τ, ω, b, σ², λ) is defined to evaluate the values of m, τ, ω, b, σ² and λ; when m, τ, ω, b, σ² and λ take their optimal values, fitness(m, τ, ω, b, σ², λ) attains its minimum; a variable Local_pop is defined to record the optimal values of m, τ, ω, b, σ² and λ in each iteration, and a variable Global_pop to record the optimal values of m, τ, ω, b, σ² and λ over all iterations; after the third step is executed, the values in Local_pop and Global_pop are updated respectively;
Fifth, if the maximum number of iterations is reached, the iteration stops and the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop are output respectively; otherwise, the procedure returns to the third step and iterates again.
In the above method, for each of the K clusters, the calculation steps for establishing a time-series model with the extreme learning machine algorithm are as follows:
Step 5.1: initialize; set the value ranges of m, τ, ω, b, σ² and λ;
Step 5.2: normalize the data in each cluster to the interval [0,1]; determine the input and output of the model to be established as X_t^α and Y_t^α respectively;
Step 5.3: generate an initial "firefly" population; randomly assign values of m, τ, ω, b, σ² and λ within their value ranges;
Step 5.4: compute the hidden-layer output matrix H according to formula (14), compute the output weight matrix β* according to formula (16), and compute the function output value according to formula (17);
Step 5.5: compute the fitness function fitness(m, τ, ω, b, σ², λ); record the initial values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop;
Step 5.6: compute the brightness of each "firefly" and update the positions according to formulas (22) to (24);
Step 5.7: repeat step 5.4, compute the fitness function fitness(m, τ, ω, b, σ², λ), and update the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop;
Step 5.8: if the maximum number of iterations is reached, stop the iteration and output the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop respectively; otherwise, return to steps 5.6–5.7 and iterate again;
Step 5.9: use the best values of m, τ, ω, b, σ² and λ obtained for each cluster as the parameters for establishing the time-series model with the extreme learning machine algorithm, compute the output Y_t^α from the input X_t^α, and apply inverse normalization.
The method uses single-step prediction: one output Y_t^α is predicted at a time, and when the next output Y_{t+1}^α is predicted, Y_t^α is appended to the end of the next input X_{t+1}^α; likewise, one Λ_t is predicted at a time, and when the next output Λ_{t+1} is predicted, Λ_t is appended to the end of the next input Γ_{t+1}.
In the method, the number N of data in the oil well water content data set is generally 300–800, which ensures calculation accuracy while reducing computational complexity.
The value ranges of m, τ, ω, b, σ² and λ are respectively: m ∈ [1,50], τ ∈ [1,10], ω ∈ [0,1], b ∈ [0,10], σ² ∈ [0.01,1000], λ ∈ [0,1].
The invention has the beneficial effects that:
1. Oil well production is a continuous process, and the changes of some production parameters are correlated over a period of time. By predicting the value at a future time point from historical data over a past period, the production dynamics of the oil well can be grasped, the production condition of the well can be evaluated in time, and reasonable production measures can be made or adjusted, which has practical significance for the rational and efficient production of oilfield enterprises.
2. The water content of oil in the oil well is an important index for oil field production, and has important influence on the exploitation, measurement, transportation and the like of crude oil. The water content of the oil liquid is related to the working life of an oil well and the economic benefit of oil field production enterprises. Therefore, the accurate measurement of the water content of the oil liquid has very important significance for measuring the yield of the oil well, mastering the production dynamic of the oil well, evaluating the development degree and value of an oil reservoir, making reasonable development measures and the like.
3. The time-series-based multi-model prediction method for oil well fluid water content is simple in principle, has low computational complexity and requires few samples. The multi-model structure established by the clustering method judges prediction samples with high accuracy, and in particular achieves higher prediction accuracy when the data set contains abnormal data.
4. The method solves the problems that manual sampling of the oil well fluid water content is time-consuming and labor-intensive and impairs the real-time availability of production monitoring and oil recovery data, saving both time and labor.
Drawings
FIG. 1 is a diagram of the components of each dimension after the three-layer wavelet decomposition;
FIG. 2 is a schematic diagram of the data distribution after clustering of the oil well water content data set.
Detailed Description
The specific steps of the time-series-based multi-model prediction method for the water content of oil well fluid are as follows:
1. An oil well water content data set containing 440 data is established from historical data and expressed as {x_i, i = 1, 2, ..., 440}; the 440 data are ordered by the sequence of time points (the unit being a day), and the ordering number of each datum in the data set is recorded as {index_i, i = 1, 2, ..., 440}.
2. The data in the oil well water content data set {x_i} are preprocessed with a wavelet analysis method, specifically: first, the Mallat algorithm performs a three-layer wavelet decomposition of the data in {x_i}; the data set {x_i} is decomposed according to formula (1):
c^{j+1} = H c^j,  d^{j+1} = G c^j   (1)
where i = 1, 2, ..., 440, j = 0, 1, 2, c^0 is the original signal, and H and G are the decomposition low-pass filter and decomposition high-pass filter respectively;
the reconstruction is then performed according to formula (2):
c^j = H* c^{j+1} + G* d^{j+1}   (2)
where H* and G* are the dual operators of H and G respectively.
This yields the wavelet decomposition sequence (a3_i, d3_i, d2_i, d1_i), where a3_i denotes the third-layer low-frequency component obtained by wavelet decomposition of the i-th datum in {x_i}, and d3_i, d2_i and d1_i denote the third-, second- and first-layer high-frequency components respectively. After wavelet decomposition, each (one-dimensional) datum in {x_i} is replaced by its corresponding (four-dimensional) vector (a3_i, d3_i, d2_i, d1_i); the wavelet-decomposed data set is denoted {x_i}_Wave, i = 1, 2, ..., 440. The components of each dimension of the data set {x_i} after the three-layer wavelet decomposition are shown in FIG. 1.
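A minimal Python sketch of this preprocessing step is given below. It is illustrative only: the wavelet basis ('db4'), the use of the PyWavelets library and the reconstruction of each component back to the original length are assumptions, since the patent only specifies a three-layer Mallat decomposition that replaces each datum x_i by the vector (a3_i, d3_i, d2_i, d1_i).

    import numpy as np
    import pywt  # PyWavelets

    def wavelet_features(series, wavelet="db4", level=3):
        # Map each point of a 1-D series to its (a3, d3, d2, d1) components.
        x = np.asarray(series, dtype=float)
        coeffs = pywt.wavedec(x, wavelet, level=level)   # [cA3, cD3, cD2, cD1]
        parts = []
        for keep in range(len(coeffs)):
            # zero all coefficient arrays except one, then reconstruct, so that
            # every component has the same length as the original series
            sel = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
            parts.append(pywt.waverec(sel, wavelet)[: len(x)])
        # columns a3_i, d3_i, d2_i, d1_i -> one four-dimensional vector per datum
        return np.column_stack(parts)

    # e.g. x_wave = wavelet_features(water_cut_series)   # shape (N, 4)

Reconstructing each band to full length keeps a one-to-one correspondence between the original data points and their four-dimensional wavelet vectors, which is what the clustering step below operates on.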
3. The data in {x_i}_Wave are classified with the affinity propagation (AP) clustering algorithm, and according to the ordering numbers index_i carried by the data in {x_i}_Wave, the data in the oil well water content data set {x_i} are divided into classes. The specific steps are as follows:
3.1 A similarity matrix S is computed from the wavelet-decomposed data set {x_i}_Wave. The similarity s_jh between two data x_j^Wave and x_h^Wave in {x_i}_Wave is calculated by the following formula:
s_jh = −Σ_l (x_{j,l}^Wave − x_{h,l}^Wave)²   (3)
where x_{j,l}^Wave and x_{h,l}^Wave denote the l-th dimensional elements of x_j^Wave and x_h^Wave respectively;
The similarity matrix S can then be expressed as the N × N matrix whose off-diagonal elements are s_jh and whose diagonal elements are the bias (preference) parameter p, where N = 440 and the initial value of p is the mean of all initial similarity values.
3.2 The attraction matrix R = [r_jh] and the attribution matrix A = [a_jh] are set up, where N = 440 and all r_jh and a_jh are initialized to 0;
In this embodiment, the maximum iteration number MaxLoop is set to 500 (MaxLoop is generally taken as 300 or more), and the attraction matrix R and the attribution matrix A are iterated according to the following formulas:
r_jh = s_jh − max_{h′≠h} { a_jh′ + s_jh′ }   (7)
r_kk = p_kk − max_{h≠k} { a_kh + s_kh }   (9)
where k = 1, 2, ..., 440 is the subscript of a datum x_k^Wave in {x_i}_Wave representing a candidate clustering center, and j ≠ h ≠ k.
3.3 When the maximum iteration number MaxLoop is reached, the iteration process terminates; the sums of the corresponding diagonal elements of the two matrices R and A are examined, and if r_kk + a_kk > 0, then x_k^Wave is selected as a clustering center.
3.4 After the K clustering centers are determined, the similarity (formula (3)) between each datum in {x_i}_Wave other than the cluster centers and each of the K cluster centers is computed, and each such datum is assigned to the class of the cluster center nearest to it, completing the classification of the data set {x_i}_Wave.
In this embodiment, according to the ordering numbers index_i of the data, the 440 data in the oil well water content data set {x_i} are divided into K clusters, and the data in each cluster are arranged by their ordering numbers index_i. As shown in FIG. 2, the 440 data in {x_i} are divided into 8 clusters whose sizes are respectively: 82 (α = 1), 57 (α = 2), 25 (α = 3), 101 (α = 4), 43 (α = 5), 69 (α = 6), 22 (α = 7) and 41 (α = 8); the data in each cluster are arranged according to their ordering numbers index_i.
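A sketch of steps 3.1–3.4, using scikit-learn's AffinityPropagation in place of the explicit iteration of formulas (7)–(9); the negative squared Euclidean distance as the similarity and the mean similarity as the preference p follow the description above, while the damping factor and random_state values are assumptions.

    import numpy as np
    from sklearn.cluster import AffinityPropagation

    def cluster_by_index(x_wave, max_loop=500):
        # pairwise similarity s_jh = -||x_j - x_h||^2 on the 4-D wavelet vectors
        diff = x_wave[:, None, :] - x_wave[None, :, :]
        s = -np.sum(diff ** 2, axis=-1)
        off_diag = ~np.eye(len(x_wave), dtype=bool)
        p = s[off_diag].mean()                     # preference p = mean similarity
        ap = AffinityPropagation(affinity="precomputed", preference=p,
                                 max_iter=max_loop, damping=0.9, random_state=0)
        labels = ap.fit_predict(s)
        # return the 1-based ordering numbers index_i grouped by cluster (cf. FIG. 2)
        return {k: np.where(labels == k)[0] + 1 for k in np.unique(labels)}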
4. The data in each cluster are represented in the following time-series form: X_t^α = [x_t^α, x_{t+τ}^α, ..., x_{t+(m−1)τ}^α], where α = 1, 2, ..., 8, t = 1, 2, ..., M, m is the embedding dimension, τ is the delay time, and M = 440 − (m−1)τ; the output of the time series of each cluster is represented as Y_t^α. With X_t^α and Y_t^α as the input and output of the time series respectively, the invention uses single-step prediction, i.e. one output Y_t^α is predicted at a time, and when the next output Y_{t+1}^α is predicted, Y_t^α is appended to the end of the next input X_{t+1}^α.
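The delay embedding of step 4 and the single-step scheme can be sketched as follows; the helper names embed and rolling_one_step are illustrative and not taken from the patent, and model_predict stands for any trained single-output regressor (for example the ELM sketched further below).

    import numpy as np

    def embed(series, m, tau):
        # X_t = [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}], Y_t = x_{t+1+(m-1)tau}
        x = np.asarray(series, dtype=float)
        last = len(x) - (m - 1) * tau - 1            # number of usable t values
        X = np.array([x[t : t + (m - 1) * tau + 1 : tau] for t in range(last)])
        Y = x[(m - 1) * tau + 1 : last + (m - 1) * tau + 1]
        return X, Y

    def rolling_one_step(model_predict, history, m, tau, steps=1):
        # single-step prediction: each prediction is appended to the series
        # before the next input vector is formed
        buf, out = list(history), []
        for _ in range(steps):
            x_in = np.array(buf[-((m - 1) * tau + 1) :: tau], dtype=float)[None, :]
            y_hat = float(model_predict(x_in))
            out.append(y_hat)
            buf.append(y_hat)
        return out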
5. A time-series model of each cluster is established with the extreme learning machine algorithm (ELM); the input and output of the model are X_t^α and Y_t^α respectively.
(1) the basic principle is as follows:
Suppose there are W training samples (u_q, v_q), q = 1, 2, ..., W, where u_q is the input vector and v_q the output vector. The network contains L hidden-layer neurons (to ensure calculation accuracy while limiting computational complexity, L is taken as 10–30), the activation function is f(·), and the training output of the model is represented as Q = [c_1, c_2, ..., c_W]^T; the ELM model can then be described by the following system of equations:
Σ_{l=1}^{L} β_l f(ω_l · u_q + b_l) = c_q,  q = 1, 2, ..., W   (11)
where β_lq is the connection weight between the l-th hidden-layer neuron and the q-th output neuron, ω_l is the connection weight vector between the l-th hidden-layer neuron and the input neurons, and b_l is the bias of the l-th hidden-layer neuron.
If the training model can approximate W training samples with zero error, thenThen the following holds true for equation (11),
the mathematical description of the ELM model can be rewritten in matrix form as
Hβ = V   (13)
where, in formula (13),
H = [f(ω_l · u_q + b_l)]_{q=1..W, l=1..L},  β = [β_1, β_2, ..., β_L]^T,  V = [v_1, v_2, ..., v_W]^T   (14)
H is the hidden-layer output matrix, and ω and b are given randomly at initialization. The training of the ELM model can then be transformed into the problem of solving the minimum of the nonlinear equations, namely
min_β ‖Hβ − V‖   (15)
The output weight matrix β* can be obtained from
β* = H⁺V   (16)
where H⁺ is the Moore–Penrose generalized inverse of the hidden-layer output matrix H.
Then, the training process of ELM can be generalized as the following optimization problem:
where g (·) denotes a function determined by ω and b, and g (ω, b) denotes a function output value when ω and b take different values, respectively. The goal of ELM training is to find the optimal beta*Make the training output value c of the modelqAnd the true value vqWith the smallest error between.
For the selection of the activation function f (·), an integration manner of a gaussian function and a Sigmoid function is adopted, and f (·) is defined as follows:
where u represents the input vector, λ ∈ [0,1 ]]As a weight, σ2Is a width parameter of a gaussian function.
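A compact sketch of the ELM training described by formulas (11)–(16). Because the exact blended activation of the patent is not reproduced in the text above, the mixture λ·Gaussian + (1−λ)·Sigmoid used here is an assumption consistent with the stated roles of λ and σ², and the sampling ranges for ω and b follow the value ranges given later in the text.

    import numpy as np

    class ELM:
        # Extreme learning machine: random hidden layer, least-squares output weights.
        def __init__(self, n_hidden=20, lam=0.5, sigma2=1.0, rng=None):
            self.L, self.lam, self.sigma2 = n_hidden, lam, sigma2
            self.rng = np.random.default_rng(0) if rng is None else rng

        def _activation(self, z):
            # assumed blend of a Gaussian and a Sigmoid, weighted by lambda
            gauss = np.exp(-z ** 2 / (2.0 * self.sigma2))
            sigmoid = 1.0 / (1.0 + np.exp(-z))
            return self.lam * gauss + (1.0 - self.lam) * sigmoid

        def fit(self, U, V):
            U, V = np.atleast_2d(U), np.asarray(V, dtype=float)
            self.omega = self.rng.uniform(0.0, 1.0, size=(self.L, U.shape[1]))  # omega in [0,1]
            self.b = self.rng.uniform(0.0, 10.0, size=self.L)                   # b in [0,10]
            H = self._activation(U @ self.omega.T + self.b)   # hidden-layer output matrix, formula (14)
            self.beta = np.linalg.pinv(H) @ V                 # beta* = H^+ V, formula (16)
            return self

        def predict(self, U):
            H = self._activation(np.atleast_2d(U) @ self.omega.T + self.b)
            return H @ self.beta

    # e.g. model = ELM(n_hidden=20).fit(X, Y); y_hat = model.predict(X_new)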
(2) In the process of establishing the time-series model by the ELM method, the values of m, τ, ω, b, σ² and λ determine the performance of the ELM model; the invention adopts an improved firefly algorithm (IFA) to optimally select the values of m, τ, ω, b, σ² and λ. The IFA algorithm is described mathematically as follows:
I. The brightness of each "firefly" is defined as a function of the distance between "fireflies", where I_0 denotes the original brightness and γ denotes the light-intensity absorption coefficient, a constant real number; ε denotes the distance between two "fireflies" and is defined as
ε = sqrt( Σ_{θ=1}^{z} (φ_θ − ψ_θ)² )
where Φ and Ψ denote any two "fireflies", φ_θ and ψ_θ denote the θ-th dimensional elements of Φ and Ψ respectively, and z is the dimension of Φ and Ψ.
II. The attraction η(ε) of a "firefly" is defined as a function of the distance ε, where η_0 denotes the attraction at ε = 0.
III, each firefly is attracted by the firefly with greater brightness, and moves, and the position updating formula is as follows:
Φ1(Itenum+1) = Φ(Itenum) + η(ε)·(Ψ_best(Itenum) − Φ(Itenum)) + ξ·(rand − 0.5)   (22)
Φ2(Itenum+1) = Φ(Itenum) + η(ε)·(Ψ_secbest(Itenum) − Φ(Itenum)) + ξ·(rand − 0.5)   (23)
where ξ denotes the step size, ξ ∈ [0,1]; rand is a random number uniformly distributed in [0,1]; Ψ_best is the "firefly" with the largest brightness and Ψ_secbest is the "firefly" whose brightness is second only to Ψ_best; Itenum denotes the iteration count of the IFA algorithm; Φ(Itenum) denotes the position of a "firefly" at the Itenum-th iteration and Φ(Itenum+1) its position at the (Itenum+1)-th iteration; Φ1(Itenum+1) denotes the position to which the "firefly" is attracted by the brightest "firefly" at the (Itenum+1)-th iteration, and Φ2(Itenum+1) the position to which it is attracted by the "firefly" whose brightness is second to Ψ_best at the (Itenum+1)-th iteration.
When the invention adopts the IFA algorithm to optimally select the values of m, τ, ω, b, σ² and λ, the six quantities m, τ, ω, b, σ² and λ are treated as one solution vector, i.e. each "firefly" is a 1 × 6 vector whose six variables are m, τ, ω, b, σ² and λ. The specific calculation steps are as follows:
first, an initial "firefly" population is generated, denoted as: { F1,F2,…,FNpop}, wherein: n is a radical ofpopRepresenting the number of "fireflies" in the initial "firefly" population, taking Npop=30;
Second, the brightness of each "firefly" is calculated, denoted {I_1, I_2, ..., I_Npop};
Thirdly, each firefly moves to a firefly with a larger brightness than the firefly, and the position is updated according to formula (22) -formula (24);
Fourth, a fitness function fitness(m, τ, ω, b, σ², λ), shown in formula (25), is defined to evaluate the values of m, τ, ω, b, σ² and λ, where N = 440, y_i denotes the actual output value of the i-th datum and ŷ_i denotes the model prediction for the i-th datum;
When m, τ, ω, b, σ² and λ take their optimal values, fitness(m, τ, ω, b, σ², λ) attains its minimum; a variable Local_pop is defined to record the best values of m, τ, ω, b, σ² and λ in each iteration, and a variable Global_pop to record the best values of m, τ, ω, b, σ² and λ over all iterations; after the third step is executed, the values in Local_pop and Global_pop are updated respectively;
Fifth, if the maximum iteration number MaxLoop = 500 is reached, the iteration stops and the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop are output respectively; otherwise, the procedure returns to the third step and iterates again.
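The five steps above can be sketched as the following search loop, reusing the embed() helper from the earlier sketch. Because formulas (19)–(21), (24) and (25) are not reproduced in the source text, the exponential attraction, the averaging of Φ1 and Φ2, the mean-absolute-error fitness and the treatment of the scalar ω and b as scales for the random hidden layer are all assumptions.

    import numpy as np

    # value ranges for (m, tau, omega, b, sigma2, lambda) from the patent text
    LOW  = np.array([1.0, 1.0, 0.0, 0.0, 0.01, 0.0])
    HIGH = np.array([50.0, 10.0, 1.0, 10.0, 1000.0, 1.0])

    def firefly_fitness(params, series, n_hidden=20):
        # prediction error of an ELM parameterised by one "firefly" (MAE assumed)
        m, tau = int(round(params[0])), int(round(params[1]))
        omega, b, sigma2, lam = params[2], params[3], params[4], params[5]
        X, Y = embed(series, m, tau)                 # embed() from the earlier sketch
        if len(Y) < 2:
            return np.inf
        rng = np.random.default_rng(0)
        W = omega * rng.standard_normal((n_hidden, X.shape[1]))   # omega scales the hidden weights
        z = X @ W.T + b
        H = lam * np.exp(-z ** 2 / (2.0 * sigma2)) + (1.0 - lam) / (1.0 + np.exp(-z))
        beta = np.linalg.pinv(H) @ Y                 # beta* = H^+ V
        return float(np.mean(np.abs(H @ beta - Y)))

    def ifa_search(series, n_pop=30, max_loop=500, xi=0.5, gamma=1.0):
        rng = np.random.default_rng(1)
        pop = rng.uniform(LOW, HIGH, size=(n_pop, 6))       # initial firefly population
        fit = np.array([firefly_fitness(p, series) for p in pop])
        global_pop, global_fit = pop[np.argmin(fit)].copy(), fit.min()
        for _ in range(max_loop):
            order = np.argsort(fit)                         # brightest = smallest error
            best, secbest = pop[order[0]].copy(), pop[order[1]].copy()
            for i in range(n_pop):
                if i == order[0]:
                    continue                                # the brightest firefly stays put
                eta_b = np.exp(-gamma * np.linalg.norm(best - pop[i]) ** 2)
                eta_s = np.exp(-gamma * np.linalg.norm(secbest - pop[i]) ** 2)
                phi1 = pop[i] + eta_b * (best - pop[i]) + xi * (rng.random(6) - 0.5)     # formula (22)
                phi2 = pop[i] + eta_s * (secbest - pop[i]) + xi * (rng.random(6) - 0.5)  # formula (23)
                pop[i] = np.clip(0.5 * (phi1 + phi2), LOW, HIGH)   # assumed combination, formula (24)
                fit[i] = firefly_fitness(pop[i], series)
            if fit.min() < global_fit:                      # Local_pop -> Global_pop update
                global_fit, global_pop = fit.min(), pop[np.argmin(fit)].copy()
        return global_pop        # best (m, tau, omega, b, sigma2, lambda)

In practice max_loop would be reduced for experimentation; the loop mirrors the Local_pop/Global_pop bookkeeping described above.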
In this embodiment, for each of the 8 clusters, while the model is established by the ELM method, the IFA method is used to optimally select the m, τ, ω, b, σ² and λ involved in the ELM process; the calculation steps are as follows:
Step 5.1: initialize; set the value ranges of m, τ, ω, b, σ² and λ: m ∈ [1,50], τ ∈ [1,10], ω ∈ [0,1], b ∈ [0,10], σ² ∈ [0.01,1000], λ ∈ [0,1];
Step 5.2: normalize the data in each cluster to the interval [0,1]; determine the input and output of the model to be established as X_t^α and Y_t^α respectively;
Step 5.3: generate an initial "firefly" population; randomly assign values of m, τ, ω, b, σ² and λ within their value ranges;
Step 5.4: compute the hidden-layer output matrix H according to formula (14), compute the output weight matrix β* according to formula (16), and compute the function output value according to formula (17);
Step 5.5: compute the fitness function fitness(m, τ, ω, b, σ², λ) according to formula (25); record the initial values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop;
Step 5.6: compute the brightness of each "firefly" and update the positions according to formulas (22) to (24);
Step 5.7: repeat step 5.4 and compute the fitness function fitness(m, τ, ω, b, σ², λ) according to formula (25); update the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop;
Step 5.8: if the maximum iteration number MaxLoop = 500 is reached, stop the iteration and output the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop respectively; otherwise, return to steps 5.6–5.7 and iterate again.
Step 5.9: use the best values of m, τ, ω, b, σ² and λ obtained for each cluster as the parameters for building the time-series model with the ELM method, compute the output Y_t^α from the input X_t^α, and apply inverse normalization to the values.
6. For the 8 clusters there are 8 ELM models, so a total of 8 output values Y_t^α are obtained from the inputs; in this embodiment they are respectively: 60, 79, 77, 75, 73, 67, 76 and 74. The 8 output values are regarded as the next values predicted from the data of each cluster, and their ordering numbers are recorded as {index^α_max + 1}, where index^α_max denotes the largest ordering number in the α-th cluster. According to the calculation results of the ELM models, the ordering numbers of the 8 output values are respectively: 440, 437, 418, 441, 413, 439, 420 and 426.
The 7 output values whose ordering numbers {index^α_max + 1} satisfy index^α_max ≠ 440 are then rearranged from small to large by ordering number to form the data set of a new time series {Δ_d}, d = 1, 2, ..., 7. The ordering numbers of the rearranged 7 output values are 413, 418, 420, 426, 437, 439 and 440, giving the new time-series data set Δ_d = {73, 77, 76, 74, 79, 67, 60}; the 4th cluster (α = 4) has index^4_max = 440.
This set is then represented as the input of the new time series Γ_t = [Δ_t, Δ_{t+τ}, ..., Δ_{t+(m−1)τ}], where t = 1, 2, ..., M and M = 440 − (m−1)τ, and the output of the new time series is Λ_t = Δ_{t+1+(m−1)τ}. With Γ_t and Λ_t as the input and output of the new time series respectively, a new time-series model is established according to step 5 and its output value is computed; single-step prediction is used, i.e. one Λ_t is predicted at a time, and when the next output Λ_{t+1} is predicted, Λ_t is appended to the end of the next input Γ_{t+1}. In this embodiment, the output value of the new time series is Λ_t = 72.
7. The output value Λ_t = 72 of the new time series and the output value Y_t^4 = 75, whose ordering number is {index^4_max + 1} with index^4_max = 440, are averaged to give the final output of 73.5.
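Steps 6 and 7 can be reproduced with a short fusion routine. Here predict_next stands for the second ELM time-series model (replaced by a stub returning 72 so that the embodiment's numbers can be checked) and is an assumption, as is the function name.

    import numpy as np

    def fuse_cluster_predictions(cluster_preds, next_indices, n_total, predict_next):
        # cluster_preds : predicted next value of each cluster (here 60, 79, 77, ...)
        # next_indices  : ordering numbers index_max^alpha + 1 of those predictions
        # n_total       : N, the size of the original data set (here 440)
        # predict_next  : model predicting the next value of the re-sorted series Delta_d
        preds = np.asarray(cluster_preds, dtype=float)
        idx = np.asarray(next_indices)
        own = preds[idx == n_total + 1]                   # prediction of the cluster holding x_N
        others, other_idx = preds[idx != n_total + 1], idx[idx != n_total + 1]
        delta = others[np.argsort(other_idx)]             # new time series {Delta_d}
        lam_t = predict_next(delta)                       # Lambda_t from the second model
        return float((lam_t + own.mean()) / 2.0)          # averaged final prediction

    # With the embodiment's numbers: Delta_d = [73, 77, 76, 74, 79, 67, 60],
    # Lambda_t = 72, the 4th cluster's value is 75, and the result is 73.5.
    preds = [60, 79, 77, 75, 73, 67, 76, 74]
    nexts = [440, 437, 418, 441, 413, 439, 420, 426]
    print(fuse_cluster_predictions(preds, nexts, 440, predict_next=lambda d: 72.0))  # 73.5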
When the clustering number K is 1, i.e. there is only one cluster, the model is trained on the N data in the oil well water content data set directly.
In order to further illustrate the effectiveness of the method provided by the invention, the data of ten production wells in a certain oilfield operation area in China are adopted for verification, and the result is shown in table 1.
TABLE 1 prediction result of oil water content of ten production wells

Claims (9)

1. A time-series-based oil-well oil water content multi-model prediction method, characterized by comprising the following steps:
1) An oil well water content data set {x_i, i = 1, 2, ..., N} is established from historical data; the data in the data set are ordered by the sequence of time points, the unit of a time point being a day or a month, and the ordering number of each datum in the data set is recorded as {index_i, i = 1, 2, ..., N};
2) The data in {x_i, i = 1, 2, ..., N} are preprocessed with a wavelet analysis method: the Mallat algorithm performs a three-layer wavelet decomposition of the data in {x_i, i = 1, 2, ..., N}, giving the wavelet decomposition sequence (a3_i, d3_i, d2_i, d1_i), where a3_i denotes the third-layer low-frequency component obtained by wavelet decomposition of the i-th datum and d3_i, d2_i and d1_i denote the third-, second- and first-layer high-frequency components respectively; after wavelet decomposition, each datum in {x_i, i = 1, 2, ..., N} is replaced by its corresponding vector (a3_i, d3_i, d2_i, d1_i), and the wavelet-decomposed data set is denoted {x_i}_Wave;
3) The affinity propagation clustering algorithm is used to classify the data in {x_i}_Wave; according to the ordering numbers index_i, i = 1, 2, ..., N, carried by the data in {x_i}_Wave, the data in the oil well water content data set are divided into categories, forming K clusters, K ≥ 1;
4) The data in each cluster are arranged according to their original ordering numbers {index_i, i = 1, 2, ..., N}, and the data in each cluster are then represented in the following time-series form: X_t^α = [x_t^α, x_{t+τ}^α, ..., x_{t+(m−1)τ}^α], where α = 1, 2, ..., K, t = 1, 2, ..., M, m is the embedding dimension, τ is the delay time, and M = N − (m−1)τ; the output of the time series of each cluster is represented as Y_t^α;
5) A time-series model of each cluster is established with the extreme learning machine algorithm, the input and output of the model being X_t^α and Y_t^α respectively; for the K clusters there are K ELM models, so K output values Y_t^α are obtained from the inputs; the K output values are regarded as the next values predicted from the data of each cluster, and their ordering numbers are recorded as {index^α_max + 1}, where index^α_max denotes the largest ordering number in the α-th cluster; the K−1 output values Y_t^α whose ordering numbers {index^α_max + 1} satisfy index^α_max ≠ N, d = 1, 2, ..., K−1, are rearranged from small to large by ordering number to form a new time-series data set {Δ_d}; this set is then represented as the input of a new time series Γ_t = [Δ_t, Δ_{t+τ}, ..., Δ_{t+(m−1)τ}], where t = 1, 2, ..., M and M = N − (m−1)τ, and the output of the new time series is Λ_t = Δ_{t+1+(m−1)τ}; with Γ_t and Λ_t as the input and output of the new time series respectively, a new time-series model is established again according to the extreme learning machine method and its output value Λ_t is computed; the output value Λ_t of the new time series and the output value Y_t^α whose ordering number is {index^α_max + 1} with index^α_max = N are averaged to finally obtain the predicted value.
2. The time-series-based oil-well oil water content multi-model prediction method of claim 1, characterized in that: step 2) is as follows:
2.1 First, the data set {x_i, i = 1, 2, ..., N} is decomposed according to equation 1: c^{j+1} = H c^j, d^{j+1} = G c^j, where j = 0, 1, 2, c^0 is the original signal, and H and G are the decomposition low-pass filter and decomposition high-pass filter respectively;
2.2 The reconstruction is then performed according to equation 2: c^j = H* c^{j+1} + G* d^{j+1}, where H* and G* are the dual operators of H and G respectively;
Each datum x_i in the oil well water content data set, after three-layer wavelet decomposition, is then expressed as x_i → (a3_i, d3_i, d2_i, d1_i), i = 1, 2, ..., N, and the data set formed by these vectors is denoted {x_i}_Wave.
3. The time-series-based oil-well oil water content multi-model prediction method of claim 1, characterized in that: the specific process by which step 3) classifies {x_i}_Wave with the affinity propagation clustering algorithm is as follows:
3.1 A similarity matrix S is computed from the wavelet-decomposed data set {x_i}_Wave; the similarity s_jh between two data x_j^Wave and x_h^Wave in {x_i}_Wave is calculated from equation 3: s_jh = −Σ_l (x_{j,l}^Wave − x_{h,l}^Wave)², where j ≠ h and x_{j,l}^Wave and x_{h,l}^Wave denote the l-th dimensional elements of x_j^Wave and x_h^Wave respectively;
the similarity matrix S is then expressed as the N × N matrix whose off-diagonal elements are s_jh and whose diagonal elements are the deviation (preference) parameter p, the initial value of p being the mean of all initial similarity values;
3.2 The attraction matrix R = [r_jh] and the attribution matrix A = [a_jh] are set up, with all r_jh and a_jh initialized to 0;
3.3 The maximum number of iterations MaxLoop is set, and the attraction matrix R and the attribution matrix A are iterated according to the following formulas:
r_jh = s_jh − max_{h′≠h} { a_jh′ + s_jh′ }   equation 7
r_kk = p_kk − max_{h≠k} { a_kh + s_kh }   equation 9
where k = 1, 2, ..., N is the subscript of a datum x_k^Wave in {x_i}_Wave representing a candidate clustering center, and j ≠ h ≠ k;
3.4 When the maximum number of iterations MaxLoop is reached, the iteration terminates; the sums of the corresponding diagonal elements of R and A are examined, and if r_kk + a_kk > 0 then x_k^Wave is selected as a clustering center;
3.5 After the K clustering centers are determined, the similarity (equation 3) between each datum in {x_i}_Wave other than the cluster centers and each of the K cluster centers is computed, and each such datum is assigned to the class of the cluster center nearest to it, completing the classification of the data set {x_i}_Wave.
4. The time-series-based oil-well oil water content multi-model prediction method of claim 1, characterized in that: the principle on which step 5) establishes a time-series model with the extreme learning machine algorithm is as follows:
Suppose there are W training samples (u_q, v_q), q = 1, 2, ..., W, where u_q is the input vector and v_q the output vector; let the network have L hidden-layer neurons, activation function f(·), and training output Q = [c_1, c_2, ..., c_W]^T; the ELM model is then described by the following system of equations:
Σ_{l=1}^{L} β_l f(ω_l · u_q + b_l) = c_q,  q = 1, 2, ..., W   equation 11
where β_lq is the connection weight between the l-th hidden-layer neuron and the q-th output neuron, ω_l is the connection weight vector between the l-th hidden-layer neuron and the input neurons, and b_l is the bias of the l-th hidden-layer neuron;
if the trained model can approximate the W training samples with zero error, then equation 11 holds exactly, and the mathematical description of the ELM model can be rewritten in matrix form as
H β = V   equation 13
where, in equation 13, H is the hidden-layer output matrix with entries f(ω_l · u_q + b_l), β = [β_1, β_2, ..., β_L]^T and V = [v_1, v_2, ..., v_W]^T (formula 14);
ω and b are given randomly at initialization, so the training of the ELM model can be transformed into the problem of minimizing ‖Hβ − V‖, and the output weight matrix β* can be obtained by
β* = H⁺V   equation 16
where H⁺ is the Moore–Penrose generalized inverse of the hidden-layer output matrix H;
the training process of the ELM can then be generalized as the following optimization problem, in which g(·) denotes a function determined by ω and b and g(ω, b) denotes the output value of the function when ω and b take different values; the objective of ELM training is to find the optimal β* that minimizes the error between the training output c_q of the model and the true value v_q;
for the activation function f(·), a combination of a Gaussian function and a Sigmoid function is adopted, where u denotes the input vector, λ ∈ [0,1] is the weight and σ² is the width parameter of the Gaussian function;
based on the above, the calculation steps for establishing the time-series model are as follows: initialize by randomly generating the hidden-layer input weights ω, the hidden-layer neuron biases b, the width parameter σ² of the Gaussian function and the weight λ; compute the hidden-layer output matrix H according to formula 14; compute the output weight matrix β* according to equation 16; compute the function output value according to equation 17.
5. The time-series-based oil-well oil water content multi-model prediction method of claim 4, characterized in that: in the process of establishing the time-series model, the improved firefly algorithm is used to select optimal values of m, τ, ω, b, σ² and λ; m, τ, ω, b, σ² and λ are treated as one solution vector, i.e. each "firefly" is a 1 × 6 vector whose six variables are m, τ, ω, b, σ² and λ; the calculation steps are as follows:
First, an initial "firefly" population is generated, denoted {F_1, F_2, ..., F_Npop}, where N_pop is the number of "fireflies" in the initial "firefly" population;
Second, the brightness of each "firefly" is calculated, denoted {I_1, I_2, ..., I_Npop};
Third, each "firefly" moves toward the "fireflies" brighter than itself and updates its position according to formula 22 to formula 24,
Φ1(Itenum+1) = Φ(Itenum) + η(ε)·(Ψ_best(Itenum) − Φ(Itenum)) + ξ·(rand − 0.5)   equation 22
Φ2(Itenum+1) = Φ(Itenum) + η(ε)·(Ψ_secbest(Itenum) − Φ(Itenum)) + ξ·(rand − 0.5)   equation 23
where ξ denotes the step size, ξ ∈ [0,1]; rand is a random number uniformly distributed in [0,1]; Ψ_best is the "firefly" with the largest brightness and Ψ_secbest the "firefly" whose brightness is second only to Ψ_best; Itenum denotes the iteration count of the IFA algorithm; Φ(Itenum) denotes the position of a "firefly" at the Itenum-th iteration and Φ(Itenum+1) its position at the (Itenum+1)-th iteration; Φ1(Itenum+1) denotes the position to which the "firefly" is attracted by the brightest "firefly" at the (Itenum+1)-th iteration, and Φ2(Itenum+1) the position to which it is attracted by the "firefly" whose brightness is second to Ψ_best at the (Itenum+1)-th iteration;
Fourth, a fitness function fitness(m, τ, ω, b, σ², λ) is defined to evaluate the values of m, τ, ω, b, σ² and λ; when m, τ, ω, b, σ² and λ take their optimal values, fitness(m, τ, ω, b, σ², λ) attains its minimum; a variable Local_pop is defined to record the optimal values of m, τ, ω, b, σ² and λ in each iteration, and a variable Global_pop to record the optimal values of m, τ, ω, b, σ² and λ over all iterations; after the third step is executed, the values in Local_pop and Global_pop are updated respectively;
Fifth, if the maximum number of iterations is reached, the iteration stops and the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop are output respectively; otherwise, the procedure returns to the third step and iterates again.
6. The time-series-based oil-well oil water content multi-model prediction method of claim 5, characterized in that: for each of the K clusters, the calculation steps for establishing a time-series model with the extreme learning machine algorithm are as follows:
Step 5.1: initialize; set the value ranges of m, τ, ω, b, σ² and λ;
Step 5.2: normalize the data in each cluster to the interval [0,1]; determine the input and output of the model to be established as X_t^α and Y_t^α respectively;
Step 5.3: generate an initial "firefly" population; randomly assign values of m, τ, ω, b, σ² and λ within their value ranges;
Step 5.4: compute the hidden-layer output matrix H according to formula 14, compute the output weight matrix β* according to formula 16, and compute the function output value according to formula 17;
Step 5.5: compute the fitness function fitness(m, τ, ω, b, σ², λ); record the initial values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop;
Step 5.6: compute the brightness of each "firefly" and update the positions according to formula 22 to formula 24;
Step 5.7: repeat step 5.4, compute the fitness function fitness(m, τ, ω, b, σ², λ), and update the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop;
Step 5.8: if the maximum number of iterations is reached, stop the iteration and output the values of m, τ, ω, b, σ² and λ in Local_pop and Global_pop respectively; otherwise, return to steps 5.6–5.7 and iterate again;
Step 5.9: use the best values of m, τ, ω, b, σ² and λ obtained for each cluster as the parameters for establishing the time-series model with the extreme learning machine algorithm, compute the output Y_t^α from the input X_t^α, and apply inverse normalization.
7. The time-series-based oil-well oil water content multi-model prediction method of claim 1, characterized in that: single-step prediction is used, i.e. one output Y_t^α is predicted at a time, and when the next output Y_{t+1}^α is predicted, Y_t^α is appended to the end of the next input X_{t+1}^α; likewise, one Λ_t is predicted at a time, and when the next output Λ_{t+1} is predicted, Λ_t is appended to the end of the next input Γ_{t+1}.
8. The time-series-based oil-well oil water content multi-model prediction method of claim 1, characterized in that: the number N of the data in the oil water content data set of the oil well is 300-800.
9. The time-series-based oil-well oil water content multi-model prediction method of claim 6, characterized in that: the value ranges of m, τ, ω, b, σ² and λ are respectively: m ∈ [1,50], τ ∈ [1,10], ω ∈ [0,1], b ∈ [0,10], σ² ∈ [0.01,1000], λ ∈ [0,1].
CN201610094429.0A 2016-02-22 2016-02-22 A kind of oil well oil liquid moisture content multi-model prediction technique based on time series Expired - Fee Related CN105631554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610094429.0A CN105631554B (en) 2016-02-22 2016-02-22 A kind of oil well oil liquid moisture content multi-model prediction technique based on time series

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610094429.0A CN105631554B (en) 2016-02-22 2016-02-22 A kind of oil well oil liquid moisture content multi-model prediction technique based on time series

Publications (2)

Publication Number Publication Date
CN105631554A CN105631554A (en) 2016-06-01
CN105631554B true CN105631554B (en) 2019-11-26

Family

ID=56046461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610094429.0A Expired - Fee Related CN105631554B (en) 2016-02-22 2016-02-22 A kind of oil well oil liquid moisture content multi-model prediction technique based on time series

Country Status (1)

Country Link
CN (1) CN105631554B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4068174A4 (en) * 2019-11-29 2022-11-16 BOE Technology Group Co., Ltd. System for recommending maximum quantity of products in process, method, and computer readable medium

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944607B (en) * 2017-11-03 2022-01-18 渤海大学 Time sequence-based pumping well shut-down time integrated prediction method
CN107909202B (en) * 2017-11-03 2021-12-21 渤海大学 Time sequence-based oil well liquid production integrated prediction method
CN108427657B (en) * 2018-02-28 2021-05-25 东北大学 Effectiveness analysis and regulation and control method for underground water seal oil reservoir water curtain system
CN109630092B (en) * 2018-11-14 2023-02-10 渤海大学 Data-based multi-model soft measurement method for pumping well pump efficiency
CN110630244B (en) * 2019-07-09 2022-12-02 东营智图数据科技有限公司 High-yield gas-oil well water content prediction system and method based on multi-sensor measurement and long-time and short-time memory network
CN110796155A (en) * 2019-07-12 2020-02-14 大港油田集团有限责任公司 Crude oil water content data analysis method based on clustering algorithm
CN110821470B (en) * 2019-08-09 2022-11-25 大港油田集团有限责任公司 Oil well working condition characteristic analysis method based on time series signals
CN114165228B (en) * 2021-10-08 2023-05-16 西南石油大学 Double-frequency microwave current collecting umbrella output profile logging plate constraint optimization interpretation method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101414366A (en) * 2008-10-22 2009-04-22 西安交通大学 Method for forecasting electric power system short-term load based on method for improving uttermost learning machine
CN104239964A (en) * 2014-08-18 2014-12-24 华北电力大学 Ultra-short-period wind speed prediction method based on spectral clustering type and genetic optimization extreme learning machine
CN104361393A (en) * 2014-09-06 2015-02-18 华北电力大学 Method for using improved neural network model based on particle swarm optimization for data prediction
CN105257277A (en) * 2015-05-15 2016-01-20 渤海大学 Method for predicating underground fault of sucker-rod pump oil pumping well on basis of multivariable grey model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101414366A (en) * 2008-10-22 2009-04-22 西安交通大学 Method for forecasting electric power system short-term load based on method for improving uttermost learning machine
CN104239964A (en) * 2014-08-18 2014-12-24 华北电力大学 Ultra-short-period wind speed prediction method based on spectral clustering type and genetic optimization extreme learning machine
CN104361393A (en) * 2014-09-06 2015-02-18 华北电力大学 Method for using improved neural network model based on particle swarm optimization for data prediction
CN105257277A (en) * 2015-05-15 2016-01-20 渤海大学 Method for predicating underground fault of sucker-rod pump oil pumping well on basis of multivariable grey model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multiclass Semisupervised Learning Based Upon Kernel Spectral Clustering; Siamak Mehrkanoon et al.; IEEE Transactions on Neural Networks and Learning Systems; 2015-04-30; Vol. 26, No. 4; full text *
Ultra-short-term wind speed prediction based on spectral clustering and optimized extreme learning machine; 王辉 (Wang Hui) et al.; 《电网技术》 (Power System Technology); 2015-05-31; Vol. 39, No. 5; full text *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4068174A4 (en) * 2019-11-29 2022-11-16 BOE Technology Group Co., Ltd. System for recommending maximum quantity of products in process, method, and computer readable medium

Also Published As

Publication number Publication date
CN105631554A (en) 2016-06-01

Similar Documents

Publication Publication Date Title
CN105631554B (en) A kind of oil well oil liquid moisture content multi-model prediction technique based on time series
Taormina et al. Data-driven input variable selection for rainfall–runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines
CN109492822B (en) Air pollutant concentration time-space domain correlation prediction method
CN105045941B (en) Pumping unit parameter optimization method based on Unscented kalman filtering
Sudret Meta-models for structural reliability and uncertainty quantification
Chang et al. Artificial neural networks for estimating regional arsenic concentrations in a blackfoot disease area in Taiwan
US20070239640A1 (en) Neural Network Based Predication and Optimization for Groundwater / Surface Water System
CN110807544B (en) Oil field residual oil saturation distribution prediction method based on machine learning
Yu et al. A hybrid intelligent soft computing method for ammonia nitrogen prediction in aquaculture
Zhang et al. Adaptive spatio-temporal graph convolutional neural network for remaining useful life estimation
CN114266278A (en) Dual-attention-network-based method for predicting residual service life of equipment
Pandey et al. A robust deep structured prediction model for petroleum reservoir characterization using pressure transient test data
Liu et al. Predictive model for water absorption in sublayers using a machine learning method
CN116311921A (en) Traffic speed prediction method based on multi-spatial scale space-time converter
CN107909202B (en) Time sequence-based oil well liquid production integrated prediction method
CN107944607B (en) Time sequence-based pumping well shut-down time integrated prediction method
Xiao et al. Inversion study of soil organic matter content based on reflectance spectroscopy and the improved hybrid extreme learning machine
Haixiang et al. Optimizing reservoir features in oil exploration management based on fusion of soft computing
CN114596726A (en) Parking position prediction method based on interpretable space-time attention mechanism
Khan et al. Rainfall Prediction using Artificial Neural Network in Semi-Arid mountainous region, Saudi Arabia
Zhang et al. Data-driven approaches for time series prediction of daily production in the Sulige tight gas field, China
CN117154704A (en) Photovoltaic power prediction method based on multiscale space-time diagram attention convolution network
CN116612831A (en) Chemical substance safety evaluation method for deep learning combined mode biological zebra fish
Ehteram et al. An advanced deep learning model for predicting water quality index
CN116628444A (en) Water quality early warning method based on improved meta-learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20191126

Termination date: 20210222