CN113673152A - Digital twin body-based group-level KKS coding intelligent mapping recommendation method - Google Patents
Digital twin body-based group-level KKS coding intelligent mapping recommendation method
- Publication number: CN113673152A (application CN202110905893.4A)
- Authority: CN (China)
- Prior art keywords: attention, KKS, coding, digital twin, self
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06F18/25 — Fusion techniques
- G06F18/28 — Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
- G06Q50/06 — Energy or water supply
Abstract
The invention relates to a digital-twin-based group-level KKS coding intelligent mapping recommendation method, comprising the following steps: constructing a digital twin based on the full power production process; constructing a professional dictionary from the obtained digital twin; and acquiring the codes under the original coding rule and the new coding rule with acquisition equipment to construct a KKS coding data set. The beneficial effects of the invention are: the complex power generation process is systematized, a digital twin based on the full power production process is constructed, and group-level digital twin technology is instantiated; the invention further constructs a KKS generation model based on the attention stacking network, stimulates the vitality of the power plant's existing digital infrastructure, opens up data islands, breaks down information barriers, promotes real-time fusion between the physical space and the information space, and accomplishes the intelligent mapping task under multiple KKS coding systems.
Description
Technical Field
The invention belongs to the technical field of power plant informatization, and in particular relates to a digital-twin-based group-level KKS coding intelligent mapping recommendation method.
Background
A digital twin is a purpose-specific digital representation of a real thing, that is, the real state of a physical entity presented in real time in a virtual digital world. With the popularization and application of the digital twin concept, mapping the entities of the information space and the physical space to each other one by one has become one of the important foundational tasks.
Currently, the main basis supporting the application of digital twins in the power industry is the power plant identification system (KKS, Kraftwerk-Kennzeichensystem), which enables efficient identification and management of power plant equipment assets. However, KKS coding itself has many limitations:
1. at the time of compilation, only the production needs of the individual power plant were considered, not the universality of the codes;
2. the codes are compiled by electric power design institutes and equipment manufacturers, while operation and maintenance personnel lack the capacity to maintain them;
3. the KKS coding scheme cannot respond quickly to iterative updates of new technologies.
Therefore, the prerequisite for promoting new technologies, especially digital twin technology, in the power industry is to automate and intelligentize the KKS coding system. Although invention patent CN201711434013.X implements a digital twin mapping model using an OPC UA server, an OPC UA client and a data mapping dictionary, it addresses only the coding rules within the OPC UA protocol and cannot be extended. Invention patent CN201910956494.3 adopts heterogeneous protocol conversion to implement a virtual-real interactive encoding and decoding process, but it still has problems identifying the same entity under different encodings. Instantiating group-level digital twin technology, stimulating the vitality of a power plant's existing digital infrastructure, opening up data islands, breaking down information barriers, promoting real-time fusion between the physical space and the information space, and achieving the intelligent mapping task under multiple KKS coding systems thus remains a difficult problem to be solved urgently.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art by providing a digital-twin-based group-level KKS coding intelligent mapping recommendation method.
The digital-twin-based group-level KKS coding intelligent mapping recommendation method comprises the following steps:
Step 1, following the digital twin modeling process, first systematize the power generation process and construct a digital twin based on the full power production process; in the corresponding formula, the subsystem symbols represent the individual subsystems that make up the digital twin, and the resulting symbol represents the digital twin based on the full power production process;
Step 2, construct a professional dictionary from the digital twin obtained in step 1;
Step 3, use acquisition equipment to acquire the codes under the original coding rule and under the new coding rule, and construct a KKS coding data set;
Step 4, perform data preprocessing on the KKS coding data set obtained in step 3 to obtain a training set and a test set;
Step 5, train a KKS generation model based on the attention stacking network with the training set obtained in step 4, the model being formed by stacking multiple layers of attention networks;
Step 6, using the professional dictionary obtained in step 2 and the probability mapping value p of the fused attention feature in the word vector obtained in step 5, generate the KKS coding predicted value under the new coding rule with a reconstruction function; compute the similarity of the predicted value in string form with the minimum edit distance, without converting to vector form; and obtain the g most similar KKS code values for KKS code mapping recommendation:
In the above formula, the reconstruction function yields the KKS coding predicted value; the minimum edit distance function then computes the similarity between KKS codes; finally, the g most similar KKS code values are selected for KKS code mapping recommendation, and the recommendation result is stored in the storage unit.
Preferably, when the complex power generation process is systematized in step 1, it is divided into different power subsystems, which stand in three kinds of relations to one another: inclusion, parallel and series.
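As a concrete illustration of the decomposition above, the subsystem relations can be sketched as a small data structure; the subsystem names and field names below are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of a power generation flow divided into subsystems with the
# three relations named above: inclusion (children), parallel and series.
# All subsystem names here are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subsystem:
    name: str
    children: List["Subsystem"] = field(default_factory=list)   # inclusion
    parallel_with: List[str] = field(default_factory=list)      # parallel relation
    series_with: List[str] = field(default_factory=list)        # series relation

def build_twin() -> Subsystem:
    boiler = Subsystem("boiler", series_with=["turbine"])       # steam path
    turbine = Subsystem("turbine", series_with=["generator"])
    generator = Subsystem("generator")
    # the plant-level twin *includes* its subsystems
    return Subsystem("power_plant", children=[boiler, turbine, generator])

twin = build_twin()
```

A real group-level twin would of course carry many more systems per plant; the point of the sketch is only that the three relation types map naturally onto a tree with cross-links.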
Preferably, step 2 specifically comprises the following steps:
Step 2.1, perform word segmentation on the detailed description of each subsystem contained in the digital twin to obtain professional word-segmentation data;
Step 2.2, construct a multi-level professional dictionary from the professional word-segmentation data obtained in step 2.1:
The above formula indicates that the multi-level professional dictionary consists of two parts, a main-system dictionary and a subsystem dictionary; the multi-level professional dictionary is stored in the storage device.
Preferably, the professional dictionary of step 2 uses as its data structure a triple key-value pair consisting of the power subsystem's serial number, word-segment number and professional code, stored in a storage unit of the storage device; the storage unit persists the data as a Json-format file and provides a data interface service.
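A minimal sketch of the storage format just described: each entry is a triple key-value pair of subsystem serial number, word-segment number and professional code, persisted as a Json file. The field names and code values are assumptions for illustration.

```python
# Sketch of the triple key-value-pair professional dictionary persisted as
# Json, as described above. Field names and code values are assumed.
import json
import os
import tempfile

dictionary = {
    "subsystem_10": {"segment_no": 1, "code": "HAD"},   # illustrative entries
    "subsystem_20": {"segment_no": 2, "code": "MAA"},
}

def save_dictionary(path: str, d: dict) -> None:
    with open(path, "w", encoding="utf-8") as f:
        json.dump(d, f, ensure_ascii=False, indent=2)

def load_dictionary(path: str) -> dict:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "kks_dictionary.json")
save_dictionary(path, dictionary)
restored = load_dictionary(path)   # round-trips unchanged
```

`ensure_ascii=False` keeps any Chinese subsystem descriptions readable in the stored file; a data interface service would simply serve `load_dictionary` results over whatever transport the storage unit exposes.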
Preferably, in step 3 the original coding rule is the KKS coding rule adopted when the power plant was designed, while the new coding rule is the KKS coding rule set by the power generation group; the two rules serve the same purpose but produce different coding results. In step 3, the acquisition unit inside the acquisition equipment collects data by running a Python script and transmits the collected data outward through a data interface.
Preferably, step 4 specifically comprises the following steps:
Step 4.1, using the professional dictionary obtained in step 2, segment the coded data in the KKS coding data set to obtain the corresponding word-segment groups, whose elements are the words that constitute the coded data; a code is thus divided into several words, each word corresponding to a serial number. The words of each segmented code are digitally encoded, and the numbers are recombined into a coding vector, yielding a digitally coded data set;
Step 4.2, divide the digitally coded data set obtained in step 4.1 into a training set and a test set by random sampling.
Preferably, when the word-segmentation results obtained for the coded data of the KKS coding data set in step 4.1 differ in length, the longest segmentation count is taken as the standard and shorter segmentations are extended by appending the corresponding number of placeholders; the ratio of the training set to the test set in step 4.2 is 4:1.
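The padding and 4:1 split just described can be sketched as follows; the placeholder id `PAD_ID` and the sample data are illustrative assumptions.

```python
# Sketch of the preprocessing in steps 4.1-4.2: pad each segmented code to the
# longest segmentation with placeholder ids, then split 4:1 by random sampling.
# PAD_ID and the sample data are illustrative assumptions.
import random

PAD_ID = 0

def pad_codes(coded, pad_id=PAD_ID):
    longest = max(len(c) for c in coded)            # longest segmentation count
    return [c + [pad_id] * (longest - len(c)) for c in coded]

def train_test_split(data, ratio=0.8, seed=42):
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(data) * ratio)                    # 4:1 when ratio = 0.8
    return [data[i] for i in idx[:cut]], [data[i] for i in idx[cut:]]

codes = [[3, 7], [4, 1, 9], [2]]
padded = pad_codes(codes)                           # every row now has length 3
train, test = train_test_split(padded * 10)
```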
Preferably, step 5 specifically comprises the following steps:
Step 5.1, through a fully connected layer, perform word-vectorization coding on the codes under the original coding rule and the codes under the new coding rule, obtaining vectorization results of fixed dimension for each:
In the above formula, the superscript 1st indicates the original coding rule and the superscript 2nd indicates the new coding rule;
Step 5.2, compute attention features from the vectorization results obtained in step 5.1 through the self-attention layer:
In the above equation, the fixed-dimension vectorization result is input to the self-attention layer, which is formed by stacking N self-attention modules; formally, the output of each module serves as the input of the following one, and each self-attention module is divided into two layers. The attention matrix is obtained through the self-attention network:
In the above formula, the self-attention network obtains the feature values (the query, key and value of standard self-attention) through their respective weight matrices; softmax is then applied to obtain the attention weights based on these feature values, where the vector dimension of the query and key is d. The attention features are then computed through the feed-forward fully connected layer, the computation taking place in the n-th self-attention module;
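The computation in step 5.2 can be sketched in NumPy, assuming the standard scaled-dot-product form of self-attention followed by a feed-forward layer; the random placeholder weights, the ReLU feed-forward and all dimensions are illustrative assumptions, since the patent's exact formulas are not reproduced here.

```python
# NumPy sketch of one self-attention module as described in step 5.2, assuming
# standard scaled dot-product attention. Weights are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 20, 32                                  # sequence length, vector dim

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # feature values Q, K, V
    scores = q @ k.T / np.sqrt(d)                    # scaled dot products
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)      # softmax attention weights
    return weights @ v                               # attention matrix

x = rng.standard_normal((seq_len, d))
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
attn = self_attention(x, w_q, w_k, w_v)
w_ff = rng.standard_normal((d, d))
features = np.maximum(attn @ w_ff, 0.0)              # feed-forward layer (ReLU)
```

Stacking N such modules, with each module's output feeding the next, gives the self-attention layer the patent describes.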
Step 5.3, perform feature-fusion calculation on the vectorization results obtained in step 5.1 and the self-attention features obtained in step 5.2; the fusion computing module is formed by stacking M fused self-attention modules, where, formally, the output of each module serves as the input of the following one; each fused self-attention module consists of three layers:
In the above formula, the input first passes through the self-attention network to obtain the attention matrix:
In the above formula, the self-attention network obtains the feature values through their respective weight matrices; softmax is then applied to obtain the attention weights based on these feature values, where d is the vector dimension of the query and key. The two attention matrices and the attention features are then fused to obtain the fused attention matrix; finally, the feed-forward fully connected layer computes the fused attention features from the fused attention matrix, the computation taking place in the m-th fused self-attention module;
Step 5.4, from the fused attention features obtained in step 5.3, compute the probability mapping value p of the fused attention features in the word vector using the normalized exponential function (softmax):
Step 5.5, using the probability mapping value p of the fused attention feature in the word vector obtained in step 5.4, evaluate the model training result with the cross entropy serving as the loss function for training the KKS generation model based on the attention stacking network; determine the stopping condition of training from the number of iterations and the convergence value of the loss function; if the model is to be trained further, repeat steps 5.1 to 5.4 until the iteration count is reached; once reached, the KKS generation model based on the attention stacking network is obtained and stored in the computing unit.
Preferably, the input of each layer of the self-attention network in step 5.2 incorporates the output features of the previous layer, and the result of each self-attention network layer is normalized through a skip connection and then combined with the features of the current output. In step 5.5, the corresponding word-segmentation results are matched according to the index of the maximum value of the word-vector probability in each dimension, and the word segments are then spliced in order to reconstruct the KKS code. Also in step 5.5, the computing unit provides the model runtime environment with the Tensorflow framework and provides model compression and acceleration with the TensorRT optimization tool.
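The argmax-and-splice reconstruction just described can be sketched as follows; the vocabulary and the one-hot probability matrix are illustrative assumptions.

```python
# Sketch of KKS reconstruction in step 5.5: take the index of the maximum
# probability in each position's word vector, match it to a word segment, and
# splice the segments in order. The vocabulary below is an assumption.
import numpy as np

vocab = ["<pad>", "10", "HAD", "01"]                 # index -> word segment

def reconstruct_kks(prob, vocab, pad="<pad>"):
    """prob: (seq_len, vocab_size) probability matrix for one code."""
    idx = prob.argmax(axis=-1)                       # index of maximum per position
    return "".join(vocab[i] for i in idx if vocab[i] != pad)

prob = np.zeros((4, len(vocab)))
prob[0, 1] = prob[1, 2] = prob[2, 3] = prob[3, 0] = 1.0
code = reconstruct_kks(prob, vocab)                  # splices "10" + "HAD" + "01"
```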
Preferably, in step 6, when the similarity of the KKS coding predicted value under the new coding rule is computed in string form with the minimum edit distance, a similarity result closer to 1 indicates more similar KKS codes, and a result closer to 0 indicates more dissimilar ones.
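A minimal sketch of this string-form similarity: the Levenshtein (minimum edit) distance between two KKS codes, normalized into [0, 1] so that 1 means identical and 0 maximally dissimilar. Normalizing by the longer string's length and the `top_g` helper are assumptions about details the patent does not spell out.

```python
# Sketch of the string-form similarity in step 6: minimum edit distance
# between two KKS codes, normalized so 1 = identical, 0 = maximally dissimilar.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

def similarity(a: str, b: str) -> float:
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def top_g(candidates, predicted, g=8):
    """Return the g stored KKS codes most similar to the predicted one."""
    return sorted(candidates, key=lambda c: similarity(c, predicted),
                  reverse=True)[:g]
```

Working directly on strings this way avoids the extra vectorization round trip the patent explicitly rules out for step 6.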
The beneficial effects of the invention are: the complex power generation process is systematized, a digital twin based on the full power production process is constructed, and group-level digital twin technology is instantiated; the invention further constructs a KKS generation model based on the attention stacking network, stimulates the vitality of the power plant's existing digital infrastructure, opens up data islands, breaks down information barriers, promotes real-time fusion between the physical space and the information space, and accomplishes the intelligent mapping task under multiple KKS coding systems.
Drawings
FIG. 1 is an overview diagram of the digital-twin-based group-level KKS coding intelligent mapping recommendation method;
FIG. 2 is a diagram of a digital twin based professional dictionary;
FIG. 3 is a schematic diagram of an attention stacking network;
FIG. 4 is a logic relationship diagram of the acquisition unit, the storage unit, and the calculation unit;
FIG. 5 is a schematic view of an acquisition device;
FIG. 6 is a schematic view of a storage device;
FIG. 7 is a schematic diagram of a computing device.
Detailed Description
The present invention will be further described with reference to the following examples, which are set forth merely to aid the understanding of the invention. It should be noted that a person skilled in the art can make several modifications to the invention without departing from its principle, and such modifications also fall within the protection scope of the claims of the present invention.
That devices of the same model or type share a common digital twin is one of the main characteristics of a group-level digital twin architecture. To ensure consistency between the digital space and the physical space, standardization of the underlying coding is the basis of a device's digital twin. At present, the production systems and other auxiliary systems of most power plants have used their own coding rules for many years, and modifying the coding specifications of systems already in operation is very difficult. In addition, owing to technology updates, the existing codes cannot cover all devices. If group-level digital twin technology is to be popularized, each power plant must construct a unified coding map from its own codes, but this work is extremely tedious. Therefore, the invention provides a digital-twin-based group-level KKS coding intelligent mapping recommendation method.
Example one
The embodiment of the application provides the digital-twin-based group-level KKS coding intelligent mapping recommendation method shown in FIG. 1:
In the formula of step 1, the subsystem symbols represent the individual subsystems that make up the digital twin, and the resulting symbol represents the digital twin based on the full power production process;
Step 4, perform data preprocessing on the KKS coding data set obtained in step 3 to obtain a training set and a test set;
Step 5, train a KKS generation model based on the attention stacking network with the training set obtained in step 4, the model being formed by stacking multiple layers of attention networks;
Step 5.1, through a fully connected layer, perform word-vectorization coding on the codes under the original coding rule and the codes under the new coding rule, obtaining vectorization results of fixed dimension for each:
In the above formula, the superscript 1st indicates the original coding rule and the superscript 2nd indicates the new coding rule;
Step 5.2, compute attention features from the vectorization results obtained in step 5.1 through the self-attention layer:
In the above equation, the fixed-dimension vectorization result is input to the self-attention layer, which is formed by stacking N self-attention modules; formally, the output of each module serves as the input of the following one, and each self-attention module is divided into two layers. The attention matrix is obtained through the self-attention network:
In the above formula, the self-attention network obtains the feature values (the query, key and value of standard self-attention) through their respective weight matrices; softmax is then applied to obtain the attention weights based on these feature values, where the vector dimension of the query and key is d. The attention features are then computed through the feed-forward fully connected layer, the computation taking place in the n-th self-attention module;
Step 5.3, perform feature-fusion calculation on the vectorization results obtained in step 5.1 and the self-attention features obtained in step 5.2; the fusion computing module is formed by stacking M fused self-attention modules, where, formally, the output of each module serves as the input of the following one; each fused self-attention module consists of three layers:
In the above formula, the input first passes through the self-attention network to obtain the attention matrix:
In the above formula, the self-attention network obtains the feature values through their respective weight matrices; softmax is then applied to obtain the attention weights based on these feature values, where d is the vector dimension of the query and key. The two attention matrices and the attention features are then fused to obtain the fused attention matrix; finally, the feed-forward fully connected layer computes the fused attention features from the fused attention matrix, the computation taking place in the m-th fused self-attention module;
Step 5.4, from the fused attention features obtained in step 5.3, compute the probability mapping value p of the fused attention features in the word vector using the normalized exponential function (softmax):
Step 5.5, using the probability mapping value p of the fused attention feature in the word vector obtained in step 5.4, evaluate the model training result with the cross entropy serving as the loss function for training the KKS generation model based on the attention stacking network; determine the stopping condition of training from the number of iterations and the convergence value of the loss function; if the model is to be trained further, repeat steps 5.1 to 5.4 until the iteration count is reached; once reached, the KKS generation model based on the attention stacking network is obtained and stored in the computing unit;
Step 6, using the professional dictionary obtained in step 2 and the probability mapping value p of the fused attention feature in the word vector obtained in step 5, generate the KKS coding predicted value under the new coding rule with a reconstruction function; compute the similarity of the predicted value in string form with the minimum edit distance, without converting to vector form; and obtain the g most similar KKS code values for KKS code mapping recommendation:
In the above formula, the reconstruction function yields the KKS coding predicted value; the minimum edit distance function then computes the similarity between KKS codes; finally, the g most similar KKS code values are selected for KKS code mapping recommendation, and the recommendation result is stored in the storage unit.
Example two
On the basis of the first embodiment, the second embodiment of the present application applies the digital-twin-based group-level KKS coding intelligent mapping recommendation method of the first embodiment to the digital twin standard coding project of a certain power generation group:
Step 2.1, perform word segmentation on the detailed classification of the five main systems and related subsystems contained in the digital twin to obtain professional word-segmentation data, and construct a data structure of triple key-value pairs;
Step 2.2, construct a multi-level professional dictionary from the professional word-segmentation data obtained in step 2.1; it consists mainly of a main-system dictionary and a subsystem dictionary and is stored in a storage unit, whose structure is shown in FIG. 6;
Step 4, perform data preprocessing on the KKS coding data set obtained in step 3; the specific steps are as follows:
Step 4.1, using the professional dictionary obtained in step 2, segment the coded data with the Jieba word-segmentation tool to obtain the corresponding word-segmentation results, whose elements represent the individual word segments; each word segment is then digitally encoded according to the professional dictionary, each segment being replaced by its corresponding numeric value in the dictionary;
Step 4.2, divide the digitally coded data set obtained in step 4.1 into a training set and a test set by random sampling, with 8000 groups of data in the training set and 2000 groups in the test set;
Step 5, train the attention stacking network model with the training set obtained in step 4.2; the model is composed of multiple layers of attention stacking networks, as shown in FIG. 3, and the specific steps are as follows:
Step 5.1, perform word-vectorization coding on the input original coded data to obtain a vector of fixed dimension, the fixed dimension being 32:
The dimension of the vectorized coded data is 64 × 20 × 32, where 64 is the batch size, so that 64 coding pairs can be trained simultaneously in one batch, and each coding pair comprises 20 groups of codes;
Step 5.2, compute attention features from the vectorization result obtained in step 5.1 through the self-attention layer, which is formed by stacking 3 self-attention modules end to end, each self-attention module consisting of a two-layer structure:
The input features first pass through the self-attention network to obtain the attention matrix, with matrix dimension 64 × 20 × 32; the attention features are then computed through the feed-forward fully connected layer, with feature dimension 64 × 20 × 32;
Step 5.3, perform feature-fusion calculation on the vectorization result obtained in step 5.1 and the self-attention features obtained in step 5.2; the fusion stage is formed by stacking 3 fused self-attention modules end to end, each fused self-attention module consisting of a three-layer structure:
The input features first pass through the self-attention network to obtain the attention matrix, with matrix dimension 64 × 20 × 32; the two attention matrices and the attention features are then fused to obtain the fused attention matrix, with matrix dimension 64 × 20 × 32; finally, the feed-forward fully connected layer computes the fused attention features from the fused attention matrix, with feature dimension 64 × 20 × 32;
Step 5.4, from the fused attention features obtained in step 5.3, compute the probability mapping values p of all features in the word vectors; the dimension of the probability matrix is 64 × 20 × 1365:
Step 5.5, evaluate the model training result with the word-vector probability mapping values p obtained in step 5.4, using the cross entropy as the loss function of model training; the model tends to converge after 12K training iterations, at which point the loss function value is 0.132; the resulting KKS code generation model based on the attention stacking network is saved and stored in the computing unit for model reuse.
Step 6, from the professional dictionary obtained in step 2 and the probability mapping values p obtained in step 5, generate the new KKS codes with the reconstruction function and compute their similarity with the minimum edit distance, obtaining the 8 most similar KKS code values for KKS code mapping recommendation:
The reconstruction function yields the KKS coding predicted values, and it is verified at the same time that the generated codes conform to the hierarchy of each system in the digital twin; the minimum edit distance function then computes the similarity between KKS codes; finally, the 10 most similar KKS codes are selected, stored in the storage unit, and used by the operator for confirming the mapping result. In this embodiment, after a gas-fired power plant belonging to the group adopted the KKS generation model based on the attention stacking network for standardized coding, the coding results obtained were as shown in Table 1 below:
TABLE 1 Coding results of a gas power plant subordinate to a certain group
As shown in Table 1, intelligent mapping recommendation statistics over 1442 measuring-point codes give the digital-twin-based group-level KKS coding intelligent mapping recommendation method a recommendation accuracy of 84.3%, which meets the working requirements of power plant managers.
In this embodiment, a KKS generation model based on the attention stacking network was also used to perform standardized coding for a hydraulic power plant subordinate to the group, with coding results as shown in Table 2 below:
TABLE 2 Coding results of a hydraulic power plant subordinate to a certain group
As shown in Table 2, intelligent mapping recommendation statistics over 5113 measuring-point codes give a recommendation accuracy of 82.7%, which meets the working requirements of power plant managers.
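The recommendation step described above, reconstruction followed by minimum-edit-distance ranking, can be sketched in pure Python. The KKS code strings and helper names below are illustrative assumptions; the similarity normalization follows claim 10's convention (1 means identical, 0 means fully dissimilar):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming (Levenshtein) minimum edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Normalized to [0, 1]: closer to 1 means more similar KKS codes."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def recommend(predicted: str, known_codes, g: int = 8):
    """Return the g known KKS codes most similar to the predicted code."""
    ranked = sorted(known_codes, key=lambda c: similarity(predicted, c),
                    reverse=True)
    return ranked[:g]

# hypothetical KKS-style codes, not taken from the embodiment's tables
codes = ["10LAC10CT001", "10LAC10CT002", "10LAB10CT001", "20MAG01CP101"]
top = recommend("10LAC10CT003", codes, g=2)
```

The operator would then pick the final mapping from the `g` recommended candidates.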
Claims (10)
1. A group-level KKS coding intelligent mapping recommendation method based on a digital twin, characterized by comprising the following steps:
step 1, according to the digital twin modeling process, first systematizing the power generation process and constructing a digital twin based on the full power production process, wherein the digital twin is composed of the individual power subsystems;
step 2, constructing a professional dictionary from the digital twin obtained in step 1;
Step 3, collecting, with acquisition equipment, the codes under the original coding rule and the codes under the new coding rule, wherein each nth code is coded according to the original coding rule or the new coding rule respectively; manually mapping and matching the codes under the original coding rule to the codes under the new coding rule, and constructing a KKS coding data set;
step 4, performing data preprocessing on the KKS coding data set obtained in step 3 to obtain a training set and a test set;
Step 5, training a KKS generation model based on the attention stacking network with the training set obtained in step 4, the model being formed by stacking multiple layers of attention networks;
step 6, according to the professional dictionary obtained in step 2 and the probability mapping value p of the fused attention features over the word vectors obtained in step 5, generating a KKS code prediction value under the new coding rule with a reconstruction function, and calculating the string similarity of the KKS code prediction value under the new coding rule with the minimum edit distance; obtaining the g most similar KKS code values for KKS code mapping recommendation:
in the above formula, the reconstruction function yields the KKS code prediction value; the similarity of the KKS codes is then calculated with the minimum edit distance function; finally, the g most similar KKS code values are selected for KKS code mapping recommendation, and the recommendation result is stored in the storage unit.
2. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, characterized in that: when the power generation process is systematized in step 1, the process is divided according to the different power subsystems, among which three relations exist: inclusion, parallel, and series.
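One minimal way to sketch a digital twin whose subsystems are related by inclusion, parallel, and series composition is a tree whose nodes carry a composition mode. The `Subsystem` structure, its field names, and the toy plant below are hypothetical, not the patent's model:

```python
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    """A node of the digital twin: a power subsystem that may include
    child subsystems, which are composed in 'parallel' or in 'series'."""
    name: str
    relation: str = "parallel"            # how the children are composed
    children: list = field(default_factory=list)

    def include(self, child: "Subsystem") -> "Subsystem":
        """The 'inclusion' relation: nest a child subsystem."""
        self.children.append(child)
        return self

# toy full-production-process twin of a gas power plant
plant = Subsystem("gas_power_plant", relation="series")
plant.include(Subsystem("gas_turbine"))
plant.include(Subsystem("hrsg"))
plant.include(Subsystem("steam_turbine"))

def count_subsystems(node: Subsystem) -> int:
    """Total number of nodes making up the digital twin."""
    return 1 + sum(count_subsystems(c) for c in node.children)
```

Walking this tree per subsystem description is what step 2's dictionary construction would operate on.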
3. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, wherein step 2 specifically comprises the steps of:
step 2.1, performing word segmentation on the detailed description of each subsystem contained in the digital twin to obtain professional word-segmentation data;
step 2.2, constructing a multi-level professional dictionary according to the professional word segmentation data obtained in the step 2.1:
4. The digital twin-based group-level KKS coding intelligent mapping recommendation method as claimed in claim 3, wherein the professional dictionary in step 2 uses, as its data structure, a three-element key-value pair consisting of the serial number, the segmented word, and the professional code of the power subsystem, stored in a storage unit of the storage device; the storage unit accesses the file in Json format and provides a data interface service.
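A minimal sketch of storing the professional dictionary as Json triples; the field names (`serial`, `word`, `code`), the example entries, and the temp-file path are assumptions, since the claim does not fix a schema:

```python
import json
import os
import tempfile

# illustrative professional-dictionary entries: each term is stored as a
# three-element record of serial number, segmented word, and professional code
entries = [
    {"serial": 1, "word": "gas turbine", "code": "MB"},
    {"serial": 2, "word": "generator",   "code": "MK"},
    {"serial": 3, "word": "condenser",   "code": "MAG"},
]

def save_dictionary(path, entries):
    """Persist the dictionary to a Json file, as in the storage unit."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, ensure_ascii=False, indent=2)

def load_dictionary(path):
    """Load the Json file back and index the triples by segmented word."""
    with open(path, encoding="utf-8") as f:
        return {e["word"]: e for e in json.load(f)}

path = os.path.join(tempfile.gettempdir(), "kks_dictionary.json")
save_dictionary(path, entries)
lookup = load_dictionary(path)
```

A data interface service would then serve `lookup`-style queries over this file.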
5. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, characterized in that: in step 3, the original coding rule is the KKS coding rule adopted by the design institute when designing the power plant, and the new coding rule is the KKS coding rule formulated by the power generation group; in step 3, the acquisition unit in the acquisition equipment performs data acquisition by running a Python script and transmits the acquired data outward through a data interface.
6. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1 or 3, characterized in that step 4 comprises the following steps:
step 4.1, according to the professional dictionary obtained in step 2, performing word segmentation on the coded data in the KKS coding data set to obtain the corresponding segmented word groups; the words composing the coded data are obtained by dividing each code into words, and each word corresponds to a serial number; digitally encoding each segmented word group and recombining the numbers to form a coded vector, obtaining a digital coding data set;
7. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 6, characterized in that: in step 4.1, when the word-segmentation results obtained from the coded data in the KKS coding data set are of inconsistent length, the longest segmentation is taken as the standard, and shorter segmentations are extended by appending the corresponding number of placeholders; the ratio of the training set to the test set in step 4.2 is 4:1.
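The numeric encoding of claim 6 with the padding of claim 7 and the 4:1 split might be sketched as follows; the placeholder token and function names are assumptions:

```python
PAD = "<pad>"  # placeholder used to extend short segmentations (name assumed)

def encode_codes(segmented_codes):
    """Map each segmented KKS code to a fixed-length vector of word ids,
    padding with placeholders up to the longest segmentation."""
    vocab = {PAD: 0}
    for words in segmented_codes:
        for w in words:
            vocab.setdefault(w, len(vocab))   # each word gets a serial number
    max_len = max(len(words) for words in segmented_codes)
    vectors = []
    for words in segmented_codes:
        padded = list(words) + [PAD] * (max_len - len(words))
        vectors.append([vocab[w] for w in padded])
    return vectors, vocab

def split_4_to_1(samples):
    """4:1 train/test split as in step 4.2."""
    cut = len(samples) * 4 // 5
    return samples[:cut], samples[cut:]

# hypothetical segmentations of two KKS-style codes
segs = [["10", "LAC", "10", "CT", "001"], ["20", "MAG", "CP"]]
vectors, vocab = encode_codes(segs)
```

The resulting id vectors are what the word-vectorization layer of step 5.1 would consume.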
8. The digital twin based group-level KKS coding intelligent mapping recommendation method according to claim 1, 3 or 6, characterized in that step 5 comprises the following steps:
step 5.1, performing word-vectorization coding, through the fully connected layer, on the codes under the original coding rule and the codes under the new coding rule, respectively obtaining vectorization results of fixed dimension:
In the above formula, the superscript 1st denotes the original coding rule and the superscript 2nd denotes the new coding rule;
step 5.2, calculating the attention features from the vectorization results obtained in step 5.1 through the self-attention layer:
in the above equation, the vectorization results of fixed dimension are input to the self-attention layer, which is formed by stacking N self-attention modules; the output of each module serves as the input of the next, and each self-attention module is divided into two layers; the attention matrix is obtained through the self-attention network:
In the above formula, the self-attention network obtains the feature values through the respective weights; softmax is then applied to obtain attention weights based on these feature values, where d is the vector dimension of the feature values; the attention features are then calculated through the feed-forward fully connected layer, the subscript denoting the calculation in the nth self-attention module;
step 5.3, performing feature-fusion calculation on the vectorization results obtained in step 5.1 and the self-attention features obtained in step 5.2; the fusion calculation module is formed by stacking M fusion self-attention modules, the output of each module serving as the input of the next; each fusion self-attention module consists of a three-layer structure:
in the above formula, the first layer is the self-attention network, through which the attention matrix is first obtained:
In the above formula, the self-attention network obtains the feature values through the respective weights; softmax is then applied to obtain attention weights based on these feature values, where d is the vector dimension of the feature values; the two attention matrices and the attention features are then fused to obtain a fused attention matrix; finally, the feed-forward fully connected layer calculates the fused attention features from the fused attention matrix, the subscript denoting the calculation in the mth fusion self-attention module;
step 5.4, calculating, with the normalized exponential function (softmax), the probability mapping value p of the fused attention features obtained in step 5.3 over the word vectors:
Step 5.5, evaluating the model training result by taking the cross entropy of the probability mapping value p obtained in step 5.4 as the loss function for training the KKS generation model based on the attention stacking network; determining the stopping condition of the training according to the number of iterations and the convergence value of the loss function; if the model is to be trained further, repeating steps 5.1 to 5.4 until the number of iterations is reached; once the number of iterations is reached, obtaining the KKS generation model based on the attention stacking network and storing it in the computing unit.
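The self-attention computation inside each module of step 5.2 follows the standard scaled-dot-product form softmax(Q K^T / sqrt(d)) V. A minimal single-head pure-Python sketch with toy dimensions follows; this illustrates the mechanism, not the patent's stacked TensorFlow network:

```python
import math

def matmul(A, B):
    """Naive matrix product over nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def self_attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V,
    a single head of one self-attention module."""
    d = len(K[0])
    KT = [list(col) for col in zip(*K)]
    scores = matmul(Q, KT)                              # Q K^T
    weights = [softmax([s / math.sqrt(d) for s in row]) # attention weights
               for row in scores]
    return matmul(weights, V)                           # weighted sum of V

# toy sequence of 2 tokens with feature dimension 2
Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = self_attention(Q, K, V)
```

Each output row is a convex combination of the value rows, weighted by how strongly the token attends to each position.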
9. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 8, characterized in that: the input of each layer of the self-attention network in step 5.2 combines the output features of the previous layer; the result of each self-attention layer is normalized through a skip connection and then combined with the features in the current output result; in step 5.5, the corresponding word-segmentation results are matched according to the index of the maximum probability in each dimension of the word vector, and the KKS codes are reconstructed by splicing the segmented words in order; in step 5.5, the computing unit provides the model runtime environment with the Tensorflow framework and provides model compression and acceleration with the TensorRT optimization tool.
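The maximum-probability index matching and splicing described in claim 9 can be sketched as follows; the vocabulary, placeholder name, and probability rows are toy values chosen for illustration:

```python
def reconstruct_kks(prob_rows, id_to_word, placeholder="<pad>"):
    """Pick the highest-probability word at each position (the
    max-probability index matching of claim 9) and splice the
    segmented words in order into a KKS code."""
    words = []
    for probs in prob_rows:
        idx = max(range(len(probs)), key=probs.__getitem__)
        word = id_to_word[idx]
        if word != placeholder:          # drop padding before splicing
            words.append(word)
    return "".join(words)

# toy vocabulary and per-position probability distributions
id_to_word = {0: "<pad>", 1: "10", 2: "LAC", 3: "CT", 4: "001"}
prob_rows = [[0.00, 0.90, 0.05, 0.03, 0.02],
             [0.00, 0.00, 0.80, 0.10, 0.10],
             [0.10, 0.00, 0.00, 0.70, 0.20],
             [0.00, 0.10, 0.00, 0.00, 0.90],
             [0.90, 0.00, 0.05, 0.05, 0.00]]
code = reconstruct_kks(prob_rows, id_to_word)
```

The reconstructed string is then compared against known codes by the minimum edit distance of step 6.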
10. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, characterized in that: in step 6, when the minimum edit distance is used to calculate the string similarity of the KKS code prediction values under the new coding rule, a similarity result closer to 1 indicates more similar KKS codes, and a result closer to 0 indicates more dissimilar KKS codes.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110905893.4A CN113673152B (en) | 2021-08-09 | 2021-08-09 | Group level KKS coding intelligent mapping recommendation method based on digital twin |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113673152A true CN113673152A (en) | 2021-11-19 |
CN113673152B CN113673152B (en) | 2024-06-14 |
Family
ID=78541823
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110905893.4A Active CN113673152B (en) | 2021-08-09 | 2021-08-09 | Group level KKS coding intelligent mapping recommendation method based on digital twin |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113673152B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190340467A1 (en) * | 2018-05-06 | 2019-11-07 | Strong Force TX Portfolio 2018, LLC | Facility level transaction-enabling systems and methods for provisioning and resource allocation |
CN110781680A (en) * | 2019-10-17 | 2020-02-11 | 江南大学 | Semantic similarity matching method based on twin network and multi-head attention mechanism |
US20200293394A1 (en) * | 2019-03-13 | 2020-09-17 | Accenture Global Solutions Limited | Interactive troubleshooting assistant |
CA3158765A1 (en) * | 2019-11-25 | 2021-06-03 | Strong Force Iot Portfolio 2016, Llc | Intelligent vibration digital twin systems and methods for industrial environments |
Non-Patent Citations (1)
Title |
---|
JIANG Chengfeng et al.: "Design and Application of KKS Signal Identification for Cascade Hydropower Plants in a River Basin", China Plant Engineering, 23 November 2020 (2020-11-23), pages 8 - 9 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114911797A (en) * | 2022-05-05 | 2022-08-16 | 福建安能数通科技有限公司 | Method for fusing and applying KKS code and grid code |
CN115689399A (en) * | 2022-10-10 | 2023-02-03 | 中国长江电力股份有限公司 | Hydropower equipment information model rapid construction method based on industrial internet platform |
CN115689399B (en) * | 2022-10-10 | 2024-05-10 | 中国长江电力股份有限公司 | Rapid construction method of hydropower equipment information model based on industrial Internet platform |
Also Published As
Publication number | Publication date |
---|---|
CN113673152B (en) | 2024-06-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110943857B (en) | Power communication network fault analysis and positioning method based on convolutional neural network | |
CN116192971B (en) | Intelligent cloud energy operation and maintenance service platform data management method | |
CN113673152B (en) | Group level KKS coding intelligent mapping recommendation method based on digital twin | |
CN106067066A (en) | Method for diagnosing fault of power transformer based on genetic algorithm optimization pack algorithm | |
CN106874963B (en) | A kind of Fault Diagnosis Method for Distribution Networks and system based on big data technology | |
CN113361559B (en) | Multi-mode data knowledge information extraction method based on deep-width combined neural network | |
CN117096867A (en) | Short-term power load prediction method, device, system and storage medium | |
CN114138759B (en) | Secondary equipment fault processing pushing method and system based on knowledge graph reasoning | |
CN110515931A (en) | A kind of capacitance type equipment failure prediction method based on random forests algorithm | |
CN115526236A (en) | Text network graph classification method based on multi-modal comparative learning | |
CN104732067A (en) | Industrial process modeling forecasting method oriented at flow object | |
CN113361803A (en) | Ultra-short-term photovoltaic power prediction method based on generation countermeasure network | |
CN114021758A (en) | Operation and maintenance personnel intelligent recommendation method and device based on fusion of gradient lifting decision tree and logistic regression | |
CN113343643B (en) | Supervised-based multi-model coding mapping recommendation method | |
CN117172413B (en) | Power grid equipment operation state monitoring method based on multi-mode data joint characterization and dynamic weight learning | |
CN114238524A (en) | Satellite frequency-orbit data information extraction method based on enhanced sample model | |
CN113536508A (en) | Method and system for classifying manufacturing network nodes | |
Balazs et al. | Hierarchical-interpolative fuzzy system construction by genetic and bacterial memetic programming approaches | |
CN113643141B (en) | Method, device, equipment and storage medium for generating interpretation conclusion report | |
CN115438190B (en) | Power distribution network fault auxiliary decision knowledge extraction method and system | |
CN116796617A (en) | Rolling bearing equipment residual life prediction method and system based on data identification | |
CN114969237A (en) | Automatic address analyzing and matching method for geographic information system | |
CN113673202A (en) | Double-layer matching coding mapping recommendation method based on hybrid supervision | |
CN114692495A (en) | Efficient complex system reliability evaluation method based on reliability block diagram | |
Li et al. | Power grid fault detection method based on cloud platform and improved isolated forest |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20220818 Address after: Room 307, No. 32, Gaoji Street, Xihu District, Hangzhou City, Zhejiang Province, 310002 Applicant after: Zhejiang Zheneng Digital Technology Co., Ltd. Applicant after: ZHEJIANG ENERGY R & D INSTITUTE Co.,Ltd. Address before: 5 / F, building 1, No. 2159-1, yuhangtang Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province Applicant before: ZHEJIANG ENERGY R & D INSTITUTE Co.,Ltd. |
GR01 | Patent grant | ||