CN113673152A - Digital twin body-based group-level KKS coding intelligent mapping recommendation method - Google Patents

Digital twin body-based group-level KKS coding intelligent mapping recommendation method

Info

Publication number
CN113673152A
Authority
CN
China
Prior art keywords
attention
kks
coding
digital twin
self
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110905893.4A
Other languages
Chinese (zh)
Other versions
CN113673152B (en)
Inventor
傅骏伟
俞荣栋
柴真琦
郭鼎
王豆
罗一凡
戴程鹏
邵建宇
高凯楠
徐哲源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Zheneng Digital Technology Co Ltd
Zhejiang Energy Group Research Institute Co Ltd
Original Assignee
Zhejiang Energy Group Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Energy Group Research Institute Co Ltd filed Critical Zhejiang Energy Group Research Institute Co Ltd
Priority to CN202110905893.4A priority Critical patent/CN113673152B/en
Publication of CN113673152A publication Critical patent/CN113673152A/en
Application granted granted Critical
Publication of CN113673152B publication Critical patent/CN113673152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • G06F30/27Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/28Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply


Abstract

The invention relates to a digital twin-based group-level KKS coding intelligent mapping recommendation method comprising the following steps: constructing a digital twin of the full power production process; constructing a professional dictionary from the obtained digital twin; and collecting, with acquisition equipment, the codes under the original coding rule and under the new coding rule to construct a KKS coding data set. The beneficial effects of the invention are as follows: the complex power generation process is systematized, a digital twin of the full power production process is constructed, and a group-level digital twin technology is instantiated; the invention further constructs a KKS generation model based on an attention stacking network, revitalizes the existing digital infrastructure of power plants, opens up data islands, breaks down information barriers, promotes real-time fusion between the physical space and the information space, and accomplishes the intelligent mapping task across multiple KKS coding systems.

Description

Digital twin body-based group-level KKS coding intelligent mapping recommendation method
Technical Field
The invention belongs to the technical field of power plant information, and particularly relates to a digital twin-based group-level KKS coding intelligent mapping recommendation method.
Background
A digital twin is a purpose-built digital representation of a real thing, that is, the real-time presentation in a virtual digital world of the actual state of a physical entity. With the popularization and application of the digital twin concept, establishing a one-to-one mapping between the entities of the information space and those of the physical space has become an important piece of basic work.
Currently, the main basis supporting the application of digital twins in the power industry is the power plant identification system (KKS), which enables efficient identification and management of power plant equipment assets. However, KKS coding itself has many limitations:
1. at the time of compilation, only the production needs of the individual power plant were considered, not the universality of the codes;
2. the codes are compiled by electric power design institutes and equipment manufacturers, and operation and maintenance personnel lack the ability to maintain them;
3. the KKS coding scheme cannot respond quickly to the iterative updating of new technologies.
Therefore, a precondition for promoting new technologies, especially digital twin technology, in the power industry is to make the KKS coding system automatic and intelligent. Although invention patent CN201711434013.X implements a digital twin mapping model using an OPC UA server, an OPC UA client and a data mapping dictionary, it is directed only at coding rules within the OPC UA protocol and cannot be extended. Invention patent CN201910956494.3 adopts heterogeneous protocol conversion to implement a virtual-real interactive encoding and decoding process of data, but still has identification problems when the same entity appears under different codes. How to instantiate a group-level digital twin technology, revitalize the existing digital infrastructure of power plants, open up data islands, break down information barriers, promote real-time fusion between the physical space and the information space, and accomplish the intelligent mapping task across multiple KKS coding systems therefore remains a difficult problem to be solved urgently.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and provides a digital twin-based group-level KKS coding intelligent mapping recommendation method.
The group-level KKS coding intelligent mapping recommendation method based on the digital twin comprises the following steps:
Step 1, according to the digital twin modeling process, first systematize the complex power generation flow and construct the digital twin of the full power production process as the collection of subsystems that make up the twin, where each subsystem S represents one constituent of the digital twin of the full power production process;
Step 2, construct a professional dictionary from the digital twin obtained in step 1;
Step 3, use acquisition equipment to collect the codes under the original coding rule and the codes under the new coding rule, the nth collected item under each rule being the nth code compiled according to that rule; map and match the codes under the original coding rule to the codes under the new coding rule manually, drawing on the experience of production and operation personnel, and construct the KKS coding data set from the matched pairs;
Step 4, preprocess the KKS coding data set obtained in step 3 to obtain a training set and a test set;
Step 5, train a KKS generation model based on an attention stacking network with the training set obtained in step 4, the KKS generation model based on the attention stacking network being formed by stacking multiple layers of attention networks;
Step 6, from the professional dictionary obtained in step 2 and the probability mapping value p of the fused attention feature over the word vectors obtained in step 5, generate the predicted KKS code under the new coding rule with a reconstruction function; compute the similarity of the predicted KKS code directly in string form with the minimum edit distance, without converting it into vector form; and take the g most similar KKS code values for KKS code mapping recommendation. That is, the reconstruction function yields the predicted KKS code, the minimum-edit-distance function then computes the similarity between KKS codes, and finally the g most similar KKS code values are selected for KKS code mapping recommendation; the recommendation result is stored in the storage unit.
Preferably, when the complex power generation flow is systematized in step 1, the power generation flow is divided into different power subsystems, and the different power subsystems have three relations of inclusion, parallel connection and series connection.
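By way of illustration only, the following minimal Python sketch shows one possible in-memory representation of such a systematized power generation flow, in which subsystems are linked by the inclusion, parallel and series relations named above; the class and field names are assumptions introduced for the example and do not form part of the method itself.

from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Relation(Enum):
    # the three relations between power subsystems named in step 1
    INCLUSION = "inclusion"
    PARALLEL = "parallel"
    SERIES = "series"

@dataclass
class Subsystem:
    # one subsystem of the full-production-process digital twin
    name: str
    children: List["Subsystem"] = field(default_factory=list)   # subsystems it includes
    child_relation: Relation = Relation.PARALLEL                 # how its children relate to one another

# toy example following the main systems listed in the second embodiment
boiler = Subsystem("boiler")
turbine = Subsystem("steam turbine")
unit = Subsystem("ultra-low-emission coal-fired unit",
                 children=[boiler, turbine],
                 child_relation=Relation.SERIES)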
Preferably, step 2 specifically comprises the following steps:
Step 2.1, segment the detailed descriptions of the subsystems contained in the digital twin into words to obtain professional word-segmentation data;
Step 2.2, construct a multi-level professional dictionary from the professional word-segmentation data obtained in step 2.1; the multi-level professional dictionary consists of two parts, a main-system dictionary and a subsystem dictionary, and is stored in a storage device.
Preferably, the professional dictionary in step 2 is stored in a storage unit of the storage device with key-value triples consisting of the power subsystem number, the word serial number and the professional code as the data structure; the storage unit accesses files in Json format and provides a data interface service.
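By way of illustration only, a minimal Python sketch of how such a storage unit might persist the key-value triples as a Json-format file and serve them through a simple lookup interface is given below; the file name, field names and sample codes are assumptions introduced for the example.

import json
from typing import Optional

# hypothetical layout: one triple (power-subsystem number, word serial number, professional code) per entry
professional_dictionary = {
    "main_system": {"10": "boiler", "20": "steam turbine"},
    "subsystem": [
        {"subsystem_no": "10", "word_no": 1, "code": "HAD"},
        {"subsystem_no": "10", "word_no": 2, "code": "HAH"},
    ],
}

# the storage unit persists the dictionary as a Json-format file
with open("professional_dictionary.json", "w", encoding="utf-8") as f:
    json.dump(professional_dictionary, f, ensure_ascii=False, indent=2)

# a simple data-interface service: look a professional code up by its triple key
def lookup(subsystem_no: str, word_no: int) -> Optional[str]:
    with open("professional_dictionary.json", encoding="utf-8") as f:
        entries = json.load(f)["subsystem"]
    for e in entries:
        if e["subsystem_no"] == subsystem_no and e["word_no"] == word_no:
            return e["code"]
    return None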
Preferably, in step 3 the original coding rule is the KKS coding rule adopted when the power plant was designed, and the new coding rule is the KKS coding rule formulated by the power generation group; the two rules serve the same purpose but produce different coding results. In step 3, the acquisition unit in the acquisition equipment collects data by running a Python script and transmits the collected data outward through a data interface.
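By way of illustration only, the following Python sketch shows how such an acquisition script might read the codes exported under the two rules and assemble the manually matched pairs into the KKS coding data set; the file names and the column name are assumptions introduced for the example.

import csv
import json

def load_codes(path):
    # read one column of KKS codes exported from a plant system
    with open(path, newline="", encoding="utf-8") as f:
        return [row["kks_code"] for row in csv.DictReader(f)]

original_codes = load_codes("codes_original_rule.csv")   # codes under the original rule
new_codes = load_codes("codes_new_rule.csv")             # codes under the new rule

# rows are assumed to have been aligned pair by pair by production and operation staff
kks_dataset = list(zip(original_codes, new_codes))

# transmit the collected data set outward through a simple file-based data interface
with open("kks_dataset.json", "w", encoding="utf-8") as f:
    json.dump(kks_dataset, f, ensure_ascii=False)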
Preferably, step 4 specifically comprises the following steps:
Step 4.1, according to the professional dictionary obtained in step 2, segment the coded data in the KKS coding data set into words to obtain the corresponding word-segmentation phrases; the words obtained constitute the coded data, i.e. each code is split into a number of words, and each word corresponds to a serial number in the dictionary; digitally encode the words of every segmented phrase and recombine the numbers into a coding vector, thereby obtaining a digitally encoded data set;
Step 4.2, split the digitally encoded data set obtained in step 4.1 into a training set and a test set by random sampling.
Preferably, when the word-segmentation results obtained for the coded data in the KKS coding data set in step 4.1 have inconsistent lengths, the largest number of words is taken as the standard and shorter segmentations are extended with the corresponding number of placeholders; the ratio of the training set to the test set in step 4.2 is 4:1.
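By way of illustration only, a minimal Python sketch of the placeholder padding and the 4:1 random split described above is given below; the placeholder id 0 and the sample vectors are assumptions introduced for the example.

import random

PAD_ID = 0  # placeholder id used to extend shorter segmentations (assumed value)

def pad_to_longest(encoded_codes):
    # pad every numeric code vector to the length of the longest one with placeholders
    max_len = max(len(c) for c in encoded_codes)
    return [c + [PAD_ID] * (max_len - len(c)) for c in encoded_codes]

def split_4_to_1(pairs, seed=42):
    # randomly split the digitally encoded data set into training and test sets at 4:1
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * 0.8)
    return pairs[:cut], pairs[cut:]

# example: pairs of (original-rule vector, new-rule vector) after dictionary encoding
pairs = list(zip(pad_to_longest([[3, 15, 7], [3, 15]]),
                 pad_to_longest([[2, 9], [2, 9, 11]])))
train_set, test_set = split_4_to_1(pairs)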
Preferably, step 5 specifically comprises the following steps:
Step 5.1, through a fully connected layer, apply word-vectorization encoding to the codes compiled under the original coding rule and the codes compiled under the new coding rule, obtaining vectorized results of fixed dimension for each; the superscript 1st denotes the original coding rule and the superscript 2nd denotes the new coding rule;
Step 5.2, compute attention features from the vectorized result of step 5.1 through the self-attention layer: the vectorized result of fixed dimension is input to the self-attention layer, which is formed by stacking N self-attention modules, the output of each module serving formally as the input of the next, and every self-attention module consisting of two layers. The self-attention network first produces an attention matrix: the input is projected through the weight matrices to obtain the query, key and value features Q, K and V, from which the attention weights are computed, d being the vector dimension of Q and K; a feed-forward fully connected layer then computes the attention feature from the attention matrix, the subscript n indicating the computation in the nth self-attention module;
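By way of illustration only, the following TensorFlow sketch shows one way the stacked self-attention modules of step 5.2 could be realized; the number of heads, the feed-forward width and the residual/normalization placement are assumptions introduced for the example, while the fixed dimension 32 and N = 3 follow the second embodiment.

import tensorflow as tf

class SelfAttentionModule(tf.keras.layers.Layer):
    # one of the N stacked self-attention modules: a self-attention layer followed by a
    # feed-forward fully connected layer, each wrapped with a skip connection and layer norm
    def __init__(self, d_model=32, num_heads=4, d_ff=64):
        super().__init__()
        self.attn = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model)
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(d_ff, activation="relu"),
            tf.keras.layers.Dense(d_model),
        ])
        self.norm1 = tf.keras.layers.LayerNormalization()
        self.norm2 = tf.keras.layers.LayerNormalization()

    def call(self, x):
        a = self.attn(query=x, value=x, key=x)      # scaled dot-product self-attention
        x = self.norm1(x + a)                       # skip connection + layer normalization
        f = self.ffn(x)                             # feed-forward fully connected layer
        return self.norm2(x + f)                    # attention feature of this module

# stack N modules; the output of each module is the input of the next (step 5.2)
N = 3
blocks = [SelfAttentionModule() for _ in range(N)]
x = tf.random.uniform((64, 20, 32))                 # batch of word-vectorized codes
for block in blocks:
    x = block(x)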
Step 5.3, perform feature-fusion computation on the vectorized result of the new-rule codes from step 5.1 and the self-attention feature obtained in step 5.2: the fusion computation module is formed by stacking M fused self-attention modules, the output of each module serving formally as the input of the next, and every fused self-attention module consisting of three layers. The vectorized result of the new-rule codes is taken as the input of a self-attention network, which first produces an attention matrix: the input is projected through the weight matrices to obtain the query, key and value features Q, K and V, from which the attention weights are computed, d being the vector dimension of Q and K; this attention matrix and the attention feature of step 5.2 are then fused to obtain a fused attention matrix; finally, a feed-forward fully connected layer computes the fused attention feature from the fused attention matrix, the subscript m indicating the computation in the mth fused self-attention module;
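Purely as an illustrative assumption, the following sketch realizes one possible reading of a fused self-attention module, in which the attention matrix of the new-rule stream and the attention feature of step 5.2 are fused by element-wise addition followed by layer normalization; the fusion operator, the hyper-parameters and the shapes (M = 3, 64 × 20 × 32 as in the second embodiment) are assumptions, not a definitive reproduction of the method.

import tensorflow as tf

class FusionSelfAttentionModule(tf.keras.layers.Layer):
    # one of the M stacked fused self-attention modules of step 5.3 (fusion operator assumed)
    def __init__(self, d_model=32, num_heads=4, d_ff=64):
        super().__init__()
        self.attn = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=d_model)
        self.ffn = tf.keras.Sequential([
            tf.keras.layers.Dense(d_ff, activation="relu"),
            tf.keras.layers.Dense(d_model),
        ])
        self.norm_fuse = tf.keras.layers.LayerNormalization()
        self.norm_out = tf.keras.layers.LayerNormalization()

    def call(self, x_new_rule, attn_feature_old_rule):
        a = self.attn(query=x_new_rule, value=x_new_rule, key=x_new_rule)  # attention matrix of the new-rule stream
        fused = self.norm_fuse(a + attn_feature_old_rule)                  # assumed fusion of the two attention tensors
        return self.norm_out(fused + self.ffn(fused))                      # fused attention feature

# M = 3 modules in the second embodiment; both streams have shape (64, 20, 32)
fusion_blocks = [FusionSelfAttentionModule() for _ in range(3)]
y = tf.random.uniform((64, 20, 32))    # word-vectorized codes under the new rule
f1 = tf.random.uniform((64, 20, 32))   # self-attention feature from step 5.2
for block in fusion_blocks:
    y = block(y, f1)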
Step 5.4, from the fused attention feature obtained in step 5.3, compute the probability mapping value p of the fused attention feature over the word vectors with a normalized exponential (softmax) function;
Step 5.5, take the cross-entropy of the probability mapping value p obtained in step 5.4 as the loss function for training the KKS generation model based on the attention stacking network and evaluate the training result; determine the stopping condition of the training according to the number of iterations and the convergence value of the loss function; if the model is to be trained further, repeat steps 5.1 to 5.4 until the number of iterations is reached; once the number of iterations is reached, the KKS generation model based on the attention stacking network is obtained and stored in the computing unit.
Preferably, the input of each layer of the self-attention network in step 5.2 incorporates the output features of the previous layer: the result of each self-attention layer passes through skip-connection normalization and is then combined with the features of the current output. In step 5.5, the corresponding word-segmentation results are matched according to the index of the maximum probability in each word-vector dimension, and the segmented words are then spliced in order to reconstruct the KKS code. In step 5.5, the computing unit provides the model running environment with the TensorFlow framework and provides model compression and acceleration with the TensorRT optimization tool.
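By way of illustration only, a condensed TensorFlow training sketch of step 5.5 is given below: the cross-entropy of the probability mapping value p is used as the loss, and training stops on an iteration limit or when the loss stops changing; the stand-in model body, the optimizer choice and the tolerance value are assumptions introduced for the example, while the vocabulary size 1365, sequence length 20 and dimension 32 follow the second embodiment.

import tensorflow as tf

vocab_size, seq_len, d_model = 1365, 20, 32   # shapes taken from the second embodiment

# stand-in model: embedding (word vectorization) plus a softmax head producing the probability mapping p;
# the stacked self-attention and fusion modules sketched earlier would sit in between
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, d_model),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),
])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

def train(train_x, train_y, max_iterations=12_000, tol=1e-4):
    # stop on the iteration limit or when the loss has converged
    prev_loss = float("inf")
    for step in range(max_iterations):
        with tf.GradientTape() as tape:
            p = model(train_x, training=True)       # probability mapping value p
            loss = loss_fn(train_y, p)              # cross-entropy loss
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        if abs(prev_loss - float(loss)) < tol:
            break
        prev_loss = float(loss)
    model.save("kks_generation_model.keras")        # stored in the computing unit for reuse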
Preferably, in step 6, when the similarity of the predicted KKS code under the new coding rule is calculated in string form with the minimum edit distance, the closer the similarity result is to 1, the more similar the KKS codes are, and the closer it is to 0, the more dissimilar they are.
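By way of illustration only, the Python sketch below computes the minimum edit distance between two code strings, normalizes it to a similarity in [0, 1] consistent with the convention above (1 means identical, 0 means completely dissimilar), and keeps the g most similar candidates; the sample codes are made up for the example.

def edit_distance(a, b):
    # classic minimum edit (Levenshtein) distance between two code strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def similarity(a, b):
    # normalized to [0, 1]: 1 = identical codes, 0 = completely dissimilar
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def recommend(predicted_code, candidate_codes, g=8):
    # return the g candidate KKS codes most similar to the reconstructed prediction
    return sorted(candidate_codes, key=lambda c: similarity(predicted_code, c), reverse=True)[:g]

# example with made-up codes
print(recommend("10HAD10CT001", ["10HAD10CT001", "10HAD10CT002", "20LAB30CP101"], g=2))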
The beneficial effects of the invention are as follows: the complex power generation process is systematized, a digital twin of the full power production process is constructed, and a group-level digital twin technology is instantiated; the invention further constructs a KKS generation model based on an attention stacking network, revitalizes the existing digital infrastructure of power plants, opens up data islands, breaks down information barriers, promotes real-time fusion between the physical space and the information space, and accomplishes the intelligent mapping task across multiple KKS coding systems.
Drawings
FIG. 1 is a diagram summarizing a digital twin-based group-level KKS encoding intelligent mapping recommendation method;
FIG. 2 is a diagram of a digital twin based professional dictionary;
FIG. 3 is a schematic diagram of an attention stacking network;
FIG. 4 is a logic relationship diagram of the acquisition unit, the storage unit, and the calculation unit;
FIG. 5 is a schematic view of an acquisition device;
FIG. 6 is a schematic view of a storage device;
FIG. 7 is a schematic diagram of a computing device.
Detailed Description
The present invention will be further described with reference to the following examples. The following examples are set forth merely to aid in the understanding of the invention. It should be noted that a person skilled in the art can make several modifications to the invention without departing from its principle, and such modifications and improvements also fall within the protection scope of the claims of the present invention.
The fact that devices of the same model or type share a common digital twin is one of the main characteristics of a group-level digital twin architecture. To keep the digital space and the physical space consistent, standardization of the underlying coding is the foundation of the device digital twin. At present, most power-plant production systems and other auxiliary systems have used their own coding rules for many years, and modifying the coding specifications of systems in operation is very difficult. In addition, as technology is updated, the existing coding cannot cover all devices. If group-level digital twin technology is to be popularized, every power plant would need to construct a unified coding map from its own codes, which is extremely laborious. The invention therefore provides a digital twin-based group-level KKS coding intelligent mapping recommendation method.
Example one
This embodiment of the application provides a digital twin-based group-level KKS coding intelligent mapping recommendation method, as shown in FIG. 1:
Step 1, according to the digital twin modeling process, first systematize the complex power generation flow and construct the digital twin of the full power production process as the collection of subsystems that make up the twin;
Step 2, construct a professional dictionary from the digital twin obtained in step 1;
Step 3, use acquisition equipment to collect the codes under the original coding rule and the codes under the new coding rule, the nth collected item under each rule being the nth code compiled according to that rule; map and match the codes under the original coding rule to the codes under the new coding rule manually, drawing on the experience of production and operation personnel, and construct the KKS coding data set from the matched pairs;
Step 4, preprocess the KKS coding data set obtained in step 3 to obtain a training set and a test set;
Step 5, train a KKS generation model based on an attention stacking network with the training set obtained in step 4, the KKS generation model based on the attention stacking network being formed by stacking multiple layers of attention networks;
Step 5.1, through a fully connected layer, apply word-vectorization encoding to the codes compiled under the original coding rule and the codes compiled under the new coding rule, obtaining vectorized results of fixed dimension for each; the superscript 1st denotes the original coding rule and the superscript 2nd denotes the new coding rule;
Step 5.2, compute attention features from the vectorized result of step 5.1 through the self-attention layer: the vectorized result of fixed dimension is input to the self-attention layer, which is formed by stacking N self-attention modules, the output of each module serving formally as the input of the next, and every self-attention module consisting of two layers. The self-attention network first produces an attention matrix: the input is projected through the weight matrices to obtain the query, key and value features Q, K and V, from which the attention weights are computed, d being the vector dimension of Q and K; a feed-forward fully connected layer then computes the attention feature from the attention matrix, the subscript n indicating the computation in the nth self-attention module;
Step 5.3, perform feature-fusion computation on the vectorized result of the new-rule codes from step 5.1 and the self-attention feature obtained in step 5.2: the fusion computation module is formed by stacking M fused self-attention modules, the output of each module serving formally as the input of the next, and every fused self-attention module consisting of three layers. The vectorized result of the new-rule codes is taken as the input of a self-attention network, which first produces an attention matrix: the input is projected through the weight matrices to obtain the query, key and value features Q, K and V, from which the attention weights are computed, d being the vector dimension of Q and K; this attention matrix and the attention feature of step 5.2 are then fused to obtain a fused attention matrix; finally, a feed-forward fully connected layer computes the fused attention feature from the fused attention matrix, the subscript m indicating the computation in the mth fused self-attention module;
Step 5.4, from the fused attention feature obtained in step 5.3, compute the probability mapping value p of the fused attention feature over the word vectors with a normalized exponential (softmax) function;
Step 5.5, take the cross-entropy of the probability mapping value p obtained in step 5.4 as the loss function for training the KKS generation model based on the attention stacking network and evaluate the training result; determine the stopping condition of the training according to the number of iterations and the convergence value of the loss function; if the model is to be trained further, repeat steps 5.1 to 5.4 until the number of iterations is reached; once the number of iterations is reached, the KKS generation model based on the attention stacking network is obtained and stored in the computing unit;
Step 6, from the professional dictionary obtained in step 2 and the probability mapping value p of the fused attention feature over the word vectors obtained in step 5, generate the predicted KKS code under the new coding rule with a reconstruction function; compute the similarity of the predicted KKS code directly in string form with the minimum edit distance, without converting it into vector form; and take the g most similar KKS code values for KKS code mapping recommendation. That is, the reconstruction function yields the predicted KKS code, the minimum-edit-distance function then computes the similarity between KKS codes, and finally the g most similar KKS code values are selected for KKS code mapping recommendation; the recommendation result is stored in the storage unit.
Example two
On the basis of the first embodiment, the second embodiment of the present application applies the digital twin-based group-level KKS coding intelligent mapping recommendation method of the first embodiment to the digital twin standard coding project of a power generation group:
Step 1, systematize the complex power generation flow with the idea of systems engineering and construct a digital twin of the full power generation production process, where S denotes each subsystem that makes up the digital twin; taking an ultra-low-emission coal-fired unit as an example, the main system is divided into five systems: boiler, steam turbine, environmental protection, electrical and chemical;
Step 2, construct a multi-level professional dictionary from the digital twin obtained in step 1, as shown in FIG. 2, with the following specific steps:
Step 2.1, segment the detailed classification of the five main systems and the related subsystems contained in the digital twin into words to obtain professional word-segmentation data, and build the data structure of key-value triples;
Step 2.2, construct the multi-level professional dictionary from the professional word-segmentation data obtained in step 2.1; it consists mainly of a main-system dictionary and a subsystem dictionary and is stored in the storage unit, whose structure is shown in FIG. 6;
Step 3, use acquisition equipment with the structure shown in FIG. 5 to collect the codes under the original coding rule and the codes under the new coding rule; match 10000 codes under the old and new rules according to the experience of production and operation personnel to construct the KKS coding data set; the constructed data set is transmitted outward through a data interface;
Step 4, preprocess the KKS coding data set obtained in step 3, with the following specific steps:
Step 4.1, according to the professional dictionary obtained in step 2, segment the coded data with the Jieba word-segmentation tool to obtain the corresponding word-segmentation results; digitally encode the words of each phrase according to the professional dictionary, each segmented word being mapped to its corresponding number in the dictionary;
Step 4.2, split the digitally encoded data set obtained in step 4.1 into a training set and a test set by random sampling, the training set containing 8000 groups of data and the test set 2000 groups;
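By way of illustration only, the following Python sketch shows how the Jieba word-segmentation tool can be combined with the professional dictionary to turn a code description into its numeric vector as in step 4.1; the user-dictionary file name, the toy word-number table and the sample description are assumptions introduced for the example.

import jieba

# load the professional dictionary into Jieba so that domain terms are kept whole
jieba.load_userdict("professional_dictionary.txt")   # one professional term per line (assumed file)

word_to_number = {"锅炉": 3, "给水泵": 15, "电动机": 7}  # toy excerpt of the dictionary numbering

def encode(code_description):
    # segment the description and map every word to its dictionary number (0 = placeholder / unknown)
    words = jieba.lcut(code_description)
    return [word_to_number.get(w, 0) for w in words]

print(encode("锅炉给水泵电动机"))   # e.g. [3, 15, 7]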
Step 5, train the attention stacking network model with the training set obtained in step 4.2; the model consists of multiple layers of stacked attention networks, as shown in FIG. 3, and the specific steps are as follows:
Step 5.1, apply word-vectorization encoding to the input original coded data to obtain vectors of fixed dimension, the fixed dimension being 32; the resulting dimension of the vectorized encoding is 64 × 20 × 32, where 64 is the batch size, one batch training 64 coding pairs simultaneously, and each coding pair comprises 20 groups of codes;
Step 5.2, compute the attention features from the vectorized result of step 5.1 through the self-attention layer; the self-attention layer is formed by stacking 3 self-attention modules end to end, and each self-attention module consists of a two-layer structure: the input features first pass through the self-attention network to obtain the attention matrix, with matrix dimension 64 × 20 × 32; the feed-forward fully connected layer then computes the attention feature, with feature dimension 64 × 20 × 32;
Step 5.3, perform feature-fusion computation on the vectorized result of step 5.1 and the self-attention feature obtained in step 5.2; the fusion process is formed by stacking 3 fused self-attention modules end to end, each fused self-attention module consisting of a three-layer structure: the input features first pass through the self-attention network to obtain an attention matrix of dimension 64 × 20 × 32; this attention matrix and the attention feature of step 5.2 are then fused to obtain a fused attention matrix of dimension 64 × 20 × 32; finally, the feed-forward fully connected layer computes the fused attention feature from the fused attention matrix, with feature dimension 64 × 20 × 32;
Step 5.4, from the fused attention feature obtained in step 5.3, compute the probability mapping value p of all features over the word vectors, the probability matrix dimension being 64 × 20 × 1365;
Step 5.5, take the cross-entropy of the word-vector probability mapping value p obtained in step 5.4 as the loss function of model training and evaluate the training result; the model converges after about 12K training iterations, at which point the loss value is 0.132; the KKS coding generation model based on the attention stacking network is then saved and stored in the computing unit for model reuse.
Step 6, from the professional dictionary obtained in step 2 and the probability mapping value p obtained in step 5, generate the new KKS code with a reconstruction function and calculate the similarity with the minimum edit distance, so as to obtain the most similar 8 KKS code values for KKS code mapping recommendation. The reconstruction function yields the predicted KKS code, and it is verified that the generated code conforms to the hierarchy of each system in the digital twin; the minimum-edit-distance function then calculates the similarity of the KKS codes; finally, the most similar 10 KKS codes are selected, stored in the storage unit and used by operators to match the mapping results. In this embodiment, after a certain gas-fired power plant belonging to a certain group adopted the KKS generation model based on the attention stacking network for standardized coding, the resulting coding situation is shown in Table 1 below:
TABLE 1 coding situation table of a certain gas power plant subordinate to a certain group
As shown in Table 1, statistics over the intelligent mapping recommendations for 1442 measuring-point codes show that the recommendation accuracy of the digital twin-based group-level KKS coding intelligent mapping recommendation method is 84.3%, which meets the working requirements of power plant managers.
In this embodiment, a hydropower plant belonging to the group also adopted the KKS generation model based on the attention stacking network for standardized coding, and the resulting coding situation is shown in Table 2 below:
TABLE 2 coding situation table of a hydropower plant under a certain group
As shown in Table 2, statistics over the intelligent mapping recommendations for 5113 measuring-point codes show a recommendation accuracy of 82.7%, which meets the working requirements of power plant managers.

Claims (10)

1. A digital twin-based group-level KKS coding intelligent mapping recommendation method, characterized by comprising the following steps:
step 1, according to the digital twin modeling process, first systematizing the power generation flow and constructing the digital twin of the full power production process as the collection of subsystems that make up the twin;
step 2, constructing a professional dictionary from the digital twin obtained in step 1;
step 3, using acquisition equipment to collect the codes under the original coding rule and the codes under the new coding rule, the nth collected item under each rule being the nth code compiled according to that rule; manually mapping and matching the codes under the original coding rule to the codes under the new coding rule, and constructing a KKS coding data set from the matched pairs;
step 4, preprocessing the KKS coding data set obtained in step 3 to obtain a training set and a test set;
step 5, training a KKS generation model based on an attention stacking network with the training set obtained in step 4, the KKS generation model based on the attention stacking network being formed by stacking multiple layers of attention networks;
step 6, from the professional dictionary obtained in step 2 and the probability mapping value p of the fused attention feature over the word vectors obtained in step 5, generating the predicted KKS code under the new coding rule with a reconstruction function, and calculating the similarity of the predicted KKS code under the new coding rule in string form with the minimum edit distance; obtaining the g most similar KKS code values for KKS code mapping recommendation: the reconstruction function yields the predicted KKS code, the minimum-edit-distance function then calculates the similarity of the KKS codes, and finally the g most similar KKS code values are selected for KKS code mapping recommendation, the recommendation result being stored in the storage unit.
2. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, characterized in that: when the power generation process is systematized in step 1, the power generation process is divided according to different power subsystems, and the three relations of inclusion, parallel and series exist among the different power subsystems.
3. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, wherein step 2 specifically comprises the following steps:
step 2.1, segmenting the detailed descriptions of the subsystems contained in the digital twin into words to obtain professional word-segmentation data;
step 2.2, constructing a multi-level professional dictionary from the professional word-segmentation data obtained in step 2.1, the multi-level professional dictionary consisting of a main-system dictionary and a subsystem dictionary, and the multi-level professional dictionary being stored in a storage device.
4. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 3, wherein the professional dictionary in step 2 is stored in a storage unit of the storage device with key-value triples consisting of the power subsystem number, the word serial number and the professional code as the data structure, and the storage unit accesses files in Json format and provides a data interface service.
5. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, characterized in that: in step 3, the original coding rule is the KKS coding rule adopted by the design institute when designing the power plant, and the new coding rule is the KKS coding rule formulated by the power generation group; in step 3, the acquisition unit in the acquisition equipment performs data acquisition by running a Python script and transmits the acquired data outward through a data interface.
6. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1 or 3, characterized in that step 4 comprises the following steps:
step 4.1, according to the professional dictionary obtained in step 2, segmenting the coded data in the KKS coding data set into words to obtain the corresponding word-segmentation phrases, the words obtained constituting the coded data, i.e. each code being split into a number of words, each word corresponding to a serial number; digitally encoding each segmented phrase and recombining the numbers into a coding vector, thereby obtaining a digitally encoded data set;
step 4.2, splitting the digitally encoded data set obtained in step 4.1 into a training set and a test set by random sampling.
7. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 6, characterized in that: when the word-segmentation results obtained for the coded data in the KKS coding data set in step 4.1 have inconsistent lengths, the largest number of words is taken as the standard and shorter segmentations are extended with the corresponding number of placeholders; the ratio of the training set to the test set in step 4.2 is 4:1.
8. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, 3 or 6, characterized in that step 5 comprises the following steps:
step 5.1, through a fully connected layer, applying word-vectorization encoding to the codes compiled under the original coding rule and the codes compiled under the new coding rule to obtain vectorized results of fixed dimension, the superscript 1st denoting the original coding rule and the superscript 2nd denoting the new coding rule;
step 5.2, computing attention features from the vectorized result of step 5.1 through the self-attention layer: the vectorized result of fixed dimension is input to the self-attention layer, the self-attention layer being formed by stacking N self-attention modules, the output of each module serving formally as the input of the next, and each self-attention module consisting of two layers; the self-attention network first produces an attention matrix, the input being projected through the weight matrices to obtain the query, key and value features Q, K and V, from which the attention weights are computed, d being the vector dimension of Q and K; a feed-forward fully connected layer then computes the attention feature from the attention matrix, the subscript n indicating the computation in the nth self-attention module;
step 5.3, performing feature-fusion computation on the vectorized result of the new-rule codes from step 5.1 and the self-attention feature obtained in step 5.2: the fusion computation module is formed by stacking M fused self-attention modules, the output of each module serving as the input of the next, and each fused self-attention module consisting of three layers; the vectorized result of the new-rule codes is taken as the input of a self-attention network, which first produces an attention matrix, the input being projected through the weight matrices to obtain the query, key and value features Q, K and V, from which the attention weights are computed, d being the vector dimension of Q and K; this attention matrix and the attention feature of step 5.2 are then fused to obtain a fused attention matrix; finally, a feed-forward fully connected layer computes the fused attention feature from the fused attention matrix, the subscript m indicating the computation in the mth fused self-attention module;
step 5.4, from the fused attention feature obtained in step 5.3, computing the probability mapping value p of the fused attention feature over the word vectors with a normalized exponential (softmax) function;
step 5.5, taking the cross-entropy of the probability mapping value p obtained in step 5.4 as the loss function for training the KKS generation model based on the attention stacking network and evaluating the training result; determining the stopping condition of the training according to the number of iterations and the convergence value of the loss function; if the model is to be trained further, repeating steps 5.1 to 5.4 until the number of iterations is reached; once the number of iterations is reached, obtaining the KKS generation model based on the attention stacking network and storing it in the computing unit.
9. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 8, characterized in that: the input of each layer of the self-attention network in step 5.2 incorporates the output features of the previous layer, the result of each self-attention layer passing through skip-connection normalization and then being combined with the features of the current output; in step 5.5, the corresponding word-segmentation results are matched according to the index of the maximum probability in each word-vector dimension, and the segmented words are then spliced in order to reconstruct the KKS code; in step 5.5, the computing unit provides the model running environment with the TensorFlow framework and provides model compression and acceleration with the TensorRT optimization tool.
10. The digital twin-based group-level KKS coding intelligent mapping recommendation method according to claim 1, characterized in that: in step 6, when the similarity of the predicted KKS code under the new coding rule is calculated in string form with the minimum edit distance, the closer the similarity result is to 1, the more similar the KKS codes are, and the closer it is to 0, the more dissimilar they are.
CN202110905893.4A 2021-08-09 2021-08-09 Group level KKS coding intelligent mapping recommendation method based on digital twin Active CN113673152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110905893.4A CN113673152B (en) 2021-08-09 2021-08-09 Group level KKS coding intelligent mapping recommendation method based on digital twin

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110905893.4A CN113673152B (en) 2021-08-09 2021-08-09 Group level KKS coding intelligent mapping recommendation method based on digital twin

Publications (2)

Publication Number Publication Date
CN113673152A true CN113673152A (en) 2021-11-19
CN113673152B CN113673152B (en) 2024-06-14

Family

ID=78541823

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110905893.4A Active CN113673152B (en) 2021-08-09 2021-08-09 Group level KKS coding intelligent mapping recommendation method based on digital twin

Country Status (1)

Country Link
CN (1) CN113673152B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911797A (en) * 2022-05-05 2022-08-16 福建安能数通科技有限公司 Method for fusing and applying KKS code and grid code
CN115689399A (en) * 2022-10-10 2023-02-03 中国长江电力股份有限公司 Hydropower equipment information model rapid construction method based on industrial internet platform

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190340467A1 (en) * 2018-05-06 2019-11-07 Strong Force TX Portfolio 2018, LLC Facility level transaction-enabling systems and methods for provisioning and resource allocation
CN110781680A (en) * 2019-10-17 2020-02-11 江南大学 Semantic similarity matching method based on twin network and multi-head attention mechanism
US20200293394A1 (en) * 2019-03-13 2020-09-17 Accenture Global Solutions Limited Interactive troubleshooting assistant
CA3158765A1 (en) * 2019-11-25 2021-06-03 Strong Force Iot Portfolio 2016, Llc Intelligent vibration digital twin systems and methods for industrial environments

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190340467A1 (en) * 2018-05-06 2019-11-07 Strong Force TX Portfolio 2018, LLC Facility level transaction-enabling systems and methods for provisioning and resource allocation
US20200293394A1 (en) * 2019-03-13 2020-09-17 Accenture Global Solutions Limited Interactive troubleshooting assistant
CN110781680A (en) * 2019-10-17 2020-02-11 江南大学 Semantic similarity matching method based on twin network and multi-head attention mechanism
CA3158765A1 (en) * 2019-11-25 2021-06-03 Strong Force Iot Portfolio 2016, Llc Intelligent vibration digital twin systems and methods for industrial environments

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANG Chengfeng et al.: "Design and Application of KKS Signal Identification for Cascade Hydropower Plants in a River Basin", China Plant Engineering (《中国设备工程》), 23 November 2020 (2020-11-23), pages 8-9 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114911797A (en) * 2022-05-05 2022-08-16 福建安能数通科技有限公司 Method for fusing and applying KKS code and grid code
CN115689399A (en) * 2022-10-10 2023-02-03 中国长江电力股份有限公司 Hydropower equipment information model rapid construction method based on industrial internet platform
CN115689399B (en) * 2022-10-10 2024-05-10 中国长江电力股份有限公司 Rapid construction method of hydropower equipment information model based on industrial Internet platform

Also Published As

Publication number Publication date
CN113673152B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
CN110943857B (en) Power communication network fault analysis and positioning method based on convolutional neural network
CN116192971B (en) Intelligent cloud energy operation and maintenance service platform data management method
CN113673152B (en) Group level KKS coding intelligent mapping recommendation method based on digital twin
CN106067066A (en) Method for diagnosing fault of power transformer based on genetic algorithm optimization pack algorithm
CN106874963B (en) A kind of Fault Diagnosis Method for Distribution Networks and system based on big data technology
CN113361559B (en) Multi-mode data knowledge information extraction method based on deep-width combined neural network
CN117096867A (en) Short-term power load prediction method, device, system and storage medium
CN114138759B (en) Secondary equipment fault processing pushing method and system based on knowledge graph reasoning
CN110515931A (en) A kind of capacitance type equipment failure prediction method based on random forests algorithm
CN115526236A (en) Text network graph classification method based on multi-modal comparative learning
CN104732067A (en) Industrial process modeling forecasting method oriented at flow object
CN113361803A (en) Ultra-short-term photovoltaic power prediction method based on generation countermeasure network
CN114021758A (en) Operation and maintenance personnel intelligent recommendation method and device based on fusion of gradient lifting decision tree and logistic regression
CN113343643B (en) Supervised-based multi-model coding mapping recommendation method
CN117172413B (en) Power grid equipment operation state monitoring method based on multi-mode data joint characterization and dynamic weight learning
CN114238524A (en) Satellite frequency-orbit data information extraction method based on enhanced sample model
CN113536508A (en) Method and system for classifying manufacturing network nodes
Balazs et al. Hierarchical-interpolative fuzzy system construction by genetic and bacterial memetic programming approaches
CN113643141B (en) Method, device, equipment and storage medium for generating interpretation conclusion report
CN115438190B (en) Power distribution network fault auxiliary decision knowledge extraction method and system
CN116796617A (en) Rolling bearing equipment residual life prediction method and system based on data identification
CN114969237A (en) Automatic address analyzing and matching method for geographic information system
CN113673202A (en) Double-layer matching coding mapping recommendation method based on hybrid supervision
CN114692495A (en) Efficient complex system reliability evaluation method based on reliability block diagram
Li et al. Power grid fault detection method based on cloud platform and improved isolated forest

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220818

Address after: Room 307, No. 32, Gaoji Street, Xihu District, Hangzhou City, Zhejiang Province, 310002

Applicant after: Zhejiang Zheneng Digital Technology Co., Ltd.

Applicant after: ZHEJIANG ENERGY R & D INSTITUTE Co.,Ltd.

Address before: 5 / F, building 1, No. 2159-1, yuhangtang Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant before: ZHEJIANG ENERGY R & D INSTITUTE Co.,Ltd.

GR01 Patent grant
GR01 Patent grant