CN117708715B - Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model


Publication number: CN117708715B
Authority: CN (China)
Prior art keywords: sample, working condition, graph, iteration round, image set
Legal status: Active
Application number: CN202410049997.3A
Other languages: Chinese (zh)
Other versions: CN117708715A
Inventors: 徐峰, 王辉, 黄宇廷, 范自柱
Current Assignee: East China Jiaotong University
Original Assignee: East China Jiaotong University
Application filed by East China Jiaotong University
Priority to CN202410049997.3A
Publication of CN117708715A
Application granted
Publication of CN117708715B
Status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a working condition diagnosis method of an electric smelting magnesium furnace based on a mixed structure model, relating to the field of data classification. The method first acquires a heterogeneous data set containing a plurality of sample multi-view image sets, generates a graph structure and a hypergraph structure from the sample initial features corresponding to all sample view images in each sample multi-view image set, and inputs the graph structure, the hypergraph structure and the working condition prediction result of the classifier group corresponding to the current iteration round into the mixed structure model to obtain mixed graph features. The residual calculated from the mixed graph features and the working condition prediction result corresponding to the current iteration round is used as the label of the classifier group of the next iteration round, yielding a trained gradient lifting decision tree. Because the residual calculated from the output of the mixed structure model and the working condition prediction result of the current iteration round is used as the label of the classifier group of the next iteration round during training, the trained gradient lifting decision tree achieves higher working condition diagnosis precision for the electric smelting magnesium furnace.

Description

Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model
Technical Field
The invention relates to the field of data classification, in particular to a method for diagnosing working conditions of an electric smelting magnesium furnace based on a mixed structure model.
Background
The main process of energy-intensive industrial equipment such as the electric smelting magnesium furnace is to control the temperature of the smelting process by adjusting the arc current during feeding, exhausting and smelting, heating and smelting the raw magnesite to obtain the fused magnesia product. The ideal operation keeps the furnace in the optimal working condition on the premise that the product quality meets the process requirements; however, raw material fluctuation in actual production is large, so non-optimal working conditions easily occur. For example, during production, an underburn condition caused by abnormal fluctuation of material characteristics not only causes abnormal fluctuation of process variables but also leads to abnormal furnace temperature distribution. The temperature distribution of the molten liquid is directly related to the running condition of the electric smelting magnesium furnace, but the temperature of the ultra-high-temperature melt (approximately 3000 °C) is difficult to measure directly. At present, operators periodically observe heterogeneous data such as images of the furnace interior and the furnace wall with the naked eye to monitor and regulate the furnace indirectly and to distinguish optimal from non-optimal working conditions. However, the results of such manual decisions are irregular, untimely and of low accuracy.
To address these problems, gradient lifting decision trees (Gradient Boosting Decision Trees, GBDT) can be used to diagnose working conditions from the heterogeneous data of the electric smelting magnesium furnace. The gradient lifting decision tree is a powerful machine learning model in the field of computer vision and excels at processing structured data and nonlinear features. GBDT is widely used in fault diagnosis, image processing and other fields. It can handle multi-target regression or classification tasks and is adept at handling various data types. GBDT has the following characteristics: (i) it is a non-parametric model and makes no assumptions about the data distribution; (ii) the final gradient lifting decision tree is obtained by combining a plurality of weak classifiers through iterative training, and because it is based on decision trees, GBDT can be described by an IF-THEN structure, which makes it highly interpretable; (iii) it is adaptable and can handle a variety of data types, such as continuous, discrete and mixed data; (iv) it provides practical information for feature selection by ranking feature importance. Although the GBDT algorithm solves the problems that manual diagnosis of the working condition of the electric smelting magnesium furnace is irregular and untimely, its working condition diagnosis accuracy is still low. Graph neural networks and hypergraph neural networks have great advantages in extracting high-order information and can better express heterogeneous data, so the working condition diagnosis precision can be improved by combining the gradient lifting decision tree with the graph neural network and the hypergraph neural network.
Disclosure of Invention
The invention aims to provide a working condition diagnosis method of an electric smelting magnesium furnace based on a mixed structure model, which can improve the working condition diagnosis precision of the electric smelting magnesium furnace.
In order to achieve the above object, the present invention provides the following solutions:
the invention provides a method for diagnosing working conditions of an electric smelting magnesium furnace based on a mixed structure model, which comprises the following steps:
Acquiring a heterogeneous data set; the heterogeneous data set comprises a plurality of frames of sample multi-view image sets and the sample real working condition corresponding to each multi-view image set; each sample multi-view image set comprises a plurality of different sample view images acquired at the same moment; the sample visual angle image is an in-furnace image or a furnace wall image in the working process of the electric smelting magnesium furnace; the sample real working conditions comprise optimal working conditions and non-optimal working conditions;
For each sample multi-view image set, extracting features of each sample view image in the sample multi-view image set to obtain sample initial features corresponding to each sample view image in the sample multi-view image set;
For each sample multi-view image set, generating a graph structure corresponding to the sample multi-view image set according to sample initial characteristics corresponding to all the sample view images in the sample multi-view image set, and generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure; the graph nodes in the graph structure correspond to sample initial features corresponding to all the sample view images in the sample multi-view image set one by one; the edges in the graph structure represent that the distance between sample initial features corresponding to two connected graph nodes is smaller than a first set threshold value; the hypergraph nodes in the hypergraph structure are in one-to-one correspondence with sample initial features corresponding to all the sample view images in the sample multi-view image set; the superside in the supergraph structure indicates that the distance between the initial characteristics of the samples corresponding to a plurality of connected supergraph nodes is smaller than a second set threshold value;
Inputting a working condition prediction result of a classifier group corresponding to the current iteration round, a graph structure and a hypergraph structure corresponding to the sample multi-view image set into a mixed structure model to obtain mixed graph characteristics corresponding to the current iteration round; when the current iteration round is 1, the working condition prediction result of the classifier group corresponding to the current iteration round is the sample initial characteristics corresponding to all the sample visual angle images in the sample multi-visual angle image set; the hybrid structure model consists of a graph neural network and a hypergraph neural network; the classifier group corresponding to the current iteration round comprises at least one classifier;
Calculating residual errors corresponding to the current iteration round according to the mixed graph characteristics corresponding to the current iteration round and the working condition prediction results of the classifier group corresponding to the current iteration round; judging whether an iteration stopping condition is met, if so, stopping iteration, and taking a classifier group corresponding to the current iteration round as a trained gradient lifting decision tree; if not, taking the residual error corresponding to the current iteration round as a label of the next iteration round, replacing the classifier group corresponding to the current iteration round with the classifier group corresponding to the next iteration round, and returning to the step of inputting the working condition prediction result of the classifier group corresponding to the current iteration round, the graph structure and the hypergraph structure corresponding to the sample multi-view image set into the mixed structure model until an iteration stop condition is reached, so as to obtain a trained gradient lifting decision tree; the trained gradient lifting decision tree is a classifier group of the corresponding iteration round when reaching the iteration stop condition; the classifier group of the next iteration round is one more classifier than the classifier group corresponding to the current iteration round;
Diagnosing the working condition of the electric smelting magnesium furnace by adopting the trained gradient lifting decision tree to obtain a target working condition diagnosis result; and the target working condition diagnosis result is an excellent working condition or a non-excellent working condition.
Optionally, the trained gradient boosting decision tree comprises a plurality of classifiers;
diagnosing the working condition of the electric smelting magnesium furnace by adopting the trained gradient lifting decision tree to obtain a target working condition diagnosis result, and specifically comprises the following steps:
For each classifier, diagnosing the working condition of the electric smelting magnesium furnace by adopting the classifier to obtain the target prediction probability corresponding to the classifier;
And determining a target working condition diagnosis result according to the target prediction probabilities corresponding to all the classifiers.
Optionally, determining the target working condition diagnosis result according to the target prediction probabilities corresponding to all the classifiers specifically includes:
adding the target prediction probabilities corresponding to all the classifiers to obtain a final target prediction probability;
and determining a target working condition diagnosis result according to the final target prediction probability and a set probability threshold.
Optionally, the working condition prediction result of the classifier group corresponding to the current iteration round, the graph structure and the hypergraph structure corresponding to the sample multi-view image set are input into a hybrid structure model, so as to obtain the hybrid graph feature corresponding to the current iteration round, which specifically includes:
Obtaining a first input characteristic according to a working condition prediction result and a first weight of a classifier group corresponding to the current iteration round;
obtaining a second input characteristic according to the working condition prediction result and the second weight of the classifier group corresponding to the current iteration round;
Inputting the first input characteristic and a graph structure corresponding to the sample multi-view image set into a graph neural network to obtain a graph characteristic;
inputting the second input characteristic and the hypergraph structure corresponding to the sample multi-view image set into a hypergraph neural network to obtain hypergraph characteristics;
and fusing the graph features and the hypergraph features to obtain the mixed graph features corresponding to the current iteration turns.
Optionally, according to the working condition prediction result and the first weight of the classifier group corresponding to the current iteration round, obtaining the first input feature specifically includes:
And multiplying the working condition prediction result of the classifier group corresponding to the current iteration round by the first weight to obtain a first input characteristic.
Optionally, generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure specifically includes:
And generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure by using a K nearest neighbor method.
Optionally, extracting features of each sample view image in the sample multi-view image set to obtain sample initial features corresponding to each sample view image in the sample multi-view image set, which specifically includes:
And extracting the characteristics of each sample view angle image in the sample multi-view angle image set by adopting a characteristic extraction model to obtain sample initial characteristics corresponding to each sample view angle image in the sample multi-view angle image set.
Optionally, the feature extraction model is a CNN model or an RNN model.
According to the specific embodiments provided by the invention, the invention discloses the following technical effects. The invention provides a working condition diagnosis method of an electric smelting magnesium furnace based on a mixed structure model. The method first acquires a heterogeneous data set containing a plurality of sample multi-view image sets, generates a graph structure from the sample initial features corresponding to all sample view images in each sample multi-view image set and a hypergraph structure from the graph structure, inputs the graph structure, the hypergraph structure and the working condition prediction result of the classifier group corresponding to the current iteration round into the mixed structure model to obtain mixed graph features, calculates a residual from the mixed graph features and the working condition prediction result corresponding to the current iteration round, uses the residual as the label of the classifier group of the next iteration round, and finally performs iterative training to obtain a trained gradient lifting decision tree. Because the residual calculated from the output of the mixed structure model and the working condition prediction result of the current iteration round is used as the label of the classifier group of the next iteration round, and the gradient lifting decision tree is trained by extracting the graph information of the heterogeneous data, the trained gradient lifting decision tree takes both the heterogeneous data features and the graph structure information into account, which improves the working condition diagnosis precision of the electric smelting magnesium furnace.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions of the prior art, the drawings that are needed in the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of a method for diagnosing the working condition of an electric smelting magnesium furnace based on a mixed structure model;
FIG. 2 is a schematic diagram of a GBDT-HSM combined architecture provided by the present invention;
FIG. 3 is a schematic diagram of the generation of graph structures and hypergraph structures provided by the present invention;
FIG. 4 is a schematic diagram of the training process of the gradient boosting decision tree provided by the invention;
FIG. 5 is a convergence graph of the trained gradient boosting decision tree provided by the present invention and the existing GCN, HGNN, HGNN+, Res-GNN and BGNN algorithms on the House_class dataset;
FIG. 6 is a convergence graph of the trained gradient boosting decision tree provided by the present invention and the existing GCN, HGNN, HGNN+, Res-GNN and BGNN algorithms on a multi-view dataset.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The invention aims to provide a mixed structure model-based electric smelting magnesium furnace working condition diagnosis method, which uses the residual calculated from the output of the mixed structure model and the working condition prediction result corresponding to the current iteration round as the label of the classifier group of the next iteration round, and trains the gradient lifting decision tree by extracting the graph information of the heterogeneous data, so that the trained gradient lifting decision tree takes both the heterogeneous data features and the graph structure information into account, improving the working condition diagnosis precision of the electric smelting magnesium furnace.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1:
Gradient boosting decision trees are considerably efficient at processing structured data. However, challenges remain in combining the gradient lifting decision tree method with graph data structures and hypergraph data structures. These challenges can be summarized as follows: (i) the graph data structure differs significantly from traditional structured data formats, so an appropriate encoding method is required to co-train the gradient lifting decision tree and the graph data successfully; (ii) GBDT still faces difficulties in handling graph data structures, and achieving an end-to-end optimization goal while preserving the effectiveness and applicability of graph data remains an important direction in data processing. In order to further improve the working condition diagnosis precision of the electric smelting magnesium furnace, this embodiment provides a GBDT-HSM joint framework, shown in Fig. 2, which classifies multi-modal data; the multi-modal data may comprise video, text and images, and this embodiment uses image data for explanation. This embodiment introduces a Hybrid Structure Model (HSM) as a dedicated method for processing graph data. The hybrid structure model is composed of a graph neural network and a hypergraph neural network (HGNN), and it further enhances the ability to extract potential higher-order information from the data by constructing a hybrid method that includes hypergraph convolution and graph convolution. The GBDT-HSM joint framework combines GBDT-based node feature learning with the hybrid structure model, which performs refined prediction using the graph structure and the hypergraph structure. By exploiting the interpretability of GBDT and the training advantages of the HSM, the working condition diagnosis precision of the electric smelting magnesium furnace is improved.
As shown in fig. 1, the embodiment provides a method for diagnosing the working condition of the electric smelting magnesium furnace based on a mixed structure model, which comprises the following steps:
S1: acquiring a heterogeneous data set; the heterogeneous data sets comprise a plurality of sample multi-view image sets and sample real working conditions corresponding to each multi-view image set; each sample multi-view image set comprises a plurality of different sample view images acquired at the same moment; the sample visual angle image is an in-furnace image or a furnace wall image in the working process of the electric smelting magnesium furnace; the sample real working conditions comprise optimal working conditions and non-optimal working conditions. The image in the furnace and the image of the furnace wall are obtained by shooting the electric smelting magnesium furnace through an image acquisition end; the image acquisition end is composed of a plurality of visible light video sensors and a near infrared frequency band video sensor. The image is a two-dimensional or three-dimensional image.
S2: and extracting the characteristics of each sample view angle image in the sample multi-view angle image set for each sample multi-view angle image set to obtain sample initial characteristics corresponding to each sample view angle image in the sample multi-view angle image set.
S3: for each sample multi-view image set, generating a graph structure corresponding to the sample multi-view image set according to sample initial characteristics corresponding to all the sample view images in the sample multi-view image set, and generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure; the graph nodes in the graph structure correspond to sample initial features corresponding to all the sample view images in the sample multi-view image set one by one; the edges in the graph structure represent that the distance between sample initial features corresponding to two connected graph nodes is smaller than a first set threshold value; the hypergraph nodes in the hypergraph structure are in one-to-one correspondence with sample initial features corresponding to all the sample view images in the sample multi-view image set; and the superside in the supergraph structure indicates that the distance between the initial characteristics of the samples corresponding to the connected supergraph nodes is smaller than a second set threshold value.
The first set threshold and the second set threshold are both set according to user experience.
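The following is a minimal sketch, under assumed names, of how the graph structure of S3 could be built: two view-feature nodes are connected by an edge when the Euclidean distance between their sample initial features is below the first set threshold. The distance metric and the threshold value are illustrative assumptions, not fixed by the patent.

```python
# Illustrative sketch (not the patent's reference implementation): build the S3 graph
# by connecting two view-feature nodes whenever their Euclidean distance is below
# a user-chosen first threshold. `features` and `threshold` are assumed names.
import numpy as np

def build_graph_adjacency(features: np.ndarray, threshold: float) -> np.ndarray:
    """features: (n_views, d) sample initial features of one multi-view image set."""
    n = features.shape[0]
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) < threshold:
                adj[i, j] = adj[j, i] = 1.0  # undirected edge between close nodes
    return adj
```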
S4: inputting a working condition prediction result of a classifier group corresponding to the current iteration round, a graph structure and a hypergraph structure corresponding to the sample multi-view image set into a mixed structure model to obtain mixed graph characteristics corresponding to the current iteration round; when the current iteration round is 1, the working condition prediction result of the classifier group corresponding to the current iteration round is the sample initial characteristics corresponding to all the sample visual angle images in the sample multi-visual angle image set; the label of the classifier group corresponding to the current iteration round is the sample real working condition of the sample multi-view image set; the hybrid structure model consists of a graph neural network and a hypergraph neural network; the classifier set corresponding to the current iteration round includes at least one classifier.
S5: calculating residual errors corresponding to the current iteration round according to the mixed graph characteristics corresponding to the current iteration round and the working condition prediction results of the classifier group corresponding to the current iteration round; judging whether an iteration stopping condition is met, if so, stopping iteration, and taking a classifier group corresponding to the current iteration round as a trained gradient lifting decision tree; if not, taking the residual error corresponding to the current iteration round as a label of the next iteration round, replacing the classifier group corresponding to the current iteration round with the classifier group corresponding to the next iteration round, and returning to the step of inputting the working condition prediction result of the classifier group corresponding to the current iteration round, the graph structure and the hypergraph structure corresponding to the sample multi-view image set into the mixed structure model until an iteration stop condition is reached, so as to obtain a trained gradient lifting decision tree; the trained gradient lifting decision tree is a classifier group of the corresponding iteration round when reaching the iteration stop condition; the classifier group of the next iteration round is one more classifier than the classifier group corresponding to the current iteration round.
S6: diagnosing the working condition of the electric smelting magnesium furnace by adopting the trained gradient lifting decision tree to obtain a target working condition diagnosis result; and the target working condition diagnosis result is an excellent working condition or a non-excellent working condition.
S2, extracting characteristics of each sample view angle image in the sample multi-view angle image set to obtain sample initial characteristics corresponding to each sample view angle image in the sample multi-view angle image set, wherein the method specifically comprises the following steps: and extracting the characteristics of each sample view angle image in the sample multi-view angle image set by adopting a characteristic extraction model to obtain sample initial characteristics corresponding to each sample view angle image in the sample multi-view angle image set. In this embodiment, the feature extraction model is a CNN model or an RNN model.
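As a concrete illustration of S2, the sketch below assumes a pretrained ResNet-18 from torchvision as the CNN feature extractor; the patent only specifies "a CNN model or an RNN model", so the backbone, input size and preprocessing choices here are assumptions.

```python
# Minimal sketch of per-view feature extraction, assuming a pretrained ResNet-18
# backbone as the CNN feature extractor (illustrative choice only).
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # drop the classification head, keep 512-d features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

@torch.no_grad()
def extract_initial_features(view_images):
    """view_images: list of PIL images (furnace-interior / furnace-wall views)."""
    batch = torch.stack([preprocess(img) for img in view_images])
    return backbone(batch)                  # (n_views, 512) sample initial features
```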
The GBDT-HSM joint architecture is shown in Fig. 2. The left-hand part processes the raw data and fully extracts features to generate the graph structure and the hypergraph structure. In the right-hand part, an initial GBDT is created based on the data features, and its initial processing results are used as new node inputs for the graph and hypergraph structures. This forms a Hybrid Structure Model (HSM) that can iteratively optimize GBDT.
In order to fully exploit the advantages of the gradient lifting decision tree and graph convolution algorithms, GBDT is incorporated into the node training process in this embodiment. The prediction result and the original input are then fed as new nodes into the Hybrid Structure Model (HSM), and the data structure is refined by the HSM. This embodiment describes a new end-to-end training method that combines a Gradient Boosting Decision Tree (GBDT) with a Hybrid Structure Model (HSM), namely GBDT-HSM. During execution of the algorithm, an enhanced decision tree model is first constructed using the gradient lifting decision tree, and the gradient lifting decision tree is then iteratively updated with new trees under the loss function of the HSM. GBDT-HSM can be used for classification problems, such as semi-supervised classification, as well as for prediction problems.
The GBDT-HSM joint architecture consists of GBDT and HSM, both trained in an end-to-end fashion. In the data training phase, the original feature vector X is processed by GBDT to obtain an updated node representation. GBDT serves as the input layer of the HSM, and the HSM loss function is then used to optimize the GBDT training process.
The generation process of the graph structure and the hypergraph structure in S3 is shown in fig. 3. The graph structure corresponding to the graph neural network is expressed as $G=(V,E)$, where $V=\{v_1,v_2,\dots,v_n\}$ is the set of all graph nodes, $n$ is the number of graph nodes in the graph structure, $E=\{e_1,e_2,\dots,e_m\}$ is the set of all edges in the graph structure, and $m$ is the total number of edges in the graph structure.
The graph neural network relies on neighbor aggregation to obtain information from surrounding nodes, commonly referred to as message passing. In this process, each graph node gathers information from its neighboring graph nodes and merges the information into a vector describing its surrounding nodes. As each graph node undergoes multiple rounds of message passing and aggregation, it gradually obtains a full view of the global information of its neighboring nodes, resulting in an updated feature representation. Neighbor aggregation is the basic principle of graph neural networks and a key factor in their success in processing graph data. The message passing mechanism of each graph node in the graph structure can be represented by the following formula:

$$m_{\mathcal{N}(v)}^{(l)} = \mathrm{AGG}^{(l)}\left(\left\{\, m_u^{(l)} = M^{(l)}\big(h_u^{(l-1)}\big) \;\middle|\; u \in \mathcal{N}(v) \,\right\}\right)$$

where $m_u^{(l)}$ is the message of the $u$-th graph node in the $l$-th layer of message passing, $M^{(l)}$ is the function that completes the $l$-th layer of message passing (if the message passing process is linear, $M^{(l)}$ may be described by a weight matrix), $m_{\mathcal{N}(v)}^{(l)}$ is the aggregated message data of the $l$-th layer, and $\mathcal{N}(v)$ is the set of all graph nodes except graph node $v$ itself. The data aggregation process in the graph structure can be expressed as:

$$h_v^{(l)} = \sigma\left(\mathrm{AGG}^{(l)}\left(\left\{\, h_u^{(l-1)} \;\middle|\; u \in \mathcal{N}(v) \,\right\}\right)\right)$$

where $h_v^{(l)}$ is the information obtained by aggregating all neighbor nodes in the $l$-th layer, $\sigma$ is an activation function that increases the nonlinear capability of the GNN, $\mathrm{AGG}^{(l)}$ is the aggregation function of the $l$-th layer, and the aggregation in the $l$-th layer is taken over all node information except the information of graph node $v$ itself.
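A small sketch of one neighbor-aggregation layer consistent with the formulas above; a mean aggregator, a linear message function and a ReLU activation are assumed here, since the patent leaves $\mathrm{AGG}$, $M$ and $\sigma$ unspecified.

```python
# Sketch of one message-passing layer: linear messages, mean aggregation over
# neighbours, ReLU non-linearity. Aggregator and activation are assumptions.
import torch

def gnn_layer(h: torch.Tensor, adj: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    """h: (n, d) node features, adj: (n, n) 0/1 adjacency, weight: (d, d_out)."""
    messages = h @ weight                                  # linear message function M(l)
    deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)      # avoid division by zero
    aggregated = (adj @ messages) / deg                    # mean over the neighbours N(v)
    return torch.relu(aggregated)                          # sigma adds non-linearity
```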
Generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure, wherein the hypergraph structure specifically comprises:
And generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure by using a K nearest neighbor method.
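A hedged sketch of the K-nearest-neighbour hyperedge construction: one hyperedge is built per node, grouping the node with its K nearest neighbours in feature space, and the result is stored as an incidence matrix. The value of K and the use of Euclidean distance are assumptions not fixed by the patent.

```python
# Illustrative K-nearest-neighbour hypergraph construction (one hyperedge per node).
import numpy as np

def build_knn_hypergraph(features: np.ndarray, k: int = 3) -> np.ndarray:
    """Returns the (n_nodes, n_hyperedges) incidence matrix H of the hypergraph."""
    n = features.shape[0]
    k = min(k, n - 1)
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    H = np.zeros((n, n), dtype=np.float32)                 # one hyperedge per centre node
    for v in range(n):
        neighbours = np.argsort(dists[v])[: k + 1]         # node itself plus k nearest nodes
        H[neighbours, v] = 1.0
    return H
```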
In the hypergraph neural network (HGNN), the hypergraph structure can be represented as $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{W})$, where $\mathcal{V}$ is the set containing $N$ hypergraph nodes, $\mathcal{E}$ is the set of hyperedges in the hypergraph structure, $\mathbf{W}$ is a diagonal matrix whose diagonal elements are the weights of the hyperedges, and $M$ is the number of hyperedges.
S4, inputting a working condition prediction result of a classifier group corresponding to the current iteration round, a graph structure and a hypergraph structure corresponding to the sample multi-view image set into a mixed structure model to obtain mixed graph features corresponding to the current iteration round, wherein the method specifically comprises the following steps of:
Obtaining a first input characteristic according to a working condition prediction result and a first weight of a classifier group corresponding to the current iteration round;
obtaining a second input characteristic according to the working condition prediction result and the second weight of the classifier group corresponding to the current iteration round;
Inputting the first input characteristic and a graph structure corresponding to the sample multi-view image set into a graph neural network to obtain a graph characteristic;
inputting the second input characteristic and the hypergraph structure corresponding to the sample multi-view image set into a hypergraph neural network to obtain hypergraph characteristics;
and fusing the graph features and the hypergraph features to obtain the mixed graph features corresponding to the current iteration turns.
The method comprises the steps of obtaining a first input characteristic according to a working condition prediction result and a first weight of a classifier group corresponding to a current iteration round, wherein the first input characteristic comprises the following specific steps: and multiplying the working condition prediction result of the classifier group corresponding to the current iteration round by the first weight to obtain a first input characteristic.
Note that the second weight = 1 - the first weight, and the first weight is set according to user experience.
Specifically, after the working condition prediction result of the classifier group of each iteration round is obtained, part of it is input into the graph neural network and part into the hypergraph neural network. In this embodiment, the working condition prediction result of the classifier group of each iteration round is multiplied by the corresponding weight, and the input features obtained by this weighting are then input into the graph neural network and the hypergraph neural network respectively.
In this embodiment, the graph-structure data extracts data features through a graph neural network model (GNN), yielding low-order information. The hypergraph structure extracts features through a hypergraph neural network model (HGNN or HGNN+), yielding high-order information. The low-order and high-order information are mixed by the following information fusion formula:

$$M_{HSM} = \lambda\, M_{GNN} + (1-\lambda)\, M_{HGNN}$$

where $M_{HSM}$ represents the fused information of the mixed structure model, $M_{GNN}$ and $M_{HGNN}$ represent the graph neural network convolution and the hypergraph neural network convolution respectively, and $\lambda$ is the mixing coefficient, obtained by heuristic search.
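The sketch below illustrates one way the weighting of the prediction and the fusion step above could be realized. `gnn_conv` and `hgnn_conv` are placeholder callables for the graph and hypergraph convolutions, and the convex-combination form of the fusion (as reconstructed above) as well as the default weight values are assumptions.

```python
# Sketch of the hybrid-structure forward pass: weight the previous prediction,
# run a graph branch and a hypergraph branch, mix the two outputs with lambda.
import torch

def hybrid_structure_forward(pred, adj, incidence, gnn_conv, hgnn_conv,
                             first_weight: float = 0.5, lam: float = 0.5):
    x_graph = first_weight * pred                 # first input feature (graph branch)
    x_hyper = (1.0 - first_weight) * pred         # second input feature (hypergraph branch)
    m_gnn = gnn_conv(x_graph, adj)                # low-order information
    m_hgnn = hgnn_conv(x_hyper, incidence)        # high-order information
    return lam * m_gnn + (1.0 - lam) * m_hgnn     # mixed graph features M_HSM
```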
As shown in fig. 4 and table 1, the training process of the gradient boosting decision tree is as follows:
Table 1 gradient boosting decision tree training process (i.e., training with GBDT-HSM joint training algorithm):
In Table 1, $f_i$ denotes the prediction model of the $i$-th iteration round, i.e., the model consisting of $i$ classifiers.
In the iterative process of the gradient lifting decision tree algorithm, the gradient lifting decision tree model is gradually updated by accumulating classifiers. The GBDT updating equation is as follows:

$$f_i(x) = f_{i-1}(x) + \eta\, h_i(x)$$

where $f_i$ is the output model after the $i$-th iteration round, namely the classifier group corresponding to the $i$-th iteration round; $f_{i-1}$ is the model formed in the $(i-1)$-th iteration, i.e., the classifier group corresponding to the $(i-1)$-th iteration round; $\eta$ is the learning rate; and $h_i$ is the weak learner (classifier) of the $i$-th iteration round. The weak learner in GBDT is primarily determined by the underlying decision tree and is constructed by dividing the feature space into disjoint decision-tree sub-regions.
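A minimal sketch of the additive update $f_i = f_{i-1} + \eta\, h_i$, assuming scikit-learn's DecisionTreeRegressor as the weak learner; the tree depth and learning rate are illustrative choices, not values taken from the patent.

```python
# Sketch of one boosting round: fit a new weak learner to the current target and
# append it to the ensemble; the ensemble prediction is the weighted sum of learners.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_one_round(ensemble, X: np.ndarray, target: np.ndarray, eta: float = 0.1):
    """ensemble: list of (eta, learner) pairs; target: current labels or residuals."""
    h = DecisionTreeRegressor(max_depth=3).fit(X, target)   # weak learner h_i
    ensemble.append((eta, h))
    return ensemble

def ensemble_predict(ensemble, X: np.ndarray) -> np.ndarray:
    return sum(eta * h.predict(X) for eta, h in ensemble)   # f_i(x) = sum of eta * h(x)
```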
In Table 1, in the first iteration round the classifier group contains only one classifier, whose input is $X$, i.e., the sample initial features corresponding to all the sample view images in the sample multi-view image set; in this embodiment the input takes matrix form. In the first iteration round, the label of the classifier group is $Y$, the sample real working condition of the sample multi-view image set. The working condition prediction result of the classifier group corresponding to the first iteration round is obtained through the first iteration.
After the first iteration is completed, classifier 1, denoted $h_1$ in Fig. 4, is formed by minimizing the loss function $L$. The working condition prediction result obtained by classifier 1 is denoted $\hat{X}_1$ in Fig. 4. The node features are used for training in the hybrid structure model, and the loss of the mixed structure model corresponding to the 2nd iteration round is calculated from them.
From the working condition prediction result $\hat{X}_1$ of classifier 1 and the mixed graph features $X_1'$ output by the hybrid structure model, the residual corresponding to the second iteration round is calculated as $R_2 = X_1' - \hat{X}_1$.
Then, according to the GBDT updating equation, the classifier group corresponding to the second iteration round is obtained, which comprises classifier 1 ($h_1$) and classifier 2 ($h_2$) in Fig. 4. The difference between the prediction and the optimized node (mixed graph feature) $X_1'$, i.e., the residual $R_2$ corresponding to the second iteration round, is used as the objective function of the next lifting decision tree (i.e., the classifier group corresponding to the next iteration round), and the working condition prediction result $\hat{X}_2$ of the classifier group corresponding to the second iteration round is obtained. The working condition prediction result $\hat{X}_2$ corresponding to the second iteration round is then input, together with the graph structure and the hypergraph structure, into the mixed structure model to obtain the new mixed graph features $X_2'$. Repeating this process yields a gradient lifting decision tree with n branches, i.e., a gradient lifting decision tree with n classifiers.
In the second iteration of the algorithm, a new classifier $h_2$ is trained. It has a new target label $R_2$, and its input is the original input $X$. $R_2$ changes based on the HSM output of the first predictive classifier $h_1$. After the second iteration is completed, a new prediction model (i.e., classifier group) $f_2$ is formed, and the obtained prediction is re-mapped onto the hybrid structure model. The HSM traverses r iteration steps in reverse and passes the new difference to the next optimization iteration as the optimization target. The HSM model is trained for N periods, and a trained gradient lifting decision tree with N classifiers is finally output, which can effectively complete the classification of multi-class data.
The training process of the current iteration round i in Fig. 4 is described below and may specifically include the following steps (a minimal code sketch of this loop is given after the steps):
step 1: obtaining a graph structure: and generating graph structure data according to sample initial features corresponding to all the sample view images in the sample multi-view image set by using a K nearest neighbor method.
Step 2: obtaining a hypergraph structure: and generating a hypergraph structure according to the graph structure by using a K neighbor method.
Step 3: inputting the working condition prediction result of the classifier group corresponding to the current iteration round i, the graph structure and the hypergraph structure into the mixed structure model to obtain the mixed graph features $X_i'$ corresponding to the current iteration round i, and then calculating the residual $R_i$ corresponding to the current iteration round i from the mixed graph features $X_i'$ and the working condition prediction result $\hat{X}_i$ of the classifier group corresponding to the current iteration round i: $R_i = X_i' - \hat{X}_i$.
Step 4: taking the residual $R_i$ corresponding to the current iteration round i as the label of the classifier group corresponding to the next iteration round i+1, and training the classifier group corresponding to the next iteration round i+1.
Step 5: repeating the steps to obtain the trained gradient lifting decision tree.
The node features (namely, the working condition prediction result $\hat{X}_i$ of the classifier group corresponding to the i-th iteration round) are used for training in the hybrid structure model, and the training phase employs gradient descent to minimize the loss function $L_{HSM}$. During execution of the algorithm, the model parameters $\theta$ in the HSM and the predicted working condition output in the nodes are optimized simultaneously. The output expression of the hybrid structure model is as follows:

$$X_i' = \hat{X}_i - \eta\, \frac{\partial L_{HSM}\big(g_{HSM}(\hat{X}_i, G, \mathcal{G};\, \theta)\big)}{\partial \hat{X}_i}$$

where $X_i'$ is the output of the mixed structure model corresponding to the $i$-th iteration round, namely the mixed graph features; $\hat{X}_i$ is the working condition prediction result of the classifier group of the $i$-th iteration round; $\eta$ is the learning rate; $g_{HSM}$ is the calculation process of the hybrid structure model; $L_{HSM}$ is the loss function of the hybrid structure model; $G$ represents the graph structure; $\mathcal{G}$ represents the hypergraph structure; and $\theta$ denotes the calculation weights of the hybrid structure model.
Based on the above formula, the residual corresponding to the $i$-th iteration round is $R_i = X_i' - \hat{X}_i$, and this residual is used as the objective function, i.e., the training label of the classifier group of the next iteration round i+1.
In this embodiment, the loss function $L_{HSM}$ of the hybrid structure model is the cross-entropy loss function.
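A minimal sketch of the node-feature update reconstructed above, assuming the HSM forward pass is available as a callable `hsm_forward` and that the cross-entropy loss of this embodiment is used; the learning rate value is illustrative.

```python
# Sketch: differentiate the HSM loss with respect to the predicted features and take
# one gradient step to obtain the mixed graph features X' = X_hat - eta * dL/dX_hat.
import torch
import torch.nn.functional as F

def hsm_feature_step(pred: torch.Tensor, true_labels: torch.Tensor,
                     hsm_forward, eta: float = 0.1) -> torch.Tensor:
    x = pred.clone().requires_grad_(True)
    logits = hsm_forward(x)                        # g_HSM(X_hat, G, hypergraph; theta)
    loss = F.cross_entropy(logits, true_labels)    # L_HSM, cross-entropy in this embodiment
    loss.backward()
    return (x - eta * x.grad).detach()             # mixed graph features X'
```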
To validate the heterogeneous data classification model described above, this embodiment uses six publicly available heterogeneous data sets and compares the model with other models.
This embodiment comprehensively compares the trained gradient lifting decision tree with other related methods to evaluate its performance and effectiveness. Relevant parameters for algorithm execution are selected and set to ensure a fair and strict assessment, including adjusting the hyperparameters, selecting an appropriate feature representation method, and selecting appropriate evaluation indices. Through this thorough analysis, the advantages and disadvantages of GBDT-HSM, as well as its competitive advantages compared with alternative methods in various experimental scenarios, are well understood.
Nine representative algorithms were chosen for comparison with the trained gradient lifting decision tree of this embodiment, including the decision tree algorithm CatBoost under the GBDT framework, GNN-based methods (GAT, GCN, AGNN, APPNP), hypergraph-based methods (HGNN, HGNN+) and mixed-model graph convolution methods (Res-GNN and BGNN).
The experimental environment included all comparison algorithms and a hardware setup consisting of two E5-2690 v4 CPUs, two NVIDIA TESLA P GPUs and one NVIDIA TESLA P GPU. These algorithms were run on the heterogeneous and multi-view datasets. The learning rates employed in this example were 0.01, 0.001 and 0.0003, the dropout rates were 0 and 0.5, the Adam optimizer was used to minimize the cross-entropy loss, and the weight decay was set to 0.004. To mitigate the risk of over-fitting and maximize performance, an early-stopping strategy was employed: after each epoch of the training phase, the performance of the model is evaluated on the validation set, and if the model shows no performance improvement for 100 consecutive epochs, training is stopped and the model at that stage is saved as the final model.
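A minimal sketch of the early-stopping rule described above: training stops once the validation score has not improved for 100 consecutive epochs and the best model seen so far is kept. The function names and the maximum epoch count are assumptions.

```python
# Illustrative early-stopping loop with a patience of 100 epochs.
def train_with_early_stopping(train_one_epoch, evaluate, max_epochs: int = 1000,
                              patience: int = 100):
    best_score, best_state, stale = float("-inf"), None, 0
    for epoch in range(max_epochs):
        state = train_one_epoch()
        score = evaluate(state)                  # validation-set performance after the epoch
        if score > best_score:
            best_score, best_state, stale = score, state, 0
        else:
            stale += 1
            if stale >= patience:                # no improvement in `patience` epochs
                break
    return best_state                            # saved as the final model
```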
Four heterogeneous data sets with different attributes are used for node classification, and four types of heterogeneous data set information are shown in table 2.
Table 2 four types of heterogeneous data sets:
Since publicly accessible and trusted heterogeneous data sets are not readily available, this embodiment employs vk_class and house_class data sets from regression tasks. In addition, two sparse node classification datasets were employed: SLAP and DBLP. These data sets come from Heterogeneous Information Networks (HIN) containing different types of nodes. These nodes inherently conform to the key attributes of the heterogeneous data set.
As shown in table 3, the trained gradient boost decision tree trained by GBDT-HSM method provided in this embodiment achieves excellent classification performance on all heterogeneous datasets.
Table 3 optimal classification results for heterogeneous dataset with F1-m:
The present embodiment verifies the classification accuracy of various algorithms in terms of both accuracy (acc) and F1 values (F1-m). Such comprehensive analysis allows a full understanding of their performance in classification tasks and a comparison of their advantages and disadvantages.
Taking the House_class dataset as an example, the GBDT-HSM algorithm is superior to the other algorithms in terms of accuracy: compared with CatBoost, the GNN-based algorithms (GAT, GCN, AGNN, APPNP), the hypergraph neural network-based methods (HGNN, HGNN+) and the hybrid methods (Res-GNN, BGNN), it improves by 16.56%, 5.89%, 5.61% and 1.38%, respectively. To further analyze the convergence of the GBDT-HSM algorithm during iteration, it was compared with the GCN, HGNN, HGNN+ and BGNN algorithms. The iterative curve data of these algorithms on the House_class dataset were collected, and the loss function of each algorithm after each iteration was recorded. To effectively illustrate the iterative convergence process of the various algorithms, Fig. 5 uses a nonlinear scale on the coordinate axes, and the curve within the dashed box is shown enlarged where the arrow indicates. It can be observed that GBDT-HSM and BGNN converge faster in the initial stage than GCN, HGNN and HGNN+. However, during convergence, the GBDT-HSM curve is smoother than that of BGNN. Overall, the GBDT-HSM algorithm shows faster convergence while guaranteeing stability, and has certain advantages over the other algorithms.
Among the remaining three data sets, SLAP and DBLP with sparse features present a greater challenge to the GNN-based algorithms (GAT, GCN, AGNN, APPNP). This can be observed on the SLAP dataset, where the CatBoost algorithm outperforms the GNN-based algorithms (GAT, GCN, AGNN, APPNP) in terms of accuracy. However, the GBDT-HSM algorithm proposed in this embodiment achieves the best results in terms of accuracy and F1 value on both data sets, further proving the effectiveness of the joint architecture algorithm.
As a special type of heterogeneous dataset, multi-view datasets pose certain challenges in heterogeneous feature fusion and data dimensionality. In this experiment, the task was to classify visual objects, mainly using the ModelNet dataset of Princeton University and the NTU dataset; the dataset information is shown in Table 4.
Table 4 multiview dataset:
ModelNet40 is a CAD model dataset containing 12311 objects in 40 categories, including aircraft, automobiles, tables, chairs, and so on. During training the dataset was split into two parts, with 7387 objects for training and 2462 objects for testing. The NTU dataset contains 2012 3D objects in 67 categories, including pens, cups, chess pieces, and so on. In the NTU dataset, 80% of the data was used for training and 20% for testing. The features were extracted with the Group-View Convolutional Neural Network (GVCNN), the Multi-View Convolutional Neural Network (MVCNN), and their fusion network (MV+GVCNN). The main reason for choosing these methods is that they show superior performance in 3D object representation. This embodiment follows the experimental setup of MVCNN, GVCNN and MV+GVCNN to generate multi-view data of the 3D objects, thereby effectively distinguishing the performance of the various classification algorithms. The comparison results on the multi-view datasets are shown in Table 5.
Table 5 optimal classification results for multi-view data with F1-m:
The units in the above table are percentages (%). The GBDT-HSM method achieved excellent classification performance on both the NTU and ModelNet datasets. Taking the MV+GVCNN features on the ModelNet dataset as an example, GBDT-HSM performed better than CatBoost, BGNN, GCN, GAT, AGNN and HGNN by 6.57%, 2.67%, 2.76%, 2.48%, 1.50% and 0.25%, respectively. This excellent performance is mainly due to the integrated structure of the HSM and GBDT: the HSM is able to efficiently handle the complex relationships between graph structures and hypergraph structures. Furthermore, Fig. 6 shows that GBDT-HSM exhibits a smooth and fast loss-function convergence process, where the curve within the dashed box is shown enlarged where the arrow indicates. From the results shown in Fig. 6, the following conclusion can be drawn: on the ModelNet dataset, the gradient lifting decision tree trained by the GBDT-HSM method provided in this embodiment is superior to the other existing classification methods in terms of both accuracy (acc) and F1 score (F1-m).
Through the training and verification processes, a trained gradient lifting decision tree can be obtained, wherein the trained gradient lifting decision tree comprises a plurality of classifiers.
S6, diagnosing the working condition of the electric smelting magnesium furnace by adopting the trained gradient lifting decision tree to obtain a target working condition diagnosis result, wherein the method specifically comprises the following steps of:
For each classifier, diagnosing the working condition of the electric smelting magnesium furnace by adopting the classifier to obtain the target prediction probability corresponding to the classifier;
determining a target working condition diagnosis result according to target prediction probabilities corresponding to all the classifiers specifically comprises: adding the target prediction probabilities corresponding to all the classifiers to obtain a final target prediction probability; and determining a target working condition diagnosis result according to the final target prediction probability and a set probability threshold.
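A hedged sketch of this diagnosis procedure: each classifier of the trained gradient lifting decision tree outputs a target prediction probability, the probabilities are summed, and the sum is compared with the set probability threshold to decide between the optimal and the non-optimal working condition. Averaging each classifier's output over the view nodes into a single score, and the default threshold value, are simplifying assumptions.

```python
# Sketch of S6: sum the per-classifier prediction probabilities and apply a threshold.
import numpy as np

def diagnose(ensemble, features: np.ndarray, prob_threshold: float = 0.5) -> str:
    """ensemble: list of (eta, classifier) pairs from the training sketch above."""
    per_classifier = [float(np.mean(h.predict(features)))       # target prediction probability
                      for _, h in ensemble]                      # of each classifier (averaged over views)
    final_prob = sum(per_classifier)                             # final target prediction probability
    return ("optimal working condition" if final_prob > prob_threshold
            else "non-optimal working condition")
```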
The invention has the following beneficial effects:
(1) The invention provides an innovative architecture that combines the gradient lifting decision tree (GBDT) with the Hybrid Structure Model (HSM) to further improve classification accuracy and efficiency. This joint architecture leverages GBDT's advantages in extracting heterogeneous features and the powerful capability of the HSM in processing graph structure information. The HSM model has a high degree of expressive power, is able to capture complex relationships and constraints, and describes interactions between data more accurately; in contrast, GBDT focuses on capturing the linear relationships between features. Combining the two models therefore maximizes their respective advantages.
(2) The model adopts a new collaborative training strategy, and uses a new mixed graph neural network convolution model to realize gradient update of a lifting decision tree, so that the performance of the model is further improved.
(3) GBDT-HSM achieved optimal results in all 6 heterogeneous datasets throughout. Compared with the method based on the lifting decision tree and the graph neural network, the combined architecture shows excellent performance in classification tasks, and the potential of the method in solving heterogeneous data classification is fully demonstrated.
(4) The HSM model exhibits enhanced generalization capability by effectively processing nonlinear correlations between data, enabling it to accommodate different data distributions and complexities. Furthermore, the lifting concept used in GBDT can significantly reduce the likelihood of overfitting.
(5) Integrating the HSM model and GBDT can improve the prediction accuracy of the model in a complex scene. The HSM model provides more accurate feature weights and combinations, while GBDT enables more accurate predictions of these features.
The present invention proposes an innovative architecture that combines a gradient lifting decision tree (GBDT) with a Hybrid Structure Model (HSM). The joint architecture takes full advantage of the GBDT in extracting heterogeneous features and of the HSM in processing graph structure information. By constructing a mixed graph neural network convolution layer with the Hybrid Structure Model (HSM), the gradient updating efficiency of the decision tree (GBDT) is improved and end-to-end optimization is realized. In the joint architecture, GBDT is mainly used for extracting heterogeneous features, the Hybrid Structure Model (HSM) is responsible for extracting graph information, and the collaborative training strategy further improves the algorithm accuracy. Experimental results show that the GBDT-HSM algorithm provided by the invention achieves the best results in the classification of the six heterogeneous data sets. Compared with related methods based on the lifting decision tree and the graph neural network, the joint architecture shows excellent performance, fully demonstrating its potential in solving heterogeneous data classification.
The principles and embodiments of the present invention have been described herein with reference to specific examples, the description of which is intended only to assist in understanding the methods of the present invention and the core ideas thereof; also, it is within the scope of the present invention to be modified by those of ordinary skill in the art in light of the present teachings. In view of the foregoing, this description should not be construed as limiting the invention.

Claims (8)

1. The utility model provides a method for diagnosing the working condition of an electric smelting magnesium furnace based on a mixed structure model, which is characterized by comprising the following steps:
Acquiring a heterogeneous data set; the heterogeneous data sets comprise a plurality of sample multi-view image sets and sample real working conditions corresponding to each multi-view image set; each sample multi-view image set comprises a plurality of different sample view images acquired at the same moment; the sample visual angle image is an in-furnace image or a furnace wall image in the working process of the electric smelting magnesium furnace; the sample real working conditions comprise optimal working conditions and non-optimal working conditions;
For each sample multi-view image set, extracting features of each sample view image in the sample multi-view image set to obtain sample initial features corresponding to each sample view image in the sample multi-view image set;
For each sample multi-view image set, generating a graph structure corresponding to the sample multi-view image set according to sample initial characteristics corresponding to all the sample view images in the sample multi-view image set, and generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure; the graph nodes in the graph structure correspond to sample initial features corresponding to all the sample view images in the sample multi-view image set one by one; the edges in the graph structure represent that the distance between sample initial features corresponding to two connected graph nodes is smaller than a first set threshold value; the hypergraph nodes in the hypergraph structure are in one-to-one correspondence with sample initial features corresponding to all the sample view images in the sample multi-view image set; the superside in the supergraph structure indicates that the distance between the initial characteristics of the samples corresponding to a plurality of connected supergraph nodes is smaller than a second set threshold value;
Inputting a working condition prediction result of a classifier group corresponding to the current iteration round, a graph structure and a hypergraph structure corresponding to the sample multi-view image set into a mixed structure model to obtain mixed graph characteristics corresponding to the current iteration round; when the current iteration round is 1, the working condition prediction result of the classifier group corresponding to the current iteration round is the sample initial characteristics corresponding to all the sample visual angle images in the sample multi-visual angle image set; the label of the classifier group corresponding to the current iteration round is the sample real working condition of the sample multi-view image set; the hybrid structure model consists of a graph neural network and a hypergraph neural network; the classifier group corresponding to the current iteration round comprises at least one classifier;
Calculating a residual corresponding to the current iteration round according to the mixed graph characteristics corresponding to the current iteration round and the working condition prediction result of the classifier group corresponding to the current iteration round; judging whether an iteration stop condition is met; if so, stopping the iteration and taking the classifier group corresponding to the current iteration round as the trained gradient lifting decision tree; if not, taking the residual corresponding to the current iteration round as the label of the next iteration round, replacing the classifier group corresponding to the current iteration round with the classifier group corresponding to the next iteration round, and returning to the step of inputting the working condition prediction result of the classifier group corresponding to the current iteration round and the graph structure and hypergraph structure corresponding to the sample multi-view image set into the mixed structure model, until the iteration stop condition is reached, so as to obtain the trained gradient lifting decision tree; the trained gradient lifting decision tree is the classifier group of the iteration round at which the iteration stop condition is reached; the classifier group of the next iteration round has one more classifier than the classifier group corresponding to the current iteration round;
Diagnosing the working condition of the electric smelting magnesium furnace by adopting the trained gradient lifting decision tree to obtain a target working condition diagnosis result; the target working condition diagnosis result is an optimal working condition or a non-optimal working condition.
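As a non-limiting illustration of the graph and hypergraph construction recited in claim 1, the Python sketch below builds one node per sample view image, connects two nodes with an edge when the distance between their initial features is below a first threshold, and groups nodes into a hyperedge when they lie within a second threshold of a common centre node. The Euclidean distance and the centre-node grouping are assumptions made for illustration only.

import numpy as np

def build_graph(feats, tau1):
    # feats: (n_views, d) sample initial features; returns a 0/1 adjacency matrix.
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    adj = (dist < tau1).astype(float)
    np.fill_diagonal(adj, 0.0)            # no self-loop edges
    return adj

def build_hypergraph(feats, tau2):
    # One hyperedge per centre node, joining every node closer than tau2 to it.
    # Returns the (n_nodes, n_hyperedges) incidence matrix.
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    return (dist < tau2).astype(float)    # column j: hyperedge centred on node j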
2. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 1, wherein the trained gradient lifting decision tree comprises a plurality of classifiers;
diagnosing the working condition of the electric smelting magnesium furnace by adopting the trained gradient lifting decision tree to obtain a target working condition diagnosis result, and specifically comprises the following steps:
For each classifier, diagnosing the working condition of the electric smelting magnesium furnace by adopting the classifier to obtain the target prediction probability corresponding to the classifier;
And determining a target working condition diagnosis result according to the target prediction probabilities corresponding to all the classifiers.
3. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 2, wherein determining the target working condition diagnosis result according to the target prediction probabilities corresponding to all the classifiers specifically comprises:
adding the target prediction probabilities corresponding to all the classifiers to obtain a final target prediction probability;
and determining a target working condition diagnosis result according to the final target prediction probability and a set probability threshold.
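A minimal sketch of the decision rule in claims 2 and 3, assuming the per-classifier outputs are the additive scores of the boosted trees: the scores are summed, squashed to a probability (the sigmoid is an assumption, not stated in the claims), and compared with the set probability threshold.

import numpy as np

def diagnose(trees, feats, lr=0.1, prob_threshold=0.5):
    # Sum the target prediction scores of all classifiers in the trained ensemble.
    score = sum(lr * tree.predict(feats) for tree in trees)
    prob = 1.0 / (1.0 + np.exp(-score))               # final target prediction probability
    return np.where(prob >= prob_threshold, "optimal", "non-optimal")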
4. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 1, wherein the working condition prediction result of the classifier group corresponding to the current iteration round, the graph structure and the hypergraph structure corresponding to the sample multi-view image set are input into the mixed structure model to obtain the mixed graph characteristic corresponding to the current iteration round, and the method specifically comprises the following steps:
Obtaining a first input characteristic according to a working condition prediction result and a first weight of a classifier group corresponding to the current iteration round;
obtaining a second input characteristic according to the working condition prediction result and the second weight of the classifier group corresponding to the current iteration round;
Inputting the first input characteristic and a graph structure corresponding to the sample multi-view image set into a graph neural network to obtain a graph characteristic;
inputting the second input characteristic and the hypergraph structure corresponding to the sample multi-view image set into a hypergraph neural network to obtain hypergraph characteristics;
and fusing the graph features and the hypergraph features to obtain the mixed graph features corresponding to the current iteration turns.
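The following sketch illustrates one way to realise claims 4 and 5: the working condition prediction is scaled by a first and a second weight, the first branch is propagated over the graph structure and the second over the hypergraph structure, and the two outputs are fused. The normalised-averaging propagation rules and the additive fusion are simplifying assumptions; the claims only fix the overall dataflow.

import numpy as np

def hybrid_structure_forward(pred, adj, inc, w1=1.0, w2=1.0):
    x1 = w1 * pred                                    # first input feature (claim 5)
    x2 = w2 * pred                                    # second input feature
    deg = adj.sum(axis=1) + 1e-9
    graph_feat = (adj @ x1) / deg                     # graph neural network branch
    edge_deg = inc.sum(axis=0) + 1e-9
    node_deg = inc.sum(axis=1) + 1e-9
    hyper_feat = (inc @ ((inc.T @ x2) / edge_deg)) / node_deg   # hypergraph branch
    return 0.5 * (graph_feat + hyper_feat)            # fused mixed graph feature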
5. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 4, wherein the first input feature is obtained according to the working condition prediction result and the first weight of the classifier group corresponding to the current iteration round, and the method specifically comprises the following steps:
And multiplying the working condition prediction result of the classifier group corresponding to the current iteration round by the first weight to obtain a first input characteristic.
6. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 1, wherein the generating of the hypergraph structure corresponding to the sample multi-view image set according to the graph structure specifically comprises the following steps:
And generating a hypergraph structure corresponding to the sample multi-view image set according to the graph structure by using a K nearest neighbor method.
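A minimal sketch of the K-nearest-neighbour hypergraph generation in claim 6, assuming each graph node becomes the centre of one hyperedge joining the node to its k nearest neighbours in feature space; the value of k is a free parameter not fixed by the claim.

import numpy as np

def knn_hypergraph(feats, k=3):
    # feats: (n, d) sample initial features; returns an (n, n) incidence matrix
    # whose column j is the hyperedge centred on node j.
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    inc = np.zeros_like(dist)
    for j in range(len(feats)):
        nearest = np.argsort(dist[j])[: k + 1]        # node j plus its k nearest neighbours
        inc[nearest, j] = 1.0
    return inc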
7. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 1, wherein extracting features of each sample view image in the sample multi-view image set to obtain the sample initial features corresponding to each sample view image in the sample multi-view image set specifically comprises:
And extracting features of each sample view image in the sample multi-view image set by adopting a feature extraction model to obtain the sample initial features corresponding to each sample view image in the sample multi-view image set.
8. The method for diagnosing the working condition of the electric smelting magnesium furnace based on the mixed structure model according to claim 7, wherein the feature extraction model is a CNN model or an RNN model.
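As an illustration of the feature extraction model in claims 7 and 8, the sketch below uses a pretrained ResNet-18 from torchvision as the CNN; the choice of backbone, the use of ImageNet weights, and the 512-dimensional pooled output are assumptions, since the claims only require a CNN or RNN model.

import torch
import torchvision

weights = torchvision.models.ResNet18_Weights.DEFAULT
backbone = torchvision.models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()        # keep the 512-d pooled feature vector
backbone.eval()
preprocess = weights.transforms()        # resize, crop and normalise the view images

def extract_initial_features(view_images):
    # view_images: list of PIL images (in-furnace or furnace wall views of one sample).
    batch = torch.stack([preprocess(img) for img in view_images])
    with torch.no_grad():
        return backbone(batch)           # one sample initial feature per view image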
CN202410049997.3A 2024-01-15 2024-01-15 Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model Active CN117708715B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410049997.3A CN117708715B (en) 2024-01-15 2024-01-15 Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410049997.3A CN117708715B (en) 2024-01-15 2024-01-15 Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model

Publications (2)

Publication Number Publication Date
CN117708715A CN117708715A (en) 2024-03-15
CN117708715B true CN117708715B (en) 2024-06-21

Family

ID=90153519

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410049997.3A Active CN117708715B (en) 2024-01-15 2024-01-15 Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model

Country Status (1)

Country Link
CN (1) CN117708715B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111737474B (en) * 2020-07-17 2021-01-12 支付宝(杭州)信息技术有限公司 Method and device for training business model and determining text classification category
CN114996488B (en) * 2022-08-08 2022-10-25 北京道达天际科技股份有限公司 Skynet big data decision-level fusion method
CN115905940A (en) * 2022-10-27 2023-04-04 徐州徐工矿业机械有限公司 Intelligent fault diagnosis method for engineering machinery based on self-learning graph convolution network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BOOST THEN CONVOLVE: GRADIENT BOOSTING MEETS GRAPH NEURAL NETWORKS; Sergei Ivanov et al.; arXiv; 2021-03-31; pp. 1-16 *
Social network node classification method based on graph encoding networks; Hao Zhifeng et al.; Journal of Computer Applications; 2020-12-31 (No. 1); pp. 194-201 *

Also Published As

Publication number Publication date
CN117708715A (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US20220215227A1 (en) Neural Architecture Search Method, Image Processing Method And Apparatus, And Storage Medium
CN111523047A (en) Multi-relation collaborative filtering algorithm based on graph neural network
CN104899921B (en) Single-view videos human body attitude restoration methods based on multi-modal own coding model
CN108287904A (en) A kind of document context perception recommendation method decomposed based on socialization convolution matrix
Khoo et al. A prototype genetic algorithm-enhanced rough set-based rule induction system
Cheng et al. DDU-Net: A dual dense U-structure network for medical image segmentation
Tembusai et al. K-nearest neighbor with k-fold cross validation and analytic hierarchy process on data classification
Liu et al. Tool path planning of consecutive free-form sheet metal stamping with deep learning
CN115051929B (en) Network fault prediction method and device based on self-supervision target perception neural network
Easom-McCaldin et al. Efficient quantum image classification using single qubit encoding
CN111259264B (en) Time sequence scoring prediction method based on generation countermeasure network
CN116844041A (en) Cultivated land extraction method based on bidirectional convolution time self-attention mechanism
CN116510124A (en) Infusion monitoring system and method thereof
Shariff et al. Artificial (or) fake human face generator using generative adversarial network (GAN) machine learning model
Liu et al. Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer
CN117708715B (en) Electric smelting magnesium furnace working condition diagnosis method based on mixed structure model
Song A study on explainable artificial intelligence-based sentimental analysis system model
Xue [Retracted] Research on Information Visualization Graphic Design Teaching Based on DBN Algorithm
CN113780146B (en) Hyperspectral image classification method and system based on lightweight neural architecture search
Zhu et al. Rethinking the number of channels for the convolutional neural network
Zhu et al. Fast Adaptive Character Animation Synthesis Based on Greedy Algorithm
Altundogan et al. Genetic algorithm based fuzzy cognitive map concept relationship determination and sigmoid configuration
CN111046740B (en) Classification method for human action video based on full tensor cyclic neural network
Zhong et al. Action-driven reinforcement learning for improving localization of brace sleeve in railway catenary
Jeong et al. Filter combination learning for CNN model compression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant