CN109255029A - Method for enhancing automatic bug report assignment using a weighted, optimized training set - Google Patents

Method for enhancing automatic bug report assignment using a weighted, optimized training set

Info

Publication number
CN109255029A
CN109255029A
Authority
CN
China
Prior art keywords
training set
bug report
bug
algorithm
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811033587.0A
Other languages
Chinese (zh)
Inventor
魏苗苗 (Wei Miaomiao)
陈荣 (Chen Rong)
李辉 (Li Hui)
郭世凯 (Guo Shikai)
唐文君 (Tang Wenjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN201811033587.0A priority Critical patent/CN109255029A/en
Publication of CN109255029A publication Critical patent/CN109255029A/en
Legal status: Pending

Landscapes

  • Stored Programmes (AREA)

Abstract

The invention discloses a method for enhancing automatic bug report assignment using a weighted, optimized training set. The method weights the bug report data set to raise the frequency of the information in the short description, and combines a feature selection algorithm with an instance selection algorithm to reduce noise words and redundant instances at the same time, yielding a smaller, higher-quality training set. This improves the accuracy of bug classification, saves the time and labor costs of bug assignment, and improves working efficiency.

Description

Method for enhancing automatic bug report assignment using a weighted, optimized training set
Technical field
The present invention relates to the field of data processing and classification, and in particular to a method for enhancing automatic bug report assignment using a weighted, optimized training set.
Background technique
At present, several researchers have attempted to solve the bug report classification problem. G. C. Murphy et al. [1] first proposed converting the bug assignment problem into a text classification problem, i.e., applying text classification techniques to bug repositories. Anvik et al. [2] semi-automated bug assignment: they first use text classification to predict several developers, and then present these developers as expert candidates. Jeong et al. [3] proposed the concept of tossing graphs and improved classification accuracy by filtering classification results with a tossing graph. Xuan et al. [4] used a semi-supervised classification method, in which a subset of already labeled bug instances helps to label instances whose labels are unknown, after which all instances are used to train the predictor. Zou et al. [5] were the first to apply data reduction techniques to the classification training set. Most of the above research on bug assignment and its improvement ignores the data set itself. Existing work concentrates mainly on analyzing the raw data and text of bug reports, while the noise contained in the natural-language text is largely ignored. If the natural-language description of a bug report contains much noise, the classification results will not be particularly good no matter how the classification algorithm is optimized.
Summary of the invention
In view of the problems in the prior art, the invention discloses a method for enhancing automatic bug report assignment using a weighted, optimized training set, which comprises the following steps:
S1: Obtain original training data from a bug repository and preprocess it: filter out of the original training set the bug reports handled by inactive developers; for each bug report in the filtered data set, extract the short description and the first long description as the report's description information; tokenize the description information of each bug report and remove stop words; then process the short descriptions and long descriptions of the bug reports into a text matrix S_BR and a text matrix L_BR, respectively.
S2: Weight the preprocessed bug reports: multiply the text matrix S_BR generated from the short descriptions by a weight η and add it to the text matrix L_BR generated from the long descriptions; take the weighted result as the training set text matrix W_BR.
S3: Reduce the training set text matrix W_BR: first apply 4 feature selection algorithms and 4 instance selection algorithms to W_BR to reduce its dimensionality (columns) and its number of rows, respectively; select the best reduction algorithm from the feature selection algorithms and from the instance selection algorithms; then combine the two best reduction algorithms to reduce W_BR and obtain the final training set text;
S4: Train a naive Bayes classifier on the final training set text to obtain a classification model;
S5: Input a new bug report into the classification model for classification, and output the developer to whom the bug report is assigned.
Further, the weighting of the preprocessed bug reports in S2 uses the following formula:
In the above formula, η denotes the weight applied to the text matrix generated from the short descriptions, m denotes the number of bug reports in the training set, and n denotes the number of distinct words in the training set.
By adopting the above technical solution, the method provided by the invention for enhancing automatic bug report assignment using a weighted, optimized training set weights the bug report data set to raise the frequency of the information in the short description, and combines a feature selection algorithm with an instance selection algorithm to reduce noise words and redundant instances at the same time, yielding a smaller, higher-quality training set. This improves the accuracy of bug classification, saves the time and labor costs of bug assignment, and improves working efficiency.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments recorded in the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of the method of the present invention.
Specific embodiment
To make the technical solution and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings in the embodiments:
As shown in Fig. 1, a method for enhancing automatic bug report assignment using a weighted, optimized training set specifically comprises the following steps:
S1: Obtain original training data from a bug repository and preprocess it. First, filter out of the original training set the bug reports handled by inactive developers; an inactive developer is one who has handled only a few bugs, because if a class has too few training instances, the classifier can hardly learn the characteristics of that class. For each bug report in the filtered data set, extract the short description and the first long description as the description information of the bug. Tokenize the description information of each bug report and remove stop words, then process the short descriptions and long descriptions into the text matrices S_BR and L_BR, respectively. Here, tokenization means splitting the description information into individual words, and stop words are conjunctions, prepositions, and other words useless for classification, such as "to", "in", and "on". In the resulting matrices, each row represents a bug report and each column represents a word. If the word of a column occurs in the bug report of a row, the value of that cell is the frequency of the word in that bug report; if the word does not occur in the report, the value is 0.
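As an illustration, the tokenization, stop-word removal, and matrix construction of step S1 can be sketched in Python as follows (the stop-word list and the two bug descriptions are invented placeholders, not data from the patent):

```python
from collections import Counter

# Illustrative stop-word subset; a real list would be much larger
STOP_WORDS = {"to", "in", "on", "the", "a", "is", "when"}

def tokenize(text):
    """Split a description into lowercase tokens and drop stop words."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return [w for w in words if w and w not in STOP_WORDS]

def term_frequency_matrix(descriptions):
    """Rows = bug reports, columns = words, cells = word frequency in the report."""
    token_lists = [tokenize(d) for d in descriptions]
    vocab = sorted({w for tokens in token_lists for w in tokens})
    col = {w: j for j, w in enumerate(vocab)}
    matrix = [[0] * len(vocab) for _ in token_lists]
    for i, tokens in enumerate(token_lists):
        for w, freq in Counter(tokens).items():
            matrix[i][col[w]] = freq
    return matrix, vocab

# Hypothetical short descriptions of two bug reports
S_BR, vocab = term_frequency_matrix(["crash when saving file", "crash on startup"])
```

The same function would be applied to the long descriptions to obtain L_BR.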
S2: Weight the preprocessed bug reports: multiply the text matrix S_BR generated from the short descriptions by a weight η and add it to the text matrix L_BR generated from the long descriptions; take the weighted result as the training set text matrix W_BR.
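The weighting of step S2 amounts to an element-wise weighted sum, assuming S_BR and L_BR were built over the same vocabulary (the matrices and the value of η below are illustrative only; the patent computes η from m and n):

```python
import numpy as np

def weighted_training_matrix(S_BR, L_BR, eta):
    """W_BR = eta * S_BR + L_BR: amplify short-description word frequencies."""
    return eta * np.asarray(S_BR) + np.asarray(L_BR)

# Tiny invented matrices over a shared 3-word vocabulary
S = [[1, 0, 2], [0, 1, 0]]  # short-description frequencies
L = [[2, 1, 0], [1, 3, 1]]  # long-description frequencies
W_BR = weighted_training_matrix(S, L, eta=3)
```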
S3: Reduce the training set text matrix W_BR: first apply 4 feature selection algorithms and 4 instance selection algorithms to W_BR to reduce its dimensionality and its number of rows, respectively; select the best reduction algorithm from each group; then combine the two best reduction algorithms to reduce W_BR and obtain the final training set text. To verify whether the order of combination affects the reduction result, both orders are tested separately (feature reduction before instance reduction, and instance reduction before feature reduction), and the order with the better reduction result is selected as the final reduction scheme.
S4: Train a naive Bayes classifier on the final training set text to obtain a classification model;
S5: When a tester submits a new bug report, the trained classification model produces a developer recommendation list of size K (K = 10 here) for the report; the bug triager then assigns the report to the developer in the recommendation list who is, in their experience, best suited to resolve it.
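Steps S4 and S5 can be sketched with a minimal multinomial naive Bayes model that returns a top-K developer list (a simplified stand-in for the trained classifier; the training rows and developer names are invented):

```python
import math
from collections import defaultdict

class TinyMultinomialNB:
    """Minimal multinomial naive Bayes over term-frequency rows (Laplace smoothing)."""

    def fit(self, X, y):
        self.classes = sorted(set(y))
        n_features = len(X[0])
        counts = {c: [0] * n_features for c in self.classes}
        docs_per_class = defaultdict(int)
        for row, label in zip(X, y):
            docs_per_class[label] += 1
            for j, freq in enumerate(row):
                counts[label][j] += freq
        self.log_prior = {c: math.log(docs_per_class[c] / len(y)) for c in self.classes}
        self.log_like = {
            c: [math.log((f + 1) / (sum(counts[c]) + n_features)) for f in counts[c]]
            for c in self.classes
        }
        return self

    def recommend(self, row, k):
        """Return the k most likely developers for one bug-report row."""
        scores = {
            c: self.log_prior[c] + sum(f * ll for f, ll in zip(row, self.log_like[c]))
            for c in self.classes
        }
        return sorted(scores, key=scores.get, reverse=True)[:k]

# Invented training data: rows are word frequencies, labels are developers
X = [[3, 0, 1], [2, 1, 0], [0, 4, 1], [0, 3, 2]]
y = ["alice", "alice", "bob", "bob"]
top = TinyMultinomialNB().fit(X, y).recommend([1, 3, 0], k=2)
```

In the patent's setting, k would be 10 and the classes would be the developers surviving the inactive-developer filter.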
Further, the weighting of the preprocessed bug reports in S2 uses the following formula:
In the above formula, η denotes the weight applied to the text matrix generated from the short descriptions, m denotes the number of bug reports in the training set, and n denotes the number of distinct words in the training set.
Further, during data reduction, feature selection and instance selection are combined to reduce the data set, so as to obtain a smaller, higher-quality set of bug reports and improve the accuracy of bug report classification. To avoid the bias and contingency that a single reduction algorithm may introduce, four common feature selection algorithms are used: OneR, IG, CHI, and RF (Relief); and four instance selection algorithms: CNN, MCS, ENN, and ICF. The specific methods are as follows:
(1) Feature selection algorithms:
The OneR algorithm infers a single rule from a given sample set; the rule is based on the one attribute that is most accurate at predicting the given classes. Starting from an attribute, for each value of that attribute it first finds all samples having that value, counts the number of samples in each class, and takes the most frequent class as the prediction for that feature value. Each feature value thus contributes an error rate, and the sum of the error rates over all values of a feature is the feature's total error rate. The error rates of all features are then sorted; the smaller the error rate, the more the feature contributes to classification. The specific algorithm is as follows:
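The OneR ranking described above can be sketched as follows (assuming pre-discretized feature values; the data is invented):

```python
from collections import Counter, defaultdict

def oner_error_rate(feature_column, labels):
    """Predict the majority class for each feature value; return the total error rate."""
    by_value = defaultdict(list)
    for value, label in zip(feature_column, labels):
        by_value[value].append(label)
    errors = sum(len(labs) - Counter(labs).most_common(1)[0][1]
                 for labs in by_value.values())
    return errors / len(labels)

def rank_features(columns, labels):
    """Sort features by OneR error rate; a smaller rate means a more useful feature."""
    return sorted(((j, oner_error_rate(col, labels)) for j, col in enumerate(columns)),
                  key=lambda t: t[1])

# Invented discretized features: feature 0 predicts the class perfectly
columns = [[0, 0, 1, 1], [0, 1, 0, 1]]
labels = ["a", "a", "b", "b"]
ranking = rank_features(columns, labels)
```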
In IG, the measure of importance is how much information a feature brings to the classification system: the more information it brings, the more important the feature. The evaluation method takes entropy as its theoretical basis. Entropy measures the uncertainty of a random variable, and the uncertainty can be obtained from the probability distribution. For a classification problem, the uncertainty of the output distribution can be measured by its entropy. Suppose Y is the set of all output classes, y is an element (class) of the set, and Pr(y) is the proportion of samples of class y among all samples. Entropy is then defined as: H(Y) = −Σ_{y∈Y} Pr(y) log₂ Pr(y)
The negative sign ensures that the information is positive or zero. Once a particular input variable X is known, the uncertainty of the output decreases. After observing the i-th input feature taking the value X_i = x_i, the entropy of Y is: H(Y | X_i = x_i) = −Σ_{y∈Y} Pr(y | x_i) log₂ Pr(y | x_i)
The conditional entropy of Y given X_i is the expected value of H(Y | X_i = x_i) over the values of X_i: H(Y | X_i) = Σ_{x_i} Pr(x_i) H(Y | X_i = x_i)
This reduction in output uncertainty is precisely the mutual information of the feature, i.e., the information gain (IG). The mutual information I(X_i; Y) between X_i and Y is:
I(X_i; Y) = I(Y; X_i) = H(Y) − H(Y | X_i)
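These definitions translate directly into code; the sketch below computes the information gain of a binary feature on invented labels:

```python
import math
from collections import Counter

def entropy(labels):
    """H(Y) = -sum_y Pr(y) * log2 Pr(y)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_column, labels):
    """I(X_i; Y) = H(Y) - H(Y | X_i), the drop in output uncertainty."""
    n = len(labels)
    conditional = 0.0
    for value in set(feature_column):
        subset = [lab for v, lab in zip(feature_column, labels) if v == value]
        conditional += (len(subset) / n) * entropy(subset)  # Pr(x_i) * H(Y | X_i = x_i)
    return entropy(labels) - conditional

labels = ["fix", "fix", "wont", "wont"]
perfect = [1, 1, 0, 0]  # splits the classes exactly: IG = H(Y) = 1 bit
useless = [1, 0, 1, 0]  # independent of the class: IG = 0
```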
CHI (the chi-square test) is one of the common hypothesis tests; it tests against statistical independence. The basic idea of the chi-square test is to judge the correctness of a theory by the deviation between observed values and theoretical values: the larger the chi-square value, the larger the deviation from the null hypothesis, i.e., the more important the feature. In the feature selection stage of text classification, the null hypothesis is usually "word t is uncorrelated with class c". The chi-square value of word t and class c is given by the following formula:
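The chi-square value can be computed from the usual 2x2 contingency counts; the formula below is the common text-classification form of the statistic, supplied here as an assumption because the patent's formula image is not reproduced in the text:

```python
def chi_square(A, B, C, D):
    """Chi-square for word t vs class c from the 2x2 contingency counts:
    A = class-c docs containing t, B = other-class docs containing t,
    C = class-c docs without t,  D = other-class docs without t."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - B * C) ** 2 / denom if denom else 0.0

dependent = chi_square(A=10, B=0, C=0, D=10)  # word tracks the class
independent = chi_square(A=5, B=5, C=5, D=5)  # word is spread evenly
```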
Relief is a feature weighting algorithm: it assigns each feature a weight according to the correlation between the feature and the class, and features whose weight falls below a threshold are removed. Relief ranks the features of a classification problem based on the K-nearest-neighbor algorithm: to evaluate each feature, for every sample it finds the K nearest neighbors of the same class and the K nearest neighbors of different classes. The rationale is: if a feature is important for classification, its values should be similar within a class, and the values should differ considerably between classes. The Relief value Rel(j) of the j-th feature is defined as the ratio of the summed column-j distances between all samples and their K nearest neighbors of different classes to the summed column-j distances between all samples and their K nearest neighbors of the same class, as shown in the following formula:
In the above formula, M is the total number of samples and x_{i,j} is the j-th feature value of the i-th sample; the two terms of the ratio are the column-j distances between the i-th sample and its k-th same-class nearest neighbor and its k-th different-class nearest neighbor, respectively.
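A sketch of the Relief score as defined above (ratio of unlike-class to same-class neighbor distances, column by column; the data is invented and k = 1):

```python
import math

def relief_scores(X, y, k=1):
    """Per-feature score: column-wise distance to the k nearest unlike-class
    neighbors divided by the distance to the k nearest same-class neighbors,
    summed over all samples (larger = more relevant)."""
    n_feat = len(X[0])
    hit_sum = [0.0] * n_feat
    miss_sum = [0.0] * n_feat
    for i, (xi, yi) in enumerate(zip(X, y)):
        rest = [(math.dist(xi, xj), xj, yj)
                for j, (xj, yj) in enumerate(zip(X, y)) if j != i]
        rest.sort(key=lambda t: t[0])
        hits = [xj for _, xj, yj in rest if yj == yi][:k]
        misses = [xj for _, xj, yj in rest if yj != yi][:k]
        for f in range(n_feat):
            hit_sum[f] += sum(abs(xi[f] - h[f]) for h in hits)
            miss_sum[f] += sum(abs(xi[f] - m[f]) for m in misses)
    return [m / h if h else float("inf") for m, h in zip(miss_sum, hit_sum)]

# Invented data: feature 0 separates the classes, feature 1 is noise
X = [[0.0, 0.0], [0.1, 1.0], [1.0, 0.0], [0.9, 1.0]]
y = ["a", "a", "b", "b"]
scores = relief_scores(X, y)
```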
(2) Instance selection algorithms:
CNN (Condensed Nearest Neighbor) is based on the nearest-neighbor classification algorithm and was the first instance selection algorithm. The algorithm consists of the following steps: first, take one instance at random from each class of the training set to form a new data set; then classify the remaining instances with this data set, and whenever an instance is misclassified, add it to the data set. Repeating this operation yields the final selected instance set. The algorithm flow is as follows:
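The CNN selection loop can be sketched as follows (1-nearest-neighbor with squared Euclidean distance; the two-cluster data is invented, and the first instance of each class stands in for the random seed):

```python
def nearest_label(point, store_points, store_labels):
    """1-NN label using squared Euclidean distance."""
    dists = [sum((a - b) ** 2 for a, b in zip(point, s)) for s in store_points]
    return store_labels[dists.index(min(dists))]

def condensed_nearest_neighbor(X, y):
    """Seed the store with one instance per class, then add every instance
    the current store misclassifies; repeat until stable."""
    store_X, store_y = [], []
    for c in sorted(set(y)):
        i = y.index(c)  # first instance of each class as the seed
        store_X.append(X[i]); store_y.append(c)
    changed = True
    while changed:
        changed = False
        for xi, yi in zip(X, y):
            if xi not in store_X and nearest_label(xi, store_X, store_y) != yi:
                store_X.append(xi); store_y.append(yi)
                changed = True
    return store_X, store_y

# Two well-separated invented clusters: the seeds already classify everything
store_X, store_y = condensed_nearest_neighbor(
    [[0, 0], [0, 1], [5, 5], [5, 6]], ["a", "a", "b", "b"])
```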
MCS (Minimal Consistent Set) is the algorithm for selecting a consistent subset proposed by Dasarathy in 1994. Based on the distances to the nearest unlike neighbor set (NUNS) and the nearest unlike neighbor (NUN), the algorithm uses a voting mechanism in which each instance votes for the instances within its nearest-unlike-neighbor distance; instances that receive many votes have a stronger ability to separate the instances around them. The algorithm repeatedly adds the instances with the most votes to the MCS and then votes again, until the instances in the MCS no longer decrease. While selecting instances, the algorithm also maintains the consistency of the MCS. The algorithm flow is as follows:
The ENN (Edited Nearest Neighbor) algorithm is a noise-filtering method that selects instances by filtering out noise. The main idea of ENN is to remove those instances whose class differs from that of most of the instances around them. Starting from the raw data set, with the number of nearest neighbors K given in advance, the algorithm finds the K nearest neighbors of each instance according to the nearest-neighbor algorithm and compares the instance's class with the classes of these K nearest neighbors; if the instance's class differs from that of most of its neighbors, the instance is deleted. The algorithm flow is as follows:
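A sketch of the ENN filter (k = 3; the data is invented, with one deliberately mislabeled point that the filter should remove):

```python
from collections import Counter

def edited_nearest_neighbor(X, y, k=3):
    """Delete every instance whose majority label among its k nearest
    neighbors disagrees with its own label (noise filtering)."""
    keep_X, keep_y = [], []
    for i, (xi, yi) in enumerate(zip(X, y)):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), yj)
            for j, (xj, yj) in enumerate(zip(X, y)) if j != i
        )
        majority = Counter(lab for _, lab in dists[:k]).most_common(1)[0][0]
        if majority == yi:
            keep_X.append(xi); keep_y.append(yi)
    return keep_X, keep_y

X = [[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5], [5, 5], [5, 6], [6, 5]]
y = ["a", "a", "a", "a", "b", "b", "b", "b"]  # [0.5, 0.5] is a mislabeled "b"
keep_X, keep_y = edited_nearest_neighbor(X, y, k=3)
```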
The ICF (Iterative Case Filter) algorithm was proposed by Brighton H et al. [22] and consists of two steps. The first step is noise filtering: for each instance in the data set, compute its K nearest neighbors, and delete the instance if it is misclassified by them. The second step is instance compression: compute the reachability and coverage of every instance using local sets. A local set is defined as follows: for an instance, sort all instances by similarity starting from the nearest one and count them until an instance with a different label is encountered; the same-labeled instances counted so far form the instance's local set. Coverage is defined as follows: if an instance appears in the local sets of N instances, its coverage is N. If the local set of an instance contains M instances, its reachability is M. The two values are then compared: if an instance's reachability exceeds its coverage, the instance is redundant and is deleted. The algorithm flow is as follows:
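The local-set, reachability, and coverage definitions above can be sketched as follows; this covers only the compression step of ICF (the noise-filtering step is omitted), follows the deletion rule as stated in this text, and uses invented 1-D data:

```python
def local_set(i, X, y):
    """Indices of the instances nearer to X[i] than its nearest unlike-labeled instance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(X[i], xj)), j)
        for j, xj in enumerate(X) if j != i
    )
    members = []
    for _, j in dists:
        if y[j] != y[i]:
            break
        members.append(j)
    return members

def icf_compress(X, y):
    """Keep the indices whose reachability (own local-set size) does not
    exceed their coverage (number of other local sets containing them)."""
    sets = [local_set(i, X, y) for i in range(len(X))]
    return [i for i in range(len(X))
            if len(sets[i]) <= sum(1 for s in sets if i in s)]

# Invented 1-D data: the interior "a" points are redundant, border points survive
X = [[0.0], [1.0], [2.0], [2.5]]
y = ["a", "a", "a", "b"]
kept = icf_compress(X, y)
```

Note that keeping the border instances and dropping interior ones matches ICF's goal of retaining the decision boundary.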
Four feature selection algorithms, Information Gain (IG), Chi-square (CHI), One Rule (OneR), and Relevant Features (Relief), are applied to the training set in turn to reduce its dimensionality and remove noise words, using several reduction scales.
Four instance selection algorithms, Condensed Nearest Neighbor (CNN), Edited Nearest Neighbor (ENN), Minimal Consistent Set (MCS), and Iterative Case Filter (ICF), are applied to the training set in turn to reduce its number of rows and remove redundant instances, again using several reduction scales.
Based on the results of the first two reduction steps, the feature selection algorithm and instance selection algorithm with the best reduction results, together with their corresponding reduction scales, are selected and combined. To verify whether the order of combination affects the reduction of the training set, the two combination orders are verified experimentally, and the order with the better reduction result is used to reduce both the dimensionality and the rows of the training set.
Further, when choosing the classification algorithm, the training set is modeled with four classifiers: Naive Bayes, RandomTree, ComplementNaiveBayes, and J48; Naive Bayes, which achieved the best experimental results, is finally selected as the classification algorithm for modeling the training set.
The method disclosed by the invention for enhancing automatic bug report assignment using a weighted, optimized training set can be applied to fields such as text classification, text denoising, dimensionality reduction, and developer recommendation, to handle the problem that bug repository data sets are large in scale and low in quality. It shows remarkable results in obtaining a good classification model, reducing training time and memory overhead, and improving the accuracy of text classification.
The above is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art who, within the technical scope disclosed by the present invention, makes equivalent substitutions or changes according to the technical solution of the present invention and its inventive concept shall be covered by the protection scope of the present invention.
References:
[1] Cubranic D, Murphy G C. Automatic bug triage using text categorization[J]. 2004: 92-97.
[2] Anvik J, Hiew L, Murphy G C. Who should fix this bug?[C]. Proc. 28th Intl. Conf. Software Engineering, 2006: 361-370.
[3] Jeong G, Kim S, Zimmermann T. Improving bug triage with bug tossing graphs[C]. The Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering. ACM, 2009: 111-120.
[4] Xuan J, Jiang H, Ren Z, et al. Automatic bug triage using semi-supervised text classification[J]. 2010: 209-214.
[5] Zou W, Hu Y, Xuan J, et al. Towards training set reduction for bug triage[C]. Computer Software and Applications Conference. IEEE, 2011: 576-581.

Claims (2)

1. A method for enhancing automatic bug report assignment using a weighted, optimized training set, characterized by comprising the following steps:
S1: obtaining original training data from a bug repository and preprocessing it: filtering out of the original training set the bug reports handled by inactive developers; for each bug report in the filtered data set, extracting the short description and the first long description as the report's description information; tokenizing the description information of each bug report and removing stop words; and processing the short descriptions and long descriptions of the bug reports into a text matrix S_BR and a text matrix L_BR, respectively;
S2: weighting the preprocessed bug reports: multiplying the text matrix S_BR generated from the short descriptions by a weight η and adding it to the text matrix L_BR generated from the long descriptions, and taking the weighted result as the training set text matrix W_BR;
S3: reducing the training set text matrix W_BR: first applying 4 feature selection algorithms and 4 instance selection algorithms to W_BR to reduce its dimensionality and its number of rows, respectively; selecting the best reduction algorithm from the feature selection algorithms and from the instance selection algorithms; and combining the two best reduction algorithms to reduce W_BR and obtain the final training set text;
S4: training a naive Bayes classifier on the final training set text to obtain a classification model;
S5: inputting a new bug report into the classification model for classification, and outputting the developer to whom the bug report is assigned.
2. The method for enhancing automatic bug report assignment using a weighted, optimized training set according to claim 1, further characterized in that the weighting of the preprocessed bug reports in S2 uses the following formula:
In the above formula, η denotes the weight applied to the text matrix generated from the short descriptions, m denotes the number of bug reports in the training set, and n denotes the number of distinct words in the training set.
CN201811033587.0A 2018-09-05 2018-09-05 A method of automatic Bug report distribution is enhanced using weighted optimization training set Pending CN109255029A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811033587.0A CN109255029A (en) 2018-09-05 2018-09-05 A method of automatic Bug report distribution is enhanced using weighted optimization training set


Publications (1)

Publication Number Publication Date
CN109255029A true CN109255029A (en) 2019-01-22

Family

ID=65047294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811033587.0A Pending CN109255029A (en) 2018-09-05 2018-09-05 A method of automatic Bug report distribution is enhanced using weighted optimization training set

Country Status (1)

Country Link
CN (1) CN109255029A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107608732A (en) * 2017-09-13 2018-01-19 扬州大学 A kind of bug search localization methods based on bug knowledge mappings
CN108197295A (en) * 2018-01-22 2018-06-22 重庆邮电大学 Application process of the attribute reduction based on more granularity attribute trees in text classification


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIAOMIAO WEI et al.: "Enhancing Bug Report Assignment with an Optimized Reduction of Training Set", Springer *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109801176A (en) * 2019-02-22 2019-05-24 中科软科技股份有限公司 Identify method, system, electronic equipment and the storage medium of Insurance Fraud
CN109801176B (en) * 2019-02-22 2021-04-06 中科软科技股份有限公司 Method, system, electronic device and storage medium for identifying insurance fraud
CN109934286A (en) * 2019-03-12 2019-06-25 大连海事大学 Bug based on Text character extraction and uneven processing strategie reports severity recognition methods
CN109934286B (en) * 2019-03-12 2022-11-11 大连海事大学 Bug report severity recognition method based on text feature extraction and imbalance processing strategy
CN110716512A (en) * 2019-09-02 2020-01-21 华电电力科学研究院有限公司 Environmental protection equipment performance prediction method based on coal-fired power plant operation data
CN111723010A (en) * 2020-06-12 2020-09-29 大连海事大学 Software BUG classification method based on sparse cost matrix
CN111723010B (en) * 2020-06-12 2024-02-23 大连海事大学 Software BUG classification method based on sparse cost matrix
CN111797774A (en) * 2020-07-07 2020-10-20 金陵科技学院 Road surface target identification method based on radar image and similarity weight
CN111797774B (en) * 2020-07-07 2023-09-22 金陵科技学院 Pavement target recognition method based on radar image and similarity weight
CN113254329A (en) * 2021-04-30 2021-08-13 展讯通信(天津)有限公司 Bug processing method, system, equipment and storage medium based on machine learning

Similar Documents

Publication Publication Date Title
CN109255029A (en) A method of automatic Bug report distribution is enhanced using weighted optimization training set
CN109492026B (en) Telecommunication fraud classification detection method based on improved active learning technology
Kim et al. Ordinal classification of imbalanced data with application in emergency and disaster information services
CN109783879B (en) Radar radiation source signal identification efficiency evaluation method and system
CN103020643A (en) Classification method based on kernel feature extraction early prediction multivariate time series category
CN107273387A (en) Towards higher-dimension and unbalanced data classify it is integrated
CN110210625A (en) Modeling method, device, computer equipment and storage medium based on transfer learning
CN114612251A (en) Risk assessment method, device, equipment and storage medium
CN113674862A (en) Acute renal function injury onset prediction method based on machine learning
CN115358481A (en) Early warning and identification method, system and device for enterprise ex-situ migration
CN114881343A (en) Short-term load prediction method and device of power system based on feature selection
Tiruneh et al. Feature selection for construction organizational competencies impacting performance
CN112052990A (en) CNN-BilSTM hybrid model-based next activity prediction method for multi-angle business process
CN116719714A (en) Training method and corresponding device for screening model of test case
Hu Overdue invoice forecasting and data mining
Shubho et al. Performance analysis of NB Tree, REP tree and random tree classifiers for credit card fraud data
CN112131106B (en) Test data construction method and device based on small probability data
Kwok et al. Dataset Difficulty and the Role of Inductive Bias
Fedyk News-driven trading: who reads the news and when
CN109614489A (en) It is a kind of to report severity recognition methods based on transfer learning and the Bug of feature extraction
CN108805156A (en) A kind of improved selective Nae Bayesianmethod
Albanese Deep Anomaly Detection: an experimental comparison of deep learning algorithms for anomaly detection in time series data
Salman et al. A Prediction Approach for Small Healthcare Dataset
Tama et al. Leveraging an Advanced Heterogeneous Ensemble Learning for Outcome-Based Predictive Monitoring Using Business Process Event Logs
CN116881738B (en) Similarity detection method of project declaration documents applied to power grid industry

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190122