CN102324007B - Abnormal detection method based on data mining - Google Patents
- Publication number: CN102324007B (application CN201110283015XA)
- Authority: CN (China)
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses an anomaly detection method based on data mining, belonging to the technical field of network security. The method combines independent component analysis (ICA) with the AdaBoost algorithm: the Fast-ICA algorithm first performs feature extraction to eliminate redundant attributes and reduce the data dimension, and the AdaBoost algorithm then sequentially trains a group of weak classifiers and combines them into a strong classifier. The invention effectively eliminates redundant attribute information in network data and reduces the computation required to train and apply the classifiers, while increasing detection precision and decreasing the probability of false positives and missed detections.
Description
Technical field
The present invention relates to anomaly detection methods for computer systems, and in particular to an anomaly detection method based on data mining.
Background art
Intrusion detection detects attacks on a computer system, providing real-time protection against internal attacks, external attacks, and misoperation. To identify attack types accurately, an intrusion detection system collects relevant data from several key points in the network system, such as log files in the local computer system, and analyzes these data to determine whether the local computer system or the computer network shows behavior that violates the security policy or signs of being attacked. Intrusion detection can monitor and analyze user and system activity, check the system configuration for security vulnerabilities, evaluate the integrity of critical system resources and data files, recognize known attack patterns, identify abuse by users, collect statistics on and analyze abnormal behavior, and record, manage, and maintain system logs; in other words, it monitors and controls the computer network in real time without degrading the performance of the computer system.
In existing intrusion detection techniques, the large volume of collected data serves as the data source of the intrusion detection system and is analyzed to judge whether an intrusion event has occurred. While a large amount of data provides more information to exploit, it also makes effective use of the data more difficult: useful information may be submerged in a large amount of redundant data, which increases the difficulty of feature extraction.
Summary of the invention
The purpose of this invention is to provide an anomaly detection method based on data mining that extracts useful features from network data, eliminates redundant attributes in the network data, improves detection precision, and reduces the probabilities of false positives and missed detections.
To achieve these goals, the invention provides an anomaly detection method based on data mining, characterized in that it is composed of the following steps:
S1. Taking network data as observational variables, extract observational-variable features from the observational variables with the Fast-ICA method to form an observational-variable feature set Z, i.e. obtain network data features with redundant attributes eliminated and the data dimension reduced;
S2. Train on the observational-variable features with the AdaBoost method: take the observational-variable feature set as the training set, with each observational-variable feature as a training sample, and assign each training sample a weight representing the probability that it is selected into the training set of a weak classifier. After each weak classifier finishes training, adjust the weight of each training sample according to the classification results on the training set: if a training sample is classified correctly by the weak classifier, its weight is reduced, so that the probability of it being selected into the training set of the next weak classifier decreases; if a training sample is classified incorrectly by the weak classifier, the probability of it being selected into the training set of the next weak classifier increases. A strong classifier is finally obtained;
S3. Detect abnormal network data with the strong classifier.
Said step S1 is composed of the following steps:
S10. Set N observational variables x_i, forming the observational-variable set, with each observational variable expressed as a linear combination of M independent components s_j, where the M independent components s_j form the independent-component set, i = 1, ..., N, j = 1, ..., M, and N, M are integers greater than 1; form the transposed matrix X = (x_1, x_2, ..., x_N)^T of the observational-variable set and the transposed matrix S = (s_1, s_2, ..., s_M)^T of the independent-component set, and set X = A*S, where A = (a_ij)_{N×M} is the unknown mixing matrix;
S11. Apply whitening to the observational variables;
S12. Set the generalized inverse of the mixing matrix A as the separation matrix W; according to the formula y = W*X, adjust the separation matrix W by the stochastic gradient method and find the optimal estimate y of the transposed matrix S, thereby obtaining network data features with redundant attributes eliminated and the data dimension reduced.
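As a non-authoritative sketch of the whitening in step S11 (the patent does not spell out a particular whitening procedure; the eigendecomposition-based approach and the matrix sizes below are illustrative assumptions):

```python
import numpy as np

def whiten(X):
    """Whiten observations X (rows = variables, columns = samples) so that
    the covariance of the returned data is the identity matrix."""
    Xc = X - X.mean(axis=1, keepdims=True)        # center each observational variable
    cov = Xc @ Xc.T / Xc.shape[1]                 # sample covariance matrix
    d, E = np.linalg.eigh(cov)                    # eigendecomposition cov = E diag(d) E^T
    V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T       # whitening matrix V = E D^(-1/2) E^T
    return V @ Xc, V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 1000))                    # 4 observational variables, 1000 samples
Z, V = whiten(X)
cov_Z = Z @ Z.T / Z.shape[1]                      # numerically the 4x4 identity
```

Whitening removes second-order correlations, so the subsequent separation only has to account for higher-order (non-Gaussian) structure.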
In said step S12, adjusting the separation matrix W by the stochastic gradient method is composed of the following steps:
(1) Iterate on the separation matrix W row by row according to the fixed-point formula W_i(k+1) = E{X·g(W_i^T(k)·X)} − E{g′(W_i^T(k)·X)}·W_i(k), where W_i(k) denotes the row vector in the separation matrix W corresponding to the i-th observational variable x_i of the observational-variable set after k iterations, W_i(k+1) denotes the same row vector after k+1 iterations, W_i^T(k) denotes the transpose of that row vector, E is the expectation operator, g = G′ is the derivative of the non-quadratic contrast function G, and i, k are integers greater than 1;
(2) Judge whether the absolute value of W_i(k+1) − W_i(k) is ≤ ξ; if it holds, finish the iteration, obtain the final separation matrix W(n), and perform step (3); if it does not hold, repeat step (1); here ξ is any number between 0 and 1;
(3) Normalize the final separation matrix W(n) row by row, namely W_i(n) = W_i(n)/||W_i(n)||, where ||·|| denotes taking the norm;
(4) Substitute the final separation matrix W(n) into the formula y = W(n)*X to find the optimal estimate y of the transposed matrix S, thereby obtaining network data features with redundant attributes eliminated and the data dimension reduced.
Said step S2 is composed of the following steps:
S20. Set the training set G = {(x_1, h_1), ..., (x_{m+n}, h_{m+n})}, x_i ∈ y, h_i ∈ H = {−1, +1}, where y is the optimal estimate of the transposed matrix S, i = 1, ..., m+n, and m+n is an integer greater than 1; h_i is the class label: h_i = +1 marks the minority class and h_i = −1 marks the majority class; the number of minority-class samples is m, the number of majority-class samples is n, and m << n;
S21. Initialize the training set: initialize the weight of each (x_i, h_i) in the training set G to 1/n;
S22. Take a BP network as the weak classifier and call Weaklearn for T training iterations, where each training iteration produces one group of weak-classifier functions;
S23. Before each training iteration, judge whether the iteration count ≥ T holds; if it holds, combine the T groups of weak-classifier functions into the strong classifier; if not, adjust the weights and repeat step S22.
In summary, owing to the adoption of the above technical scheme, the beneficial effects of the invention are as follows:
The invention effectively eliminates redundant attribute information in network data and reduces the computation required to train and apply the classifier; at the same time it improves detection precision and reduces the probability of false positives and missed detections on samples.
Brief description of the drawings
Examples of the present invention will be described by way of reference to the accompanying drawings, wherein:
Fig. 1 is the flow chart of the present invention;
Fig. 2 is the flow chart of feature extraction with the Fast-ICA method;
Fig. 3 is the flow chart of the AdaBoost method;
Fig. 4 shows the experimental test results.
Embodiment
All features disclosed in this specification, and all steps of any method or process disclosed, may be combined in any manner, except where features and/or steps are mutually exclusive.
Any feature disclosed in this specification (including any accessory claims, the abstract, and the drawings) may, unless specifically stated otherwise, be replaced by an equivalent or alternative feature serving a similar purpose. That is, unless specifically stated otherwise, each feature is only one example of a series of equivalent or similar features.
As shown in Fig. 1, the anomaly detection method based on data mining is composed of three steps.
The Fast-ICA algorithm, also called the fixed-point method, adjusts the separation matrix W by the stochastic gradient method so as to maximize the independence between the recovered source signals.
As shown in Fig. 2, the process of extracting observational-variable features with the Fast-ICA method is composed of the following steps:
S10. Set N observational variables x_i, forming the observational-variable set, with each observational variable expressed as a linear combination of M independent components s_j, where the M independent components s_j form the independent-component set, i = 1, ..., N, j = 1, ..., M, and N, M are integers greater than 1; form the transposed matrix X = (x_1, x_2, ..., x_N)^T of the observational-variable set and the transposed matrix S = (s_1, s_2, ..., s_M)^T of the independent-component set, and set X = A*S, where A = (a_ij)_{N×M} is the unknown mixing matrix;
S11. Apply whitening to the observational variables;
S12. Set the generalized inverse of the mixing matrix A as the separation matrix W; according to the formula y = W*X, adjust the separation matrix W by the stochastic gradient method and find the optimal estimate y of the transposed matrix S, thereby obtaining network data features with redundant attributes eliminated and the data dimension reduced, i.e. the observational-variable feature set.
The Fast-ICA method is based on the maximum-negentropy criterion. The principle of the maximum-negentropy criterion is as follows: by the central limit theorem, if a random variable in the observational-variable set is formed from many mutually independent random variables, then as long as each independent random variable has finite mean and variance, the sum must approach a Gaussian distribution regardless of how the individual variables are distributed. Therefore, during separation, the non-Gaussianity of the optimal estimate y is measured; when the measure of non-Gaussianity reaches its maximum, the separation of the independent components is complete. Negentropy is defined as
Ng(y) = H(y_gauss) − H(y),
where y_gauss denotes a Gaussian random variable with the same variance as the optimal estimate y, and H(·) denotes the information entropy of a random variable. From this formula it can be seen that when the optimal estimate y has a Gaussian distribution, Ng(y) = 0; the stronger the non-Gaussianity of y, the larger the value of Ng(y).
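To make the measure concrete, the negentropy approximation used below can be evaluated numerically. This is an illustrative sketch only: the contrast function G(u) = log cosh(u) is a common choice assumed here, not one fixed by the text.

```python
import numpy as np

def negentropy_approx(y, rng):
    """Ng(y) ∝ [E{G(y)} - E{G(y_gauss)}]^2 with the assumed contrast G(u) = log cosh(u)."""
    y = (y - y.mean()) / y.std()              # unit variance, so y_gauss matches its variance
    y_gauss = rng.normal(size=y.size)         # Gaussian reference with the same variance
    G = lambda u: np.log(np.cosh(u))
    return (G(y).mean() - G(y_gauss).mean()) ** 2

rng = np.random.default_rng(1)
laplace = rng.laplace(size=50000)             # heavy-tailed, strongly non-Gaussian signal
gauss = rng.normal(size=50000)                # Gaussian signal: negentropy near zero
print(negentropy_approx(laplace, rng), negentropy_approx(gauss, rng))
```

A non-Gaussian signal scores visibly higher than a Gaussian one, which is exactly what the separation iteration maximizes.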
Therefore, the stochastic gradient method adopts the maximum-negentropy criterion Ng(y) ∝ [E{G(y)} − E{G(y_gauss)}]^2 (i.e. Ng(y) is proportional to [E{G(y)} − E{G(y_gauss)}]^2, where E is the expectation operator and G is the non-quadratic contrast function) to iterate on the separation matrix W; the iteration is composed of the following steps:
(1) Iterate on the separation matrix W row by row according to the fixed-point formula W_i(k+1) = E{X·g(W_i^T(k)·X)} − E{g′(W_i^T(k)·X)}·W_i(k), where W_i(k) denotes the row vector in the separation matrix W corresponding to the i-th observational variable x_i of the observational-variable set after k iterations, W_i(k+1) denotes the same row vector after k+1 iterations, W_i^T(k) denotes the transpose of that row vector, E is the expectation operator, g = G′ is the derivative of the non-quadratic contrast function G, and i, k are integers greater than 1;
(2) Judge whether the absolute value of W_i(k+1) − W_i(k) is ≤ ξ; if it holds, finish the iteration, obtain the final separation matrix W(n), and perform step (3); if it does not hold, repeat step (1); here ξ is any number between 0 and 1;
(3) Normalize the final separation matrix W(n) row by row, namely W_i(n) = W_i(n)/||W_i(n)||, where ||·|| denotes taking the norm;
(4) Substitute the final separation matrix W(n) into the formula y = W(n)*X to find the optimal estimate y of the transposed matrix S, thereby obtaining network data features with redundant attributes eliminated and the data dimension reduced.
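The steps above can be sketched as a deflationary fixed-point loop. This is a minimal illustration under stated assumptions, not the patented code: the contrast function G(u) = log cosh(u) with derivative g(u) = tanh(u) is assumed, whitened data are assumed as input, and a convergence test on the direction of each row stands in for the text's |W_i(k+1) − W_i(k)| ≤ ξ criterion.

```python
import numpy as np

def fast_ica(Z, n_components, xi=1e-4, max_iter=200, seed=0):
    """Deflationary fixed-point iteration on whitened data Z (rows = variables).
    g(u) = tanh(u) is the derivative of the assumed contrast G(u) = log cosh(u)."""
    rng = np.random.default_rng(seed)
    n, T = Z.shape
    W = np.zeros((n_components, n))
    for i in range(n_components):
        w = rng.normal(size=n)
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            u = w @ Z
            # step (1): w+ = E{Z g(w^T Z)} - E{g'(w^T Z)} w
            w_new = (Z * np.tanh(u)).mean(axis=1) - (1.0 - np.tanh(u) ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)        # deflation: orthogonal to found rows
            w_new /= np.linalg.norm(w_new)            # step (3): row-wise normalisation
            converged = abs(abs(w_new @ w) - 1.0) <= xi   # step (2): convergence test
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Z, W                                    # step (4): estimate y = W * X

# Two independent non-Gaussian sources, mixed, whitened, then separated.
rng = np.random.default_rng(1)
S = np.vstack([rng.uniform(-1.0, 1.0, 2000), rng.laplace(size=2000)])
X = np.array([[1.0, 0.5], [0.3, 1.0]]) @ S            # X = A * S with an unknown mixing A
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc                 # whitening (step S11)
Y, W = fast_ica(Z, 2)                                  # Y: recovered independent components
```

The recovered components match the original sources up to permutation and sign, which is the usual ICA indeterminacy.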
Step 2: Train on the observational-variable features with the AdaBoost method: take the observational-variable feature set as the training set, with each observational-variable feature as a training sample, and assign each training sample a weight representing the probability that it is selected into the training set of a weak classifier. After each weak classifier finishes training, adjust the weight of each training sample according to the classification results on the training set: if a training sample is classified correctly by the weak classifier, its weight is reduced, so that the probability of it being selected into the training set of the next weak classifier decreases; if it is classified incorrectly, that probability increases. A strong classifier is finally obtained.
Step 3: Detect abnormal network data with the strong classifier.
As shown in Fig. 3, the AdaBoost training process, using a BP network as the weak classifier, is composed of the following steps:
S20. Set the training set G = {(x_1, h_1), ..., (x_{m+n}, h_{m+n})}, x_i ∈ y, h_i ∈ H = {−1, +1}, where y is the optimal estimate of the transposed matrix S, i = 1, ..., m+n, and m+n is an integer greater than 1; h_i is the class label: h_i = +1 marks the minority class and h_i = −1 marks the majority class; the number of minority-class samples is m, the number of majority-class samples is n, and m << n;
S21. Initialize the training set: initialize the weight of each (x_i, h_i) in the training set G to 1/n;
S22. Take a BP network as the weak classifier and call Weaklearn for T training iterations, where each training iteration produces one group of weak-classifier functions;
S23. Before each training iteration, judge whether the iteration count ≥ T holds; if it holds, combine the T groups of weak-classifier functions into the strong classifier; if not, adjust the weights and repeat step S22. Since the iterative training and weight-adjustment procedure of the AdaBoost method is mature technology, it is not elaborated here.
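A minimal sketch of the AdaBoost loop follows. It is illustrative only: decision stumps stand in for the BP neural-network weak learners of the text, the standard 1/(m+n) uniform initialisation and exponential weight-update rule are assumed where the patent leaves the formulas unstated, and the toy data are invented.

```python
import numpy as np

def train_stump(X, h, w):
    """Weak learner: best single-feature threshold rule under sample weights w."""
    best = (0, 0.0, 1, np.inf)                     # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
                err = w[pred != h].sum()
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, h, T=10):
    n = len(h)
    w = np.full(n, 1.0 / n)                        # uniform initial sample weights
    stumps, alphas = [], []
    for _ in range(T):
        f, thr, pol, err = train_stump(X, h, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)      # weight of this weak classifier
        pred = np.where(pol * (X[:, f] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * h * pred)             # correct samples lighter, wrong heavier
        w /= w.sum()
        stumps.append((f, thr, pol)); alphas.append(alpha)
    def strong(Xq):                                # weighted vote of all weak classifiers
        s = sum(a * np.where(p * (Xq[:, f] - t) >= 0, 1, -1)
                for (f, t, p), a in zip(stumps, alphas))
        return np.sign(s)
    return strong

# toy two-class data with a slightly tilted linear boundary
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
h = np.where(X[:, 0] + 0.3 * X[:, 1] > 0, 1, -1)
strong = adaboost(X, h, T=10)
acc = (strong(X) == h).mean()
```

Each stump alone misclassifies the points near the tilted boundary; the boosted combination concentrates later stumps on exactly those reweighted points, which is the mechanism steps S20-S23 describe.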
The test uses the KDD99 data set, a test data set established by the MIT Lincoln Laboratory in 1998. Each data record contains 41 attribute values, which fall into four groups: basic attributes of the connection, content attributes of the connection, time-based traffic attributes, and host-based traffic attributes. The experimental data consist of two parts, a training set and a test set.
In the feature-extraction step, the Fast-ICA procedure is introduced: before classifying the network data, the Fast-ICA algorithm is first applied to extract features, eliminating redundant attributes in the data and greatly reducing the computation needed to train and apply the classifier; independent component analysis finds a new feature space in which the attributes of each sample are mutually independent. In the experiment, the training data set contains 4000 records and the test data set contains 800 records.
Simulation platform: programmed and simulated under MATLAB 7.6; the test results are shown in Fig. 4:
Strong classifier classification error rate: ans = 0.0063
Weak classifier classification error rate: ans = 0.0142
Experimental analysis:
The experiment measures the performance of the intrusion detection system by the detection rate (DR) and the false positive rate (FPR), defined as follows:
detection rate (DR) = number of intrusion samples detected / total number of intrusion samples
false positive rate (FPR) = number of normal samples mistaken for intrusions / total number of normal samples
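The two metrics can be computed directly from counts. The labels below are illustrative only (+1 = intrusion, −1 = normal, matching the class-label convention used earlier):

```python
def detection_rate(true, pred):
    """DR = detected intrusion samples / total intrusion samples."""
    intrusions = [i for i, t in enumerate(true) if t == 1]
    return sum(1 for i in intrusions if pred[i] == 1) / len(intrusions)

def false_positive_rate(true, pred):
    """FPR = normal samples flagged as intrusion / total normal samples."""
    normals = [i for i, t in enumerate(true) if t == -1]
    return sum(1 for i in normals if pred[i] == 1) / len(normals)

true = [1, 1, 1, 1, -1, -1, -1, -1, -1, -1]
pred = [1, 1, 1, -1, -1, 1, -1, -1, -1, -1]
print(detection_rate(true, pred))        # 0.75 (3 of 4 intrusions detected)
print(false_positive_rate(true, pred))   # 1 of 6 normals flagged, about 0.167
```

Note that a high DR alone is not informative: a classifier flagging everything reaches DR = 1 with a useless FPR, which is why the experiment reports both.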
In the experiment, the system is first trained with the training data set to establish an intrusion-detection rule base; after training, the system is tested with the test data set.
From the experimental data it can be seen that the intrusion detection method proposed in this patent, based on Fast-ICA feature dimension reduction, achieves a higher detection rate and a low false positive rate.
Table 1: classification error statistics
Table 2: detection statistics
In the present invention, the Fast-ICA algorithm is used to preprocess the data by feature extraction, eliminating redundant attributes in the data and greatly reducing the computation of classifier training and detection; on the classifier side, BP networks are used as weak classifiers to form the AdaBoost strong classifier, and in the test the 4000 training samples are used to train the AdaBoost classifier. As can be seen from the tables above, the AdaBoost strong classifier trained on Fast-ICA-preprocessed data achieves a higher detection rate; at the same time, the classification error rate of the strong classifier is lower than that of the weak classifier, and the detection rate of the AdaBoost strong classifier is higher than the weak classifier's detection rate.
The present invention is not limited to the foregoing embodiments. The invention extends to any new feature, or any new combination of features, disclosed in this specification, and to any new method or process step, or any new combination of steps, disclosed.
Claims (3)
1. An anomaly detection method based on data mining, characterized in that it is composed of the following steps:
S1. Taking network data as observational variables, extract observational-variable features from the observational variables with the Fast-ICA method to form an observational-variable feature set Z, i.e. obtain network data features with redundant attributes eliminated and the data dimension reduced;
S2. Train on the observational-variable features with the AdaBoost method: take the observational-variable feature set as the training set, with each observational-variable feature as a training sample, and assign each training sample a weight representing the probability that it is selected into the training set of a weak classifier; after each weak classifier finishes training, adjust the weight of each training sample according to the classification results on the training set: if a training sample is classified correctly by the weak classifier, its weight is reduced, so that the probability of it being selected into the training set of the next weak classifier decreases; if a training sample is classified incorrectly by the weak classifier, the probability of it being selected into the training set of the next weak classifier increases; a strong classifier is finally obtained;
S3. Detect abnormal network data with the strong classifier;
wherein said step S1 is composed of the following steps:
S10. Set N observational variables x_i, forming the observational-variable set, with each observational variable expressed as a linear combination of M independent components s_j, where the M independent components s_j form the independent-component set, i = 1, ..., N, j = 1, ..., M, and N, M are integers greater than 1; form the transposed matrix X = (x_1, x_2, ..., x_N)^T of the observational-variable set and the transposed matrix S = (s_1, s_2, ..., s_M)^T of the independent-component set, and set X = A*S, where A = (a_ij)_{N×M} is the unknown mixing matrix;
S11. Apply whitening to the observational variables;
S12. Set the generalized inverse of the mixing matrix A as the separation matrix W; according to the formula y = W*X, adjust the separation matrix W by the stochastic gradient method and find the optimal estimate y of the transposed matrix S, thereby obtaining network data features with redundant attributes eliminated and the data dimension reduced;
S13. The stochastic gradient method adopts the maximum-negentropy criterion Ng(y) ∝ [E{G(y)} − E{G(y_gauss)}]^2 to iterate on the separation matrix W, i.e. Ng(y) is proportional to [E{G(y)} − E{G(y_gauss)}]^2, where E is the expectation operator, G is the non-quadratic contrast function, Ng(y) is the negentropy, and y_gauss is a Gaussian random variable with the same variance as the optimal estimate y.
2. The anomaly detection method based on data mining according to claim 1, characterized in that in said step S12, adjusting the separation matrix W by the stochastic gradient method is composed of the following steps:
(1) Iterate on the separation matrix W row by row according to the fixed-point formula W_i(k+1) = E{X·g(W_i^T(k)·X)} − E{g′(W_i^T(k)·X)}·W_i(k), where W_i(k) denotes the row vector in the separation matrix W corresponding to the i-th observational variable x_i of the observational-variable set after k iterations, W_i(k+1) denotes the same row vector after k+1 iterations, W_i^T(k) denotes the transpose of that row vector, E is the expectation operator, g = G′ is the derivative of the non-quadratic contrast function G, and i, k are integers greater than 1;
(2) Judge whether the absolute value of W_i(k+1) − W_i(k) is ≤ ξ; if it holds, finish the iteration, obtain the final separation matrix W(n), and perform step (3); if it does not hold, repeat step (1); here ξ is any number between 0 and 1;
(3) Normalize the final separation matrix W(n) row by row, namely W_i(n) = W_i(n)/||W_i(n)||, where ||·|| denotes taking the norm;
(4) Substitute the final separation matrix W(n) into the formula S* = W*X to find the optimal estimate y of the transposed matrix S, thereby obtaining network data features with redundant attributes eliminated and the data dimension reduced.
3. The anomaly detection method based on data mining according to claim 1, characterized in that said step S2 is composed of the following steps:
S20. Set the training set G = {(x_1, h_1), ..., (x_{m+n}, h_{m+n})}, x_i ∈ y, h_i ∈ H = {−1, +1}, where y is the optimal estimate of the transposed matrix S, i = 1, ..., m+n, and m+n is an integer greater than 1; h_i is the class label: h_i = +1 marks the minority class and h_i = −1 marks the majority class; the number of minority-class samples is m, the number of majority-class samples is n, and m << n;
S21. Initialize the training set: initialize the weight of each (x_i, h_i) in the training set G to 1/n;
S22. Take a BP network as the weak classifier and call Weaklearn for T training iterations, where each training iteration produces one group of weak-classifier functions;
S23. Before each training iteration, judge whether the iteration count ≥ T holds; if it holds, combine the T groups of weak-classifier functions into the strong classifier; if not, adjust the weights and repeat step S22.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110283015XA CN102324007B (en) | 2011-09-22 | 2011-09-22 | Abnormal detection method based on data mining |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102324007A CN102324007A (en) | 2012-01-18 |
CN102324007B true CN102324007B (en) | 2013-11-27 |
Family
ID=45451748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110283015XA Expired - Fee Related CN102324007B (en) | 2011-09-22 | 2011-09-22 | Abnormal detection method based on data mining |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102324007B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102879823B (en) * | 2012-09-28 | 2015-07-22 | 电子科技大学 | Method for fusing seismic attributes on basis of fast independent component analysis |
CN103536282B (en) * | 2013-11-06 | 2015-02-04 | 中国人民解放军第三军医大学 | Magnetic induction cardiopulmonary activity signal separation method based on Fast-ICA method |
US10417226B2 (en) * | 2015-05-29 | 2019-09-17 | International Business Machines Corporation | Estimating the cost of data-mining services |
CN108319883B (en) * | 2017-01-16 | 2020-11-06 | 广东精点数据科技股份有限公司 | Fingerprint identification method based on rapid independent component analysis |
CN106950945B (en) * | 2017-04-28 | 2019-04-09 | 宁波大学 | A kind of fault detection method based on dimension changeable type independent component analysis model |
CN107231348B (en) * | 2017-05-17 | 2020-07-28 | 桂林电子科技大学 | Network flow abnormity detection method based on relative entropy theory |
CN112153000B (en) * | 2020-08-21 | 2023-04-18 | 杭州安恒信息技术股份有限公司 | Method and device for detecting network flow abnormity, electronic device and storage medium |
CN112055007B (en) * | 2020-08-28 | 2022-11-15 | 东南大学 | Programmable node-based software and hardware combined threat situation awareness method |
Non-Patent Citations (2)
Title |
---|
Guo Honggang et al., "Application of the Adaboost method to intrusion detection technology", Computer Applications (《计算机应用》), vol. 25, no. 1, 30 January 2005, pp. 144-146 * |
Zhang Lei, "Research on intrusion detection *** based on independent component analysis", master's thesis, Xidian University, 31 December 2004, pp. 21, 28, 30, 31 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| C14 | Grant of patent or utility model | |
| GR01 | Patent grant | |
| CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20131127; Termination date: 20190922 |