CN113780563A - Quick-covering case base maintenance method - Google Patents


Info

Publication number
CN113780563A
CN113780563A
Authority
CN
China
Prior art keywords
case base
coverage
case
samples
space
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111043870.3A
Other languages
Chinese (zh)
Inventor
李建洋
吴宏森
吴辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhenjiang College
Original Assignee
Zhenjiang College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhenjiang College filed Critical Zhenjiang College
Priority to CN202111043870.3A priority Critical patent/CN113780563A/en
Publication of CN113780563A publication Critical patent/CN113780563A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computing arrangements using knowledge-based models
    • G06N5/04 - Inference or reasoning models
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G06F18/24 - Classification techniques
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06N3/08 - Learning methods


Abstract

The invention discloses a quick-covering case base maintenance method comprising the following steps: first, case base information is obtained from a CBR application system and projected into a space of one higher dimension; next, the case base space is partitioned by similarity to obtain coverage domains and sub-classes; finally, a three-layer feedforward neural network is constructed to realize quick recall of the most similar cases. By adopting a three-layer feedforward neural network that is easy to construct and understand together with the domain covering algorithm, the invention effectively reduces the algorithmic complexity of the neural network and guarantees the operating capability and efficiency of the CBR system.

Description

Quick-covering case base maintenance method
Technical Field
The invention relates to a Case Base Maintenance (CBM) method for Case-Based Reasoning (CBR), and in particular to the performance maintenance of large-scale, irreducible case bases.
Background
CBR derives from the analogical reasoning of human cognition. A case is a piece of knowledge with context information, and the case base is the main knowledge base of a CBR system. The learning function continuously adds new cases to the case base, and every case in the case base may, after adaptation, be used to solve a future problem. In general, the larger the case base and the richer its knowledge, the higher the intelligence level the system can exhibit.
As the core knowledge base of a CBR inference system, the case base is an important machine-learning asset but is difficult to maintain. One major factor is that case bases are large and unstructured or semi-structured, sometimes even expressed in natural language. A CBR system therefore has to maintain and manage a large group of cases, and time and space complexity must be considered carefully; otherwise the situation "the larger the case base, the weaker the system performance" may arise, causing the "swamping problem": the capability and efficiency problem of CBR systems.
Therefore, a CBR application system must have an independent case base maintenance function. At present, the mainstream approach in academia and industry is to limit the scale of the case base, typically by establishing rules to find and delete "inefficient" or even "useless" cases. Examples come from the ICCBR community (International Conference on Case-Based Reasoning, merged with the European workshop EWCBR and held annually), such as Leake, D. & Schack, B., "Flexible Feature Deletion: Compacting Case Bases by Selectively Compressing Case Contents" (2015) and "Exploration vs. Exploitation in Case-Base Maintenance: Leveraging Competence-Based Deletion with Ghost Cases" (2018), among other applications and studies, all aimed at solving the swamping problem.
However, in fields such as e-commerce online sales, interactive CBR, and distributed CBR applications, especially fault diagnosis and online decision-making, case bases easily reach a scale of many thousands, and each case represents irreplaceable, precious experience that cannot be deleted. Maintaining such large, irreducible case bases is essential to keep the system running reliably.
Disclosure of Invention
The invention aims to overcome the problems and defects of the prior art by providing a quick-covering case base maintenance method that is particularly suitable for maintaining irreducible case bases.
By adopting a multilayer feedforward neural network that is easy to construct and understand together with the domain covering algorithm, the invention effectively reduces the algorithmic complexity of the neural network, solves the time and efficiency problems caused by growing case base scale, and guarantees efficient operation of the CBR system in an irreducible case base environment.
To achieve this purpose, the invention provides the following technical scheme:
A quick-covering case base maintenance method comprises the following steps:
S1. Obtain case base information from a CBR application system and perform a dimension-extending spatial projection;
S2. Partition the case base space by similarity to obtain coverage domains and sub-classes;
S3. Construct a three-layer feedforward neural network to realize quick recall of the most similar cases.
Further preferably, the specific content and steps of obtaining the case base information from the CBR application system and performing the dimension-extending projection in step S1 comprise:
S11. Obtain the attribute dimension, number, and category information of the case base (denoted n, m, and r respectively) from the case base system;
S12. Add one dimension to the n-dimensional input sample vector space to extend the dimension, where the input sample set K = {x_1, x_2, ..., x_m} is divided by category into r subsets, i.e. K = {K_1, K_2, ..., K_r};
S13. Convert the input samples to equal length: apply the sphere transformation T to the bounded set D in the (n+1)-dimensional space, T: D → Sⁿ; for any x ∈ D, let

T(x) = (x, √(R² − ‖x‖²)),

where ‖x‖ is the length of x and R ≥ max{‖x‖ : x ∈ D};
S14. Apply the spatial projection so that all input samples are projected onto the hypersphere Sⁿ of radius R.
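As a hedged illustration of steps S12-S14 (the function name and array layout are my own; the patent specifies only the transform itself), the dimension-extending sphere projection can be sketched in Python:

```python
import numpy as np

def project_to_sphere(X, R=None):
    """Lift n-dimensional samples to n+1 dimensions so that every sample
    lies on a hypersphere of radius R: T(x) = (x, sqrt(R^2 - |x|^2))."""
    X = np.asarray(X, dtype=float)
    norms = np.linalg.norm(X, axis=1)                   # |x| for each sample
    if R is None:
        R = norms.max()                                 # R >= max{|x| : x in D}
    extra = np.sqrt(np.maximum(R**2 - norms**2, 0.0))   # the added coordinate
    return np.column_stack([X, extra]), R

# After projection every sample has length exactly R, so similarity on the
# sphere can be measured by inner products alone.
X = np.array([[3.0, 4.0], [0.0, 1.0], [1.0, 1.0]])
Xp, R = project_to_sphere(X)
# np.linalg.norm(Xp, axis=1) -> [5.0, 5.0, 5.0]
```

Equal lengths are what later allow each coverage domain to be described by a single centre vector and an inner-product threshold.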
Further preferably, the specific content and steps of partitioning the case base space by similarity to obtain coverage domains and sub-classes in step S2 comprise:
S21. Construct a sphere coverage domain C(k) for the j-th class input samples K_j as follows: compute the centre point of all samples, take a not-yet-covered point of K_j, then find the nearest sample point a_i ∈ K_j and start the covering from that point;
S22. Compute the domain C(a_i) centred at a_i, and let C(a_i) ∩ K_j = D_i, i = 1, 2, ..., with D_0 = Φ;
S23. If D_{i-1} = D_i, obtain the centre of gravity b of D_i;
S24. Find the translation point a_{i+1} of a_i, let a_{i+1} = b, compute the corresponding domain C(a_{i+1}) to obtain D_{i+1}; if D_i = D_{i+1}, obtain the centre of gravity b of D_{i+1}, let a_{i+1} = b, increment i, and return to computing the domain C(a_i) centred at a_i;
S25. After a coverage C(k) is obtained in this way, delete the covered points and record the coverage as C_{ij}, the i-th coverage of class K_j; continue to find further coverages until all sample coverage domains of class j, {C_{1j}, C_{2j}, ..., C_{kj}}, are obtained;
S26. Repeat the above steps to obtain the sphere coverage domains of all classes of input samples, finally obtaining a batch of coverages {{C_{11}, C_{21}, ..., C_{p1}}, {C_{12}, C_{22}, ..., C_{q2}}, ..., {C_{1r}, C_{2r}, ..., C_{kr}}}, corresponding respectively to the r major classes and their domain sub-classes.
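The covering loop of steps S21-S25 can be sketched as follows. This is a simplified, hypothetical reading (the names and the concrete radius rule are my own, not taken from the patent): the sphere radius is set just under the distance to the nearest foreign-class sample, and the centre is repeatedly moved to the centre of gravity of the points it covers, mirroring the translation step S24.

```python
import numpy as np

def cover_class(X, y, target, max_iter=50):
    """Greedy sphere covering for one class: seed at an uncovered point,
    grow a sphere that excludes foreign-class samples, shift its centre to
    the centre of gravity of the covered points until stable, then repeat.
    Assumes at least one sample of a different class exists."""
    same, other = X[y == target], X[y != target]
    uncovered = np.ones(len(same), dtype=bool)
    covers = []                                    # list of (centre, radius)
    while uncovered.any():
        seed = np.flatnonzero(uncovered)[0]
        c, r = same[seed], 0.0
        for _ in range(max_iter):
            # Radius just under the nearest foreign sample, so only
            # same-class points can fall inside this coverage.
            r = np.inf if len(other) == 0 else \
                np.linalg.norm(other - c, axis=1).min() * 0.999
            inside = np.linalg.norm(same - c, axis=1) <= r
            if not inside.any():
                break
            b = same[inside].mean(axis=0)          # centre of gravity (step S23)
            if np.allclose(b, c):                  # covered set is stable
                break
            c = b                                  # translation a_{i+1} = b (step S24)
        covers.append((c, r))
        uncovered[seed] = False                    # delete covered points (step S25)
        uncovered &= np.linalg.norm(same - c, axis=1) > r
    return covers

# Three tight class-0 points far from a single class-1 point collapse
# into a single coverage.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]])
y = np.array([0, 0, 0, 1])
covers = cover_class(X, y, 0)
# len(covers) -> 1
```

Two well-separated clusters with the same class label would instead produce two coverages, matching property 2) below the rejection-sample discussion.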
Further, step S2 of partitioning the case base space by similarity to obtain coverage domains and sub-classes also includes handling rejection samples, by one of the following: treating each rejection sample independently as its own domain coverage; when the scatter among the rejection samples is large, enlarging the domain radius; or assigning the rejection samples to the nearest coverage according to empirical probability.
The samples in the coverage domains obtained by the above steps can be shown to have the following properties:
1) Samples in the same coverage domain carry the same major-class label and are strongly similar;
2) Samples with the same major-class label that differ greatly cannot lie in the same coverage domain and are dispersed into several different coverage domains;
3) Similar samples of different classes are not grouped into the same coverage domain.
Further preferably, the specific content and steps of constructing a three-layer feedforward neural network for the obtained coverage domains in step S3 comprise:
S31. The first layer (input layer) takes p neurons A_1, A_2, ..., A_p, where A_i is the neuron corresponding to coverage C(i), with weight and threshold W(1) = (a_i), θ(1) = (θ_i);
S32. The second layer (hidden layer) takes the same number of neurons as the first layer, B_1, B_2, ..., B_p (the weight and threshold formula appears only as an image in the original);
S33. The third layer (output layer) takes r neurons C_1, C_2, ..., C_r, where r is the number of sample classes (the weight and threshold formula appears only as an image in the original).
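Because the layer-2 and layer-3 weight formulas survive only as images in the source, the forward pass below is a hedged reconstruction from the textual description (p coverage neurons, a pass-through hidden layer, and r class neurons that OR together the coverages of their class); the inner-product membership test assumes samples already projected onto the sphere as in step S1:

```python
import numpy as np

def covering_net_predict(x, centres, thetas, cover_cls, n_classes):
    """Three-layer covering-network forward pass (sketch, not the patent's
    exact weights).  Layer 1: MP neuron A_i fires when <a_i, x> >= theta_i,
    i.e. x lies inside coverage C(i) on the sphere.  Layer 2: passes the p
    coverage activations through.  Layer 3: class neuron C_j fires if any
    coverage belonging to class j fired."""
    a1 = (np.asarray(centres) @ np.asarray(x) >= np.asarray(thetas)).astype(int)
    a2 = a1                                    # hidden layer: identity here
    out = np.zeros(n_classes, dtype=int)
    for fired, cls in zip(a2, cover_cls):
        out[cls] |= int(fired)                 # OR over the class's coverages
    return out

# Two coverages, one per class: a sample aligned with the first centre
# activates class 0 only.
centres = np.array([[1.0, 0.0], [0.0, 1.0]])
pred = covering_net_predict([1.0, 0.0], centres, [0.5, 0.5], [0, 1], 2)
# pred -> array([1, 0])
```

The appeal of this structure is that each neuron has a geometric meaning (one sphere coverage), which is what the description later calls avoiding the neural network black-box problem.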
Compared with existing maintenance methods, the invention has the following remarkable advantages and beneficial effects:
1. Large-scale irreducible case bases can be maintained directly;
2. The case base is classified into many domain sub-classes, so cases can be recalled quickly;
3. The easy-to-understand MP-neuron sphere-domain representation avoids the neural network black-box problem;
4. Rejection samples can be handled separately, avoiding over-generalization in classifier training;
5. Because classification training of the case base is fast, case base attributes can be adjusted and classifications updated dynamically, meeting the time limits of case recall and realizing dynamic case base maintenance.
Drawings
Fig. 1 is a flowchart of the steps of the quick-covering case base maintenance method according to the embodiment of the invention.
Fig. 2 is a functional diagram of the quick-covering case base maintenance system according to the embodiment of the invention.
Fig. 3 is a schematic diagram (partial) of the hyperspace coverage domains formed during quick-covering case base maintenance according to the embodiment of the invention: three classes of samples are distributed over 6 different coverage domains, and the samples in each coverage domain are the most similar cases.
Fig. 4 is a schematic diagram of the three-layer feedforward neural network for quick-covering case base maintenance according to the embodiment of the invention; the obtained coverage domains are used as input to construct the network and achieve quick recall of the most similar cases.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the embodiments described below are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a CBR application system, the CBM maintenance effect involves two parts, training and testing (recall). The first is the time used directly for case base maintenance (background time); the second is the case recall time, i.e. the actual time a user must wait for the CBR problem solution while the system runs, which directly reflects the system's running efficiency (foreground time). To address the difficulty that irreducible case bases easily reach a scale of many thousands, straining the system's running time and efficiency, a three-layer feedforward neural network that is easy to construct and understand is adopted, and the domain covering algorithm effectively reduces the network's algorithmic complexity, guaranteeing the operating capability and efficiency of the CBR system.
The embodiments of the application provide a quick-covering case base maintenance method. The cases below use an ordinary personal notebook computer (Intel Core i7-7500U, 2.7 GHz, 8 GB RAM) and common datasets from the UCI Machine Learning Repository; the training set serves as the case base, and the test set simulates new cases to be matched and recalled, with 5-fold cross-validated data analysis (training set to test set ratio 4:1, randomly selected).
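The 5-fold protocol just described (random 4:1 train/test split, each fold tested once) can be sketched as follows; the function name and seeding are illustrative, not from the patent:

```python
import random

def five_fold_splits(n_samples, seed=0):
    """Yield (train, test) index lists for 5-fold cross-validation:
    shuffle the indices once, then let each fifth of the data serve as
    the test set exactly once."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    fold = n_samples // 5
    for k in range(5):
        test = idx[k * fold:(k + 1) * fold]
        train = idx[:k * fold] + idx[(k + 1) * fold:]
        yield train, test

# For the 5000-instance "waveform" set this reproduces the 4000/1000
# case-base/test split used in embodiment one.
splits = list(five_fold_splits(5000))
# len(splits) -> 5; each test fold has 1000 indices, each train set 4000
```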
Embodiment one:
Fig. 1 is a flowchart of the steps of the quick-covering case base maintenance method according to the embodiment of the invention. The example data come from the UCI machine learning dataset "waveform": 5000 instances, 21 dimensions, 3 classes.
As shown in fig. 1, the quick-covering case base maintenance method according to the embodiment of the invention comprises the following steps:
S1. Obtain case base information from the CBR application system and perform the dimension-extending spatial projection; the specific contents and steps are:
S11. Obtain from the case base system the case base attribute dimension 21, number 4000, and category information 3;
S12. Add one dimension to the 21-dimensional input sample vector space;
S13. Convert the input samples to equal length: apply the sphere transformation T to the bounded set D in the (n+1)-dimensional space, T: D → Sⁿ; for any x ∈ D, T(x) = (x, √(R² − ‖x‖²)), where ‖x‖ is the length of x and R ≥ max{‖x‖ : x ∈ D};
S14. Apply the spatial projection to project the 4000 case input samples onto the hypersphere.
S2. Partition the case base space by similarity to obtain coverage domains and sub-classes.
Fig. 2 is a functional diagram of the quick-covering case base maintenance system according to the embodiment of the invention; the 4000 cases projected into the high-dimensional space are spatially partitioned. Each rejection sample is listed as its own coverage, giving 74 coverage domains in total (i.e. 74 clusters of the most similar samples, the sub-classes, under 3 major classes) and forming the domain coverage map (partial) shown in fig. 3, in which three classes of samples are distributed over 6 different coverage domains. The samples in each coverage domain are the most similar cases, averaging about 68 instances per sub-class; this step took 0.105 s.
S3. Construct the three-layer feedforward neural network to realize quick recall of the most similar cases.
Fig. 4 is a schematic diagram of the three-layer feedforward neural network for quick-covering case base maintenance according to the embodiment of the invention; the 74 coverage domains obtained are used as its input, achieving quick recall of the most similar cases. Recalling 1000 test cases took 0.013 ms on average, with an accuracy of 77.52%.
The hierarchical partition structure of the case base is thus realized with all cases fully retained; maintenance directly consumed 0.105 s, case recall is more accurate and faster, and irreducible case base maintenance is achieved.
Embodiment two:
Fig. 1 is a flowchart of the steps of the quick-covering case base maintenance method according to the embodiment of the invention. The example data come from the UCI machine learning dataset "letter": 20000 instances, 16 dimensions, 26 classes.
As shown in fig. 1, the quick-covering case base maintenance method according to the embodiment of the invention comprises the following steps:
S1. Obtain case base information from the CBR application system and perform the dimension-extending spatial projection; the specific contents and steps are:
S11. Obtain from the case base system the case base attribute dimension 16, number 16000, and category information 26;
S12. Add one dimension to the 16-dimensional input sample vector space;
S13. Convert the input samples to equal length: apply the sphere transformation T to the bounded set D in the (n+1)-dimensional space, T: D → Sⁿ; for any x ∈ D, T(x) = (x, √(R² − ‖x‖²)), where ‖x‖ is the length of x and R ≥ max{‖x‖ : x ∈ D};
S14. Apply the spatial projection to project the 16000 case input samples onto the hypersphere.
S2. Partition the case base space by similarity to obtain coverage domains and sub-classes.
Fig. 2 is a functional diagram of the quick-covering case base maintenance system according to the embodiment of the invention; the 16000 cases projected into the high-dimensional space are spatially partitioned. The scatter among the rejection samples is large, so the domain radius is enlarged by 10%, giving 2107 coverage domains in total (i.e. 2107 clusters of the most similar samples, the sub-classes, under 26 major classes) and forming the domain coverage map (partial) shown in fig. 3. The samples in each coverage domain are the most similar cases, averaging about 8 instances per sub-class; this step took 2.092 s.
S3. Construct the three-layer feedforward neural network to realize quick recall of the most similar cases.
Fig. 4 is a schematic diagram of the three-layer feedforward neural network for quick-covering case base maintenance according to the embodiment of the invention; the 2107 coverage domains obtained are used as its input, achieving quick recall of the most similar cases. Recalling 4000 test cases took 0.021 ms on average, with an accuracy of 86.51%.
The hierarchical partition structure of the case base is thus realized with all cases fully retained; maintenance directly consumed 2.092 s, case recall is more accurate and faster, and irreducible case base maintenance is achieved.
Embodiment three:
Fig. 1 is a flowchart of the steps of the quick-covering case base maintenance method according to the embodiment of the invention. The example data come from the UCI machine learning dataset "forest cover type": 581012 instances, 55 dimensions, 7 classes; the experiment tests dynamic maintenance of a large-scale case base.
As shown in fig. 1, the quick-covering case base maintenance method according to the embodiment of the invention comprises the following steps:
S1. Obtain case base information from the CBR application system and perform the dimension-extending spatial projection; the specific contents and steps are:
S11. Obtain from the case base system the case base attribute dimension 55 and category information 7, with case numbers of 10000, 50000, and 100000 respectively;
S12. Add one dimension to the 55-dimensional input sample vector space;
S13. Convert the input samples to equal length: apply the sphere transformation T to the bounded set D in the (n+1)-dimensional space, T: D → Sⁿ; for any x ∈ D, T(x) = (x, √(R² − ‖x‖²)), where ‖x‖ is the length of x and R ≥ max{‖x‖ : x ∈ D};
S14. Apply the spatial projection to project the 10000, 50000, and 100000 case input samples onto the hypersphere respectively.
S2. Partition the case base space by similarity to obtain coverage domains and sub-classes. Fig. 2 is a functional diagram of the quick-covering case base maintenance system according to the embodiment of the invention; the rejection samples are assigned to the nearest coverage according to empirical probability, forming the domain coverage map (partial) shown in fig. 3:
Partitioning the 10000 cases projected into the high-dimensional space yields on average 2468 coverage domains (i.e. 2468 clusters of the most similar samples, the sub-classes), taking 5.357 s.
Partitioning the 50000 cases projected into the high-dimensional space yields on average 6843 coverage domains (i.e. 6843 clusters of the most similar samples, the sub-classes), taking 25.174 s.
Partitioning the 100000 cases projected into the high-dimensional space yields on average 7651 coverage domains (i.e. 7651 clusters of the most similar samples, the sub-classes), taking 38.187 s.
S3. Construct the three-layer feedforward neural network to realize quick recall of the most similar cases.
Recalling the three groups of test cases took on average 0.121 ms to 2.49 ms each, with accuracies of 62.43% to 83.25%.
In most existing case base maintenance methods, the maintenance time is spent directly on the case base; because this takes too long, the case base can only be processed in the background and cannot be adjusted dynamically. As step S2 shows, even for the largest experimental case base of 100000 cases, direct maintenance takes only 38.187 s; dynamic maintenance of the case base can therefore be realized, making case base maintenance fast and efficient.

Claims (5)

1. A quick-covering case base maintenance method, characterized by comprising the following steps:
S1. Obtaining case base information from a CBR application system and performing a dimension-extending spatial projection;
S2. Partitioning the case base space by similarity to obtain coverage domains and sub-classes;
S3. Constructing a three-layer feedforward neural network to realize quick recall of the most similar cases.
2. The quick-covering case base maintenance method according to claim 1, characterized in that the specific content and steps of obtaining the case base information from the CBR application system and performing the dimension-extending projection in step S1 comprise:
S11. Obtaining the attribute dimension, number, and category information of the case base from the case base system;
S12. Adding one dimension to the n-dimensional input sample vector space;
S13. Converting the input samples into space vectors of equal length: applying the sphere transformation T to the bounded set D in the (n+1)-dimensional space, T: D → Sⁿ; for any x ∈ D, T(x) = (x, √(R² − ‖x‖²)), where ‖x‖ is the length of x and R ≥ max{‖x‖ : x ∈ D};
S14. Applying the spatial projection so that all input samples are projected onto the hypersphere.
3. The quick-covering case base maintenance method according to claim 1, characterized in that the specific content and steps of partitioning the case base space by similarity to obtain coverage domains and sub-classes in step S2 comprise:
S21. Constructing a sphere coverage domain C(k) for the j-th class input samples K_j as follows: computing the centre point of all samples, taking a not-yet-covered point of K_j, then finding the nearest sample point a_i ∈ K_j and starting the covering from that point;
S22. Computing the domain C(a_i) centred at a_i, and letting C(a_i) ∩ K_j = D_i, i = 1, 2, ..., with D_0 = Φ;
S23. If D_{i-1} = D_i, obtaining the centre of gravity b of D_i;
S24. Finding the translation point a_{i+1} of a_i, letting a_{i+1} = b, computing the corresponding domain C(a_{i+1}) to obtain D_{i+1}; if D_i = D_{i+1}, obtaining the centre of gravity b of D_{i+1}, letting a_{i+1} = b, incrementing i, and returning to the computation of the domain C(a_i) centred at a_i;
S25. After the coverage C(k) is obtained, deleting the covered points; continuing to find further coverages until all sample coverage domains of class j, {C_{1j}, C_{2j}, ..., C_{kj}}, are obtained;
S26. Repeating the above steps to obtain the sphere coverage domains of all classes of input samples, finally obtaining a batch of coverages {{C_{11}, C_{21}, ..., C_{p1}}, {C_{12}, C_{22}, ..., C_{q2}}, ..., {C_{1r}, C_{2r}, ..., C_{kr}}}.
4. The quick-covering case base maintenance method according to claim 1, characterized in that the specific content and steps of constructing a three-layer feedforward neural network in step S3 comprise:
S31. The first layer (input layer) takes p neurons A_1, A_2, ..., A_p, where A_i is the neuron corresponding to coverage C(i), with weight and threshold W(1) = (a_i), θ(1) = (θ_i);
S32. The second layer (hidden layer) takes the same number of neurons as the first layer, B_1, B_2, ..., B_p (the weight and threshold formula appears only as an image in the original);
S33. The third layer (output layer) takes r neurons C_1, C_2, ..., C_r, where r is the number of sample classes (the weight and threshold formula appears only as an image in the original).
5. The quick-covering case base maintenance method according to claim 3, characterized in that partitioning the case base space by similarity to obtain coverage domains and sub-classes in step S2 further comprises handling the rejection samples by:
treating each rejection sample independently as its own domain coverage;
or, when the scatter among the rejection samples is large, enlarging the domain radius;
or assigning the rejection samples to the nearest coverage according to empirical probability.
CN202111043870.3A 2021-09-07 2021-09-07 Quick-covering case base maintenance method Pending CN113780563A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111043870.3A CN113780563A (en) 2021-09-07 2021-09-07 Quick-covering case base maintenance method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111043870.3A CN113780563A (en) 2021-09-07 2021-09-07 Quick-covering case base maintenance method

Publications (1)

Publication Number Publication Date
CN113780563A true CN113780563A (en) 2021-12-10

Family

ID=78841547

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111043870.3A Pending CN113780563A (en) 2021-09-07 2021-09-07 Quick-covering case base maintenance method

Country Status (1)

Country Link
CN (1) CN113780563A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102015002367A1 (en) * 2014-03-02 2015-09-03 Gabriele Trinkel Secure data transfer and scaling, cloud over-load protection and cloud computing
CN112415583A (en) * 2020-11-06 2021-02-26 中国科学院精密测量科学与技术创新研究院 Seismic data reconstruction method and device, electronic equipment and readable storage medium
CN112765133A (en) * 2020-12-25 2021-05-07 广东电网有限责任公司电力科学研究院 Maintenance system and maintenance method for PAS case base

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
DE102015002367A1 (en) * 2014-03-02 2015-09-03 Gabriele Trinkel Secure data transfer and scaling, cloud over-load protection and cloud computing
CN112415583A (en) * 2020-11-06 2021-02-26 中国科学院精密测量科学与技术创新研究院 Seismic data reconstruction method and device, electronic equipment and readable storage medium
CN112765133A (en) * 2020-12-25 2021-05-07 广东电网有限责任公司电力科学研究院 Maintenance system and maintenance method for PAS case base

Non-Patent Citations (4)

Title
JIAN YANG LI et al.: "Case-base maintenance based on multi-layer alternative-covering algorithm", 2006 International Conference on Machine Learning and Cybernetics, pp. 2035-2039 *
Zhou Ying; Xie Yangqun; Zhang Ling: "Model and simulation research on the probability-based covering algorithm" (in Chinese), ***仿真学报, vol. 20, no. 17, pp. 4609-4612 *
Li Jianyang, Ni Zhiwei, Liu Huiting: "Application of multilayer feedforward neural networks in case-based reasoning" (in Chinese), 计算机应用 (Journal of Computer Applications), vol. 25, no. 11, pp. 2650-2652 *
Li Jianyang; Ni Zhiwei; Liu Huiting; Zheng Hanyuan: "Case base maintenance based on covering algorithms and multilayer feedforward networks" (in Chinese), 中国科学技术大学学报 (Journal of University of Science and Technology of China), vol. 37, no. 02, pp. 159-163 *

Similar Documents

Publication Publication Date Title
Maurya et al. Simplifying approach to node classification in graph neural networks
Wang et al. PC-GAIN: Pseudo-label conditional generative adversarial imputation networks for incomplete data
Xu et al. Effective community division based on improved spectral clustering
Hafiz et al. Image classification using convolutional neural network tree ensembles
Khoder et al. Ensemble learning via feature selection and multiple transformed subsets: Application to image classification
Hao et al. Sentiment recognition and analysis method of official document text based on BERT–SVM model
Bobadilla et al. Creating synthetic datasets for collaborative filtering recommender systems using generative adversarial networks
Lee et al. Framework for the classification of imbalanced structured data using under-sampling and convolutional neural network
Rasheed Improving prediction efficiency by revolutionary machine learning models
Nithya et al. A comprehensive survey of machine learning: Advancements, applications, and challenges
Mawane et al. Unsupervised deep collaborative filtering recommender system for e-learning platforms
Shan et al. Incorporating user behavior flow for user risk assessment
Hasibuan Towards using universal big data in artificial intelligence research and development to gain meaningful insights and automation systems
Jia et al. Investigating the geometric structure of neural activation spaces with convex hull approximations
Chen et al. Gaussian mixture embedding of multiple node roles in networks
Stock et al. Efficient pairwise learning using kernel ridge regression: an exact two-step method
Johnpaul et al. General representational automata using deep neural networks
Chen et al. D-trace: deep triply-aligned clustering
CN113780563A (en) Quick-covering case base maintenance method
Tofangchi et al. Towards distributed cognitive expert systems
Jiao et al. Neural network data mining clustering optimization algorithm
Rajita et al. GAN‐C: A generative adversarial network with a classifier for effective event prediction
Zhai et al. Label distribution learning based on ensemble neural networks
Djerrab et al. Output fisher embedding regression
Pareek et al. A Review on Rural Women’s Entrepreneurship Using Machine Learning Models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination