CN110276453B - Method for reasoning by using knowledge representation based on characteristics - Google Patents


Info

Publication number
CN110276453B
Authority
CN
China
Prior art keywords: feature, rule, node, nodes, matching
Prior art date
Legal status (assumption, not a legal conclusion)
Active
Application number
CN201910456567.2A
Other languages
Chinese (zh)
Other versions
CN110276453A (en
Inventor
胡启平
冯睿
胡煜州
Current Assignee
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201910456567.2A priority Critical patent/CN110276453B/en
Publication of CN110276453A publication Critical patent/CN110276453A/en
Application granted granted Critical
Publication of CN110276453B publication Critical patent/CN110276453B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/046 Forward inferencing; Production systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for reasoning with a feature-based knowledge representation. The method first defines the category of an object, then defines features on that category, forming the set of all features defined on the category; when a feature is defined it is named and identified, and an association between the identification and the feature is established, which facilitates centralized, unified management and maintenance. The invention can clearly express various kinds of knowledge in the relevant fields; repeated and conflicting knowledge can be discovered in time; the problem of representing structural knowledge is effectively solved; an exhaustive enumeration of factual knowledge is realized; and the sharing and acquisition of knowledge are facilitated. Using features as inference conditions and establishing inference rules over feature identifications solves the consistency problem caused by too many rules in the knowledge base. Using feature-set identifications as inference nodes in a deduction-lattice structure greatly improves inference and retrieval efficiency, solves the problems of repeated and conflicting rules, and yields strong problem-solving capability.

Description

Method for reasoning by using knowledge representation based on characteristics
Technical Field
The invention belongs to the technical field of computers, relates to a method for solving complex problems by a computer system, and particularly relates to a knowledge representation method based on characteristics and a method for reasoning by using the knowledge representation method.
Background
Existing inferential decision methods include forward and reverse reasoning.
Forward reasoning, also known as forward chaining, derives conclusions from conditions: starting from a set of facts, inference rules are applied to establish a target fact or proposition. The general process is to supply some initial known facts to the comprehensive database; the control system matches these data against knowledge in the knowledge base, and each triggered piece of knowledge adds its conclusion to the comprehensive database as a new fact. This process repeats, matching the facts in the updated comprehensive database against further knowledge and updating the database, until no new knowledge can be matched and no new fact is added. The system then tests whether a solution has been obtained; if so, the solution is returned, otherwise failure is reported.
Reverse reasoning, also called backward chaining or goal-driven reasoning, works from a conclusion back to evidence: it searches the knowledge base for evidence that verifies the correctness of the conclusion, and once such evidence is found, the conclusion is established. The basic process is to propose a set of hypotheses (goals) and verify their correctness one by one using the knowledge.
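As an illustration of the backward-chaining process just described (all rule and fact names below are hypothetical, not taken from the patent), verifying a goal can be sketched as a recursive search for supporting evidence:

```python
# Minimal backward-chaining sketch: each conclusion maps to one or more
# alternative premise sets; proving a goal means it is a known fact, or
# some rule's premises can all be proven in turn.
RULES = {
    "mammal": [["has_fur"], ["gives_milk"]],   # alternative premise sets
    "dog": [["mammal", "barks"]],
}

def prove(goal, facts):
    """Return True if goal is a known fact or derivable via some rule."""
    if goal in facts:
        return True
    for premises in RULES.get(goal, []):
        if all(prove(p, facts) for p in premises):
            return True
    return False
```

For example, `prove("dog", {"has_fur", "barks"})` succeeds because `mammal` is first established from `has_fur`.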
Before forward and backward reasoning is used, an organization needs to acquire knowledge, namely, the knowledge acquired from data needs to be expressed in a certain form, so that the knowledge can be understood and can be executed by a computer, and the process is called knowledge expression.
The most important feature of knowledge is the repetition or regularity of some actual phenomena, which are observed by people over a long period of time, and which occur simultaneously or immediately after the occurrence of other actual phenomena. The information structure is formed by associating related information together and reflects the relation between things in the objective world.
Knowledge representation: is a set of conventions made to describe the world, either symbolized, formalized, or modeled knowledge. From the perspective of computer science, knowledge representation is a general method for researching feasibility and effectiveness of computer representation knowledge, and is a strategy for representing human knowledge into a data structure and a system control structure which can be processed by a machine. In brief, knowledge representation describes the knowledge of human brain in a manner of easy computer processing, reasonably describes, stores and effectively utilizes acquired related knowledge in a form of computer internal code, and is the basis of knowledge acquisition and application and the key problem of the whole process of constructing and applying a knowledge base. Knowledge representation in an expert system is therefore a series of technical means in an expert system that can accomplish the computer processing of the expert's knowledge.
A common knowledge representation method, namely a rule-based knowledge representation method, also called generative rule representation method, is widely applied in various fields because the method is very similar to a method for representing knowledge by human experts.
The generative (production) rule representation, also known as rule-based knowledge representation, was proposed by the logician E. Post in 1943; Newell and Simon later developed a rule-based production system while studying human cognitive models. In a production system, knowledge is divided into two parts: static knowledge (e.g., things, events, and the relationships between them) is represented by facts, while the process of reasoning and behavior is represented by production rules. A production system generally comprises three parts: a comprehensive database (fact base), a knowledge base (rule base), and a control system (inference engine). The comprehensive database stores data related to solving the problem, namely factual knowledge and intermediate results of reasoning; the knowledge base stores knowledge related to solving the problem, the productions, and the operations associated with the rules; the control system matches facts against knowledge, determines which knowledge is usable, applies knowledge or operations to add to, delete from, and modify the comprehensive database, judges the state of the database, and stops the system at the proper time.
The generative rule representation comprises two parts: the IF part is a front item and is a precondition; the THEN part is a latter term, and is an operation result or conclusion. Its basic syntax is:
IF < antecedent > THEN < consequent >
For a rule there may be multiple antecedents, combined by AND or OR relationships. The consequent of a rule may also contain several statements, such as: <consequent 1>, <consequent 2>, ....
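A minimal sketch of this IF-THEN form together with the forward-chaining loop described above (fact and rule names are invented for illustration):

```python
# Sketch of IF <antecedent> THEN <consequent> rules with AND-combined
# antecedents, applied by forward chaining until no new fact is added.
rules = [
    ({"fever", "cough"}, "flu_suspected"),          # IF fever AND cough THEN ...
    ({"flu_suspected", "high_temp"}, "see_doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                      # repeat until a fixed point is reached
        changed = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)   # triggered rule adds its conclusion
                changed = True
    return facts
```

Starting from `{"fever", "cough", "high_temp"}`, the first rule fires and its conclusion then enables the second, mirroring how new facts in the comprehensive database trigger further knowledge.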
Referring to FIG. 1, feature extraction and classification are two important steps in building a fact library. The feature extraction mainly selects a group of features which are most effective for classification from an original feature space, so that the space dimension of the feature vector can be effectively reduced, redundant features are deleted, the interference of irrelevant information on the processing process is reduced, and the classification accuracy is improved. The feature space after feature extraction is used for classification, namely, the classification is to judge the samples with unknown class attributes to belong to a certain type. The classification problem has been studied more extensively and deeply in the fields of artificial intelligence, machine learning, pattern recognition, and the like. The classification problem can be described specifically as: the process of finding common features of the same category data from an existing data set with category labels and distinguishing them accordingly.
In order to make the classifier effectively make a classification decision, the classifier should be trained first, i.e., the learning process of the classifier. The learning process of the classifier needs to be repeated for many times, and errors are continuously corrected, so that the class identification rate meeting the requirements can be obtained. The classifier through the learning process distinguishes the characteristic sample to be identified and determines the category attribution, and the process is called a distinguishing process or a testing process. If the class of the training sample is known, the training sample can play a role of supervision and guidance in machine learning, and is called supervised pattern classification; if the class of the training sample is unknown, the method is called unsupervised mode classification; if the class attribute of a partial sample is known, it is called semi-supervised pattern classification.
Early classification work mainly adopted rule-matching methods, which require manually crafting a large number of rules and building a rule base, and are inefficient. In recent years classification has mainly proceeded by devising a feature-extraction strategy and then classifying with machine-learning methods; at present the main methods include K-nearest-neighbor classification, support vector machines (SVM), one-dimensional regression, subspace-analysis methods (PCA, PLSA, etc.), artificial neural networks (ANN), decision trees, Bayesian methods, tensor-based algorithms, and so on.
In summary, the production rule representation has the following advantages and limitations:
the advantage of the production rule representation:
(1) good at expression domain knowledge;
(2) separating control and knowledge;
(3) the modularity of the knowledge is strong;
(4) facilitating implementation of interpretive reasoning;
(5) facilitating the description of suggestions, directives, and heuristic knowledge;
(6) digital and classified data can be processed well without being limited by the type of data.
The simultaneous production rule representation also has some drawbacks:
one, limitation of fact knowledge:
(1) with the gradual increase of the fact knowledge, the maintenance difficulty is also gradually increased;
(2) the fact knowledge cannot be exhausted;
(3) difficulty in expressing structural knowledge;
(4) outdated and invalid knowledge cannot be updated or eliminated in time.
Secondly, limitation of inference rules:
(1) the logical relationship between rules is difficult to determine;
(2) when the number of the rules is too large, the consistency of the knowledge base is difficult to maintain;
(3) the problem of conflicting and repeated rules cannot be solved.
Disclosure of Invention
In order to solve the above technical problems, the invention provides a feature-based knowledge representation method and a method for reasoning with it, with which an analysis and diagnosis system for complex problems can be constructed on intelligent, visual, digital, and interactive equipment.
The technical scheme adopted by the invention is as follows:
1. a method for reasoning using a feature-based knowledge representation, comprising the steps of:
step 1: defining a category of the object;
step 1.1: defining a category of the object; the objects are divided into objects which can be enumerated and can not be enumerated, or are divided into simple objects and composite objects, and the composite objects are formed by combining the simple objects; objects belong to different categories;
the simple object class is defined by a combination of fields;
the composite object type is formed by combining simple object types; the combined type includes the combined definition of fields and also the join operation over several type object sets;
step 1.2: setting an Algorithm library Algorithm Base, and storing a classification Algorithm which is used by the system and is related to deep learning and machine learning in the Algorithm Base;
step 1.3: setting Type Base, and storing all types in the Type Base for management;
step 2: defining all characteristics of the class on the basis of the object class;
the specific implementation of the step 2 comprises the following sub-steps:
step 2.1: define a feature as F(x), where x is an object and F denotes that x satisfies the classification condition specified by F; F is either a simple condition consisting of a category field and a comparison operator, or a composite condition combining And and Or;
define a series of features F1, F2, ..., Fn on the class A; the set formed by all objects on A is denoted O, and the set of objects satisfying a feature f is defined as
O_f = { x ∈ O | f(x) is true };
when
O_F1 ∪ O_F2 ∪ ... ∪ O_Fn = O,
F1, F2, ..., Fn are called a complete classification, otherwise an incomplete classification;
for any i = 1, ..., n and j = 1, ..., n with i ≠ j, when
O_Fi ∩ O_Fj = ∅,
F1, F2, ..., Fn are called mutually exclusive features, otherwise compatible features;
for a class A, the feature set is the set of all features F1, F2, ..., Fn defined on A; define the feature set as FeatureSet(A), where A is a category and FeatureSet denotes the set of all features defined on category A;
setting a Feature Base, and storing Feature sets on all categories in the Feature Base in a centralized manner;
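The set-theoretic definitions above (O_f, complete vs. incomplete classification, mutually exclusive vs. compatible features) can be checked mechanically; a sketch with hypothetical example features:

```python
# Check completeness and mutual exclusivity of features F1..Fn over an
# object set O, following the definitions O_f = {x in O | f(x) is true}.
def object_sets(O, features):
    return [{x for x in O if f(x)} for f in features]

def is_complete(O, features):
    # complete classification: the union of all O_Fi equals O
    return set().union(*object_sets(O, features)) == set(O)

def is_mutually_exclusive(O, features):
    # mutually exclusive: O_Fi and O_Fj are disjoint for every i != j
    sets = object_sets(O, features)
    return all(sets[i].isdisjoint(sets[j])
               for i in range(len(sets)) for j in range(i + 1, len(sets)))

# hypothetical features over a toy object set
O = range(10)
evens = lambda x: x % 2 == 0
odds = lambda x: x % 2 == 1
```

Here `[evens, odds]` is both a complete classification and mutually exclusive; `[evens]` alone is an incomplete classification.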
step 2.2: for compatibility characteristics, a matching mode is specified, and the matching mode comprises a priority matching mode and a full matching mode:
priority matching means matching in order and returning the single feature that first matches successfully;
full matching means returning the set of all features that match successfully;
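The two matching modes can be sketched as follows (the feature names and the list-of-predicates representation are assumptions for illustration):

```python
# Priority matching returns the first feature that matches in order;
# full matching returns all matching features (a feature set).
def priority_match(x, features):
    for name, f in features:
        if f(x):
            return name          # the unique first successful match
    return None                  # null feature: nothing matched

def full_match(x, features):
    return [name for name, f in features if f(x)]

# hypothetical compatible features on a "person" object
features = [("adult", lambda p: p["age"] >= 18),
            ("senior", lambda p: p["age"] >= 65)]
```

For a 70-year-old, priority matching returns only "adult" (the first in order), while full matching returns the feature set ["adult", "senior"].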
step 3: define an inference rule on the basis of the object classes and the features;
the precondition of an inference rule is defined to consist of features; an inference rule is then expressed as:
F1(x1) ∧ F2(x2) ∧ … ∧ Fn(xn) → h;
where F1(x1), F2(x2), …, Fn(xn) are features taking the value true or false, and a feature is satisfied when it is true; h is the conclusion: when F1(x1), F2(x2), …, Fn(xn) are all true, the corresponding conclusion h is obtained;
setting Rule Base for storing the constructed inference Rule;
setting a Conclusion Base to save the Conclusion or operation result in the rule as a back item;
step 4: carry out reasoning based on the inference rules to obtain an inference conclusion.
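A hedged sketch of steps 3-4: rules whose preconditions are feature identifiers, evaluated against a feature base to produce conclusions (all IDs, fields, and thresholds below are invented for illustration, not taken from the patent):

```python
# Feature-based rules F1(x1) ∧ ... ∧ Fn(xn) -> h, stored in a rule base
# keyed by feature identifiers; conclusions accumulate in a conclusion base.
feature_base = {
    "F_fever": lambda p: p["temp"] > 38.0,   # hypothetical feature IDs
    "F_cough": lambda p: p["cough"],
}
rule_base = [(["F_fever", "F_cough"], "flu_suspected")]

def infer(obj, feature_base, rule_base):
    conclusions = []                        # the conclusion base
    for feature_ids, h in rule_base:
        # when every precondition feature is true, conclusion h is obtained
        if all(feature_base[fid](obj) for fid in feature_ids):
            conclusions.append(h)
    return conclusions
```

Referencing features by identifier, rather than inlining conditions in every rule, is what lets the feature base be managed and maintained centrally.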
The invention also provides a method for reasoning by using the knowledge representation based on the characteristics, which is characterized by comprising the following steps:
step 1: defining a category of the object;
step 1.1: the objects can be classified into enumeratable objects and non-enumeratable objects, and also can be classified into simple objects and composite objects, wherein the composite objects are formed by combining the simple objects; objects belong to different categories;
the simple object class is defined by a combination of fields;
the composite object category is formed by combining simple object categories, and the combined category comprises a combination definition of fields and also comprises join operation of a plurality of category object sets;
step 1.2: setting an Algorithm library Algorithm Base, and storing the classification algorithms related to deep learning and machine learning used by the system in the Algorithm Base;
step 1.3: setting Type Base, and storing all types in the Type Base for management;
step 2: defining all characteristics of the class on the basis of the object class;
the specific implementation of the step 2 comprises the following substeps:
step 2.1: define a feature as F(x), where x is an object and F denotes that x satisfies the classification condition specified by F; F is either a simple condition consisting of a category field and a comparison operator, or a composite condition combining And and Or;
define a series of features F1, F2, ..., Fn on a class A; the set formed by all objects on A is denoted O, and the set of objects satisfying a feature f is defined as
O_f = { x ∈ O | f(x) is true };
when
O_F1 ∪ O_F2 ∪ ... ∪ O_Fn = O,
F1, F2, ..., Fn are called a complete classification, otherwise an incomplete classification;
for any i = 1, ..., n and j = 1, ..., n with i ≠ j, when
O_Fi ∩ O_Fj = ∅,
F1, F2, ..., Fn are called mutually exclusive features, otherwise compatible features;
for a class A, the feature set is the set of all features F1, F2, ..., Fn defined on A; define the feature set as FeatureSet(A), where A is a category and FeatureSet denotes the set of all features defined on category A;
setting a Feature Base, and storing Feature sets on all categories in the Feature Base in a centralized manner;
step 2.2: and assigning matching modes for the compatibility characteristics, wherein the matching modes comprise a priority matching mode and a full matching mode:
priority matching means matching in order and returning the single feature that first matches successfully;
full matching means returning the set of all features that match successfully;
step 3: use the feature set as a node in a deduction lattice structure to carry out reasoning;
step 3.1: match features according to the object type. Depending on whether the features are mutually exclusive or compatible, and on the matching mode chosen for compatible features, the matching result is one of three kinds: 1) a single feature; 2) multiple features, recorded as a feature set; 3) a null feature. For a single feature, build a deduction lattice or match within an existing deduction lattice until a conclusion is reached. For a feature set, several rules can be matched at the same time, so several deduction lattices need to be constructed; according to the importance of the conclusions, the deduction lattice matching the most important conclusion can be selected for reasoning first, and the deduction lattices of secondary conclusions selected step by step afterwards, achieving the effect of matching several conclusions at once. For the null feature, select the path to the right of the current node for matching until all child nodes under that node are exhausted; this case indicates that no matchable rule exists.
the deduction lattice is firstly defined as follows:
(1) Lattice: consider any partially ordered set (L, ≤); if for any elements a, b in L there exist in L a greatest lower bound and a least upper bound of a and b, then (L, ≤) is a lattice;
(2) Rules: C1 ∧ C2 ∧ ... ∧ Cn → h;
where C1, C2, ..., Cn are conditions taking the value true or false, and a condition is satisfied when it is true; h is the conclusion: when C1, C2, ..., Cn are all true, the corresponding conclusion h is obtained;
if there is a rule C1 ∧ C2 ∧ (Ci ∨ ... ∨ Ci+n) → h, the rule must be split into multiple rules, namely: C1 ∧ C2 ∧ Ci → h, ..., C1 ∧ C2 ∧ Ci+n → h;
if there are rules C1 ∧ C2 ∧ C3 → h1 and h1 ∧ C4 ∧ C5 → h, the two rules must be merged into C1 ∧ C2 ∧ C3 ∧ C4 ∧ C5 → h;
if corresponding actions need to be taken according to conditions, a rule is first used to draw a conclusion, and the corresponding action is then carried out according to that conclusion;
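The splitting and merging transformations above can be sketched as follows (encoding each rule as a (conditions, conclusion) pair, with an OR group written as a tuple, is an assumption for illustration):

```python
# Rule preprocessing sketch: a rule is (conditions, conclusion), where
# conditions is a list of ANDed items and a tuple marks an OR group.
def split_or(conditions, conclusion):
    """C1 ∧ (Ci ∨ ... ∨ Ci+n) -> h becomes one rule per OR option."""
    for i, c in enumerate(conditions):
        if isinstance(c, tuple):                 # found an OR group
            return [(conditions[:i] + [opt] + conditions[i + 1:], conclusion)
                    for opt in c]
    return [(conditions, conclusion)]            # nothing to split

def merge_chain(rule1, rule2):
    """Merge C1∧C2∧C3 -> h1 with h1∧C4∧C5 -> h into one chained rule."""
    conds1, h1 = rule1
    conds2, h = rule2
    assert h1 in conds2                          # rule2 consumes rule1's conclusion
    return (conds1 + [c for c in conds2 if c != h1], h)
```

Both transformations normalize the rule base so that every path of the deduction lattice corresponds to one pure conjunction ending in a conclusion.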
then, setting a deduction lattice structure:
the deduction lattice is a lattice with a hierarchical relationship formed by n finite elements, where the elements of the lattice are called nodes and n > 1. Nodes comprise a root node, intermediate nodes, and leaf nodes: a node without a predecessor is the root node, which is unique; nodes without successors are leaf nodes; the remaining nodes are intermediate nodes, which fall into two kinds: those whose successors are intermediate nodes, called pure intermediate nodes, and those whose successors are leaf nodes, called leaf-predecessor nodes. The root node and the intermediate nodes correspond to rule conditions, the leaf nodes correspond to rule conclusions, and a path from the root node to a leaf node corresponds to a rule in the rule base;
the construction algorithm of the deduction lattice is a process of acquiring knowledge in a knowledge base in a certain sequence and forming the deduction lattice. The construction of the deduction lattice is developed by the following six steps:
(1) initializing deduction root nodes
The user initializes an object representing a phenomenon and, according to its type, looks up the feature set defined on that type; matching is performed on this feature set, and assuming the matched feature is C with identifier ID_C, a Node with Id ID_C is built. This Node serves as the root node of the deduction lattice and as the current node CurrentNode; the child nodes of CurrentNode are empty;
(2) form a complete rule set
Using the node identifier ID_C selected during initialization, look up in the rule base all rules containing ID_C; assuming n rules are found in total, record the rule set as BaseRuleSet. Let each rule in the set have m_i features and let the conclusion corresponding to each rule be h_i; the retrieved rules are:
Rule 1: {ID_C, ID_C11, …, ID_C1m1} → h1
Rule 2: {ID_C, ID_C21, …, ID_C2m2} → h2
……
Rule n: {ID_C, ID_Cn1, …, ID_Cnmn} → hn
(3) Calculate the reverse feature set from CurrentNode back to ID_C
Set ReverseIdSet to empty; starting from CurrentNode and following the reverse direction to the root node, add the identifier on each node to ReverseIdSet;
(4) Compute the remaining feature grouping conditions in the rules
For each feature in BaseRuleSet that does not belong to ReverseIdSet, compute a count entry in Counts, each entry consisting of:
[1] the feature ID;
[2] count: the number of times the feature appears in BaseRuleSet;
[3] RuleSet: the rules in which it appears;
let n be the number of distinct features, and sort Counts by the size of count;
(5) Partitioning of the rule set
For i = 0...n, j = i+1...n: if a rule x ∈ Counts[i].RuleSet also satisfies x ∈ Counts[j].RuleSet, then decrement Counts[j].count by 1 and remove x from Counts[j].RuleSet;
count the number k of entries Counts[i], i = 0...n, whose count is not 0, and re-sort these k nonzero entries from large to small;
(6) constructing child nodes
If k is not equal to 0, constructing a sub-node set Nodes [0], Nodes [ k-1], wherein Id is Counts [ i ] Id, sub-Nodes are empty, and i is 0.. k-1;
setting child Nodes of a CurrentNode, namely Nodes [0],. and Nodes [ k-1], and putting Nodes [1],. and Nodes [ k-1], Nodes [1] and Counts [1],. and Counts [ k-1] into a stack, adjusting the CurrentNode to be Nodes [0], adjusting BaseRuleSet to be Counts [0]. RuleSet, and turning to (3);
if k is 0, constructing a child Node, wherein id of the child Node is the conclusion of the corresponding rule, the child Node is null, the child Node of the CurrentNode points to the Node, if the stack is null, the deduction lattice construction is finished, otherwise, popping up Count and Node from the stack respectively, adjusting the CurrentNode to the Node, adjusting the BaseRuleSet to Count.
Thus, the deduction lattice is constructed.
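The six construction steps above can be sketched in simplified form. This is an illustrative reading, not the patented algorithm verbatim: the dict node shape, the function names, and the use of each rule's conclusion as its identity are assumptions, and rule partitioning is done by greedy recursion rather than the explicit Counts-and-stack bookkeeping of steps (3)-(6):

```python
from collections import Counter

def build_lattice(root_id, rules):
    """rules: list of (feature_id_set, conclusion); every set contains
    root_id, and conclusions are assumed distinct (used as rule identity).
    Returns a nested dict {"id": ..., "children": [...]}."""
    def build(node_id, rules, used):
        used = used | {node_id}
        # rules whose features are exhausted end in a conclusion leaf here
        done = [h for ids, h in rules if ids <= used]
        remaining = [(ids, h) for ids, h in rules if not ids <= used]
        # order remaining feature IDs by how many rules they appear in
        counts = Counter(f for ids, _ in remaining for f in ids - used)
        children = [{"id": h, "children": []} for h in done]
        assigned = set()                 # each rule goes under one child only
        for f, _ in counts.most_common():
            bucket = [(ids, h) for ids, h in remaining
                      if f in ids and h not in assigned]
            if bucket:
                assigned.update(h for _, h in bucket)
                children.append(build(f, bucket, used))
        return {"id": node_id, "children": children}
    return build(root_id, rules, set())
```

For the two hypothetical rules {A,B,C}→h1 and {A,B,D}→h2, the shared feature B (count 2) becomes the single child of root A, and C and D branch below it to the conclusion leaves, so each root-to-leaf path corresponds to one rule.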
With the original production-rule representation, as the amount of knowledge and the number of rules in the system keep growing, reasoning efficiency drops and the difficulty of maintaining and managing the knowledge base increases. Faced with massive data, the production-rule representation can neither be maintained and updated in time nor guarantee efficient reasoning.
The present invention addresses the limitations described above with respect to the production rule representation method, and constructs the concepts of "features" and "feature sets". Firstly defining the class of an object, then defining the characteristics on the basis of defining the class, forming a set (simply called a characteristic set) of all the characteristics defined on the class, naming and identifying the characteristics (ID) when defining the characteristics, and establishing the association relationship between the identification and the characteristics, thereby facilitating centralized and unified management and maintenance. The method can clearly express various knowledge in related fields; repeated and conflicting knowledge can be discovered in time; the problem of structural knowledge representation is effectively solved; an exhaustive list of knowledge of facts is realized; and the sharing and acquisition of knowledge are facilitated.
The consistency problem caused by too many knowledge base rules is solved by taking the characteristics as the reasoning conditions and establishing the reasoning rules about the characteristic Identification (ID). Meanwhile, the invention is well combined with a deduction lattice and a deduction method based on the deduction lattice in the invention patent CN201610985211.4, and the characteristic set mark is used as a deduction node in the deduction lattice structure, thereby greatly improving the reasoning and retrieval efficiency, solving the rule problem about repetition and conflict, and having strong problem solving capability.
Drawings
FIG. 1 is a flow of feature extraction and classification in the background of the invention;
FIG. 2 is a flow chart of an embodiment of the present invention;
fig. 3 is an algorithm library, a type library, a feature library, a rule library, and a conclusion library according to an embodiment of the present invention.
Detailed Description
In order to facilitate the understanding and implementation of the present invention for those of ordinary skill in the art, the present invention is further described in detail with reference to the accompanying drawings and examples, it is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive thereof.
Aiming at the defects revealed by the production rule representation method, the invention considers that a good knowledge representation method can meet the following requirements:
(1) has good grammar and definition;
(2) has sufficient expression ability and can clearly express various knowledge in related fields;
(3) the method is convenient for effective reasoning and retrieval, has stronger problem solving capability, is suitable for the requirement of application problems, and improves the reasoning and retrieval efficiency;
(4) knowledge sharing and knowledge acquisition are facilitated;
(5) the centralized management and maintenance of the fact knowledge are realized, and the integrity and consistency of the fact base are easy to maintain;
(6) the detection of rules such as repetition, conflict, error, aging, failure and the like is facilitated;
(7) the machine learning and deep learning technology at the front edge can be fused.
The knowledge representation method stated in this invention is a new method formed by remedying the defects of the production-rule representation method.
Referring to fig. 2, the method for reasoning using feature-based knowledge representation according to the present invention includes the following steps:
step 1: defining a category of the object;
step 1.1: defining a category of the object;
in this embodiment, the objects are divided into enumeratable and non-enumeratable objects, or are divided into simple objects and composite objects, and the composite objects are formed by combining the simple objects; objects belong to different categories;
simple object classes are defined by a combination of fields;
the composite object category is formed by combining simple object categories, and the combined category comprises combination definition of fields and join operation of a plurality of category object sets;
step 1.2: setting an Algorithm library Algorithm Base, and storing a classification Algorithm which is used by the system and is related to deep learning and machine learning in the Algorithm Base;
step 1.3: setting Type Base, and storing all types in the Type Base for management;
step 2: defining all characteristics of the class on the basis of the object class;
the specific implementation of the step 2 comprises the following substeps:
step 2.1: define a feature as F(x), where x is an object and F denotes that x satisfies the classification condition specified by F; F is either a simple condition consisting of a category field and a comparison operator, or a composite condition combining And and Or;
for example, the object "person":
i. people can be divided into men and women according to gender; the classification feature consists of a category field (gender) and a comparison operator: gender = male or gender = female;
ii. people can be classified according to age into infant, toddler, etc.; the classification feature consists of a category field (age) and comparison operators: infant is age ≤ 1, toddler is 1 < age ≤ 5, etc.;
iii. a feature may consist of operations and comparison operators over several category fields (currentDate, birthDate): infant is currentDate − birthDate ≤ 1, toddler is 1 < currentDate − birthDate ≤ 5, etc.;
iv. a feature may consist of a combination of several category fields (gender, age) and comparison operators: gender = male and age ≤ 1, gender = female and age ≤ 1, etc.;
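The feature forms listed above (a single field with a comparison operator, arithmetic over several fields, and an And-combination of conditions) can be sketched as ordinary predicates over a "person" object. This is a minimal illustration; the field names, the year-based date arithmetic, and the function names are our assumptions, not the patent's:

```python
# Hypothetical sketch of features F(x) on the category "person".
# Field names (gender, birthDate, currentDate) are illustrative only.

def is_male(p):
    # simple condition: one category field + a comparison operator
    return p["gender"] == "male"

def is_infant(p):
    # arithmetic over two category fields + a comparison operator
    return p["currentDate"] - p["birthDate"] <= 1

def is_toddler(p):
    return 1 < p["currentDate"] - p["birthDate"] <= 5

def male_infant(p):
    # composite condition combining simple conditions with And
    return is_male(p) and is_infant(p)

person = {"gender": "male", "birthDate": 2018, "currentDate": 2019}
print(male_infant(person))  # -> True: the object satisfies the composite feature
```

Any object of the category can be tested against any feature defined on that category in the same way.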
Define a series of features F1, F2, …, Fn on a category A. The set formed by all objects of A is denoted O, and the set of objects satisfying a feature f is defined as O_f:

O_f = { x | x ∈ O, f(x) is true }

When

O_F1 ∪ O_F2 ∪ … ∪ O_Fn = O,

F1, F2, …, Fn are called a complete classification; otherwise they are an incomplete classification;

when, for any i = 1, …, n and j = 1, …, n with i ≠ j,

O_Fi ∩ O_Fj = ∅,

F1, F2, …, Fn are called mutually exclusive features; otherwise they are compatible features;
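The completeness and mutual-exclusion conditions above can be checked mechanically over a finite object set O. The sketch below assumes hashable objects and represents features as plain predicates; it is an illustration of the definitions, not part of the patented method:

```python
# Sketch: O_f = { x in O | f(x) }, completeness = the O_Fi cover O,
# mutual exclusion = the O_Fi are pairwise disjoint.

def object_set(O, f):
    """O_f: the objects of O satisfying feature f."""
    return {x for x in O if f(x)}

def is_complete(O, features):
    covered = set()
    for f in features:
        covered |= object_set(O, f)
    return covered == set(O)

def is_mutually_exclusive(O, features):
    sets = [object_set(O, f) for f in features]
    return all(not (sets[i] & sets[j])
               for i in range(len(sets))
               for j in range(i + 1, len(sets)))

ages = {0, 1, 3, 7}
infant = lambda a: a <= 1
toddler = lambda a: 1 < a <= 5
print(is_complete(ages, [infant, toddler]))            # False: age 7 is uncovered
print(is_mutually_exclusive(ages, [infant, toddler]))  # True: ranges do not overlap
```

With an extra feature covering ages above 5, the same feature list would become a complete classification of this O.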
for category A, the feature set is the set of all features F1, F2, …, Fn defined on A; the feature set is denoted FeatureSet(A), where A is the category and FeatureSet(A) represents the set of all features defined on category A;
a Feature Base is set, and the feature sets of all categories are stored centrally in the Feature Base;
When a feature is defined, it is named and given an identifier (ID); the features are uniformly managed and maintained through these identifiers, which facilitates search matching and the subsequent construction of inference rules.
Step 2.2: assign a matching mode to compatible features; the matching modes include priority matching and full matching:
priority matching matches the features in order and returns the single feature that first matches successfully;
full matching returns the set of all features that match successfully;
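The two matching modes for compatible features can be sketched as follows; the feature list, names, and thresholds are illustrative assumptions:

```python
# Sketch of the two matching modes over an ordered list of (name, predicate).

def priority_match(x, features):
    """Priority matching: match in order, return the first (unique) success."""
    for name, f in features:
        if f(x):
            return name
    return None  # null feature: nothing matched

def full_match(x, features):
    """Full matching: return every feature that matches successfully."""
    return [name for name, f in features if f(x)]

# Two compatible features: every senior is also an adult.
features = [
    ("adult",  lambda p: p["age"] >= 18),
    ("senior", lambda p: p["age"] >= 65),
]
p = {"age": 70}
print(priority_match(p, features))  # -> 'adult' (first in definition order)
print(full_match(p, features))      # -> ['adult', 'senior']
```

For mutually exclusive features the two modes coincide, since at most one feature can match.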
This embodiment names and identifies (ID) each feature when defining it; the identifier reflects a category and is globally unique. The identifiers are also used to uniformly manage and maintain the features, facilitating search matching and subsequent inference-rule construction.
In this embodiment, objects belonging to the same category may be classified, and whatever satisfies a classification condition is a feature. For example, the object "person":
i. people can be divided into men and women according to gender; the classification feature consists of a category field (gender) and a comparison operator: gender = male or gender = female;
ii. people can be classified according to age into infants, toddlers, etc.; the classification feature consists of a category field (age) and comparison operators: infant is age ≤ 1, toddler is 1 < age ≤ 5, etc.;
iii. a feature may consist of operations and comparison operators over several category fields (currentDate, birthDate): infant is currentDate − birthDate ≤ 1, toddler is 1 < currentDate − birthDate ≤ 5, etc.;
iv. a feature may consist of a combination of several category fields (gender, age) and comparison operators: gender = male and age ≤ 1, gender = female and age ≤ 1.
Step 3: define inference rules on the basis of the object categories and features;
as can be seen from the foregoing description of the production-rule representation, an inference rule consists of an antecedent (preconditions) and a consequent (result or conclusion) and can be expressed as:
C1 ∧ C2 ∧ … ∧ Cn → h
where C1, C2, …, Cn are the preconditions; each condition has a true or false value and is satisfied when it is true; h is the conclusion (or operation result); when conditions C1, C2, …, Cn are all true, the corresponding conclusion h is reached.
i. If there is a rule C1 ∧ C2 ∧ (Ci ∨ Ci+1 ∨ … ∨ Ci+n) → h, the rule needs to be split into several rules: C1 ∧ C2 ∧ Ci → h, …, C1 ∧ C2 ∧ Ci+n → h;
ii. If there are rules C1 ∧ C2 ∧ C3 → h1 and h1 ∧ C4 ∧ C5 → h, the two rules need to be merged into C1 ∧ C2 ∧ C3 ∧ C4 ∧ C5 → h;
iii. If a corresponding action is to be taken according to the conditions, the rule is first used to draw a conclusion, and the corresponding action is then carried out according to that conclusion.
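The splitting of a disjunctive rule and the merging of chained rules can be sketched with a toy encoding in which a rule is a (conditions, conclusion) pair and a disjunction is a tuple of alternatives; this encoding is our illustration, not the patent's:

```python
# Sketch: split C1∧C2∧(Ci∨…∨Ci+n)→h into one rule per alternative,
# and merge C1∧C2∧C3→h1 with h1∧C4∧C5→h into C1∧…∧C5→h.

def split_rule(conditions, conclusion):
    """Each tuple condition is a disjunction; expand it into separate rules."""
    rules = [([], conclusion)]
    for c in conditions:
        alts = c if isinstance(c, tuple) else (c,)
        rules = [(conds + [a], conclusion) for conds, _ in rules for a in alts]
    return rules

def merge_rules(r1, r2):
    """Merge two rules chained through the intermediate conclusion h1."""
    conds1, h1 = r1
    conds2, h = r2
    assert conds2[0] == h1, "second rule must start from the first conclusion"
    return (conds1 + conds2[1:], h)

print(split_rule(["C1", "C2", ("C3", "C4")], "h"))
# -> [(['C1', 'C2', 'C3'], 'h'), (['C1', 'C2', 'C4'], 'h')]
print(merge_rules((["C1", "C2", "C3"], "h1"), (["h1", "C4", "C5"], "h")))
# -> (['C1', 'C2', 'C3', 'C4', 'C5'], 'h')
```

After these normalizations every rule is a pure conjunction of conditions, which is the form the deduction lattice later assumes.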
In this embodiment, the preconditions of an inference rule are defined to consist of features, and the inference rule is expressed as:
F1(x1) ∧ F2(x2) ∧ … ∧ Fn(xn) → h
where F1(x1), F2(x2), …, Fn(xn) are features; a feature is satisfied when it is true; h is the conclusion; when features F1(x1), F2(x2), …, Fn(xn) are all true, the corresponding conclusion h is obtained;
in the process of constructing an inference rule, the feature identifiers (IDs) are used for search matching and reasoning: the antecedent

ID_F1 ∧ ID_F2 ∧ … ∧ ID_Fn

is stored as the antecedent of the inference rule in the Rule Base, and the conclusion or operation result is stored as the consequent in the Conclusion Base; this increases the efficiency of inference-rule construction, facilitates unified management, and reduces the maintenance work of the knowledge base;
when defining a feature F(x), F(x) may be either a simple feature or a composite feature combining And and Or; thus any feature Fn(xn) in an inference rule may also be a composite feature;
setting Rule Base for storing the constructed inference Rule;
setting a Conclusion Base to save the Conclusion or operation result in the rule as a back item;
when two rules in the Rule Base have the same antecedent

F1(x1) ∧ F2(x2) ∧ … ∧ Fn(xn)

and the same consequent h, the two rules are repeated; when the antecedents are identical but the consequents differ, the two rules conflict;
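Because rules are stored by their antecedent feature IDs, repeated and conflicting rules can be detected by comparing ID sets; the representation below is an illustrative assumption:

```python
# Sketch: a rule is (set of antecedent feature IDs, consequent).
# Same antecedent + same consequent -> duplicate;
# same antecedent + different consequent -> conflict.

def classify_pair(rule_a, rule_b):
    ids_a, h_a = rule_a
    ids_b, h_b = rule_b
    if frozenset(ids_a) != frozenset(ids_b):
        return "distinct"
    return "duplicate" if h_a == h_b else "conflict"

r1 = ({"F1", "F2"}, "h")
r2 = ({"F2", "F1"}, "h")    # same antecedent set, same conclusion
r3 = ({"F1", "F2"}, "h2")   # same antecedent set, different conclusion
print(classify_pair(r1, r2))  # -> duplicate
print(classify_pair(r1, r3))  # -> conflict
```

Running this check whenever a rule is added keeps the Rule Base free of the repetition and conflict cases defined above.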
Step 4: reasoning is performed on the basis of the inference rules to obtain an inference conclusion.
The invention also provides a method for reasoning by using the knowledge representation based on the characteristics, which is characterized by comprising the following steps:
step 1: defining a category of the object;
step 1.1: defining a category of the object;
the objects can be classified into enumerable and non-enumerable objects, or into simple objects and composite objects, where composite objects are formed by combining simple objects; objects belong to different categories;
simple object classes are defined by a combination of fields;
the composite object category is formed by combining simple object categories; the combined category includes the combined definition of fields and also includes the join operation of several category object sets;
step 1.2: setting an Algorithm library Algorithm Base, and storing a classification Algorithm which is used by the system and is related to deep learning and machine learning in the Algorithm Base;
step 1.3: setting Type Base, and storing all types in the Type Base for management;
step 2: defining all characteristics of the class on the basis of the object class;
the specific implementation of the step 2 comprises the following substeps:
step 2.1: define a feature as F(x), where x is an object and F indicates that x satisfies the classification condition specified by F; F is either a simple condition formed by a category field and a comparison operator, or a composite condition combining such conditions with And and Or;
for example, the object "person":
i. people can be divided into men and women according to gender; the classification feature consists of a category field (gender) and a comparison operator: gender = male or gender = female;
ii. people can be classified according to age into infant, toddler, etc.; the classification feature consists of a category field (age) and comparison operators: infant is age ≤ 1, toddler is 1 < age ≤ 5, etc.;
iii. a feature may consist of operations and comparison operators over several category fields (currentDate, birthDate): infant is currentDate − birthDate ≤ 1, toddler is 1 < currentDate − birthDate ≤ 5, etc.;
iv. a feature may consist of a combination of several category fields (gender, age) and comparison operators: gender = male and age ≤ 1, gender = female and age ≤ 1, etc.;
Define a series of features F1, F2, …, Fn on a category A. The set formed by all objects of A is denoted O, and the set of objects satisfying a feature f is defined as O_f:

O_f = { x | x ∈ O, f(x) is true }

When

O_F1 ∪ O_F2 ∪ … ∪ O_Fn = O,

F1, F2, …, Fn are called a complete classification; otherwise they are an incomplete classification;

when, for any i = 1, …, n and j = 1, …, n with i ≠ j,

O_Fi ∩ O_Fj = ∅,

F1, F2, …, Fn are called mutually exclusive features; otherwise they are compatible features;
for category A, the feature set is the set of all features F1, F2, …, Fn defined on A; the feature set is denoted FeatureSet(A), where A is the category and FeatureSet(A) represents the set of all features defined on category A;
setting a Feature Base, and storing Feature sets on all categories in the Feature Base in a centralized manner;
step 2.2: assign a matching mode to compatible features; the matching modes include priority matching and full matching:
priority matching matches the features in order and returns the single feature that first matches successfully;
full matching returns the set of all features that match successfully;
Step 3: use the feature sets as nodes in a deduction lattice structure to carry out reasoning;
Feature matching is performed according to the object type. Depending on the character of the features (mutually exclusive or compatible) and, for compatible features, on the matching mode, the matching result falls into three cases: 1) a single feature; 2) several features, recorded as a feature set; 3) a null feature. For a single feature, a deduction lattice is built, or matching proceeds within the existing deduction lattice, until a conclusion is reached. For a feature set, several rules can be matched at the same time; several deduction lattices need to be constructed, and according to the importance of the conclusions, the deduction lattice matching the most important conclusion can be selected first for reasoning, followed step by step by the deduction lattices of the secondary conclusions, achieving the effect of matching several conclusions at once. For a null feature, the path to the right of the current node is selected for matching until all child nodes under the node are exhausted; this case shows that no matchable rule exists;
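The dispatch on the three matching outcomes can be sketched as follows; the importance scores and the dict encoding are illustrative assumptions, and the actual lattice reasoning is left abstract:

```python
# Sketch: plan the reasoning order from a matching result.
# matches: list of {"id": feature ID, "importance": conclusion importance}.

def plan_reasoning(matches):
    if not matches:
        return None                       # null feature: no matchable rule
    if len(matches) == 1:
        return [matches[0]["id"]]         # single feature: one deduction lattice
    # feature set: reason over the lattices in order of conclusion importance
    ordered = sorted(matches, key=lambda m: m["importance"], reverse=True)
    return [m["id"] for m in ordered]

print(plan_reasoning([{"id": "F1", "importance": 1},
                      {"id": "F2", "importance": 5}]))  # -> ['F2', 'F1']
```

Each ID in the returned plan would then be used as the root of (or entry into) its own deduction lattice.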
the deduction lattice is firstly defined as follows:
(1) lattice: consider any partially ordered set (L, ≤); if for any elements a, b in L, a and b have a greatest lower bound and a least upper bound in L, then (L, ≤) is a lattice;
(2) rule: c1∧C2∧...∧Cn→h;
Wherein, C1、C2、...、CnIs a condition, the condition has a true or false value, and is said to be satisfied when the condition is true; h is the conclusion that when condition C1、C2、...、CnIf the two are true, the corresponding conclusion h is obtained;
if there is a rule C1∧C2∧(Ci∨...∨Ci+n) → h, the rule needs to be split into a plurality of rules, which are: c1∧C2∧Ci→h,...,C1∧C2∧Ci+n→h;
If there is a rule C1∧C2∧C3→h1And h1∧C4∧C5→ h, then the two rules need to be merged into C1∧C2∧C3∧C4∧C5→h;
If the corresponding action is taken according to the condition, the rule is used to draw a conclusion first, and then the corresponding action is carried out according to the conclusion;
then, setting a deduction lattice structure:
the deduction lattice is a lattice with a hierarchical relationship formed by n finite elements, where the elements of the lattice are called nodes and n > 1; the nodes comprise a root node, intermediate nodes, and leaf nodes; the node without a predecessor is called the root node, and the root node is unique; nodes without successors are called leaf nodes; the remaining nodes are called intermediate nodes, and intermediate nodes are of two kinds: an intermediate node whose successors are intermediate nodes is called a pure intermediate node, and an intermediate node whose successor is a leaf node is called a leaf-predecessor node; the root node and the intermediate nodes correspond to rule conditions, the leaf nodes correspond to rule conclusions, and each path from the root node to a leaf node corresponds to one rule in the rule base;
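The node structure just described can be sketched minimally; the class and field names are our illustration:

```python
# Sketch of a deduction-lattice node: an identifier (a feature ID, or a
# conclusion h at a leaf) plus a list of child nodes.

class Node:
    def __init__(self, node_id):
        self.id = node_id       # feature ID, or conclusion at a leaf
        self.children = []      # empty for leaf nodes

    def is_leaf(self):
        return not self.children

root = Node("ID_C")             # unique root: the initially matched feature
mid = Node("ID_C11")            # intermediate node: a rule condition
leaf = Node("h1")               # leaf node: a rule conclusion
root.children.append(mid)
mid.children.append(leaf)
# The root-to-leaf path corresponds to one rule: {ID_C, ID_C11} -> h1
print([n.id for n in (root, mid, leaf)])
```

Here `mid` is a leaf-predecessor node in the terminology above, since its only successor is a leaf.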
The construction algorithm of the deduction lattice is the process of acquiring knowledge from the knowledge base in a certain order and forming the deduction lattice. The construction proceeds in the following six steps:
(1) Initialize the root node of the deduction lattice
The user initializes an object representing a phenomenon; according to its type, the set of features defined on that type is searched, and the matching feature of the object is found in the feature set. Assume that the feature is C with identifier ID_C; a Node with Id = ID_C is built; this Node serves as the root node of the deduction lattice and as the current node CurrentNode, and the child nodes of CurrentNode are empty;
(2) Form the complete rule set
Using the node identifier ID_C selected during initialization, all rules containing ID_C are looked up in the rule base; assume n rules are found in total, and record the rule set as BaseRuleSet. Each rule i in the set has m_i features, and the conclusion corresponding to rule i is recorded as h_i; the rules found are:
Rule 1: {ID_C, ID_C11, …, ID_C1m1} → h1
Rule 2: {ID_C, ID_C21, …, ID_C2m2} → h2
……
Rule n: {ID_C, ID_Cn1, …, ID_Cnmn} → hn
(3) Compute the reverse feature set from CurrentNode back to ID_C
Set ReverseIdSet to empty; walking from CurrentNode back along the reverse direction to the root node, add the identifier on each node to ReverseIdSet;
(4) Compute the remaining-identifier grouping conditions in the rules
For each feature in BaseRuleSet that does not belong to ReverseIdSet, a counting structure Counts is built; each Counts element consists of:
[1] the feature ID;
[2] count, the number of times the feature appears in BaseRuleSet;
[3] RuleSet, the rules in which the feature appears.
Let n be the number of distinct features; sort Counts by the size of count;
(5) Partition the rule set
For i = 0 … n, j = i+1 … n: if x ∈ Counts[i].RuleSet and x ∈ Counts[j].RuleSet, then decrement Counts[j].count and remove x from Counts[j].RuleSet;
compute the number k of entries Counts[i], i = 0 … n, whose count is not 0, and re-sort these k entries by count from large to small;
(6) Construct child nodes
If k ≠ 0, construct a child-node set Nodes[0], …, Nodes[k-1], where Nodes[i].Id = Counts[i].Id and the child nodes are empty, i = 0 … k-1;
set the child nodes of CurrentNode to Nodes[0], …, Nodes[k-1]; push Nodes[1], …, Nodes[k-1] and Counts[1], …, Counts[k-1] onto the stack; adjust CurrentNode to Nodes[0] and BaseRuleSet to Counts[0].RuleSet, and go to (3);
if k = 0, construct a child node whose Id is the conclusion of the corresponding rule and whose child nodes are null, and point the child node of CurrentNode to this Node; if the stack is empty, the construction of the deduction lattice is finished; otherwise pop a Count and a Node from the stack, adjust CurrentNode to that Node and BaseRuleSet to Count.RuleSet, and go to (3).
Thus, the deduction lattice is constructed.
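A compact, non-authoritative sketch of the six construction steps follows: starting from the matched root feature, the remaining conditions of the candidate rules are counted, the most frequent is chosen to split the rule set, and each subset is expanded into a child subtree until only a conclusion remains. A recursive form replaces the explicit stack of steps (5)-(6), and nested dicts stand in for Node objects; both simplifications are our assumptions:

```python
# Sketch of deduction-lattice construction over rules sharing the root ID "C".
# A rule is (set of condition IDs, conclusion); the lattice is a nested dict
# mapping a node's ID to its children (None marks a leaf/conclusion node).
from collections import Counter

def build(rules, used):
    """rules: list of (condition_id_set, conclusion); used: IDs on the path."""
    node = {}
    remaining = [(conds - used, h) for conds, h in rules]
    for conds, h in remaining:
        if not conds:                 # k == 0: only the conclusion remains
            node[h] = None            # leaf child
    open_rules = [(conds, h) for conds, h in remaining if conds]
    while open_rules:
        # step (4): count remaining identifiers, take the most frequent
        counts = Counter(c for conds, _ in open_rules for c in conds)
        best = counts.most_common(1)[0][0]
        # step (5): partition the rules by whether they contain `best`
        group = [(c, h) for c, h in open_rules if best in c]
        open_rules = [(c, h) for c, h in open_rules if best not in c]
        node[best] = build(group, used | {best})   # step (6): child subtree
    return node

rules = [({"C", "A", "B"}, "h1"), ({"C", "A", "D"}, "h2"), ({"C", "E"}, "h3")]
lattice = {"C": build(rules, {"C"})}  # step (1): root is the matched feature
print(lattice)
# -> {'C': {'A': {'B': {'h1': None}, 'D': {'h2': None}}, 'E': {'h3': None}}}
```

The shared condition A is factored out once for rules 1 and 2, exactly the economy the frequency ordering of step (4) is meant to achieve; each root-to-leaf path reproduces one rule of BaseRuleSet.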
In this embodiment, the method of the patent "a deduction lattice and an inference method based on the deduction lattice" is combined, and the feature sets of the present invention are used as nodes in the deduction lattice structure for reasoning, so as to improve inference efficiency.
First, entity objects are divided into simple objects and composite objects, either manually or with algorithms related to deep learning and machine learning. Objects of the same category are classified, manually or with such classification algorithms, so as to construct object features, and the character of the features (compatible or mutually exclusive) and the matching mode of compatible features (priority matching or full matching) are determined. The features of objects serve as inference conditions and form the antecedents of the inference rules, and conclusions in the Conclusion Base serve as the consequents, completing the construction of the inference rules. In actually constructing inference rules, the feature identifiers (IDs) are used for search matching and reasoning, which increases the efficiency of rule construction, facilitates unified management, and reduces the maintenance work of the knowledge base. The method can clearly express various kinds of knowledge in related fields; repeated and conflicting knowledge can be discovered in time; the problem of representing structural knowledge is effectively solved; exhaustive enumeration of factual knowledge is realized; and the sharing and acquisition of knowledge are facilitated.
The invention takes features as inference conditions and establishes inference rules over feature identifiers (IDs), which solves the consistency problem caused by a knowledge base with too many rules. At the same time, the invention combines well with the invention "a deduction lattice and an inference method based on the deduction lattice": feature sets are recorded as nodes in the deduction lattice structure, which greatly improves the efficiency of reasoning and retrieval, solves the problem of repeated and conflicting rules, and gives the method strong problem-solving capability.
Referring to FIG. 3, the invention sets a Type Base, a Feature Base, a Rule Base, a Conclusion Base, and an Algorithm Base. Together, the algorithm base, type base, feature base, rule base, and conclusion base form the Knowledge Base of the system, which supports the inference engine.
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A method for reasoning using a feature-based knowledge representation, comprising the steps of:
step 1: defining a category of the object;
step 1.1: defining a category of the object; the objects are divided into objects which can be enumerated and can not be enumerated, or are divided into simple objects and composite objects, and the composite objects are formed by combining the simple objects; objects belong to different categories;
the simple object class is defined by a combination of fields;
the composite object category is formed by combining simple object categories, and the combined category comprises a combination definition of fields and also comprises join operation of a plurality of category object sets;
step 1.2: setting an Algorithm library Algorithm Base, and storing a classification Algorithm which is used by the system and is related to deep learning and machine learning in the Algorithm Base;
step 1.3: setting Type Base, and storing all types in the Type Base for management;
step 2: defining all characteristics of the class on the basis of the object class;
the specific implementation of the step 2 comprises the following substeps:
step 2.1: define a feature as F(x), where x is an object and F indicates that x satisfies the classification condition specified by F; F is either a simple condition consisting of a category field and a comparison operator, or a composite condition combining such conditions with And and Or;
define a series of features F1, F2, …, Fn on a category A; the set formed by all objects of A is denoted O, and the set of objects satisfying a feature f is defined as O_f:

O_f = { x | x ∈ O, f(x) is true }

when

O_F1 ∪ O_F2 ∪ … ∪ O_Fn = O,

F1, F2, …, Fn are called a complete classification, otherwise an incomplete classification;

when, for any i = 1, …, n and j = 1, …, n with i ≠ j,

O_Fi ∩ O_Fj = ∅,

F1, F2, …, Fn are called mutually exclusive features, otherwise compatible features;
for category A, the feature set is the set of all features F1, F2, …, Fn defined on A; the feature set is denoted FeatureSet(A), where A is the category and FeatureSet(A) represents the set of all features defined on category A;
setting a Feature Base, and storing Feature sets on all categories in the Feature Base in a centralized manner;
step 2.2: for compatibility characteristics, a matching mode is specified, and the matching mode comprises a priority matching mode and a full matching mode:
the preferential matching refers to matching according to the sequence and returning the only characteristic of successful matching;
the full matching refers to returning all feature sets successfully matched;
step 3: define inference rules on the basis of the object categories and features;
the preconditions of the inference rule are defined to consist of features, and the inference rule is expressed as:
F1(x1) ∧ F2(x2) ∧ … ∧ Fn(xn) → h;
where F1(x1), F2(x2), …, Fn(xn) are features; a feature is satisfied when it is true; h is the conclusion; when features F1(x1), F2(x2), …, Fn(xn) are all true, the corresponding conclusion h is obtained;
setting Rule Base for storing the constructed inference Rule;
setting a Conclusion Base to save the Conclusion or operation result in the rule as a back item;
step 4: reasoning is performed on the basis of the inference rules to obtain an inference conclusion.
2. The method for reasoning using a feature-based knowledge representation of claim 1, wherein: in step 2, the features are named and given identifiers (IDs) while being defined, and the identifiers are used to uniformly manage and maintain the features, facilitating search matching and the subsequent construction of inference rules.
3. The method for reasoning using a feature-based knowledge representation of claim 2, wherein: in step 3, in the process of constructing the inference rules, the feature IDs are used for search matching and reasoning; the antecedent

ID_F1 ∧ ID_F2 ∧ … ∧ ID_Fn

of the inference rule is stored in the Rule Base, and the conclusion or operation result is stored as the consequent in the Conclusion Base, which increases the efficiency of inference-rule construction and search, facilitates unified management, and reduces the maintenance work of the knowledge base.
4. The method for reasoning using a feature-based knowledge representation of claim 2, wherein: in step 3, any feature Fn(xn) in an inference rule may be either a simple feature or a composite feature.
5. The method for reasoning using a feature-based knowledge representation of claim 2, wherein: in step 3, when two rules in the Rule Base have the same antecedent

F1(x1) ∧ F2(x2) ∧ … ∧ Fn(xn)

and the same consequent h, the two rules are repeated; when the antecedents are identical but the consequents differ, the two rules conflict.
6. A method for reasoning using a feature-based knowledge representation, comprising the steps of:
step 1: defining a category of the object;
step 1.1: the objects can be classified into enumeratable objects and non-enumeratable objects, and also can be classified into simple objects and composite objects, wherein the composite objects are formed by combining the simple objects; objects belong to different categories;
the simple object class is defined by a combination of fields;
the composite object category is formed by combining simple object categories, and the combined category comprises a combination definition of fields and also comprises join operation of a plurality of category object sets;
step 1.2: setting an Algorithm library Algorithm Base, and storing a classification Algorithm which is used by the system and is related to deep learning and machine learning in the Algorithm Base;
step 1.3: setting Type Base, and storing all types in the Type Base for management;
and 2, step: defining all characteristics of the class on the basis of the object class;
the specific implementation of the step 2 comprises the following substeps:
step 2.1: define a feature as F(x), where x is an object and F indicates that x satisfies the classification condition specified by F; F is either a simple condition consisting of a category field and a comparison operator, or a composite condition combining such conditions with And and Or;
define a series of features F1, F2, …, Fn on a category A; the set formed by all objects of A is denoted O, and the set of objects satisfying a feature f is defined as O_f:

O_f = { x | x ∈ O, f(x) is true }

when

O_F1 ∪ O_F2 ∪ … ∪ O_Fn = O,

F1, F2, …, Fn are called a complete classification, otherwise an incomplete classification;

when, for any i = 1, …, n and j = 1, …, n with i ≠ j,

O_Fi ∩ O_Fj = ∅,

F1, F2, …, Fn are called mutually exclusive features, otherwise compatible features;
for category A, the feature set is the set of all features F1, F2, …, Fn defined on A; the feature set is denoted FeatureSet(A), where A is the category and FeatureSet(A) represents the set of all features defined on category A;
setting a Feature Base, and storing Feature sets on all categories in the Feature Base in a centralized manner;
step 2.2: and assigning matching modes for the compatibility characteristics, wherein the matching modes comprise a priority matching mode and a full matching mode:
the preferential matching refers to matching according to the sequence and returning the only characteristic of successful matching;
the full matching refers to returning all feature sets successfully matched;
step 3: use the feature sets as nodes in a deduction lattice structure to carry out reasoning;
step 3.1: feature matching is performed according to the object type; depending on the character of the features (mutually exclusive or compatible) and, for compatible features, on the matching mode, the matching result falls into three cases: 1) a single feature; 2) several features, recorded as a feature set; 3) a null feature; for a single feature, a deduction lattice is built, or matching proceeds within the existing deduction lattice, until a conclusion is reached; for a feature set, several rules can be matched at the same time; several deduction lattices need to be constructed, and according to the importance of the conclusions, the deduction lattice matching the most important conclusion can be selected first for reasoning, followed step by step by the deduction lattices of the secondary conclusions, achieving the effect of matching several conclusions at once; for a null feature, the path to the right of the current node is selected for matching until all child nodes under the node are exhausted; this case shows that no matchable rule exists;
the deduction lattice is firstly defined as follows:
(1) lattice: consider any partially ordered set (L, ≤); if for any elements a, b in L, a and b have a greatest lower bound and a least upper bound in L, then (L, ≤) is a lattice;
(2) rule: C1 ∧ C2 ∧ … ∧ Cn → h;
where C1, C2, …, Cn are conditions; each condition has a true or false value and is said to be satisfied when it is true; h is the conclusion; when conditions C1, C2, …, Cn are all true, the corresponding conclusion h is obtained;
if there is a rule C1 ∧ C2 ∧ (Ci ∨ … ∨ Ci+n) → h, the rule needs to be split into several rules: C1 ∧ C2 ∧ Ci → h, …, C1 ∧ C2 ∧ Ci+n → h;
if there are rules C1 ∧ C2 ∧ C3 → h1 and h1 ∧ C4 ∧ C5 → h, the two rules need to be merged into C1 ∧ C2 ∧ C3 ∧ C4 ∧ C5 → h;
if a corresponding action is to be taken according to the conditions, the rule is first used to draw a conclusion, and the corresponding action is then carried out according to that conclusion;
then, setting a deduction lattice structure:
the deduction lattice is a lattice with a hierarchical relationship formed by n finite elements, where the elements of the lattice are called nodes and n > 1; the nodes comprise a root node, intermediate nodes, and leaf nodes; the node without a predecessor is called the root node, and the root node is unique; nodes without successors are called leaf nodes; the remaining nodes are called intermediate nodes, and intermediate nodes are of two kinds: an intermediate node whose successors are intermediate nodes is called a pure intermediate node, and an intermediate node whose successor is a leaf node is called a leaf-predecessor node; the root node and the intermediate nodes correspond to rule conditions, the leaf nodes correspond to rule conclusions, and each path from the root node to a leaf node corresponds to one rule in the rule base;
the construction algorithm of the deduction lattice is a process of acquiring knowledge in a knowledge base in a certain sequence and forming the deduction lattice; the construction of the deduction lattice is developed by the following six steps:
(1) initialize the root node of the deduction lattice;
the user initializes an object representing a phenomenon; according to its type, the set of features defined on that type is searched, and the matching feature of the object is found in the feature set; assume that the feature is C with identifier ID_C; a Node with Id = ID_C is built; this Node serves as the root node of the deduction lattice and as the current node CurrentNode, and the child nodes of CurrentNode are empty;
(2) form the complete rule set;
using the node identifier ID_C selected during initialization, all rules containing ID_C are looked up in the rule base; assume n rules are found in total, forming a rule set recorded as BaseRuleSet; each rule i in the set has m_i features, and the conclusion corresponding to rule i is recorded as h_i; the rules found are:
Rule 1: {ID_C, ID_C11, …, ID_C1m1} → h1
Rule 2: {ID_C, ID_C21, …, ID_C2m2} → h2
……
Rule n: {ID_C, ID_Cn1, …, ID_Cnmn} → hn
(3) compute the reverse feature set from CurrentNode back to ID_C;
set ReverseIdSet to empty; walking from CurrentNode back along the reverse direction to the root node, add the identifier on each node to ReverseIdSet;
(4) compute the remaining-identifier grouping conditions in the rules;
for each feature in BaseRuleSet that does not belong to ReverseIdSet, a counting structure Counts is built; each Counts element consists of:
[1] the feature ID;
[2] count, the number of times the feature appears in BaseRuleSet;
[3] RuleSet, the rules in which the feature appears;
let n be the number of distinct features; sort Counts by the size of count;
(5) partition the rule set;
for i = 0 … n, j = i+1 … n: if x ∈ Counts[i].RuleSet and x ∈ Counts[j].RuleSet, then decrement Counts[j].count and remove x from Counts[j].RuleSet;
compute the number k of entries Counts[i], i = 0 … n, whose count is not 0, and re-sort these k entries by count from large to small;
(6) construct child nodes;
if k ≠ 0, construct a child-node set Nodes[0], …, Nodes[k-1], where Nodes[i].Id = Counts[i].Id and the child nodes are empty, i = 0 … k-1;
set the child nodes of CurrentNode to Nodes[0], …, Nodes[k-1]; push Nodes[1], …, Nodes[k-1] and Counts[1], …, Counts[k-1] onto the stack; adjust CurrentNode to Nodes[0] and BaseRuleSet to Counts[0].RuleSet, and go to (3);
if k = 0, construct a child node whose Id is the conclusion of the corresponding rule and whose child nodes are null, and point the child node of CurrentNode to this Node; if the stack is empty, the construction of the deduction lattice is finished; otherwise pop a Count and a Node from the stack, adjust CurrentNode to that Node and BaseRuleSet to Count.RuleSet, and go to (3).
Thus, the deduction lattice is constructed.
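Putting steps (1)–(6) together, a compact Python sketch of the whole construction; the names, data layout, and rule base are illustrative, not from the patent, and rules whose feature set is a proper subset of another rule on the same branch are not handled (the patent's description leaves that case implicit):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    id: str                       # feature id, or a conclusion at a leaf
    parent: Optional["Node"] = None
    children: list = field(default_factory=list)

def reverse_ids(node):
    """Step (3): identifiers on the path from node back to the root."""
    ids = set()
    while node is not None:
        ids.add(node.id)
        node = node.parent
    return ids

def partitioned_counts(rules, reverse):
    """Steps (4) and (5): count features outside the path, then let the
    highest-count feature claim each rule exactly once."""
    occ = {}
    for rule in rules:                         # rule = (feature set, conclusion)
        for fid in sorted(rule[0]):            # sorted: deterministic order
            if fid in reverse:
                continue
            e = occ.setdefault(fid, {"id": fid, "count": 0, "rules": []})
            e["count"] += 1
            e["rules"].append(rule)
    counts = sorted(occ.values(), key=lambda e: e["count"], reverse=True)
    for i in range(len(counts)):
        for j in range(i + 1, len(counts)):
            for x in list(counts[i]["rules"]):
                if x in counts[j]["rules"]:
                    counts[j]["rules"].remove(x)
                    counts[j]["count"] -= 1
    kept = [c for c in counts if c["count"] != 0]
    kept.sort(key=lambda c: c["count"], reverse=True)
    return kept

def build_lattice(rule_base, id_c):
    """Step (6) driver: depth-first expansion with an explicit stack."""
    root = Node(id_c)
    current = root
    rules = [r for r in rule_base if id_c in r[0]]   # step (2): BaseRuleSet
    stack = []
    while True:
        counts = partitioned_counts(rules, reverse_ids(current))
        if counts:                               # k != 0: one child per feature
            kids = [Node(c["id"], parent=current) for c in counts]
            current.children = kids
            stack.extend(zip(kids[1:], counts[1:]))
            current, rules = kids[0], counts[0]["rules"]
        else:                                    # k == 0: attach conclusion leaf
            current.children = [Node(rules[0][1], parent=current)]
            if not stack:
                return root                      # construction finished
            current, count = stack.pop()
            rules = count["rules"]

lattice = build_lattice(
    [(frozenset({"C", "A", "B"}), "h1"),
     (frozenset({"C", "A", "E"}), "h2"),
     (frozenset({"C", "D"}), "h3")], "C")
```

With these three rules the sketch yields the lattice C → {A, D}, A → {B, E}, with conclusion leaves h1, h2, h3 under B, E, and D respectively.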
CN201910456567.2A 2019-05-29 2019-05-29 Method for reasoning by using knowledge representation based on characteristics Active CN110276453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910456567.2A CN110276453B (en) 2019-05-29 2019-05-29 Method for reasoning by using knowledge representation based on characteristics

Publications (2)

Publication Number Publication Date
CN110276453A CN110276453A (en) 2019-09-24
CN110276453B true CN110276453B (en) 2022-06-14

Family

ID=67959162

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111930869B (en) * 2020-08-11 2024-02-06 上海寻梦信息技术有限公司 Address correction method, address correction device, electronic equipment and storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
US7644052B1 (en) * 2006-03-03 2010-01-05 Adobe Systems Incorporated System and method of building and using hierarchical knowledge structures
CN101710393A (en) * 2009-11-25 2010-05-19 北京航空航天大学 Method for knowledge expressing and reasoning mechanism of expert system
CN103745191A (en) * 2013-11-15 2014-04-23 中国科学院遥感与数字地球研究所 Landform analysis based method for automatically identifying tablelands, ridges and loess hills in loess region
CN104331455A (en) * 2014-10-30 2015-02-04 北京科技大学 Traditional Chinese medicine QI and blood syndrome identifying deductive reasoning recurrence method and device
CN106529676A (en) * 2016-10-25 2017-03-22 胡煜州 Deductive lattice and reasoning method based on deductive lattice

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP1990741A1 (en) * 2007-05-10 2008-11-12 Ontoprise GmbH Reasoning architecture

Non-Patent Citations (2)

Title
Research on Process Knowledge Representation and Reasoning Decision Method Based on Structural; ZHANG Dan-dan et al.; 2017 Chinese Automation Congress (CAC); 1 January 2018; pp. 7902-7907 *
Abnormality identification technology for drainage and production of coalbed methane wells based on a production system; Wang Lin et al.; Coal Geology & Exploration; 30 June 2017; Vol. 45, No. 3; pp. 72-76 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant