CN115329962A - Visual interpretation method of paradigm graph model - Google Patents

Visual interpretation method of a paradigm graph model

Info

Publication number
CN115329962A
Authority
CN
China
Prior art keywords
node
value
intervention
graph
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210948263.XA
Other languages
Chinese (zh)
Inventor
冯亚维
黄胜蓝
周玺
周林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Shiluhuitu Information Technology Co ltd
Original Assignee
Xi'an Shiluhuitu Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Shiluhuitu Information Technology Co ltd filed Critical Xi'an Shiluhuitu Information Technology Co ltd
Priority to CN202210948263.XA priority Critical patent/CN115329962A/en
Publication of CN115329962A publication Critical patent/CN115329962A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/045 Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/041 Abduction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/046 Forward inferencing; Production systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a visual interpretation method for a paradigm graph model, comprising the following steps. Step 1: create a causal graph visually. Step 2: create training and prediction tasks. Step 3: interactively set different evidence and intervention conditions on different predicted graph structures to calculate probabilities under those conditions, and thereby identify the correct causal relationships. The method uses visual interaction so that an initial causal graph structure can be constructed simply and conveniently; training and prediction tasks can be created quickly using the platform's DAG canvas component to arrange the run flow; and, for interpretability of the results, different evidence and intervention conditions can be set interactively on different predicted graph structures to compute probabilities under those conditions, thereby identifying the correct causal relationships.

Description

Visual interpretation method of a paradigm graph model
Technical Field
The invention relates to the technical field of paradigm graph models, and in particular to a visual interpretation method for paradigm graph models.
Background
Causal inference is a way of reasoning from the cause of a matter to its result, and is a specific method of causal analysis. Through causal inference, the magnitude of the probability linking cause and effect can be derived. Causal relationships between things can be represented by a causal graph: a graph-form simulation in which nodes represent things (variables) and directed edges represent the causal influence of one thing on another. This node-edge relationship graph is called a causal graph. If each edge carries a weight, the weight indicates the magnitude of the influence of a given factor on the result.
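As a concrete illustration of this node-and-edge representation (not part of the claimed method), a weighted causal graph can be sketched as a plain adjacency structure. The node names and weights below are hypothetical:

```python
# A minimal sketch of a causal graph as an adjacency structure:
# each node maps to a list of (effect, weight) pairs, with the weight
# indicating the magnitude of influence (all values hypothetical).
causal_graph = {
    "smoking":   [("cancer", 0.7)],
    "pollution": [("cancer", 0.3)],
    "cancer":    [("xray", 0.9), ("dyspnoea", 0.6)],
    "xray":      [],
    "dyspnoea":  [],
}

def parents(graph, node):
    """Direct causes of `node`: sources of edges pointing at it."""
    return [src for src, edges in graph.items()
            for dst, _w in edges if dst == node]

print(parents(causal_graph, "cancer"))  # ['smoking', 'pollution']
```

A traversal like `parents` is all the later inference steps need to look up which factors directly influence a result.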
While existing machine learning models have made great progress, they are, unfortunately, essentially accurate curve fits to the data: they look for correlations between variables rather than the underlying causality, and are poorly interpretable. For scientific research, such shallow correlational knowledge costs models robustness and interpretability, and blocks further exploration of intervention variables and of counterfactual inference.
Disclosure of Invention
In view of this, the present invention provides a visual interpretation method for paradigm graph models to solve the above technical problems.
The invention discloses a visual interpretation method for a paradigm graph model, comprising the following steps:
step 1: creating a causal graph visually;
step 2: creating training and prediction tasks;
step 3: interactively setting different evidence and intervention conditions on different predicted graph structures to calculate probabilities under those conditions, thereby identifying the correct causal relationships.
Further, step 1 comprises:
creating a single thing (node) using a floating menu, and editing a node through its right-click menu;
and, for associations between things, triggering the connection operation via the connect button of the floating menu or the connect-node item in the right-click menu.
Further, after step 1 and before step 2, the method further comprises:
after the causal graph is drawn, selecting task configuration information using a form, and finally submitting the causal-reasoning paradigm model to a computing platform to run; the task configuration information comprises evidence nodes, intervention nodes and query nodes.
Furthermore, after the component runs successfully, clicking the result view enters a result interpretation page.
Furthermore, the header of the result interpretation page shows result evaluation details: the configured evidence nodes, intervention nodes and query nodes, together with an accuracy and a structure score; the accuracy and structure score are evaluations of the results of the model run.
Further, the causal-reasoning paradigm model generates a recommended graph structure through automatic learning during the run; results can be explored over different graph structures, and the probabilities computed for the query nodes under different graph structures and different intervention conditions can be examined to finally determine the causal relationships between things.
Further, in step 2, the training input is: a number of events defined as nodes, with the influence relations between events provided as edges; several groups of samples representing the value of each node in each sampling; and a designated intervention node and intervention value. Node attributes other than occurrence probability, and edge attributes, are not supported, and node values support only a finite set of discrete values.
Further, in step 2, the prediction input is: the observed values of each group of samples. The prediction output is: a directed graph containing each node, representing the influence relations between events; and the probability of each value of the queried node after intervention, given the input observed values.
Further, step 3 comprises:
performing maximum likelihood estimation on the values of each event in the training samples to obtain the conditional probability of each node;
and using breadth-first search to find the graph structure that best fits the causal relationships between events; once the causal structure and each node's parameters are obtained, they form a probabilistic graphical model, and at prediction time, after observed values for some nodes are input, a more accurate value probability for the queried node can be computed.
Further, when computing the hypothesis, thanks to the causal structure learned by the model, the conditional probability of the query node's value when the intervention node takes the intervention value is computed as the post-intervention value probability of the query node, with confounding factors blocked.
Due to the adoption of the above technical scheme, the invention has the following advantages: using visual interaction, an initial causal graph structure can be constructed simply and conveniently; training and prediction tasks can be created quickly using the platform's DAG canvas component to arrange the run flow; and, for interpretability of the results, different evidence and intervention conditions can be set interactively on different predicted graph structures to compute probabilities under those conditions, thereby identifying the correct causal relationships.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art may derive other drawings from them without creative effort.
FIG. 1 is a flow chart of a paradigm graph model visual interpretation method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of causal graph creation according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a paradigm model of causal reasoning according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a result interpretation page according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the process of inferring the result of something according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and examples. It should be understood that the described examples are only some of the possible embodiments and are not intended to limit the invention to the embodiments described herein. All other embodiments obtainable by those of ordinary skill in the art without creative effort fall within the scope of the embodiments of the present invention.
Referring to fig. 1, the present invention provides an embodiment of a paradigm graph model visual interpretation method, comprising the following steps:
step 1: creating a causal graph visually;
step 2: creating training and prediction tasks;
step 3: interactively setting different evidence and intervention conditions on different predicted graph structures to calculate probabilities under those conditions, thereby identifying the correct causal relationships.
Referring to fig. 2, in this embodiment, step 1 includes:
creating a single thing (node) using a floating menu, and editing a node through its right-click menu;
and, for associations between things, triggering the connection operation via the connect button of the floating menu or the connect-node item in the right-click menu.
In this embodiment, after step 1 and before step 2, the method further includes:
referring to fig. 3, after the causal graph is drawn, selecting task configuration information using a form, and finally submitting the causal-reasoning paradigm model to a computing platform to run; the task configuration information comprises evidence nodes, intervention nodes and query nodes.
In this embodiment, after the component runs successfully, clicking the result view enters the result interpretation page.
Referring to fig. 4, in the present embodiment, the header of the result interpretation page shows result evaluation details: the configured evidence nodes, intervention nodes and query nodes, together with an accuracy and a structure score; the accuracy and structure score are evaluations of the results of the model run.
In this embodiment, the causal-reasoning paradigm model generates a recommended graph structure through automatic learning during the run; results can be explored over different graph structures, and the probabilities computed for the query nodes under different graph structures and different intervention conditions can be examined, finally determining the causal relationships between things.
In this embodiment, in step 2, the training input is: a number of events defined as nodes, with the influence relations between events provided as edges; several groups of samples representing the value of each node in each sampling; and a designated intervention node and intervention value. Node attributes other than occurrence probability, and edge attributes, are not supported, and node values support only a finite set of discrete values.
In this embodiment, in step 2, the prediction input is: the observed values of each group of samples. The prediction output is: a directed graph containing each node, representing the influence relations between events; and the probability of each value of the queried node after intervention, given the input observed values.
In this embodiment, step 3 includes:
performing maximum likelihood estimation on the values of each event in the training samples to obtain the conditional probability of each node;
and using breadth-first search to find the graph structure that best fits the causal relationships between events; once the causal structure and each node's parameters are obtained, they form a probabilistic graphical model, and at prediction time, after observed values for some nodes are input, a more accurate value probability for the queried node can be computed.
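The maximum-likelihood step above amounts to simple co-occurrence counting. A minimal sketch over hypothetical binary samples (the variable names `smoke` and `cancer` are illustrative, not mandated by the patent):

```python
from collections import Counter

def mle_cpd(samples, node, parents):
    """Maximum-likelihood estimate of P(node | parents):
    count co-occurrences in the training samples and normalise."""
    cond, joint = Counter(), Counter()
    for s in samples:
        key = tuple(s[p] for p in parents)   # parent configuration
        cond[key] += 1
        joint[(key, s[node])] += 1
    return {(pa, v): n / cond[pa] for (pa, v), n in joint.items()}

# Hypothetical binary training samples (one dict per sampled group).
samples = [
    {"smoke": 1, "cancer": 1}, {"smoke": 1, "cancer": 1},
    {"smoke": 1, "cancer": 0}, {"smoke": 0, "cancer": 0},
    {"smoke": 0, "cancer": 0}, {"smoke": 0, "cancer": 1},
]
cpd = mle_cpd(samples, "cancer", ["smoke"])
print(cpd[((1,), 1)])  # P(cancer=1 | smoke=1) = 2/3
```

One such conditional probability table per node, together with the graph structure, is exactly what forms the probabilistic graphical model used at prediction time.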
In this embodiment, when computing the hypothesis, the model benefits from the learned causal structure: the conditional probability of the query node's value when the intervention node takes the intervention value is computed as the post-intervention value probability of the query node, with confounding factors blocked.
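Blocking confounding factors as described corresponds to the standard back-door adjustment, P(Y | do(X=x)) = Σ_z P(Y | X=x, Z=z)·P(Z=z). The sketch below assumes the confounder set is known and the data are discrete; it is an illustrative reconstruction, and the variable names are hypothetical:

```python
from collections import Counter

def p_do(samples, query, query_val, treat, treat_val, confounders):
    """P(query = query_val | do(treat = treat_val)) by back-door
    adjustment: average P(query | treat, z) over the marginal
    distribution of the confounders z, blocking the confounding path."""
    n = len(samples)
    z_counts = Counter(tuple(s[c] for c in confounders) for s in samples)
    total = 0.0
    for z, nz in z_counts.items():
        group = [s for s in samples
                 if tuple(s[c] for c in confounders) == z
                 and s[treat] == treat_val]
        if not group:          # stratum has no support for this treatment;
            continue           # real systems need smoothing or more data
        p_y = sum(s[query] == query_val for s in group) / len(group)
        total += p_y * nz / n
    return total

# Hypothetical data with confounder z influencing both x and y.
samples = [
    {"z": 0, "x": 1, "y": 1},
    {"z": 0, "x": 0, "y": 0},
    {"z": 1, "x": 1, "y": 0},
    {"z": 1, "x": 0, "y": 0},
]
print(p_do(samples, "y", 1, "x", 1, ["z"]))  # 0.5
```

The plain conditional P(y=1 | x=1) on these samples would also be 0.5 here; the two estimates diverge precisely when the confounder's distribution differs between treatment groups, which is the situation the intervention calculation is designed to handle.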
To facilitate understanding, the present application provides a more specific example:
Whether a person gets cancer is affected by air pollution and by smoking, and a person who has cancer may also receive X-ray radiation treatment and present symptoms of respiratory disorder. If, for each node, we ignore the associations between nodes and simply observe one population, counting how many people in it have cancer and how many do not, we obtain the probability of cancer. The goal of causal reasoning is also a probability value, known academically as the "posterior probability". For example, suppose a large body of data shows a clear relationship between cancer and smoking, and we have even obtained the conditional probabilities between them; then, for a patient who has a smoking habit and lives in a severely polluted environment, what is the probability of cancer? That probability is the "posterior probability" we are asking for, and the smoking habit and the severe environmental pollution are the "evidence information" we know in advance. The evidence plays a key role in the reasoning process and is the key to obtaining this patient's cancer probability.
Referring to fig. 5, the whole process is as follows: a) acquire historical data; b) construct a causal graph network between things (variables), either by drawing on professionally recognized causal knowledge or by learning it from the data; c) load the historical data into the graph network to construct a causal probability graph; d) infer the result of something from evidence about its causes.
Training input: a number of events defined as nodes, with the influence relations between events optionally provided as edges; several groups of samples representing the value of each node in each sampling; and a designated intervention node and intervention value. Node values support only a finite set of discrete values.
Prediction input: the observed values of each group of samples. Prediction output: a directed graph containing each node, representing the influence relations between events; and the probability of each value of the queried node after intervention, given the input observed values.
Calculation principle: perform maximum likelihood estimation on the values of each event in the training samples to obtain the conditional probability of each node. Use breadth-first search to find the graph structure that best fits the causal relationships between events. Once the causal structure and each node's parameters are obtained, they form a probabilistic graphical model; at prediction time, after observed values for some nodes are input, a more accurate value probability for the queried node can be computed. When computing the hypothesis, thanks to the causal structure learned by the model, the conditional probability of the query node's value when the intervention node takes the intervention value is computed as the post-intervention value probability of the query node, with confounding factors blocked.
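The breadth-first structure search above can be sketched as follows: start from the empty graph, add one edge per search level, keep only acyclic candidates, and score each by the maximum-likelihood log-likelihood of the training samples. Note that raw log-likelihood always favors denser graphs; practical systems add a complexity penalty (e.g. BIC), which this illustrative sketch omits. All names and data are hypothetical:

```python
from collections import Counter, deque
from math import log

def loglik(samples, graph, nodes):
    """Log-likelihood of the samples under MLE-fitted conditional
    probability tables; `graph` maps each node to a tuple of parents."""
    total = 0.0
    for node in nodes:
        cond, joint = Counter(), Counter()
        for s in samples:
            key = tuple(s[p] for p in graph[node])
            cond[key] += 1
            joint[(key, s[node])] += 1
        for (key, _v), n in joint.items():
            total += n * log(n / cond[key])
    return total

def is_acyclic(graph, nodes):
    """Depth-first cycle check; `graph` maps node -> parents."""
    seen, path = set(), set()
    def visit(n):
        if n in path:
            return False
        if n in seen:
            return True
        path.add(n)
        ok = all(visit(c) for c in nodes if n in graph[c])  # children of n
        path.discard(n)
        seen.add(n)
        return ok
    return all(visit(n) for n in nodes)

def bfs_structure_search(samples, nodes):
    """Breadth-first search over DAGs, adding one edge per level and
    keeping the best-scoring acyclic structure found."""
    empty = {n: () for n in nodes}
    best, best_score = empty, loglik(samples, empty, nodes)
    queue, visited = deque([empty]), set()
    while queue:
        g = queue.popleft()
        for child in nodes:
            for parent in nodes:
                if parent == child or parent in g[child]:
                    continue
                g2 = dict(g)
                g2[child] = g[child] + (parent,)
                key = frozenset((n, p) for n in nodes for p in g2[n])
                if key in visited or not is_acyclic(g2, nodes):
                    continue
                visited.add(key)
                score = loglik(samples, g2, nodes)
                if score > best_score:
                    best, best_score = g2, score
                queue.append(g2)
    return best

# Hypothetical samples in which `a` and `b` are strongly associated.
samples = [{"a": 1, "b": 1}] * 3 + [{"a": 0, "b": 0}] * 3 + [{"a": 1, "b": 0}]
best = bfs_structure_search(samples, ["a", "b"])
print(best)  # a one-edge graph scores above the independent (empty) graph
```

Exhaustive search like this is only feasible for a handful of nodes; it illustrates the principle, not a production-scale algorithm.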
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalent substitutions may be made to the embodiments without departing from the spirit and scope of the invention, which is defined by the claims.

Claims (10)

1. A visual interpretation method for a paradigm graph model, characterized by comprising the following steps:
step 1: creating a causal graph visually;
step 2: creating training and prediction tasks;
step 3: interactively setting different evidence and intervention conditions on different predicted graph structures to calculate probabilities under those conditions, thereby identifying the correct causal relationships.
2. The method of claim 1, wherein step 1 comprises:
creating a single thing (node) using a floating menu, and editing a node through its right-click menu;
and, for associations between things, triggering the connection operation via the connect button of the floating menu or the connect-node item in the right-click menu.
3. The method of claim 1, wherein, after step 1 and before step 2, the method further comprises:
after the causal graph is drawn, selecting task configuration information using a form, and finally submitting the causal-reasoning paradigm model to a computing platform to run; the task configuration information comprises evidence nodes, intervention nodes and query nodes.
4. The method of claim 3, wherein, after the component runs successfully, clicking the result view enters a result interpretation page.
5. The method of claim 4, wherein the header of the result interpretation page shows result evaluation details: the configured evidence nodes, intervention nodes and query nodes, together with an accuracy and a structure score; the accuracy and structure score are evaluations of the results of the model run.
6. The method of claim 3, wherein the causal-reasoning paradigm model generates a recommended graph structure through automatic learning during the run; results can be explored over different graph structures, and the probabilities computed for the query nodes under different graph structures and different intervention conditions can be examined to finally determine the causal relationships between things.
7. The method of claim 1, wherein, in step 2, the training input is: a number of events defined as nodes, with the influence relations between events provided as edges; several groups of samples representing the value of each node in each sampling; and a designated intervention node and intervention value; node values support only a finite set of discrete values.
8. The method of claim 1, wherein, in step 2, the prediction input is: the observed values of each group of samples; and the prediction output is: a directed graph containing each node, representing the influence relations between events; and the probability of each value of the queried node after intervention, given the input observed values.
9. The method of claim 1, wherein step 3 comprises:
performing maximum likelihood estimation on the values of each event in the training samples to obtain the conditional probability of each node;
and using breadth-first search to find the graph structure that best fits the causal relationships between events; once the causal structure and each node's parameters are obtained, they form a probabilistic graphical model, and at prediction time, after observed values for some nodes are input, a more accurate value probability for the queried node can be computed.
10. The method of claim 9, wherein, when computing the hypothesis, thanks to the causal structure learned by the model, the conditional probability of the query node's value when the intervention node takes the intervention value is computed as the post-intervention value probability of the query node, with confounding factors blocked.
CN202210948263.XA 2022-08-09 2022-08-09 Visual interpretation method of normal form graph model Pending CN115329962A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210948263.XA CN115329962A (en) 2022-08-09 2022-08-09 Visual interpretation method of normal form graph model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210948263.XA CN115329962A (en) 2022-08-09 2022-08-09 Visual interpretation method of normal form graph model

Publications (1)

Publication Number Publication Date
CN115329962A true CN115329962A (en) 2022-11-11

Family

ID=83920985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210948263.XA Pending CN115329962A (en) 2022-08-09 2022-08-09 Visual interpretation method of normal form graph model

Country Status (1)

Country Link
CN (1) CN115329962A (en)

Similar Documents

Publication Publication Date Title
Innocent et al. Computer aided fuzzy medical diagnosis
Darwiche Bayesian networks
Bøttcher et al. deal: A package for learning Bayesian networks
Cobb et al. A comparison of Bayesian and belief function reasoning
RU2689818C1 (en) Method of interpreting artificial neural networks
US20220121902A1 (en) Method and apparatus for quality prediction
EP3701403B1 (en) Accelerated simulation setup process using prior knowledge extraction for problem matching
WO2021137897A1 (en) Bias detection and explainability of deep learning models
CN115640159A (en) Micro-service fault diagnosis method and system
CN116611546B (en) Knowledge-graph-based landslide prediction method and system for target research area
CN111126552A (en) Intelligent learning content pushing method and system
CN115240843A (en) Fairness prediction system based on structure causal model
Fine et al. Query by committee, linear separation and random walks
Dutta et al. An adversarial explainable artificial intelligence (XAI) based approach for action forecasting
Trabelsi et al. Pruning belief decision tree methods in averaging and conjunctive approaches
Bermejo et al. Interactive learning of Bayesian networks using OpenMarkov
CN115329962A (en) Visual interpretation method of normal form graph model
Roubtsova et al. A method for modeling of KPIs enabling validation of their properties
US20210110287A1 (en) Causal Reasoning and Counterfactual Probabilistic Programming Framework Using Approximate Inference
EP3975071A1 (en) Identifying and quantifying confounding bias based on expert knowledge
Kumar et al. Predictive analysis of novel coronavirus using machine learning model-a graph mining approach
Prashanthi et al. Defect prediction in software using spiderhunt-based deep convolutional neural network classifier
Say L'Hôpital's filter for QSIM
Ackerman et al. Theory and Practice of Quality Assurance for Machine Learning Systems An Experiment Driven Approach
US20220383167A1 (en) Bias detection and explainability of deep learning models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination