WO2013016429A2 - Conflict reconciliation, incremental representation, and lab change retesting procedures for autoverification operations - Google Patents

Conflict reconciliation, incremental representation, and lab change retesting procedures for autoverification operations

Info

Publication number
WO2013016429A2
WO2013016429A2 (PCT/US2012/048151)
Authority
WO
WIPO (PCT)
Prior art keywords
autoverification
parameter
rule
output
applying
Prior art date
Application number
PCT/US2012/048151
Other languages
French (fr)
Other versions
WO2013016429A8 (en)
WO2013016429A3 (en)
Inventor
John M. ASHLEY
Jason M. PARKHURST
Kathleen M. Payne
Original Assignee
Beckman Coulter, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beckman Coulter, Inc.
Priority to BR112014001822A2
Publication of WO2013016429A2
Publication of WO2013016429A8
Publication of WO2013016429A3

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/40 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for data related to laboratory analysis, e.g. patient specimen analysis

Definitions

  • Clinical laboratories, whether in government, educational, or private settings, include different types of laboratory equipment for performing various tests on samples.
  • the different pieces of equipment that perform these tests include hematology, coagulation, immunoassay, and chemistry analyzers. Since certain samples require testing on more than a single piece of equipment, a computer system is often used to synchronize and prioritize testing such that samples are processed efficiently. Using a computer system to perform this function is far more efficient than using a human operator to manage the tests, thus reducing costs, speeding analysis, and ensuring accurate results. In fact, clinical laboratories that limit or even eliminate human operators may produce the most accurate results. While computers help ensure accuracy in laboratory performance, there are a number of issues that must be addressed before the test results of a particular laboratory may be relied upon for diagnostic purposes.
  • An autoverification operation is the process of using one or more rules to automatically validate a test result by a computer. Each operation generates an autoverification outcome for the clinical test result.
  • Example autoverification outcomes include: (i) validate the test result; (ii) hold the test result for manual review; (iii) rerun the test; (iv) dilute the sample (and rerun the test on the diluted sample); and (v) cancel the test.
  • An autoverification operation may comprise a single, complex rule that is difficult to enter or understand because it is composed of compound logical expressions with multiple branches in the flow of logic through the rule.
  • One way of making such an autoverification operation easier to enter and understand is to break this complex rule up into multiple simpler rules or subparts. Each rule performs a specific task (such as determining if a value is within a given range or comparing one value with a prior value).
  • Each rule then generates as output a suggested action. If an autoverification operation contains only one rule, this suggested action becomes the outcome of the operation. However, multiple rules applied to a clinical test result may generate conflicting suggested actions, and a decision must be made as to which suggested action will be the outcome of the autoverification operation. It would be advantageous to enable the computer to make that decision, thereby reducing the operator's workload and making the validation process more efficient. Prior approaches to reconciling this conflict include holding the results for manual review, presenting each possible action to the user and awaiting the user's choice of which action to take.
  • Another prior approach requires the user to choose the order in which the multiple rules are applied so that only a single suggested action is generated (the one associated with the rule applied first), regardless of the possibility of a preferred other action from a later-applied rule. This results in reduced efficiency, often with an excessive number of test results requiring operator review.
  • Simulations are used to confirm that an autoverification operation functions as intended.
  • Example test results, or parameters indicative of a test result are provided by an operator, who then confirms that application of the operation to each example test result provides the expected autoverification outcome.
  • Particularly complicated autoverification operations may involve multiple rules, or rules having multiple logic segments (i.e., subparts) and, therefore, multiple possible actions.
  • Changes to the configuration of the laboratory require evaluation of an operation to ensure that the operation performs as expected.
  • the operation can be validated by applying the operation to a set of example test results or parameters and verifying that the operation provides the expected outcome.
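The validation check just described can be sketched as a simple loop. The toy operation, its 200-unit limit, and the function names below are illustrative assumptions, not taken from the patent:

```python
def op_within_range(value):
    """Toy autoverification operation used only to exercise the check:
    validate results at or below 200 units, hold everything else.
    (The 200-unit limit is an assumed, illustrative value.)"""
    return "validate" if value <= 200 else "hold"

def validate_operation(operation, examples):
    """Apply the operation to each example parameter and confirm it
    yields the expected outcome. Returns the failing
    (parameter, expected, actual) triples; an empty list means the
    operation behaves as intended."""
    failures = []
    for parameter, expected in examples:
        actual = operation(parameter)
        if actual != expected:
            failures.append((parameter, expected, actual))
    return failures
```

An empty return value corresponds to a verified operation; any entries identify exactly which example parameters no longer produce the expected outcome.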
  • a clinical laboratory may be configured with multiple laboratory instruments. During operation, the configuration of the lab may change, such as when an instrument goes off-line or when a particular reagent or test becomes unavailable. This changed configuration may affect the validity of an autoverification operation if any part of that operation is dependent on a particular laboratory instrument or test that is lost in the changed configuration.
  • the technology relates to a method of processing a clinical test result, the method including: applying a first rule to the clinical test result to generate a first suggested action; applying a second rule to the clinical test result to generate a second suggested action, wherein the second suggested action is associated with a weighted factor; and automatically performing the second suggested action based at least in part on the weighted factor.
  • the technology relates to a method of evaluating an autoverification operation, the method including: applying a first logic segment to a first parameter to generate a first logic decision; and applying a second logic segment to a second parameter to generate a second logic decision, wherein applying the second logic segment requires applying the first logic segment to the first parameter.
  • the technology relates to a method of evaluating an autoverification operation, the method including: receiving a first parameter to evaluate a first logic segment of the autoverification operation; applying the autoverification operation to the first parameter to generate a first output; receiving a second parameter to evaluate a second logic segment of the autoverification operation; and applying the autoverification operation to the first parameter and the second parameter to generate a second output, wherein the step of receiving the second parameter occurs after the step of applying the autoverification operation to the first parameter to generate the first output.
  • the technology relates to a method of evaluating an autoverification operation, the method including: applying, to a first parameter, the autoverification operation based on a first laboratory configuration to generate a first output; detecting a change in the first laboratory configuration, the change resulting in a second laboratory configuration; applying, to the first parameter, the autoverification operation based on the second laboratory configuration to generate a second output; and comparing the first output to the second output.
  • the technology relates to a method of evaluating a rule of an autoverification operation, the method including: receiving a first parameter to evaluate a first logic segment of the rule; applying the autoverification operation to the first parameter to generate a first output; receiving a second parameter to evaluate a second logic segment of the rule; and applying the autoverification operation to the second parameter to generate a second output, wherein the step of applying the autoverification operation to the second parameter includes applying the first output as input to the second logic segment.
  • FIG. 1 depicts an exemplary clinical laboratory testing system.
  • FIG. 2 depicts a method for processing a clinical test result and reconciling conflict between conflicting suggested actions.
  • FIGS. 3 and 4 depict methods for simulating autoverification operations in laboratory settings.
  • FIG. 5 depicts a method of evaluating an autoverification operation when a laboratory configuration has changed.
  • the technology disclosed herein addresses the problems with existing systems identified above.
  • a schematic of a clinical laboratory 100 that would benefit from the disclosed technology is depicted in FIG. 1.
  • The laboratory 100 includes a control computing device 102 and one or more pieces of laboratory equipment 104. Any number of pieces of equipment (e.g., 104A, 104B, 104C, . . . 104N, where "N" represents any integer value), having virtually any function, may be utilized.
  • An autoverification operation for a clinical test result includes one or more rules.
  • Each rule includes one or more logical expressions or segments to generate a suggested action as an output, using the clinical test result as an input.
  • the suggested action is what the autoverification outcome would be if the autoverification operation were composed of only a single rule.
  • an autoverification operation includes the following two rules:
  • The first rule determines whether the test result is within an instrument's normal operating range. It uses the test result value as input, and generates a suggested action of 'validate' if the test result is within the range, and a suggested action of 'hold' if it is out of that range.
  • The second rule determines whether the test result is below a medically critical value. It also uses the test result value as input, generating a suggested action of 'validate' if that value is at or below a critical level, and a suggested action of 'rerun' if it is above that level.
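The two rules above can be sketched in Python. This is an illustrative sketch only: the numeric limits (an operating range of 10 to 1000 units and a critical level of 500 units) and the function names are assumptions, since the patent does not specify concrete values.

```python
INSTRUMENT_RANGE = (10.0, 1000.0)  # assumed normal operating range (illustrative)
CRITICAL_LEVEL = 500.0             # assumed medically critical value (illustrative)

def rule_instrument_range(result):
    """Rule 1: suggest 'validate' if the result lies within the
    instrument's normal operating range, otherwise 'hold'."""
    low, high = INSTRUMENT_RANGE
    return "validate" if low <= result <= high else "hold"

def rule_critical_value(result):
    """Rule 2: suggest 'validate' if the result is at or below the
    medically critical level, otherwise 'rerun'."""
    return "validate" if result <= CRITICAL_LEVEL else "rerun"
```

Note that a single result above the instrument range triggers both 'hold' (Rule 1) and 'rerun' (Rule 2), which is exactly the conflict the weighting scheme below resolves.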
  • the suggested actions include, for example, (i) validating the clinical test result, (ii) holding the clinical test result, (iii) rerunning a clinical test, (iv) diluting a clinical test sample, and (v) cancelling the test.
  • the hold suggestion has the highest weighted value, followed by dilution/rerun/cancel, then validate.
  • the weighted factors of these suggested actions may be changed by an operator.
  • the weighted factors may be graphically displayed with a color coded icon or other indicator to identify importance of the suggested action. For example, the hold suggestion may be identified with a red icon, while the validate suggestion may be identified with a green icon.
  • the factor is used to establish a priority among the various suggested actions.
  • One example of a method 120 for processing a clinical test result and reconciling conflict between conflicting suggested actions is depicted in FIG. 2.
  • the method includes processing clinical test results by applying any number of rules of an autoverification operation to generate suggested actions, and then taking actions based on the results obtained.
  • An operation may include a single rule or multiple rules.
  • a validation rule may be a procedure that evaluates parameters of the test and sample and makes a decision to perform one operation or another on the test/sample/result.
  • an operation may include a single, complex rule having a sequence of nodes or logic segments.
  • a logic segment includes an input and either a logical decision or an action dependent on the input, wherein the input may be a test result, demographic information of the sample, or a decision from a previously executed logic segment.
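A logic segment as defined above can be modeled as a small data structure pairing a name with an evaluation function. The class and its field names are an illustrative assumption, not a structure given in the patent:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class LogicSegment:
    """A node of a rule: takes an input (a test result, demographic
    information of the sample, or a decision from a previously
    executed logic segment) and produces a logic decision or action."""
    name: str
    evaluate: Callable[[Any], Any]
```

For example, `LogicSegment("in_range", lambda v: v <= 1000)` models a segment whose decision is whether a value is inside an assumed usable range.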
  • a first rule is applied to a clinical test result to generate a first suggested action in operation 124, which may be one of the actions identified above.
  • a second rule is also applied to the clinical test result in operation 126, and a second suggested action is generated in operation 128.
  • Either or both of these suggested actions may include a weighted factor, as described above.
  • An operation 130 is performed to determine which weighted factor is more important.
  • the action having the weighted factor of the most importance is then automatically performed in operation 132 by the computing device as the autoverification outcome of that clinical test.
  • Although the steps of this method are depicted in sequence, the application of the first and second rules may be performed in parallel, or in any other order required or desired by the operator. Suggested actions resulting from operations having multiple rules are reserved until all the rules associated with a clinical test have been performed.
  • the autoverification operation stops or is disabled upon the occurrence of certain events. For example, the autoverification operation is disabled when two suggested next actions conflict with each other (e.g., when the rules have the same weighted factor).
  • the autoverification operation may also stop or be disabled if the user asynchronously takes an action on a test such as ordering a rerun over a pending dilution.
  • a hold signal can be generated, and the user notified so that the user can decide the appropriate action to take.
  • If the autoverification operation includes Rule 1 and Rule 2 above, applying the two rules to a test result value that was above the instrument's operational range (and therefore also above the medically critical level) would generate the suggested actions of 'hold for manual review' (to investigate why the value is beyond a reportable range) and 'rerun the test' (to confirm a critical result). Because the out-of-range value may indicate a problem with the instrument or the sample, simply rerunning the test is likely to reproduce the same problem, and holding for manual review to investigate the cause is therefore the more efficient result. By pre-assigning a higher weighted factor to the suggested action of hold, the system can automatically select this more efficient outcome.
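The weighting and reconciliation described above can be sketched as follows. Only the ordering of the weights (hold highest, then dilute/rerun/cancel, then validate) comes from the text; the numeric values and function names are assumptions, and the tie case follows the disable-and-hold handling described earlier:

```python
# Assumed numeric weights; only their relative order is from the text.
ACTION_WEIGHTS = {"hold": 3, "dilute": 2, "rerun": 2, "cancel": 2, "validate": 1}

def reconcile(suggestions, weights=None):
    """Method 120 sketch: return the suggested action whose weighted
    factor is most important (operations 130-132). When two distinct
    actions tie at the top weight, the conflict cannot be resolved
    automatically, so the result is held for manual review."""
    weights = weights or ACTION_WEIGHTS
    best = max(suggestions, key=lambda s: weights[s])
    tied = {s for s in suggestions if weights[s] == weights[best]}
    if len(tied) > 1:
        return "hold"  # conflicting equal-weight suggestions: notify the user
    return best
```

With the weights above, the worked example resolves as intended: `reconcile(["hold", "rerun"])` selects 'hold'.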
  • Another problem with existing systems is the inability of those systems to ensure that the autoverification operations are running properly when they are set up.
  • One solution proposed herein contemplates performing a simulation of an autoverification operation by displaying the functioning of the various logic segments of the operation as a step-by-step progression on a timeline. An operator may be prompted to enter parameters to begin a simulation procedure.
  • a parameter may include any information that may be necessary to perform a particular operation or a part thereof.
  • Exemplary parameters may include previous test results, data values (either based on real tests or an operator's expectations based on certain factors). Other parameters may include demographic information regarding the donor of the test sample, sample information such as the specimen type and draw time, and the tests ordered on the sample.
  • the system displays as a timeline the steps taken by the system in applying the operation to the entered data, including the next action (or actions) that would be suggested by the system.
  • the user may then enter example results (or user action) for this next suggested action, and the timeline updates to display further steps taken by the system, including the (new) next suggested action. In this way, a complete path through the operation (or the logic segments thereof) is evaluated in a step-by-step process that is easy for a user or operator to follow and to understand.
  • Outputs obtained after performance of each logic segment may be used to verify that logic segments and simple rules are behaving as the user intended.
  • Example outputs include, but are not limited to, validating a result, canceling a test, ordering a rerun, ordering a new test, and providing a test result.
  • FIG. 3 depicts a particular method 140 for simulating an autoverification operation in a laboratory setting.
  • a first parameter is applied to a first logic segment to produce a first logic decision.
  • a second parameter is applied to a second logic segment to produce a second logic decision.
  • the first logic segment must be reapplied in order to apply the second logic segment.
  • the first logic segment need not be reapplied. Generation of multiple logic decisions continues until the simulation of the entire rule or operation is complete.
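The two cases above (reapplying the first logic segment versus reusing its earlier decision) can be sketched side by side. The toy segments, their thresholds, and all function names are illustrative assumptions; both variants yield the same decisions, differing only in whether earlier segments are re-executed:

```python
def within_range(value):
    """Toy first segment: flag results above an assumed usable maximum."""
    return "pass" if value <= 1000 else "dilute"

def below_critical(value):
    """Toy second segment: hold results above an assumed critical level."""
    return "validate" if value <= 500 else "hold"

def evaluate_fresh(segments, parameters):
    """The case where the first segment must be reapplied: every time a
    new segment is evaluated, all earlier segments are re-run."""
    decisions = []
    for i in range(len(parameters)):
        decisions = [seg(par) for seg, par in zip(segments[: i + 1], parameters[: i + 1])]
    return decisions

def evaluate_cached(segments, parameters):
    """The case where the first segment need not be reapplied: earlier
    decisions are kept and reused as-is."""
    decisions = []
    for seg, par in zip(segments, parameters):
        decisions.append(seg(par))
    return decisions
```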
  • the system then prompts the operator for that second parameter in operation 148.
  • The system then applies the second parameter to the second logic segment of a rule in operation 150, and a second logic decision is generated in operation 152.
  • This result, as well as the second logic segment, first parameter, first logic decision, and any other relevant information are then displayed, in operation 154.
  • So that the entire simulation, and therefore the entire operation, will be clearly understood by the operator, the system again shows the operator the first logic segment and other relevant information, prior to displaying the second logic segment, second parameter, second logic decision, and other relevant information. This process repeats until the entire operation has been simulated, thus allowing the user to verify the proper functionality of the operation.
  • the simulation may be operator defined, such that only certain logic decisions, logic segments, etc., are displayed, depending on operator requirements.
  • the first parameter is received by the system in operation 162, and used to evaluate a first logic segment of a rule of the autoverification operation.
  • the first parameter may be received directly upon prompting an operator, or may be obtained from a data set entered previously by an operator, or a data set obtained from a sample test result.
  • the entire autoverification operation is then applied to the first parameter in operation 164 to generate a first output in operation 166. If additional parameters are required, they are subsequently obtained by the system, as described above (as in operations 168, 170, and 172). As the simulation continues, the entire autoverification operation is now applied in operation 170 to the first parameter and the second parameter (as well as subsequent parameters, as the simulation proceeds further) to generate a second output in operation 172.
  • This rule, which evaluates only two ranges (an instrument range and a critical range) to determine the autoverification outcome, requires a complicated series of compound logical statements that are difficult to write and difficult to follow and understand.
  • the method 160 depicted in FIG. 4 receives as a first parameter the first test result.
  • the user enters a value that is above the usable range of the instrument.
  • the system applies that value to a first logic segment of the rule (the segment that determines whether the value is within a usable range) to determine a first output (the action of diluting the sample for retest).
  • the system displays this action, prompting the user to enter the test results for the diluted test sample (the second parameter) to continue progression through the rule.
  • the user enters a value that is below the maximum usable range of the instrument but above a medically critical level, and the system applies that value to a second logic segment of the rule (determining whether the diluted test result is within range but critically high), and displays the result (hold for manual review of critically high result). In this way, the user is able to follow the rule through simple, progressive steps to ensure that the rule is performing each step as intended.
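The walk-through above can be sketched as two logic segments. The numeric limits (a usable maximum of 1000 units and a critical level of 500 units) and the function names are assumed for illustration; the behavior follows the narrative (out-of-range first result triggers dilution; a diluted result within range but critically high is held):

```python
USABLE_MAX = 1000.0    # assumed maximum usable range of the instrument
CRITICAL_HIGH = 500.0  # assumed medically critical level

def segment_usable_range(result):
    """First logic segment: a result above the usable range triggers a
    dilution and retest; otherwise progression continues."""
    return "dilute" if result > USABLE_MAX else "continue"

def segment_critical(diluted_result):
    """Second logic segment, applied to the diluted result."""
    if diluted_result > USABLE_MAX:
        return "hold"      # diluted result still out of range: hold
    if diluted_result > CRITICAL_HIGH:
        return "hold"      # in range but critically high: manual review
    return "validate"
```

Entering 1500 for the first parameter and 700 for the diluted result reproduces the path described: dilute, then hold for manual review.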
  • If the test result of the diluted sample is greater than the maximum usable range of the instrument, then hold the result.
  • parameters are used to evaluate logic segments (i.e., subparts) of a first rule.
  • One or more outputs from application of one or more logic segments to one or more parameters are produced. These outputs may be an intermediate output of a rule, and in certain cases, the simulation is unable to proceed beyond a first output in the absence of a second parameter.
  • a second or subsequent output may be the final outcome of an autoverification operation.
  • Example test results are obtained when first validating an autoverification operation, or when simulating an autoverification operation.
  • the system detects whenever the configuration of the lab changes, and reevaluates the autoverification operation whenever the configuration changes by reapplying each example test result to the autoverification operation (or a logic segment thereof) and comparing the output with the previously saved output from the original evaluation of the autoverification operation.
  • the system may automatically continue testing and autoverifying clinical test results if the autoverification operation is still valid.
  • the system can automatically proceed if the changed configuration has no effect on the autoverification process, without operator intervention, thereby minimizing delays due to manual intervention.
  • One example of such a method 180 is depicted in FIG. 5.
  • An autoverification operation based on a first laboratory configuration is applied to a first parameter in operation 184.
  • the parameter may be data entered by an operator based on previously obtained test results, or the parameter may be the results of a test itself. Alternatively, the parameter may be entered by the operator based on other factors (operator experience or expectations, for example).
  • a first output is generated in operation 186, and all relevant information related thereto is stored.
  • the autoverification operation may continue, and the system may continue to process samples 188 until it detects a change in the first laboratory configuration in operation 190, thus resulting in a second, updated laboratory configuration (set in operation 192).
  • Processing of samples would cease so that proper functioning of the autoverification operation based on the second, updated configuration may be verified.
  • the autoverification operation based on this second, updated laboratory configuration is then applied to the first parameter in operation 194, and a second output is generated in operation 196. If a comparison between the first and second outputs shows no difference in operation 198, the autoverification operation is determined to be functioning properly (i.e., it is verified) in operation 200, and further processing of samples may proceed. If there is a difference, further processing may be prohibited and a notification may be sent to an operator in operation 202.
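Method 180 can be sketched as a recheck over the saved example parameters and their stored outputs. The toy operation, its configured limit, and the function names are illustrative assumptions:

```python
def toy_operation(value, config):
    """Toy autoverification operation for illustration: validate
    results at or below the configured limit, hold everything else."""
    return "validate" if value <= config["limit"] else "hold"

def recheck_on_config_change(operation, saved_cases, new_config):
    """Reapply each saved example parameter under the new lab
    configuration (operation 194) and compare against the stored
    output (operation 198). Returns the parameters whose output
    changed: an empty list means the operation is still verified
    (operation 200) and sample processing may resume; a non-empty
    list means processing should stop and the operator be notified
    (operation 202)."""
    return [p for p, old in saved_cases if operation(p, new_config) != old]
```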
  • A change in lab configuration can be, e.g., the addition or removal of an instrument, a change in the assay menu of an instrument, a change in the model of an instrument, a change in the QC requirements of an instrument, a change in the performance of an assay, or a change in the acceptability criteria of a result, e.g., the validation range of a test, a delta check value, a critical limit value, or a sample type for a test.
  • the system may detect the change by input from a user, or the system may detect the change automatically by monitoring the status and operation of instruments attached to or in communication with the system.
  • the autoverification operation for a test for the cation sodium (Na) includes the following three rules:
  • This autoverification operation requires test results for Na, hemolysis, chloride (Cl), bicarbonate (CO2), and potassium (K). This autoverification operation can therefore be valid only in a lab where each of these tests is available. If, for example, the lab configuration changes so that hemolysis results are no longer available, the system automatically reevaluates the autoverification operation and can notify the user that the Na test cannot be autoverified (or even manually verified - since both require hemolysis results) under the current lab configuration. This helps prevent wasteful tests that cannot be verified. Similarly, in this example, the loss of the ability to test for calcium, a cation that is not used to calculate AGAP, would not affect the autoverification of Na, and Na testing could automatically continue without manual intervention.
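The dependency check described above can be sketched with set operations. The required-test list comes from the text; the set modeling the assay menu and the function names are illustrative assumptions:

```python
# Tests the Na autoverification operation needs, per the text above.
REQUIRED_TESTS = {"Na", "hemolysis", "Cl", "CO2", "K"}

def autoverification_possible(available_tests):
    """The Na operation stays valid only while every required test is
    available in the current lab configuration."""
    return REQUIRED_TESTS <= set(available_tests)

def missing_tests(available_tests):
    """Tests whose loss invalidates the operation, for the operator
    notification."""
    return sorted(REQUIRED_TESTS - set(available_tests))
```

Losing hemolysis invalidates the operation, while losing calcium (not required) leaves Na autoverification unaffected, matching the example above.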
  • the first rule of the Na autoverification operation determines whether the Na test result is within a set validation range.
  • This validation range is part of the first laboratory configuration.
  • the validation range is set to be between 100 and 200 units.
  • a first parameter to evaluate whether the autoverification operation performs as expected is a value of 101 units. Applying the autoverification operation to this parameter would give an expected outcome of 'validate Na'. If the set validation range were changed to a range of 120 to 200 units, the system would detect this change to a second configuration and automatically reevaluate all autoverification operations.
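The numeric example above (a validation range of 100 to 200 units, a parameter of 101, and a changed range of 120 to 200) can be sketched directly. The 'hold' outcome for an out-of-range result and the function names are assumptions for illustration:

```python
def na_range_rule(value, config):
    """Sketch of the first rule of the Na operation: validate results
    inside the configured validation range; the out-of-range outcome
    ('hold') is an assumed placeholder."""
    low, high = config["validation_range"]
    return "validate Na" if low <= value <= high else "hold"

first_config = {"validation_range": (100, 200)}
second_config = {"validation_range": (120, 200)}

# Under the first configuration the parameter of 101 validates; after
# the range change it no longer does, so the outputs differ and the
# system flags the operation for reevaluation.
first_output = na_range_rule(101, first_config)
second_output = na_range_rule(101, second_config)
```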
  • the technology described herein can be realized in hardware, software, or a combination of hardware and software.
  • a typical combination of hardware and software can be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • the computing device 102 (shown in FIG. 1) is a computer system.
  • Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • a group of functional modules that control the operation of the CPU and effectuate the operations of the technology as described above can be located in system memory (on the server or on a separate machine, as desired).
  • An operating system directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices.
  • a control block implemented as a series of stored instructions, responds to client-originated access requests by retrieving the user-specific profile and applying the one or more rules as described above.
  • the software may be configured to run on any computing device or workstation such as a PC or PC-compatible machine, an Apple Macintosh, a Sun workstation, etc.
  • any device can be used as long as it is able to perform all of the functions and capabilities described herein.
  • the particular type of computing device, whether a workstation, or other system, is not central to the technology, nor is the configuration, location, or design of a database, which may be flat-file, relational, or object-oriented, and may include one or more physical and/or logical components.
  • Such a computing device may include a network interface continuously connected to the network, and thus support numerous geographically dispersed users and applications.
  • the network interface and the other internal components of the servers intercommunicate over a main bi-directional bus.
  • the main sequence of instructions effectuating the functions of the technology can reside on a mass-storage device (such as a hard disk or optical storage unit) as well as in a main system memory during operation. Execution of these instructions and effectuation of the functions of the technology is accomplished by a processing device, such as a central-processing unit ("CPU").
  • the computing device typically includes at least some form of computer readable media, and such computer readable media can be used to store the computer program product (containing data instructions) thereon.
  • Computer readable media includes any available media that can be accessed by the computing device.
  • computer readable media include computer readable storage media and computer readable communication media.
  • Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules or other data.
  • Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory or other memory technology, compact disc read only memory, digital versatile disks or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device 110.
  • Computer readable storage media is a type of non-transitory storage media.
  • Computer readable communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • computer readable communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Automatic Analysis And Handling Materials Therefor (AREA)

Abstract

A method of processing a clinical test result includes applying a first rule to the clinical test result to generate a first suggested action. A second rule is also applied to the clinical test result to generate a second suggested action, which may be associated with a weighted factor. Thereafter, the method includes automatically performing the second suggested action based at least in part on the weighted factor.

Description

Conflict Reconciliation, Incremental Representation, and Lab Change Retesting Procedures for Autoverification Operations
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application is being filed on 25 July 2012, as a PCT International Patent application in the name of Beckman Coulter, Inc., a U.S. national corporation, applicant for the designation of all countries except the U.S., and, John M. Ashley, a citizen of the U.S., Jason M. Parkhurst, a citizen of the U.S., and Kathleen M. Payne, a citizen of the U.S., applicants for the designation of the U.S. only, and claims priority to U.S. Patent Application Serial No. 61/511,473 filed on 25 July 2011, the disclosure of which is incorporated herein by reference in its entirety.
INTRODUCTION
[0002] Clinical laboratories, whether in government, educational, or private settings, include different types of laboratory equipment for performing various tests on samples. The pieces of equipment that perform these tests include hematology, coagulation, immunoassay, and chemistry analyzers. Because certain samples require testing on more than a single piece of equipment, a computer system is often used to synchronize and prioritize testing so that samples are processed efficiently. Using a computer system for this function is far more efficient than relying on a human operator to manage the tests, thus reducing costs, speeding analysis, and ensuring accurate results. In fact, clinical laboratories that limit or even eliminate human operators may produce the most accurate results. While computers help ensure accuracy in laboratory performance, a number of issues must be addressed before the test results of a particular laboratory may be relied upon for diagnostic purposes.
[0003] First, to ensure that clinical test results run in a laboratory are valid, each clinical test result must be validated before it can be released to a doctor. An autoverification operation is the process of using one or more rules to automatically validate a test result by a computer. Each operation generates an autoverification outcome for the clinical test result. Example autoverification outcomes include: (i) validate the test result; (ii) hold the test result for manual review; (iii) rerun the test; (iv) dilute the sample (and rerun the test on the diluted sample); and (v) cancel the test.
[0004] Before a clinical laboratory can use an autoverification operation to validate clinical test results, it must ensure that the autoverification operation performs as intended. To that end, it is advantageous if the rules included in the autoverification operation are easy to enter into the computer and easy for the technician to understand. An autoverification operation may comprise a single, complex rule that is difficult to enter or understand because it is composed of compound logical expressions with multiple branches in the flow of logic through the rule. One way of making such an autoverification operation easier to enter and understand is to break this complex rule into multiple simpler rules or subparts. Each rule performs a specific task (such as determining whether a value is within a given range or comparing one value with a prior value). Each rule then generates a suggested action as output. If an autoverification operation contains only one rule, this suggested action becomes the outcome of the operation. However, multiple rules applied to a clinical test result may generate conflicting suggested actions, and a decision must be made as to which suggested action will be the outcome of the autoverification operation. It would be advantageous to enable the computer to make that decision, thereby reducing the operator's workload and making the validation process more efficient. One prior approach to reconciling this conflict holds the results for manual review, presenting each possible action to the user and awaiting the user's choice of which action to take. Another prior approach requires the user to choose the order in which the multiple rules are applied so that only a single suggested action is generated (the one associated with the rule applied first), regardless of the possibility of a preferred other action from a later-applied rule. This results in reduced efficiency, often with an excessive number of test results requiring operator review.
[0005] Simulations are used to confirm that an autoverification operation functions as intended. Example test results, or parameters indicative of a test result, are provided by an operator, who then confirms that application of the operation to each example test result provides the expected autoverification outcome. Particularly complicated autoverification operations may involve multiple rules, or rules having multiple logic segments (i.e., subparts) and, therefore, multiple possible actions. For these complex operations, it may be difficult for an operator to provide all the appropriate example test results to properly test the operation. Furthermore, it may be difficult for a user to follow the progress of the operation to confirm that the operations are functioning as intended. Some prior approaches allow for the testing of entire operations by applying the operations to sample test data and indicating the test results that would be generated.
[0006] Changes to the configuration of the laboratory, either in equipment or types or number of operations run, require evaluation of an operation to ensure that the operation performs as expected. The operation can be validated by applying the operation to a set of example test results or parameters and verifying that the operation provides the expected outcome. A clinical laboratory may be configured with multiple laboratory instruments. During operation, the configuration of the lab may change, such as when an instrument goes off-line or when a particular reagent or test becomes unavailable. This changed configuration may affect the validity of an autoverification operation if any part of that operation is dependent on a particular laboratory instrument or test that is lost in the changed configuration.
SUMMARY
[0007] In one aspect, the technology relates to a method of processing a clinical test result, the method including: applying a first rule to the clinical test result to generate a first suggested action; applying a second rule to the clinical test result to generate a second suggested action, wherein the second suggested action is associated with a weighted factor; and automatically performing the second suggested action based at least in part on the weighted factor.
[0008] In another aspect, the technology relates to a method of evaluating an autoverification operation, the method including: applying a first logic segment to a first parameter to generate a first logic decision; and applying a second logic segment to a second parameter to generate a second logic decision, wherein applying the second logic segment requires applying the first logic segment to the first parameter.
[0009] In another aspect, the technology relates to a method of evaluating an autoverification operation, the method including: receiving a first parameter to evaluate a first logic segment of the autoverification operation; applying the autoverification operation to the first parameter to generate a first output; receiving a second parameter to evaluate a second logic segment of the autoverification operation; and applying the autoverification operation to the first parameter and the second parameter to generate a second output, wherein the step of receiving the second parameter occurs after the step of applying the autoverification operation to the first parameter to generate the first output.
[0010] In another aspect, the technology relates to a method of evaluating an autoverification operation, the method including: applying, to a first parameter, the autoverification operation based on a first laboratory configuration to generate a first output; detecting a change in the first laboratory configuration, the change resulting in a second laboratory configuration; applying, to the first parameter, the autoverification operation based on the second laboratory configuration to generate a second output; and comparing the first output to the second output.
[0011] In another aspect, the technology relates to a method of evaluating a rule of an autoverification operation, the method including: receiving a first parameter to evaluate a first logic segment of the rule; applying the autoverification operation to the first parameter to generate a first output; receiving a second parameter to evaluate a second logic segment of the rule; and applying the autoverification operation to the second parameter to generate a second output, wherein the step of applying the autoverification operation to the second parameter includes applying the first output as input to the second logic segment.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] There are shown in the drawings, embodiments which are presently preferred, it being understood, however, that the technology is not limited to the precise arrangements and instrumentalities shown.
[0013] FIG. 1 depicts an exemplary clinical laboratory testing system.
[0014] FIG. 2 depicts a method for processing a clinical test result and reconciling conflict between conflicting suggested actions.
[0015] FIGS. 3 and 4 depict methods for simulating autoverification operations in laboratory settings.
[0016] FIG. 5 depicts a method of evaluating an autoverification operation when a laboratory configuration has changed.

DETAILED DESCRIPTION
[0017] The technology disclosed herein addresses the problems with existing systems identified above. A schematic of a clinical laboratory 100 that would benefit from the disclosed technology is depicted in FIG. 1. The laboratory 100 includes a control computing device 102 and one or more pieces of laboratory equipment 104. Any number of pieces of equipment (e.g., 104A, 104B, 104C, . . . 104N, where "N" represents any integer value), having virtually any function, may be utilized.
[0018] An autoverification operation for a clinical test result includes one or more rules. Each rule includes one or more logical expressions or segments to generate a suggested action as an output, using the clinical test result as an input. The suggested action is what the autoverification outcome would be if the autoverification operation were composed of only a single rule. In one embodiment of the proposed technology, an autoverification operation includes the following two rules:
Rule 1. If the numerical value of the test result is within an
instrument range, then validate the test result, else hold the
test result for manual review.
Rule 2. If the numerical value of the test result is above a
critically high value, then order a STAT retest of the clinical
sample, else validate the test result.
The first rule determines whether the test result is within an instrument's normal operating range. It uses the test result value as input, and generates a suggested action of 'validate' if the test result is within the range and a suggested action of 'hold' if it is out of that range. The second rule determines whether the test result is above a medically critical value. It also uses the test result value as input, generating a suggested action of 'validate' if that value is at or below the critical level, and a suggested action of 'rerun' if it is above that level.
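The two rules above can be sketched as simple functions. This is an illustrative sketch only; the numeric limits are hypothetical values standing in for an instrument's configured operating range and a lab's critical-value policy:

```python
# Hypothetical limits for illustration; real values come from the
# instrument configuration and the laboratory's critical-value policy.
INSTRUMENT_LOW, INSTRUMENT_HIGH = 10.0, 500.0
CRITICAL_HIGH = 400.0

def rule_1(result: float) -> str:
    """Rule 1: validate if within the instrument range, else hold."""
    if INSTRUMENT_LOW <= result <= INSTRUMENT_HIGH:
        return "validate"
    return "hold"

def rule_2(result: float) -> str:
    """Rule 2: order a rerun if above the critical value, else validate."""
    if result > CRITICAL_HIGH:
        return "rerun"
    return "validate"
```

Each function maps a test result value to a single suggested action, mirroring how a simple rule contributes one suggestion to the overall operation.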
[0019] One problem is the inability of existing systems to automatically determine which action to take when the rules of an autoverification operation generate conflicting suggested actions. To address this issue with the proposed technology, all rules for a particular test are applied whenever the state of the test sample changes (i.e., when a test result from that sample is received or an action on that sample is taken). Suggested actions are assigned a weighted factor. If two rules for the same test suggest different actions wherein each action has a different weighted factor, the action with the higher factor is automatically applied as the autoverification outcome. If the different actions have the same weighted factor, then automatic processing of the test result is stopped and the test result is held for manual review.
[0020] Using weighted factors to resolve conflicting suggested actions allows more test results to be automatically processed without manual review, improving sample turnaround time and reducing workload on the technician. Furthermore, running all the rules for a test to determine all possible suggested actions instead of running the rules in a preselected order (to determine one suggested action) allows for more efficient processing of the sample, because the best action can be selected from the group of possible actions instead of relying on a predetermined order.
[0021] The suggested actions include, for example, (i) validating the clinical test result, (ii) holding the clinical test result, (iii) rerunning a clinical test, (iv) diluting a clinical test sample, and (v) cancelling the test. In general, the hold suggestion has the highest weighted value, followed by dilution/rerun/cancel, then validate. However, depending on the particular application, the weighted factors of these suggested actions may be changed by an operator. In certain embodiments, the weighted factors may be graphically displayed with a color-coded icon or other indicator to identify the importance of the suggested action. For example, the hold suggestion may be identified with a red icon, while the validate suggestion may be identified with a green icon. Regardless of the particular identification scheme used as the weighted factor, the factor is used to establish a priority among the various suggested actions.
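The weight-based reconciliation described above can be sketched as follows. The numeric weights are hypothetical defaults chosen for illustration (the text notes that an operator may change them); a tie between different actions stops automatic processing and holds the result for manual review:

```python
# Hypothetical default weights: hold highest, dilute/rerun/cancel next,
# validate lowest, per the general ordering described in the text.
WEIGHTS = {"hold": 3, "dilute": 2, "rerun": 2, "cancel": 2, "validate": 1}

def reconcile(suggested_actions):
    """Return the autoverification outcome for a set of suggested actions.

    The action with the highest weighted factor becomes the outcome; if
    two different actions share the top weight, automatic processing
    stops and the result is held for manual review.
    """
    best = max(suggested_actions, key=lambda action: WEIGHTS[action])
    top_weight = WEIGHTS[best]
    tied = {a for a in suggested_actions if WEIGHTS[a] == top_weight}
    if len(tied) > 1:
        return "hold"  # conflicting actions of equal weight
    return best
```

Applying every rule first and reconciling afterward, rather than running rules in a fixed order, lets the highest-priority action win regardless of which rule produced it.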
[0022] One example of a method 120 for processing a clinical test result and reconciling conflict between conflicting suggested actions is depicted in FIG. 2. In general, the method includes processing clinical test results by applying any number of rules of an autoverification operation to generate suggested actions, and then taking actions based on the results obtained. An operation may include a single rule or multiple rules. In such a case, a validation rule may be a procedure that evaluates parameters of the test and sample and makes a decision to perform one operation or another on the test/sample/result. In another embodiment, an operation may include a single, complex rule having a sequence of nodes or logic segments. A logic segment includes an input and either a logical decision or an action dependent on the input, wherein the input may be a test result, demographic information of the sample, or a decision from a previously executed logic segment.
[0023] Returning to FIG. 2, in operation 122 a first rule is applied to a clinical test result to generate a first suggested action in operation 124, which may be one of the actions identified above. A second rule is also applied to the clinical test result in operation 126, and a second suggested action is generated in operation 128. Either or both of these suggested actions may include a weighted factor, as described above. An operation 130 is performed to determine which weighted factor is more important. The action having the more important weighted factor is then automatically performed in operation 132 by the computing device as the autoverification outcome of that clinical test. Although the steps of this method are depicted in sequence, the application of the first and second rules may be performed in parallel, or in any other order required or desired by the operator. Suggested actions resulting from operations having multiple rules are reserved until all the rules associated with a clinical test have been performed.
[0024] In some embodiments, the autoverification operation stops or is disabled upon the occurrence of certain events. For example, the autoverification operation is disabled when two suggested next actions conflict with each other (e.g., when the rules have the same weighted factor). The autoverification operation may also stop or be disabled if the user asynchronously takes an action on a test such as ordering a rerun over a pending dilution. When the autoverification operation is stopped, a hold signal can be generated, and the user notified so that the user can decide the appropriate action to take.
[0025] In an example in which the autoverification operation includes Rule 1 and Rule 2 above, applying the two rules to a test result value that was above the instrument's operational range (and therefore also above the medically critical level) would generate the suggested actions of 'hold for manual review' (to investigate why the value is beyond a reportable range) and 'rerun the test' (to confirm a critical result). Because the out-of-range value may indicate a problem with the instrument or the sample, simply rerunning the test is likely to reproduce the same problem, and holding for manual review to investigate the cause of the problem is therefore the more efficient result. By pre-assigning a higher weighted factor to the suggested action of hold, the system can automatically select this more efficient outcome.
[0026] Another problem with existing systems is the inability of those systems to ensure that the autoverification operations are running properly when they are set up. Thus, it is desirable for a system to provide an easier way for a user to follow the functioning of an autoverification operation and to facilitate the selection of example test results for evaluating the various functions of the autoverification operation. One solution proposed herein contemplates performing a simulation of an autoverification operation by displaying the functioning of the various logic segments of the operation as a step-by-step progression on a timeline. An operator may be prompted to enter parameters to begin a simulation procedure. A parameter may include any information that may be necessary to perform a particular operation or a part thereof. Exemplary parameters may include previous test results and data values (either based on real tests or on an operator's expectations based on certain factors). Other parameters may include demographic information regarding the donor of the test sample, sample information such as the specimen type and draw time, and the tests ordered on the sample. The system then displays as a timeline the steps taken by the system in applying the operation to the entered data, including the next action (or actions) that would be suggested by the system. The user may then enter example results (or a user action) for this next suggested action, and the timeline updates to display further steps taken by the system, including the (new) next suggested action. In this way, a complete path through the operation (or the logic segments thereof) is evaluated in a step-by-step process that is easy for a user or operator to follow and to understand. Outputs obtained after performance of each logic segment may be used to verify that logic segments and simple rules are behaving as the user intended. Example outputs include, but are not limited to, validating a result, canceling a test, ordering a rerun, ordering a new test, and providing a test result.
[0027] For an operator, a system having this functionality is easy to understand. By following single, discrete subparts of a complete operation, it is easier to generate appropriate example results. Also, the user is able to identify additional information, data, or other parameters that are required for other parts of the operation because the system prompts the user at intermediate steps of the operation.
[0028] FIG. 3 depicts a particular method 140 for simulating an autoverification operation in a laboratory setting. In general, a first parameter is applied to a first logic segment to produce a first logic decision. Thereafter, a second parameter is applied to a second logic segment to produce a second logic decision. In this particular embodiment, the first logic segment must be reapplied in order to apply the second logic segment. In an alternative embodiment, the first logic segment need not be reapplied. Generation of multiple logic decisions continues until the simulation of the entire rule or operation is complete.
[0029] The benefits of reapplying preceding logic segments are most apparent in the context of a method that includes prompting a user, as depicted in FIG. 3. In the depicted method, prior to application of each logic segment, an operator is prompted for each of the parameters. In an example, at the beginning of the simulation, the system prompts a user for the first parameter in an operation 142. The system then applies the first parameter to the first logic segment of a rule in operation 144, and a first logic decision is generated in operation 146. This decision, as well as the first logic segment, first parameter, and any other relevant information, is then displayed in operation 148. This allows the operator to clearly understand the operation of the system. In this case, since the system requires a second parameter, the system then prompts the operator for that second parameter in operation 148. The system then applies the second parameter to the second logic segment of a rule in operation 150, and a second logic decision is generated in operation 152. This decision, as well as the second logic segment, second parameter, and any other relevant information, is then displayed in operation 154.
[0030] In some embodiments, so that the entire simulation, and therefore the entire operation, will be clearly understood by the operator, the system again shows the operator the first logic segment and other relevant information prior to displaying the second logic segment, second parameter, second logic decision, and other relevant information. This process repeats until the entire operation has been simulated, thus allowing the user to verify the proper functionality of the operation. The simulation may be operator defined, such that only certain logic decisions, logic segments, etc., are displayed, depending on operator requirements.
[0031] Other methods of simulating proper functioning of an autoverification operation are contemplated. One such method 160 is depicted in FIG. 4. In this embodiment, the first parameter is received by the system in operation 162, and used to evaluate a first logic segment of a rule of the autoverification operation. The first parameter may be received directly upon prompting an operator, or may be obtained from a data set entered previously by an operator, or from a data set obtained from a sample test result. The entire autoverification operation is then applied to the first parameter in operation 164 to generate a first output in operation 166. If additional parameters are required, they are subsequently obtained by the system, as described above (as in operation 168). As the simulation continues, the entire autoverification operation is now applied in operation 170 to the first parameter and the second parameter (as well as subsequent parameters, as the simulation proceeds further) to generate a second output in operation 172.
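The accumulate-and-reapply loop of method 160 can be sketched as follows. The toy operation and the "need:" convention are hypothetical illustrations: the operation either returns a final outcome or signals that another parameter is required, and the whole operation is reapplied to the growing parameter list each time:

```python
def dilution_operation(params):
    """Hypothetical toy operation: the first parameter is the initial
    test result; a value above 500 is out of range and requires a
    diluted result as a second parameter before an outcome is reached."""
    if not params:
        return "need: initial result"
    if params[0] > 500:
        if len(params) < 2:
            return "dilute and rerun (need: diluted result)"
        return "hold" if params[1] > 500 else "validate"
    return "validate"

def simulate(operation, supplied):
    """Reapply the whole operation to the accumulated parameter list,
    recording each intermediate output, in the manner of method 160.
    In a real system each parameter would come from an operator prompt;
    here they are drawn from the `supplied` sequence."""
    params, outputs = [], []
    values = iter(supplied)
    while True:
        out = operation(params)
        outputs.append(out)
        if "need:" not in out:
            return outputs  # a final outcome was reached
        params.append(next(values))
```

Each pass through the loop corresponds to one prompt-and-reapply step, so the recorded outputs trace the step-by-step progression the operator would see on the timeline.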
[0032] In one embodiment, the following rule is simulated:
Rule 3. If the numerical value of the test result is greater than
the maximum usable range of the instrument, then (dilute the
clinical sample and rerun the test, and if the numerical value
of the test result of the diluted sample is greater than the
maximum usable range of the instrument, then hold the result
for manual review, else if the numerical value of the test
result is greater than a medically critical value, then hold the
result for manual review, else validate the test result), else if
the numerical value of the test result of the undiluted sample
is greater than a medically critical value, then hold the result
for manual review, else validate the test result.
[0033] This is a single rule that first determines whether a test result is within the usable range of the instrument, then whether it is above a medically critical value. If the initial result is above the instrument range, the system directs the sample to be diluted, and the test is rerun on the diluted sample. If the test result of the diluted sample is still beyond the usable range of the instrument, then the test results are held for manual review to investigate the problem. If the result of either the original test or the diluted test is within instrument range but greater than a medically critical value, then the test result is held for manual review (to investigate the critically high value). If the test result is within instrument range and below the critical value, the result is validated. This rule, which evaluates only two ranges (an instrument range and a critical range) to determine the autoverification outcome, requires a complicated series of compound logical statements that are difficult to write and difficult to follow and understand.
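Rule 3's compound logic can be rendered as a nested conditional, which makes its branches easier to trace. The numeric limits are hypothetical stand-ins for the instrument's maximum usable range and the medically critical value:

```python
MAX_RANGE = 500.0  # hypothetical maximum usable instrument range
CRITICAL = 400.0   # hypothetical medically critical value

def rule_3(result, diluted_result=None):
    """Rule 3: an out-of-range result triggers a dilution and rerun;
    an in-range but critically high result (original or diluted) is
    held for manual review; otherwise the result is validated."""
    if result > MAX_RANGE:
        if diluted_result is None:
            return "dilute and rerun"
        if diluted_result > MAX_RANGE:
            return "hold"  # still out of range after dilution
        if diluted_result > CRITICAL:
            return "hold"  # diluted result is critically high
        return "validate"
    if result > CRITICAL:
        return "hold"
    return "validate"
```

Even in this compact form the rule has five distinct exit points, which illustrates why a step-by-step simulation of each branch helps confirm that the rule was entered correctly.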
[0034] To evaluate whether this rule performs as intended when a first test result is above a usable range of the instrument, but the test result of the diluted sample is within range but critically high, the method 160 depicted in FIG. 4 receives as a first parameter the first test result. In this example, the user enters a value that is above the usable range of the instrument. The system applies that value to a first logic segment of the rule (the segment that determines whether the value is within a usable range) to determine a first output (the action of diluting the sample for retest). The system displays this action, prompting the user to enter the test results for the diluted test sample (the second parameter) to continue progression through the rule. The user enters a value that is below the maximum usable range of the instrument but above a medically critical level, and the system applies that value to a second logic segment of the rule (determining whether the diluted test result is within range but critically high), and displays the result (hold for manual review of critically high result). In this way, the user is able to follow the rule through simple, progressive steps to ensure that the rule is performing each step as intended.
[0035] To illustrate an advantage of this method, consider a case where the user had incorrectly entered Rule 3 as:
Rule 3a. If the numerical value of the test result is greater
than a medically critical value, then hold the result for manual
review, else if the numerical value of the test result is greater
than the maximum usable range of the instrument, then (dilute
the clinical sample and rerun the test, and if the numerical
value of the test result of the diluted sample is greater than the maximum usable range of the instrument, then hold the result
for manual review, else if the numerical value of the test
result is greater than a medically critical value, then hold the
result for manual review, else validate the test result), else
validate the test result.
[0036] Here, although the rule has been entered incorrectly, it still gives the same (apparently correct) outcome for a test result that is out of instrument range on a first test and within range but critically high on a second, diluted test. This is because the system never reaches the step of determining whether the first result is within range of the instrument, and instead generates the outcome that the result is above a critical range. By guiding the user in a step-by-step progression through the rule, the system makes it easy to identify at which step the rule diverges from the expected behavior, and how the rule needs to be corrected.

[0037] Another method contemplates simulating a rule of an autoverification operation. In such a case, parameters are used to evaluate logic segments (i.e., subparts) of a first rule. One or more outputs from application of one or more logic segments to one or more parameters are produced. These outputs may be an intermediate output of a rule, and in certain cases, the simulation is unable to proceed beyond a first output in the absence of a second parameter. Depending on the length of the operation, a second or subsequent output may be the final outcome of an autoverification operation.
[0038] Another problem with existing systems is that they are unable to determine whether an autoverification operation is still valid when the configuration of the lab changes. To address this issue, the proposed technology saves the set of example test results used to evaluate the autoverification operation, together with the autoverification outcome for each example test result. These test results are obtained when first validating an autoverification operation, or when simulating an autoverification operation. The system detects whenever the configuration of the lab changes, and reevaluates the autoverification operation by reapplying each example test result to the autoverification operation (or a logic segment thereof) and comparing the output with the previously saved output from the original evaluation of the autoverification operation. The system may automatically continue testing and autoverifying clinical test results if the autoverification operation is still valid. Thus, the system can automatically proceed if the changed configuration has no effect on the autoverification process, without operator intervention, thereby minimizing delays due to manual intervention.
[0039] One example of such a method 180 is depicted in FIG. 5. An autoverification operation based on a first laboratory configuration (set in operation 182) is applied to a first parameter in operation 184. As described elsewhere herein, the parameter may be data entered by an operator based on previously obtained test results, or the parameter may be the result of a test itself. Alternatively, the parameter may be entered by the operator based on other factors (operator experience or expectations, for example). Thereafter, a first output is generated in operation 186, and all relevant information related thereto is stored. The autoverification operation may continue, and the system may continue to process samples in operation 188 until it detects a change in the first laboratory configuration in operation 190, resulting in a second, updated laboratory configuration (set in operation 192). At this time, processing of samples ceases so that proper functioning of the autoverification operation based on the second, updated configuration may be verified. The autoverification operation based on this second, updated laboratory configuration is then applied to the first parameter in operation 194, and a second output is generated in operation 196. If a comparison between the first and second outputs shows no difference in operation 198, the autoverification operation is determined to be functioning properly (i.e., it is verified) in operation 200, and further processing of samples may proceed. If there is a difference, further processing may be prohibited and a notification may be sent to an operator in operation 202.
[0040] A change in lab configuration can be, e.g., the addition or removal of an instrument, a change in the assay menu of an instrument, a change in the model of an instrument, a change in the QC requirements of an instrument, a change in the performance of an assay, or a change in the acceptability criteria of a result, e.g., the validation range of a test, a delta check value, a critical limit value, or a sample type for a test. The system may detect the change by input from a user, or the system may detect the change automatically by monitoring the status and operation of instruments attached to or in communication with the system.
[0041] In one embodiment, for example, suppose that the autoverification operation for a test for the cation sodium (Na) includes the following three rules:
If the Na test result is within validation range, validate Na,
else hold Na.
If hemolysis is greater than 5, hold Na.
If the value of the anion gap (AGAP) is less than 1, hold Na,
Cl, CO2, and K.
This autoverification operation requires test results for Na, hemolysis, chloride (CI), bicarbonate (C02), and potassium (K). This autoverification operation can therefore be valid only in a lab where each of these tests is available. If, for example, the lab configuration changes so that hemolysis results are no longer available, the system automatically reevaluates the autoverification operation and can notify the user that the Na test cannot be autoverified (or even manually verified - since both require hemolysis results) under the current lab configuration. This helps prevent wasteful tests that cannot be verified. Similarly, in this example, the loss of the ability to test for calcium, a cation that is not used to calculate AGAP, would not affect the autoverification of Na, and Na testing could automatically continue without manual intervention. Furthermore, in this example, the first rule of the Na autoverification operation determines whether the Na test result is within a set validation range. This validation range is part of the first laboratory configuration. Suppose that in this first laboratory configuration, the validation range is set to be between 100 and 200 units. A first parameter to evaluate whether the autoverification operation performs as expected is a value of 101 units. Applying the autoverification operation to this parameter would give an expected outcome of 'validate Na'. If the set validation range were changed to a range of 120 to 200 units, the system would detect this change to a second configuration and automatically reevaluate all autoverification operations. In this case, reapplying the Na autoverification operation to the first parameter of 101 units would give a different outcome of 'hold Na', because the value of 101 is no longer within the rule's validation range. 
In this way, a user could be automatically notified that the performance of the autoverification operation for Na has changed and may no longer perform as intended due to the change in laboratory configuration.
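The worked Na example above can be sketched in code. This is a minimal illustration, not the patented implementation; the rule function, configuration dictionary, and key names are hypothetical, while the ranges (100-200 and 120-200 units), the test value of 101 units, and the 'validate Na'/'hold Na' outcomes come from the example itself.

```python
# Hypothetical sketch of re-evaluating an autoverification rule after a
# laboratory configuration change. Names and structure are illustrative;
# the patent does not specify an implementation.

def na_rule(na_value, config):
    """First rule of the Na operation: validate if the result is within
    the configured validation range, otherwise hold."""
    low, high = config["na_validation_range"]
    return "validate Na" if low <= na_value <= high else "hold Na"

first_config = {"na_validation_range": (100, 200)}   # first lab configuration
second_config = {"na_validation_range": (120, 200)}  # after the detected change

first_parameter = 101  # value chosen to exercise the rule's boundary

first_output = na_rule(first_parameter, first_config)
second_output = na_rule(first_parameter, second_config)

print(first_output)   # -> validate Na
print(second_output)  # -> hold Na

# A differing outcome signals that the operation's behavior changed with
# the configuration, so the user can be notified automatically.
if first_output != second_output:
    print(f"Na autoverification outcome changed: {first_output} -> {second_output}")
```

Comparing the two outputs for the same stored parameter is what lets the system flag, without any manual re-testing, that the Na operation may no longer perform as intended under the new configuration.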
[0042] Although described in terms of hardware, the technology described herein can be realized in hardware, software, or a combination of hardware and software. A typical combination of hardware and software can be a general purpose computer system with a computer program that, when loaded and executed, controls the computer system such that it carries out the methods described herein. In some embodiments, the computing device 102 (shown in FIG. 1) is a computer system.
[0043] The technology described herein also can be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
[0044] A group of functional modules that control the operation of the CPU and effectuate the operations of the technology as described above can be located in system memory (on the server or on a separate machine, as desired). An operating system directs the execution of low-level, basic system functions such as memory allocation, file management, and operation of mass storage devices. At a higher level, a control block, implemented as a series of stored instructions, responds to client-originated access requests by retrieving the user-specific profile and applying the one or more rules as described above.
[0045] In the embodiments described above, the software may be configured to run on any computing device or workstation such as a PC or PC-compatible machine, an Apple Macintosh, a Sun workstation, etc. In general, any device can be used as long as it is able to perform all of the functions and capabilities described herein. The particular type of computing device, whether a workstation or other system, is not central to the technology, nor is the configuration, location, or design of a database, which may be flat-file, relational, or object-oriented, and may include one or more physical and/or logical components.

[0001] Such a computing device may include a network interface continuously connected to the network, and thus support numerous geographically dispersed users and applications. In a typical implementation, the network interface and the other internal components of the servers intercommunicate over a main bi-directional bus. The main sequence of instructions effectuating the functions of the technology can reside on a mass-storage device (such as a hard disk or optical storage unit) as well as in a main system memory during operation. Execution of these instructions and effectuation of the functions of the technology is accomplished by a processing device, such as a central processing unit ("CPU").
[0002] The computing device typically includes at least some form of computer readable media, and such computer readable media can be used to store the computer program product (containing data instructions) thereon. Computer readable media includes any available media that can be accessed by the computing device. By way of example, computer readable media include computer readable storage media and computer readable communication media.
[0003] Computer readable storage media includes volatile and nonvolatile, removable and non-removable media implemented in any device configured to store information such as computer readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, random access memory, read only memory, electrically erasable programmable read only memory, flash memory or other memory technology, compact disc read only memory, digital versatile disks or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by the computing device 110. Computer readable storage media is a type of non-transitory storage media.
[0004] Computer readable communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, computer readable communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media. Combinations of any of the above are also included within the scope of computer readable media.
[0046] While there have been described herein what are to be considered exemplary and preferred embodiments of the present technology, other
modifications of the technology will become apparent to those skilled in the art from the teachings herein. The particular methods of manufacture and geometries disclosed herein are exemplary in nature and are not to be considered limiting. It is therefore desired to be secured in the appended claims all such modifications as fall within the spirit and scope of the technology. Accordingly, what is desired to be secured by Letters Patent is the technology as defined and differentiated in the following claims, and all equivalents.

Claims

What is claimed is:
1. A method of processing a clinical test result, the method comprising:
applying a first rule to the clinical test result to generate a first suggested action;
applying a second rule to the clinical test result to generate a second suggested action, wherein the second suggested action is associated with a weighted factor; and
automatically performing the second suggested action based at least in part on the weighted factor.
2. The method of claim 1, wherein the second suggested action comprises at least one of: (i) validating the clinical test result; (ii) holding the clinical test result; (iii) rerunning a clinical test; (iv) diluting a clinical test sample; and (v) cancelling the test.
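The weighted-factor reconciliation recited in claims 1 and 2 can be sketched as follows. This is a minimal illustration under stated assumptions: the claims do not fix a particular weighting scheme, so the action names, numeric weights, and the "highest weight wins" selection rule here are all hypothetical.

```python
# Hypothetical sketch of reconciling conflicting suggested actions using
# weighted factors (claims 1-2). The weighting scheme is illustrative;
# the claims only require that an action be performed based at least in
# part on its weighted factor.

def reconcile(suggestions):
    """Return the suggested action carrying the highest weighted factor.

    `suggestions` maps an action name (e.g. 'validate', 'hold', 'rerun',
    'dilute', 'cancel') to its weighted factor.
    """
    return max(suggestions, key=suggestions.get)

# A first rule suggests validating the result; a second rule suggests
# holding it with a higher weight, so the hold is performed automatically.
suggestions = {"validate": 0.4, "hold": 0.9}
print(reconcile(suggestions))  # -> hold
```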
3. A method of evaluating an autoverification operation, the method comprising:
applying a first logic segment to a first parameter to generate a first logic decision; and
applying a second logic segment to a second parameter to generate a second logic decision, wherein applying the second logic segment requires applying the first logic segment to the first parameter.
4. The method of claim 3, wherein the first logic segment comprises a first part of a rule and the second logic segment comprises a second part of the rule.
5. The method of claim 3, wherein the autoverification operation comprises a single rule.
6. The method of claim 3, further comprising prompting an operator for the first parameter, and separately prompting the operator for the second parameter.
7. The method of claim 3, further comprising displaying the first logic segment and the first logic decision prior to applying the second logic segment.
8. A method of evaluating an autoverification operation, the method comprising:
receiving a first parameter to evaluate a first logic segment of the autoverification operation;
applying the autoverification operation to the first parameter to generate a first output;
receiving a second parameter to evaluate a second logic segment of the autoverification operation; and
applying the autoverification operation to the first parameter and the second parameter to generate a second output, wherein the step of receiving the second parameter occurs after the step of applying the autoverification operation to the first parameter to generate the first output.
9. The method of claim 8, wherein generating the second output requires the first output, as an input to the second logic segment.
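The incremental evaluation recited in claims 8 and 9, in which the first output feeds the second logic segment as an input, can be sketched as follows. The segment logic (a range check, then a hemolysis check) is borrowed from the Na example in the description for illustration only; the thresholds and function names are hypothetical.

```python
# Hypothetical sketch of step-by-step evaluation of an autoverification
# operation (claims 8-9). The second logic segment cannot be evaluated
# until the first segment's output is available as its input.

def first_segment(na_value):
    """First logic segment: range check on the Na result (illustrative)."""
    return "in range" if 100 <= na_value <= 200 else "out of range"

def second_segment(first_output, hemolysis):
    """Second logic segment: requires the first segment's output."""
    if first_output == "out of range" or hemolysis > 5:
        return "hold Na"
    return "validate Na"

# Parameters are supplied one at a time; each intermediate output can be
# shown to the operator before the next parameter is requested.
first_parameter = 150
first_output = first_segment(first_parameter)
print(first_output)  # -> in range

second_parameter = 2  # hemolysis value, supplied only after the first output
second_output = second_segment(first_output, second_parameter)
print(second_output)  # -> validate Na
```

Because the intermediate output is produced and surfaced before the second parameter is requested, an operator can follow the rule's logic one segment at a time rather than seeing only the final outcome.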
10. A method of evaluating an autoverification operation, the method comprising:
applying, to a first parameter, the autoverification operation based on a first laboratory configuration to generate a first output;
detecting a change in the first laboratory configuration, the change resulting in a second laboratory configuration;
applying, to the first parameter, the autoverification operation based on the second laboratory configuration to generate a second output; and
comparing the first output to the second output.
11. The method of claim 10, further comprising sending a notification based on a difference between the first output and the second output.
12. The method of claim 10, wherein the change in the first laboratory configuration is automatically detected.
13. The method of claim 10, wherein application of the autoverification operation based on the second laboratory configuration occurs automatically.
14. A method of evaluating a rule of an autoverification operation, the method comprising:
receiving a first parameter to evaluate a first logic segment of the rule;
applying the autoverification operation to the first parameter to generate a first output;
receiving a second parameter to evaluate a second logic segment of the rule; and
applying the autoverification operation to the second parameter to generate a second output, wherein the step of applying the autoverification operation to the second parameter includes applying the first output as input to the second logic segment.
15. The method of claim 14, wherein the first output is an intermediate output of the rule.
16. The method of claim 15, wherein the rule cannot logically proceed past the first output in the absence of the second parameter.
17. The method of claim 14, wherein the second output is an outcome of the autoverification operation.
18. The method of claim 14, wherein the method includes receiving parameters to evaluate each logic segment of the rule.
19. The method of claim 18, wherein the parameters are applied in a step by step progression through the rule.
PCT/US2012/048151 2011-07-25 2012-07-25 Conflict reconciliation, incremental representation, and lab chance retesting procedures for autoverification operations WO2013016429A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
BR112014001822A BR112014001822A2 (en) 2011-07-25 2012-07-25 conflict reconciliation, incremental representation, and laboratory variation testing procedures for self-checking operations

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161511473P 2011-07-25 2011-07-25
US61/511,473 2011-07-25

Publications (3)

Publication Number Publication Date
WO2013016429A2 true WO2013016429A2 (en) 2013-01-31
WO2013016429A8 WO2013016429A8 (en) 2013-04-04
WO2013016429A3 WO2013016429A3 (en) 2014-01-30

Family

ID=46727573

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/048151 WO2013016429A2 (en) 2011-07-25 2012-07-25 Conflict reconciliation, incremental representation, and lab chance retesting procedures for autoverification operations

Country Status (2)

Country Link
BR (1) BR112014001822A2 (en)
WO (1) WO2013016429A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111406294A (en) * 2017-10-26 2020-07-10 拜克门寇尔特公司 Automatically generating rules for laboratory instruments

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JPH0619715B2 (en) * 1987-12-03 1994-03-16 シャープ株式会社 Question answering device
US7158890B2 (en) * 2003-03-19 2007-01-02 Siemens Medical Solutions Health Services Corporation System and method for processing information related to laboratory tests and results
US8868353B2 (en) * 2007-02-02 2014-10-21 Beckman Coulter, Inc. System and method for testing autoverification rules

Non-Patent Citations (1)

Title
None

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN111406294A (en) * 2017-10-26 2020-07-10 拜克门寇尔特公司 Automatically generating rules for laboratory instruments
CN111406294B (en) * 2017-10-26 2024-02-13 拜克门寇尔特公司 Automatically generating rules for laboratory instruments

Also Published As

Publication number Publication date
WO2013016429A8 (en) 2013-04-04
BR112014001822A2 (en) 2017-02-21
WO2013016429A3 (en) 2014-01-30

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 12750898; Country of ref document: EP; Kind code of ref document: A2
WWE Wipo information: entry into national phase
    Ref document number: MX/A/2014/000792; Country of ref document: MX
ENP Entry into the national phase
    Ref document number: 2014522967; Country of ref document: JP; Kind code of ref document: A
REG Reference to national code
    Ref country code: BR; Ref legal event code: B01A; Ref document number: 112014001822; Country of ref document: BR
NENP Non-entry into the national phase
    Ref country code: JP
122 Ep: pct application non-entry in european phase
    Ref document number: 12750898; Country of ref document: EP; Kind code of ref document: A2
ENP Entry into the national phase
    Ref document number: 112014001822; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20140124