WO2006075133A1 - Physiological data classification - Google Patents

Physiological data classification

Info

Publication number
WO2006075133A1
WO2006075133A1 (PCT/GB2006/000014)
Authority
WO
WIPO (PCT)
Prior art keywords
probability density
density function
physiological
physiological data
data
Prior art date
Application number
PCT/GB2006/000014
Other languages
French (fr)
Inventor
Iead Rezek
Stephen John Roberts
Elilini Siva
Original Assignee
Isis Innovation Limited
Priority date
Filing date
Publication date
Application filed by Isis Innovation Limited filed Critical Isis Innovation Limited
Publication of WO2006075133A1 publication Critical patent/WO2006075133A1/en


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/24 Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof
    • A61B5/316 Modalities, i.e. specific diagnostic methods
    • A61B5/369 Electroencephalography [EEG]
    • A61B5/372 Analysis of electroencephalograms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08 Feature extraction
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Definitions

  • the present invention relates to the analysis of physiological data measured from a human or animal subject.
  • the analysis is performed to classify the physiological state of the human or animal subject.
  • An example of the type of physiological data which may be analysed is EEG data measured by performing an electroencephalogram. In this case, the cognitive state of the subject may be classified.
  • Hypnosis, besides analgesia and areflexia, forms an integral part of anaesthesia.
  • the bispectral index (BIS index for short) monitor has probably been commercially available for the longest and has been used in over 200 studies on a range of agents.
  • the chief feature of the BIS index is the bispectrum. It is capable of quantifying interactions in the EEG data that standard spectral estimation cannot, since it extracts higher order statistics, in particular of 3rd order.
  • Bispectral estimation is a particular measure of the class of higher order spectral estimators which measure a probability distribution's third-moment, or skewness.
  • a review article on bispectral estimation is Raghuveer & Nikias, "Bispectral Estimation: A Parametric Approach", IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(4):1213-1230, 1985.
  • the autoregressive model underlying bispectral estimators is very similar to our model: its standard autoregressive part is identical, in that the current data sample is assumed to be the result of a linearly weighted mixture of past samples and some added noise component.
  • the crucial difference between a standard autoregressive model and that underlying bispectral estimators is the residual error or noise component.
  • the residual noise is no longer assumed to be Gaussian, merely characterised by its mean and variance, but non-Gaussian, possibly with some skewness, which is measured by bispectral estimation. Other moments can also be used, such as kurtosis for trispectral estimation.
  • the first is its large sample assumption required for a stable estimation of the third order moments from data.
  • the second is the assumption of stationarity within blocks of data whose size is chosen ad hoc.
  • as to the first assumption, it should be noted that triple products, as they arise in third order moment estimation, are naturally more sensitive to outliers, simply because estimation involves raising to a higher power. By simply using more data, the impact of this assumption can be minimised.
  • the block-stationarity assumption is, in our opinion, much harder to justify. For one, there is no a priori knowledge of the block length. Further, it is a very rigid assumption which is supposedly valid for all varieties of clinical conditions and across all patient groups.
  • a method of analysing physiological data measured from a human or animal subject to classify the physiological state of the human or animal subject comprising the computer-implemented steps of: performing an autoregressive analysis of the physiological data which derives an autoregressive model fitting the physiological data and derives the probability density function of the residual error between the physiological data and the derived autoregressive model, the probability density function being represented by parameters of a weighted sum of functions in the exponential family; calculating a measure describing the shape of the probability density function; classifying the physiological state of the human or animal subject on the basis of at least the measure describing the shape of the probability density function.
  • This method fits the physiological data with an autoregressive model which is one of the two most frequently used models in relation to bispectral estimation.
  • the present method derives the full probability density function of the residual error between the physiological data and the derived autoregressive model.
  • the method then calculates a measure of the shape of the probability density function, for example the third order and higher order moments and/or the entropy. Given the exact knowledge of the actual distribution this may be done analytically and this is a particular advantage of the present invention.
  • the full knowledge of the probability distributions of all model parameters means that the significance of measures derived from them can be properly assessed. This is in contrast to standard higher order statistical methods which derive such measures on assumed statistical properties, unaware of whether or not these assumptions hold.
  • the present method differs from the standard approach to modelling of first considering the physics which gave rise to the observations and then describing them with a suitable model. In this standard approach, variations of the model from the real world are typically captured by the residuals which are used to decide on the best fitting model, but otherwise ignored. In contrast, the present method uses them as the basis for classification.
  • the present invention deals with the above-mentioned shortcomings of the known bispectral estimation techniques as follows.
  • the present method overcomes the shortcomings of using large sample assumptions and the stationarity of the blocks of data. It does so by not directly estimating the third or higher order moments, or other measures of the shape of the probability distribution, but instead by fitting a mixture distribution to the residual error. This is a flexible approach which improves the estimation accuracy. It allows the third or higher order moments, or other measures of the shape of the probability distribution, to be computed analytically rather than being repeatedly sampled. Similarly the present method is not forced to assume constant phase.
  • the present method is relatively quick to perform.
  • the model estimator used in the present invention requires less data for estimation compared to a bispectral estimator and avoids lengthy averaging.
  • common commercially available systems using the bispectral estimator have a lag of the order of 5s-10s.
  • the present method may be performed with a lag of the order of 1s-2s. This increase in speed enhances the possibility of timely correlating the output classification with actual changes in state of consciousness.
  • the present method also has the advantage that the probability density function of the residual error between the physiological data and the derived autoregressive model may be derived using Bayesian techniques.
  • the present method allows the use of statistical computations which are fully reversible in the sense that every step may be backtracked to demonstrate the validity of the measure. Rather than being a "black box" method, the possibility of reversing all the computations allows for clinical validation.
  • a method of analysing physiological data measured from a human or animal subject to classify the physiological state of the human or animal subject comprising the computer-implemented steps of: performing an analysis of the physiological data which derives a model fitting the physiological data and derives the probability density function of the residual error between the physiological data and the derived model, the probability density function being represented by parameters of a weighted sum of functions in the exponential family; calculating a measure describing the shape of the probability density function; classifying the physiological state of the human or animal subject on the basis of at least the measure describing the shape of the probability density function.
  • Fig. 1 is a flow chart of a method of measuring and analysing physiological data
  • Figs. 2(a) and 2(b) are flow diagrams illustrating the usage of data in the known direct estimation technique and in the present method, respectively.
  • FIG. 1 A method of implementing the present invention is illustrated in the flow chart of Fig. 1.
  • the overall method including measurement and analysis of physiological data is shown in Fig. 1 but as described below the analysis is performed by a computer system running a computer program.
  • Step 1 measurements are performed on a human or animal subject to obtain physiological data.
  • physiological data This may be any type of physiological data obtained using appropriate techniques which are known in themselves.
  • One possible type of physiological data is EEG obtained by performing an electroencephalogram.
  • Steps 2 to 7 the physiological data is analysed. Steps 2 to 7 are conveniently performed by a computer program executed on a computer system 8 illustrated schematically by a dotted line in Fig. 1.
  • the computer system 8 may be any type of computer system but is typically a conventional personal computer.
  • the computer program may be written in any suitable programming language.
  • the computer program may be stored on a computer-readable storage medium, which may be of any type, for example: a recording medium which is insertable into a drive of the computing system and which may store information magnetically, optically or opto-magnetically; a fixed recording medium of the computer system such as a hard drive; or a computer memory.
  • the computer system 8 could be a system dedicated to the present analysis, for example associated with the system used to perform the measurements in Step 1.
  • the computer system 8 could be optimised for performing the analysis, for example by running various processes in parallel.
  • Step 2 the physiological data is input into the computer system 8.
  • Step 3 the input physiological data is segmented into blocks.
  • the length of blocks is typically 1 second in the case of EEG data sampled at 128Hz.
  • Steps 4 to 7 are performed on each of the blocks.
  • classification of the physiological state of the subject is performed on the basis of each of the blocks individually.
  • a schematic of the estimation steps is shown for the known direct estimation techniques in Fig. 2(a) and for the present method in Fig. 2(b), showing the difference and how there is a significant increase in data utilisation.
  • Step 4 an autoregressive analysis of the physiological data is performed to derive an autoregressive model fitting the physiological data and to derive the probability density function of the residual error between the physiological data and the derived autoregressive model.
  • Step 4 is performed as follows. The mathematics used in the analysis of Step 4 will be explained first.
  • the intention is to explain the data in terms of a linear autoregressive model and temporally correlated residuals which follow an arbitrary distribution.
  • the data sample, y_t, is a weighted combination of past samples, y_{t-1}, y_{t-2}, ..., y_{t-p}, with residual error e_t
  • y_t - x_t^T a = e_t. (1)
  • y_t is the observation at time instant t
  • x_t = [y_{t-1}, y_{t-2}, ..., y_{t-p}] is the vector containing the p past observations and e_t is the autoregressive driving noise
  • Markov models are also possible, such as hidden semi-Markov models. Such alternative Markov models are desirable whenever the ergodicity assumption in standard hidden Markov models is too strong and must be relaxed, i.e. when changes in the residuals are only rare or irregular.
  • the probability of drawing component k at time t is thus dependent upon the component l drawn at time t-1. Since the component labels form a discrete set, the conditional probability is a K-dimensional Multinomial distribution with parameters π_lk
  • the probability of the first component indicator, s_0, is parameterised by a K-dimensional Multinomial distribution with parameters π_0k
  • the estimation of the autoregressive model is done in a Bayesian framework, that is there are estimated posterior distributions of all model parameters, consisting of autoregressive coefficients and the means, variances and weights of the residual mixture distributions (although other estimation frameworks would be equally applicable, such as the Maximum Likelihood (ML) or Maximum Aposteriori (MAP) framework).
  • ML Maximum Likelihood
  • MAP Maximum Aposteriori
  • the estimation technique per se is described in Penny & Roberts, "Variational Bayes for Generalised Autoregressive Models", IEEE Transactions on Signal Processing, 50(9):2245-2257, 2002, the contents of which are incorporated herein by reference. To summarise, the estimation involves recursive evaluation of update equations for these parameters until a measure for the posterior probability of the data no longer changes significantly.
  • the densities we choose are either Gaussian, Gamma or Dirichlet and are defined in Bernardo & Smith, "Bayesian Theory", John Wiley and Sons, 1994.
  • π_0 we use a K-dimensional Dirichlet density
  • the model estimation is done in the variational Bayesian learning paradigm, as disclosed in Jaakkola "Tutorial on Variational Approximation Methods", In Opper and Saad, editors, “Advanced Mean Field Methods: Theory and Practice", MIT Press, 2000 and also in Jordan, Ghahramani, Jaakkola, and Saul, "An Introduction to Variational Methods for Graphical Models", In Jordan, editor, “Learning in Graphical Models” Kluwer Academic Press, 1997, the contents of both of which are incorporated herein by reference.
  • the aim of variational Bayesian learning is to minimise the Kullback-Leibler (KL) divergence between two distributions, say q and p
  • p(S, θ, Y) is the exact posterior probability density over the model parameters, θ, and the component labels, S, and the data, Y.
  • the exact posterior is intractable and thus approximated by a simpler but tractable density, q(θ, S | Y)
  • update formula (14) refers to the entire set of component labels, S. Since q(S) has a chain-like structure, the update for the individual component labels, s_t, is obtained using the well-known Baum-Welch, or forward/backward, iterations.
  • the update equations derived by the minimisation of the KL-divergence assume a predetermined model order, i.e. the number of components of the residual density and the number of autoregressive model coefficients are fixed. To perform model order estimation we assume that models of different order are independent from one another.
  • the posterior model probability can be computed as
  • F_a is the KL divergence of the entire model for a fixed model size, i.e.
  • the terms are the estimates of the initial state and state-transition probabilities of a hidden Markov chain, whilst the remaining term is the equivalent of the likelihood of the observation given the state.
  • the forward/backward iterations then provide the single and joint posterior distributions for the residual component labels
  • the autoregressive model is derived.
  • the probability density function of the residual error between the physiological data and the derived autoregressive model is derived as follows.
  • the probability density function is represented by parameters of a weighted sum of Gaussian functions.
  • the density of the residuals e_t is a K-component Gaussian mixture distribution
  • Step 5 there is derived a measure of the shape of the probability density function of the residual error between the physiological data and the derived autoregressive model.
  • Two different measures are the third or higher order moments of the probability density function or the entropy of the probability density function. The derivation and use of both of these measures will now be described but it is possible to use only a single measure, or else more measures.
  • entropy the dynamic range of the moments of a distribution is likely to have serious effects on the numerical stability of any classifier. Further, four moments describe only four shape characteristics of the density.
  • Shannon's entropy may be considered a descriptor of the entire probability density. Because of the summation term inside the logarithm's argument, however, the entropy of mixture densities can only be approximated. Therefore the value of the entropy is calculated empirically, through a sampling procedure. For example, 1000 samples are drawn from the mixture density (a worked sketch of such a sampling estimate is given at the end of this section).
  • the entropy value is a function of the sample size.
  • the decision as to its size was based upon the convergence behaviour of equation (39) as the sample size was increased.
  • the calculated measure of the shape of the probability distribution function of the residual error is the relative entropies between components of the probability density function.
  • Step 6 is illustrated as being performed in parallel with Step 5 as it uses the results of Step 4. Of course, in practice Steps 5 and 6 may be performed in series in either order. It is also to be noted that Step 6 is optional.
  • Step 7 the physiological state of the human or animal subject is classified. This may be discrete classification as one of a plurality of predetermined physiological states.
  • the classification may be continuous, that is by providing a numerical measure on a continuous scale between a plurality of predetermined physiological states, for example as performed by a regressor.
  • the numerical measure is thus a measure of the similarity of the physiological state of the subject to the predetermined physiological states.
  • the measure may for example describe the probability of the subject being in respective ones of the predetermined physiological states.
  • the measure classifies the physiological state of the subject with respect to the predetermined physiological states, but does not go as far as making a discrete classification of one particular predetermined physiological state.
  • Such a type of measure is often preferred by clinicians because it can provide more information in practical terms, for example by giving the clinician a better feel for the likelihood of the subject being in a given physiological state or by giving a feel for changes in the physiological state of the subject.
  • the classification in Step 7 is done on the basis of the measure describing the shape of the probability density function, for example either one or both of the third and higher order moments and the entropy.
  • the classification is also done on the basis of the derived power spectrum in combination with the measure describing the shape of the probability density function.
  • the classification in Step 7 is performed using a feature recognition system operating on the measure describing the shape of the probability density function and optionally also the power spectrum as features.
  • the feature recognition system is trained using feature data derived using Steps 2 to 6 from a plurality of sets of physiological data measured in Step 1 from a human or animal subject having different, known physiological states.
  • Knowledge of the physiological state of the subject used in the training phase requires the input of a clinician or the use of clinically accepted data.
  • the training phase effectively stores relevant information about the known physiological states in the feature recognition system.
  • the feature recognition system may be operated in a recognition phase to perform the classification of the physiological state of a human or animal subject as one of a plurality of known physiological states in Step 7.
  • the recognition phase effectively uses the relevant information about the known physiological states obtained in the training phase. Thus it allows the physiological data from a patient in an initially unknown physiological states to be classified as one of the known physiological states.
  • the feature recognition system used in Step 7 may be of the discrete type, such as a neural network, or of the continuous type, such as a regressor. Discrete classification leads to a crisp classification into any of the known physiological states. Alternatively, quantification of the patient state can be on a continuous scale between two extreme states, such as awake and anaesthetised.
  • the physiological state classified in Step 7 may be displayed on a display of the computer system 8 or output from the computer system 8 in some other way.
  • the output physiological state may be used as the basis for a clinical decision in respect of the subject, for example to select a treatment.
  • the validity of the classification of the present method may be understood as follows. While the autoregressive model is not that underlying the commercial BIS measure, which uses a harmonic model, it is mathematically impossible to distinguish between the two. In other words, it is essentially impossible to distinguish between the harmonic and autoregressive model on the basis of probability distributions alone. Whilst the autoregressive model encodes deviation from a Gaussian distribution in its residual error term (noise term), the harmonic model has no residual error term and encodes deviation from a Gaussian distribution as originating directly from the data. However, the autoregressive model is simply a linear function of the noise term. Linear functions are not capable of changing the skewness of the input distribution, they can merely change the distribution's centre and scale.
  • the input probability distribution must be skewed from the outset for the resulting observations to have a skewed distribution also.
  • the harmonic model simply abandons the concept of a residual having a skewed distribution and imposes the skewness directly on the data.
  • Gaussian distribution is introduced anywhere else in the model, for instance by allowing the model parameters to have non-Gaussian distributions themselves.
  • the parameters of both models are basically fitted using least squares estimators, so the parameters' distributions are implicitly Gaussian.
  • model complexity i.e. the number of parameters required to explain the observations.
  • Standard autoregressive models require 2 coefficients to produce an almost harmonic oscillation, and thus 3 in total to reproduce a signal having the same power-spectral density as that resulting from the harmonic model.
  • the present method using an autoregressive model to assess the probability distributional characteristics of anaesthesia EEG data allows one to draw conclusions as to the properties of the bispectral part of the bispectral index.
  • Higher order statistical properties in the autoregressive model imply the same for the harmonic model, so the present method is extracting the same information from the physiological data.
  • the present method does this in a more efficient and robust manner, because the spectrum derived from the autoregressive model is much smoother and well-known for its robustness against noise, unlike the periodogram estimator used in the BIS index.
  • the present method does not require estimation of third order moments from the data. The most it requires is sample estimations of weighted mean and variance terms. Thus, it is not forced to assume a constant phase for each block of the data segment, which can then be averaged. This makes our estimation considerably faster and more robust to noise. All moments of the mixture distribution can be calculated analytically and to a much higher degree of accuracy.
  • the mixture model for the residual noise can approximate any distribution to arbitrary accuracy. Further, since only Gaussian densities are used for the error term, only first and second moment estimation is required. Any higher moment can be computed by simply plugging in the first moments in a moment integral. This adds significantly to stability and reduces the amount of data needed for estimation. In turn, the algorithm becomes faster.
  • the estimation method is Bayesian which uses prior information. One can think of the prior as encoding "previously seen data" augmenting the current data set.
  • the first data set was EEG data obtained from the Dipartimento Area Critica Medico Chirurgica, Sez. di Anestesiologia e Rianimazione, of the University of
  • anaesthesia consisted of i.v. administration of remifentanil and propofol (15mg/kg/h and 1.5mg/kg respectively) followed by an infusion of 10 mg/kg/h, pancuronium 0.1 mg/kg.
  • Remifentanil and propofol were constantly monitored and varied according to concurrently monitored BIS values and the anaesthetist's judgement. Patients requiring an adjustment of more than 10% from the initial infusion rates were excluded from the study. Thus, a total of 35 patients were excluded from the original group of 235 patients.
  • EEG data was recorded from 2 frontal electrodes placed at either side of the outer malar bone (At1 and At2) with reference and ground electrodes placed at Fpz and Fp1, respectively.
  • the original recordings, which had an average length of 25 minutes, were heavily contaminated. Corrupted sections led to failures in the model initialisation and had to be excluded from the analysis.
  • the resulting recording lengths were on average 8 minutes long (with the shortest duration being 5 minutes and the longest being 13 minutes).
  • the second and third sets of experiment data were obtained from the
  • the EEG signal was recorded from the forehead to the left mastoid (with the right mastoid as common) using an 82 dB preamplifier (with a bandwidth of 0.5-400 Hz, a 1st order high-pass and 3rd order low-pass Butterworth pre-filter) and digitised to 12-bit accuracy at a 1 kHz sampling rate. Post digitising, the signal was down-sampled to 250 Hz after low-pass removal of frequencies above a 100 Hz cut-off using a finite impulse response filter (47th order).
  • the second data set was obtained from ten patients who, pre-medicated with 10 mg morphine and 0.04 mg atropine, underwent anaesthetic sedation through inhalation of desflurane. The muscle relaxant vecuronium was also administered by infusion to maintain neuro-muscular paralysis throughout.
  • the sedation procedure consisted of 3 consecutive applications of various desflurane dosages (1.5, 3, and 6 %). The sequence of concentrations was selected randomly in order to limit any carryover effects from one concentration to another.
  • the end-expiratory concentration of desflurane, N 2 O and CO 2 was measured (using a calibrated Datex Ultima gas analyser) to determine whether the patient was in a low, medium or deep state of anaesthesia.
  • the EEG was recorded for 10 minutes while the desflurane concentration was near constant (monitored by the gas analyser). However, only the final 5 minutes of each recording period were used for experimental analysis.
  • the third data set was obtained from five patients who were pre-medicated with 10 mg of morphine and 0.04 mg of atropine, followed by induction of anaesthesia with thiopentone (2-4 mg/kg), and who underwent anaesthetic sedation through intravenous infusion of propofol. Seven to ten minutes after induction with thiopentone, propofol was infused in five equal 10 minute steps, starting with concentrations of 40 mg/kg/min and ending with 200 mg/kg/min. The propofol blood concentration was measured by taking venous blood samples from the arm contralateral to the infused arm. The propofol blood concentration was used as the indicator of the patient's level of anaesthesia (low, three distinct medium levels and high). These labels were also the classification given to the EEG which was recorded concurrently with the administration. The full 11 minute long EEG recording periods, during which the agent concentration was near constant, were used for subsequent analysis.
  • Steps 2 to 6 of the method of Fig. 1 were performed and the performance of recognition by the feature recognition system used in Step 7 was examined.
  • the classification rates and confusion matrices have been calculated and are presented here as the cross-validation average.
  • Classification standard deviation, which is also reported in the results, is the sample estimate.
  • All experiments here are the results of 10-fold cross-validation experiments using data from all subjects.
  • Table 1 It is evident from the classification rates that for all data sets spectral characteristics are significantly affected by the anaesthetic agents. The variability of the classification rate, however, is high throughout, but particularly elevated for Desflurane. For this agent in particular, the confusion matrix is disappointing, in that many more data points which should be classified as wake (bottom row in the matrix) are classified as anaesthetised (first column of bottom row in the matrix).
  • Table 1: [classification rates and confusion matrices; table not reproduced]
  • the method may be applied to EMG data measured by performing an electromyograph.
  • the classified physiological state may be the state of muscle activity, the state of muscle contractability and/or the state of muscle degeneration of the human or animal subject.
  • the method may be applied to image data measured by performing an imaging technique, including but not exclusively magnetic resonance imaging (MRI), computed tomography (CT) imaging, X-ray imaging and ultrasound imaging.
  • MRI magnetic resonance imaging
  • CT computed tomography
  • the classified physiological state may be any state apparent in the derived image.
  • the method may be applied to ECG data measured by performing an electrocardiogram, blood pressure data or blood oxygenation data.
  • the classification may be performed to monitor changes in the physiological state which occur over time either spontaneously or evoked by an external stimulus.
  • the method may be used to monitor anaesthesia, consciousness, sleep, cerebral and muscular ischemia, cerebral intoxication, evoked potential analysis, cognitive state evaluation, neuropathology, or muscle tremor.
  • the specific method described above assumes that the physiological data is 1- dimensional data.
  • the method may however be generalised to 2-dimensional or 3- dimensional data, as for example in the case of image data.
  • the probability density function of the residual error between the physiological data and the derived autoregressive model is represented by parameters of a weighted sum of Gaussian functions, which is preferred as such a sum is known to be capable of approximating any kind of probability distribution function and hence is powerful and robust.
  • any functions in the exponential family may be used as an alternative to Gaussian functions.
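The sampling-based entropy estimate referred to above can be illustrated with a short, purely illustrative sketch that is not part of the patent text (the mixture parameters are invented): samples are drawn from the fitted residual mixture, the negative log-density is averaged, and the estimate is checked for convergence as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(3)

def mixture_logpdf(x, w, mu, var):
    """Log-density of a 1-D Gaussian mixture at the points x."""
    x = np.asarray(x)[:, None]
    comp = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var))
    return np.log(np.exp(comp) @ w)

def sampled_entropy(w, mu, var, n_samples=1000):
    """Monte Carlo estimate of Shannon entropy: -E[log p(x)]."""
    w, mu, var = map(np.asarray, (w, mu, var))
    labels = rng.choice(len(w), size=n_samples, p=w)   # pick components
    x = rng.normal(mu[labels], np.sqrt(var[labels]))   # draw from them
    return -np.mean(mixture_logpdf(x, w, mu, var))

w, mu, var = [0.7, 0.3], [-0.5, 1.2], [0.4, 0.9]       # illustrative residual mixture
for n in (250, 1000, 4000):                            # convergence with sample size
    print(n, sampled_entropy(w, mu, var, n))
```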

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychiatry (AREA)
  • Surgery (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Psychology (AREA)
  • Evolutionary Computation (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Fuzzy Systems (AREA)
  • Complex Calculations (AREA)

Abstract

Physiological data measured from a human or animal subject is analysed to classify the physiological state of the human or animal subject. The following computer-implemented steps are performed. An autoregressive analysis of the physiological data is performed. This derives an autoregressive model fitting the physiological data and derives the probability density function of the residual error between the physiological data and the derived autoregressive model. The probability density function is represented by parameters of a weighted sum of Gaussian functions or other functions in the exponential family. A measure describing the shape of the probability density function is calculated. The physiological state of the human or animal subject is classified, for example as one of a plurality of predetermined physiological states, on the basis of at least the measure describing the shape of the probability density function. The analysis has the advantages of being fast and robust.

Description

Physiological Data Classification
The present invention relates to the analysis of physiological data measured from a human or animal subject. The analysis is performed to classify the physiological state of the human or animal subject. An example of the type of physiological data which may be analysed is EEG data measured by performing an electroencephalogram. In this case, the cognitive state of the subject may be classified.
Hypnosis, besides analgesia and areflexia, forms an integral part of anaesthesia. To assess it, there has been increasing use of analysis of the electroencephalogram using various statistical methods, such as spectral entropy and the bispectral index. The bispectral index (BIS index for short) monitor has probably been commercially available for the longest and has been used in over 200 studies on a range of agents. The chief feature of the BIS index is the bispectrum. It is capable of quantifying interactions in the EEG data that standard spectral estimation cannot, since it extracts higher order statistics, in particular of 3rd order.
Bispectral estimation is a particular measure of the class of higher order spectral estimators which measure a probability distribution's third moment, or skewness. A review article on bispectral estimation is Raghuveer & Nikias, "Bispectral Estimation: A Parametric Approach", IEEE Transactions on Acoustics, Speech, and Signal Processing, 33(4):1213-1230, 1985. Commonly, there are two underlying models assumed to generate the skewed probability distribution, the harmonic model and the autoregressive model.
In the harmonic model, the signal is the result of a superposition of three sinusoidal components, y_t = sin(f_1 + φ_1) + sin(f_2 + φ_2) + sin(f_3 + φ_3), each with its own frequency, f_1, f_2, f_3, where f_3 = f_1 + f_2, and phase, φ_1, φ_2, φ_3. The only random variables which give rise to probability distributions are the phases. Phases φ_1, φ_2 are always assumed to be independently and uniformly distributed. In the case of quadratic phase coupling, the third phase φ_3 is not independently drawn, but φ_3 = φ_1 + φ_2. Otherwise, it is drawn independently from a uniform distribution. The non-linear nature of the sine function and the dependency of one of the phases on the other two gives rise to skewed probability distributions. In the absence of phase coupling, φ_3 ≠ φ_1 + φ_2, the signal's probability density is symmetric and its skewness therefore zero.
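The effect of quadratic phase coupling can be illustrated numerically. The following sketch is purely illustrative and not part of the patent text; it estimates the bispectrum at the bifrequency (f_1, f_2) by averaging FFT triple products over segments, with and without coupling. With coupling the phase φ_1 + φ_2 − φ_3 is identically zero, so the triple products add coherently; without coupling they average towards zero.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_seg, seg_len = 256, 200, 256           # sampling rate, segments, samples/segment
f1, f2 = 24.0, 33.0                          # Hz; the third component sits at f1 + f2
k1, k2 = int(f1 * seg_len / fs), int(f2 * seg_len / fs)
t = np.arange(seg_len) / fs

def bispectrum_at(coupled):
    """Average the triple product X(f1) X(f2) X*(f1+f2) over independent segments."""
    acc = 0.0 + 0.0j
    for _ in range(n_seg):
        p1, p2 = rng.uniform(0, 2 * np.pi, 2)
        p3 = (p1 + p2) if coupled else rng.uniform(0, 2 * np.pi)
        y = (np.sin(2 * np.pi * f1 * t + p1)
             + np.sin(2 * np.pi * f2 * t + p2)
             + np.sin(2 * np.pi * (f1 + f2) * t + p3))
        Y = np.fft.fft(y)
        acc += Y[k1] * Y[k2] * np.conj(Y[k1 + k2])
    return abs(acc) / n_seg

print("coupled:  ", bispectrum_at(True))     # large: the triple-product phase is constant
print("uncoupled:", bispectrum_at(False))    # much smaller: random phases average out
```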
The autoregressive model underlying bispectral estimators is very similar to our model: its standard autoregressive part is identical, in that the current data sample is assumed to be the result of a linearly weighted mixture of past samples and some added noise component. The crucial difference between a standard autoregressive model and that underlying bispectral estimators is the residual error or noise component. The residual noise is no longer assumed to be Gaussian, merely characterised by its mean and variance, but non-Gaussian, possibly with some skewness, which is measured by bispectral estimation. Other moments can also be used, such as kurtosis for trispectral estimation.
There are two major drawbacks of bispectral estimators as they are most commonly used. The first is the large sample assumption required for a stable estimation of the third order moments from data. The second is the assumption of stationarity within blocks of data whose size is chosen ad hoc. As to the first assumption, it should be noted that triple products, as they arise in third order moment estimation, are naturally more sensitive to outliers, simply because estimation involves raising to a higher power. By simply using more data, the impact of this assumption can be minimised. However, the block-stationarity assumption is, in our opinion, much harder to justify. For one, there is no a priori knowledge of the block length. Further, it is a very rigid assumption which is supposedly valid for all varieties of clinical conditions and across all patient groups.
It would be desirable to analyse EEG data and other physiological data in a way which overcomes the drawbacks of the known bispectral estimation techniques as summarised above.
According to the present invention, there is provided a method of analysing physiological data measured from a human or animal subject to classify the physiological state of the human or animal subject, the method comprising the computer-implemented steps of: performing an autoregressive analysis of the physiological data which derives an autoregressive model fitting the physiological data and derives the probability density function of the residual error between the physiological data and the derived autoregressive model, the probability density function being represented by parameters of a weighted sum of functions in the exponential family; calculating a measure describing the shape of the probability density function; classifying the physiological state of the human or animal subject on the basis of at least the measure describing the shape of the probability density function.
This method fits the physiological data with an autoregressive model which is one of the two most frequently used models in relation to bispectral estimation.
However, unlike the old formulation of the bispectrum which estimates only the third or fourth order statistical properties of an otherwise abstract probability distribution of the model, the present method derives the full probability density function of the residual error between the physiological data and the derived autoregressive model. The method then calculates a measure of the shape of the probability density function, for example the third order and higher order moments and/or the entropy. Given the exact knowledge of the actual distribution this may be done analytically and this is a particular advantage of the present invention. Furthermore, the full knowledge of the probability distributions of all model parameters means that the significance of measures derived from them can be properly assessed. This is in contrast to standard higher order statistical methods which derive such measures on assumed statistical properties, ignorant of whether or not these assumptions hold.
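By way of illustration only (this sketch is not part of the patent text), the first three moments of a Gaussian mixture follow in closed form from the component weights, means and variances, so a skewness measure needs no averaging of sample triple products:

```python
import numpy as np

def mixture_skewness(w, mu, var):
    """Closed-form mean, variance and skewness of a 1-D Gaussian mixture.

    w, mu, var: component weights (summing to 1), means and variances.
    Raw moments of each Gaussian: E[x] = mu, E[x^2] = var + mu^2,
    E[x^3] = mu^3 + 3*mu*var; mixture raw moments are their weighted sums.
    """
    w, mu, var = map(np.asarray, (w, mu, var))
    m1 = np.sum(w * mu)
    m2 = np.sum(w * (var + mu**2))
    m3 = np.sum(w * (mu**3 + 3 * mu * var))
    variance = m2 - m1**2
    third_central = m3 - 3 * m1 * m2 + 2 * m1**3
    return m1, variance, third_central / variance**1.5

# A two-component residual density with unequal means is skewed:
print(mixture_skewness([0.7, 0.3], [-0.5, 1.2], [0.4, 0.9]))
```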
It is also to be noted that the present method differs from the standard approach to modelling of first considering the physics which gave rise to the observations and then describing them with a suitable model. In this standard approach, variations of the model from the real world are typically captured by the residuals which are used to decide on the best fitting model, but otherwise ignored. In contrast, the present method uses them as the basis for classification.
One of the known, underlying models assumed to generate the skewed probability distribution is an autoregressive model which is driven by a non-Gaussian noise signal. However, in the known techniques, there is no explicit modelling of the density of the noise signal. The focus is only on the third moment property of the density. Since the residual error density is never explicitly modelled, any estimation of the shape of the error density, for example of a trispectrum, requires repeating the procedure but using fourth moment properties of the density. At all stages, averaging needs to be performed to obtain robust estimates of the moments. It is this use of moment estimators which only converge to the true value in the limit of large numbers which we believe is the primary candidate for the criticism that bispectral analysis of anaesthesia EEG data has attracted. The present method, through a different approach, does however lend support to the use of residual noise properties in order to classify anaesthetic depth.
The present invention deals with the above-mentioned shortcomings of the known bispectral estimation techniques as follows.
Firstly, the present method overcomes the shortcomings of using large sample assumptions and the stationarity of the blocks of data. It does so by not directly estimating the third or higher order moments, or other measures of the shape of the probability distribution, but instead by fitting a mixture distribution to the residual error. This is a flexible approach which improves the estimation accuracy. It allows the third or higher order moments, or other measures of the shape of the probability distribution, to be computed analytically rather than being repeatedly sampled. Similarly the present method is not forced to assume constant phase.
Secondly, the present method is relatively quick to perform. The model estimator used in the present invention requires less data for estimation compared to a bispectral estimator and avoids lengthy averaging. Considering classification of cognitive states of a subject using EEG data by way of example, common commercially available systems using the bispectral estimator have a lag of the order of 5s-10s. In contrast, the present method may be performed with a lag of the order of 1s-2s. This increase in speed enhances the possibility of timely correlating the output classification with actual changes in state of consciousness. The present method also has the advantage that the probability density function of the residual error between the physiological data and the derived autoregressive model may be derived using Bayesian techniques. These can be set up so as to return only statistically significant results and thus avoid the need for an additional significance test. Similarly, the present method allows the use of statistical computations which are fully reversible in the sense that every step may be backtracked to demonstrate the validity of the measure. Rather than being a "black box" method, the possibility of reversing all the computations allows for clinical validation.
According to a more general aspect of the present invention, there is provided a method of analysing physiological data measured from a human or animal subject to classify the physiological state of the human or animal subject, the method comprising the computer-implemented steps of: performing an analysis of the physiological data which derives a model fitting the physiological data and derives the probability density function of the residual error between the physiological data and the derived model, the probability density function being represented by parameters of a weighted sum of functions in the exponential family; calculating a measure describing the shape of the probability density function; classifying the physiological state of the human or animal subject on the basis of at least the measure describing the shape of the probability density function.
To allow better understanding, an embodiment of the present invention will now be described by way of non-limitative example with reference to the accompanying drawings, in which:
Fig. 1 is a flow chart of a method of measuring and analysing physiological data; and
Figs. 2(a) and 2(b) are flow diagrams illustrating the usage of data in the known direct estimation technique and in the present method, respectively.
A method of implementing the present invention is illustrated in the flow chart of Fig. 1. The overall method including measurement and analysis of physiological data is shown in Fig. 1 but as described below the analysis is performed by a computer system running a computer program.
In Step 1, measurements are performed on a human or animal subject to obtain physiological data. This may be any type of physiological data obtained using appropriate techniques which are known in themselves. One possible type of physiological data is EEG obtained by performing an electroencephalogram.
In Steps 2 to 7, the physiological data is analysed. Steps 2 to 7 are conveniently performed by a computer program executed on a computer system 8 illustrated schematically by a dotted line in Fig. 1. The computer system 8 may be any type of computer system but is typically a conventional personal computer. The computer program may be written in any suitable programming language. The computer program may be stored on a computer-readable storage medium, which may be of any type, for example: a recording medium which is insertable into a drive of the computing system and which may store information magnetically, optically or opto-magnetically; a fixed recording medium of the computer system such as a hard drive; or a computer memory.
As an alternative, the computer system 8 could be a system dedicated to the present analysis, for example associated with the system used to perform the measurements in Step 1. In this case the computer system 8 could be optimised for performing the analysis, for example by running various processes in parallel. In Step 2, the physiological data is input into the computer system 8.
In Step 3, the input physiological data is segmented into blocks. The length of blocks is typically 1 second in the case of EEG data sampled at 128 Hz. The following Steps 4 to 7 are performed on each of the blocks. As will be described below, classification of the physiological state of the subject is performed on the basis of each of the blocks individually. This contrasts with the known direct estimation techniques which average the triple products (corresponding to a high order moment) derived from each of a plurality of blocks. A schematic of the estimation steps is shown for the known direct estimation techniques in Fig. 2(a) and for the present method in Fig. 2(b), showing the difference and how there is a significant increase in data utilisation.
In Step 4, an autoregressive analysis of the physiological data is performed to derive an autoregressive model fitting the physiological data and to derive the probability density function of the residual error between the physiological data and the derived autoregressive model. Step 4 is performed as follows. The mathematics used in the analysis of Step 4 will be explained first.
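Before turning to the mathematics of Step 4, the block handling of Step 3 can be sketched as follows. This is purely illustrative and not part of the patent; the 1-second block length and 128 Hz sampling rate are the values quoted above, everything else is an assumption.

```python
import numpy as np

def segment_into_blocks(signal, fs=128, block_seconds=1.0):
    """Split a 1-D recording into consecutive non-overlapping blocks.

    Each block is later analysed independently (Steps 4 to 7); any
    incomplete trailing samples are discarded.
    """
    block_len = int(fs * block_seconds)
    n_blocks = len(signal) // block_len
    return np.asarray(signal[:n_blocks * block_len]).reshape(n_blocks, block_len)

eeg = np.random.randn(10 * 128)             # ten seconds of placeholder data
blocks = segment_into_blocks(eeg)           # shape (10, 128), one row per block
print(blocks.shape)
```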
The intention is to explain the data in terms of a linear autoregressive model and temporally correlated residuals which follow an arbitrary distribution. Thus, at each time instant, t, the data sample, y_t, is a weighted combination of past samples, y_{t-1}, y_{t-2}, ..., y_{t-p}, with residual error e_t:
y_t - x_t^T a = e_t. (1)
In our notation, y_t is the observation at time instant t, a ∈ R^p is the vector of p autoregressive coefficients, x_t = [y_{t-1}, y_{t-2}, ..., y_{t-p}] ∈ R^p is the vector containing the p past observations and e_t is the autoregressive driving noise.
In order to keep the residual probability distribution as flexible as possible we use a mixture model approach and model them as arising from one of K one-dimensional Gaussian components, each parameterised by a mean x_t^T a + μ_k and precision β_k. An indicator variable s_t is used to select one of the K components at time t,
where the means and variances of the components are collated in the parameter vectors β = {β_1, ..., β_K} and μ = {μ_1, ..., μ_K}.
We model the temporal dependence between the residuals by imposing a first order Markov dependence on the component indicator variables s_t.
Other choices of Markov models are also possible, such as hidden semi-Markov models. Such alternative Markov models are desirable whenever the ergodicity assumption in standard hidden Markov models is too strong and must be relaxed, i.e. when changes in the residuals are only rare or irregular.
The probability of drawing component k at time t is thus dependent upon the component l drawn at time t-1. Since the component labels form a discrete set, the conditional probability is a K-dimensional Multinomial distribution with parameters π_lk,
p(s_t = k | s_{t-1} = l) = π_lk,
with π = {π_lk}, for all k, l = 1, ..., K. The probability of the first component indicator, s_0, is parameterised by a K-dimensional Multinomial distribution with parameters π_0k:
p(s_0 = k) = π_0k.
For a data set Y = {y_1, ..., y_T} of length T, the full log-likelihood is thus given as
[equation (3): full log-likelihood; image not reproduced]
where we have introduced S = {s_0, ..., s_T}, the set of all component indicator variables, and the parameter vector θ,
[equation: definition of the parameter vector θ; image not reproduced]
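A small simulation may make the generative model concrete: an autoregressive process whose driving noise is drawn from one of K Gaussian components, the component label following a first-order Markov chain. This sketch is illustrative only and is not part of the patent; all numerical values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generative model of equation (1): y_t = x_t . a + e_t, where e_t is drawn
# from component s_t of a K-component Gaussian mixture and s_t follows a
# first-order Markov chain.
a = np.array([1.2, -0.6])                    # AR coefficients (p = 2), stable
mu = np.array([0.0, 1.5])                    # component means (K = 2)
beta = np.array([25.0, 4.0])                 # component precisions
pi0 = np.array([0.9, 0.1])                   # initial state probabilities
Pi = np.array([[0.98, 0.02],                 # transition probabilities pi_lk
               [0.10, 0.90]])

T, p = 500, len(a)
y = np.zeros(T + p)                          # zero-padded start-up samples
s = np.zeros(T, dtype=int)
s_prev = rng.choice(2, p=pi0)
for t in range(T):
    s[t] = rng.choice(2, p=Pi[s_prev])       # draw the component label
    e_t = rng.normal(mu[s[t]], 1.0 / np.sqrt(beta[s[t]]))
    x_t = y[t:t + p][::-1]                   # [y_{t-1}, ..., y_{t-p}], most recent first
    y[t + p] = x_t @ a + e_t
    s_prev = s[t]
y = y[p:]                                    # discard the padded start
print(y[:5], s[:5])
```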
The estimation of the autoregressive model is done in a Bayesian framework, that is there are estimated posterior distributions of all model parameters, consisting of autoregressive coefficients and the means, variances and weights of the residual mixture distributions (although other estimation frameworks would be equally applicable, such as the Maximum Likelihood (ML) or Maximum Aposteriori (MAP) framework). The estimation technique per se is described in Penny & Roberts, "Variational Bayes for Generalised Autoregressive Models", IEEE Transactions on Signal Processing, 50(9):2245-2257, 2002, the contents of which are incorporated herein by reference. To summarise, the estimation involves recursive evaluation of update equations for these parameters until a measure for the posterior probability of the data no longer changes significantly. Each estimation is performed for a previously determined value for the number of mixture model components and number of autoregressive model coefficients. As to the optimal choice of autoregressive model order, P, and number of noise mixture components, M, the model with the best posterior fit to the data is chosen. This estimation technique will now be explained in more detail.
To perform the estimation of the generalised autoregressive model in the Bayesian framework, we require prior distributions for the model parameters, θ. The prior definitions, which are given below, and the likelihood set out in equation (3) form the exact posterior density for the model.
All prior distributions are assumed to be independent:
p(θ) = p(π_0) [Π_l p(π_l)] p(a | α) p(α) [Π_k p(β_k)] [Π_k p(μ_k)]
The densities we choose are either Gaussian, Gamma or Dirichlet and are defined in Bernardo & Smith, "Bayesian Theory", John Wiley and Sons, 1994. In detail, for the initial state probability π_0, we use a K-dimensional Dirichlet density
[equation: Dirichlet prior for π_0; image not reproduced]
and, for the transition probabilities, π, K K-dimensional Dirichlet densities
[equation: Dirichlet priors for the rows of π; image not reproduced]
The prior over the component regression coefficients is assumed to be a p-variate zero-mean Gaussian
[equation: zero-mean Gaussian prior for a with precision α; image not reproduced]
where the precision α has itself a Gamma prior,
[equation: Gamma prior for α; image not reproduced]
The component precisions β_k we assume to be governed by a Gamma density, with scale parameter b_0 and shape parameter c_0,
[equation: Gamma prior for β_k; image not reproduced]
For the priors for the component means μk, we assume a univariate Normal density given by
[equation: univariate Normal prior for μ_k; image not reproduced]
with mean m_0 and precision τ_0. All hyper-parameters (the parameters of the priors) are fixed and typically set to values which reflect the fact that little is known a priori about the parameters, i.e. the prior densities are set to be very broad.
In the alternative case of using the M.A.P. framework, the same priors can be used, whilst for the M.L. only the log-likelihood (equation 3) is required. The procedures, explained in McLachlan & Krishnan, "The EM Algorithm and Extensions", John Wiley & Sons, 1997, lead to different update equations and require further goodness-of-fit assessments. As the more encompassing estimation framework, we only describe below the Bayesian method.
The model estimation is done in the variational Bayesian learning paradigm, as disclosed in Jaakkola "Tutorial on Variational Approximation Methods", In Opper and Saad, editors, "Advanced Mean Field Methods: Theory and Practice", MIT Press, 2000 and also in Jordan, Ghahramani, Jaakkola, and Saul, "An Introduction to Variational Methods for Graphical Models", In Jordan, editor, "Learning in Graphical Models" Kluwer Academic Press, 1997, the contents of both of which are incorporated herein by reference. The aim of variational Bayesian learning is to minimise the Kullback-Leibler (KL) divergence between two distributions, say q and
p,
KL(q ‖ p) = Σ_S ∫ q(θ, S | Y) log [ q(θ, S | Y) / p(S, θ, Y) ] dθ.
Here, p (S, θ, Y) is the exact posterior probability density over the model parameters, θ, and the component labels, S, and the data, Y. The exact posterior is intractable and thus approximated by a simpler but tractable density, q(θ, S | Y). By approximating the full posterior, one can enjoy some of the benefits of Bayesian analysis, such as full Bayesian model estimation and automatic penalties for over-complex models, and thus avoid over-fitting.
Herein, the component indicator variables for the residuals, {-V-jSy}, for a hidden Markov chain, whilst we impose the factorising assumption on all other model parameters, θ. (Hidden semi-Markov models can also be used and lead to different update schemes.) With these assumptions, the approximating density q(θ,S) may be expressed as
$$q(\theta, S) = q(S) \prod_{i=1}^{M} q(\theta_i) \qquad (12)$$
where θ = {θ_1, ..., θ_M} and M is the dimension of the vector θ. Using this factorisation, known as the "mean-field" assumption, in the minimisation of the KL divergence leads to optimised posterior densities which have the form
$$q(\theta_i) \propto \exp\!\left( \sum_{S} \int q(\theta_{\setminus i})\, q(S)\, \log p(Y, S, \theta)\, d\theta_{\setminus i} \right) \qquad (13)$$
for the model parameters, where θ_{\i} = {θ_1, ..., θ_{i-1}, θ_{i+1}, ..., θ_M}, and
$$q(S) \propto \exp\!\left( \int q(\theta)\, \log p(Y, S, \theta)\, d\theta \right) \qquad (14)$$
for the component indicator variables.
Each of the equations (13) concerns the distribution of one parameter only, and so results in separate update equations for each, given in the following section. In contrast, update formula (14) refers to the entire set of component labels, S. Since q(S) has a chain-like structure, the update for the individual component labels, s_i, is obtained using the well-known Baum-Welch, or forward/backward, iterations. The update equations derived by the minimisation of the KL divergence assume a predetermined model order, i.e. the number of components of the residual density and the number of autoregressive model coefficients are fixed. To perform model order estimation we assume that models of different order are independent from one another.
Denoting the set of all model orders by A and a particular model order by a, we can write the posterior probability of a in the form
$$p(a \mid Y) = \frac{p(Y \mid a)\, p(a)}{\sum_{a' \in A} p(Y \mid a')\, p(a')}$$
Assuming that all model structures in A are equally likely, the posterior model probability can be computed as
$$p(a \mid Y) = \frac{\exp(F_a)}{\sum_{a' \in A} \exp(F_{a'})}$$
where F_a is the KL divergence of the entire model for a fixed model size, i.e.
$$F_a = \sum_{S} \int q(\theta, S \mid Y)\, \log \frac{p(Y, S, \theta \mid a)}{q(\theta, S \mid Y)}\, d\theta$$
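As a small numerical illustration of the posterior model probability just given, and assuming (as above) that all candidate model structures are equally likely, the per-model quantities F_a can be turned into posterior model probabilities with a numerically stable softmax; the F values below are invented for illustration only.

```python
# Illustrative only: posterior model probabilities from per-model fit measures F_a,
# assuming a uniform prior over candidate model structures.
import numpy as np

def model_posterior(F):
    F = np.asarray(F, dtype=float)
    w = np.exp(F - F.max())          # subtract the maximum for numerical stability
    return w / w.sum()

F = [-1520.3, -1498.7, -1501.2]      # hypothetical values for three model orders
print(model_posterior(F))            # the largest F_a receives the highest posterior probability
```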
In our variational setting, the full posterior is to be approximated by densities which factorise as in equation (12). The posterior densities of the parameters, θ, we assume to completely factorise (mean field assumption)
$$q(\theta) = q(a)\, q(\alpha)\, q(\pi_0)\, q(\pi) \prod_{k=1}^{K} q(\beta_k)\, q(\mu_k)$$
All these densities are conjugate to the prior densities defined above and so are, again, either Gaussian, Gamma or Dirichlet (see appendix A). In detail, we will use the notation for their parameters as follows. The posterior over the component regression coefficients is assumed to be a p-variate normal with mean â and precision Λ̂
$$q(a) = \mathcal{N}\!\left(a ;\, \hat{a},\, \hat{\Lambda}^{-1}\right)$$
and the posterior over the weight precision is assumed to be a Gamma
$$q(\alpha) = \mathrm{Ga}\!\left(b_\alpha, c_\alpha\right)$$
For the posterior for the component precisions βk we assume a Gamma density
$$q(\beta_k) = \mathrm{Ga}(b_k, c_k) \qquad (20)$$
and the posterior over the component means a univariate normal
$$q(\mu_k) = \mathcal{N}(m_k, \tau_k^{-1}) \qquad (21)$$
Finally, for the initial state probability π0, we use a K-dimensional Dirichlet density
$$q(\pi_0) = \mathrm{Dir}\!\left(\hat{\lambda}_{0,1}, \ldots, \hat{\lambda}_{0,K}\right) \qquad (22)$$
and, for the transition probabilities, π, K-dimensional Dirichlet densities
$$q(\pi) = \prod_{k=1}^{K} \mathrm{Dir}\!\left(\hat{\lambda}_{k,1}, \ldots, \hat{\lambda}_{k,K}\right) \qquad (23)$$
By minimising the KL divergence between the exact and approximate distributions we obtain individual update equations for each of the model parameters θ and the set of all component labels S. The update equations are then iterated as in the sequence described below.
The solution to the exponentiation and integration in equation (14) results in a hidden Markov chain equivalent probability density
$$q(S) \propto \tilde{\pi}_0(s_0) \prod_{t=1}^{N} \tilde{\pi}(s_{t-1}, s_t)\, \tilde{b}_t(s_t) \qquad (24)$$
where
$$\tilde{\pi}_0(k) = \exp\!\left(\left\langle \log \pi_{0,k} \right\rangle_{q(\pi_0)}\right), \qquad \tilde{\pi}(k, l) = \exp\!\left(\left\langle \log \pi_{k,l} \right\rangle_{q(\pi)}\right)$$
and
$$\tilde{b}_t(k) = \exp\!\left(\left\langle \log p\!\left(y_t \mid s_t = k, \theta\right) \right\rangle_{q(\theta)}\right)$$
The terms
$$\tilde{\pi}_0(k) \quad \text{and} \quad \tilde{\pi}(k, l)$$
are the estimate of the initial state and state- transition probabilities of a hidden Markov chain, whilst
$$\tilde{b}_t(k)$$
is the equivalent of the likelihood of the observation given the state. The forward/backward recursions then provide the single and joint posterior distributions for the residual component labels
$$\gamma_t(k) = q(s_t = k) \quad \text{and} \quad \xi_t(k, l) = q(s_{t-1} = k,\, s_t = l)$$
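For orientation only, a compact sketch of the forward/backward (Baum-Welch) recursions that yield these single and joint posteriors is given below. Here pi0, A and B stand for the estimated initial-state probabilities, transition probabilities and per-sample observation likelihoods respectively, and the toy values in the usage example are assumptions, not quantities taken from the method above.

```python
# Scaled forward/backward recursions for a hidden Markov chain: returns the
# single posteriors gamma (T,K) and joint posteriors xi (T-1,K,K).
import numpy as np

def forward_backward(pi0, A, B):
    """pi0: (K,) initial probabilities, A: (K,K) row-stochastic transitions,
    B: (T,K) observation likelihoods per time step and state."""
    T, K = B.shape
    alpha = np.zeros((T, K)); beta = np.zeros((T, K)); c = np.zeros(T)
    alpha[0] = pi0 * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                       # scaled forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # scaled backward pass
        beta[t] = A @ (B[t + 1] * beta[t + 1]) / c[t + 1]
    gamma = alpha * beta                        # single-state posteriors
    xi = np.zeros((T - 1, K, K))
    for t in range(T - 1):                      # pairwise-state posteriors
        xi[t] = alpha[t][:, None] * A * (B[t + 1] * beta[t + 1])[None, :] / c[t + 1]
    return gamma, xi

rng = np.random.default_rng(1)
pi0 = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
B = rng.random((5, 2))                          # hypothetical observation likelihoods
gamma, xi = forward_backward(pi0, A, B)
print(gamma.sum(axis=1))                        # each row sums to 1
```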
The update equations for the parameters of the posterior initial state and transition probabilities are obtained by solving equation (13) with respect to π and π0.
Thus, for θ_i = π in the update integral (13), θ_{\i} is the empty set, ∅, while the set of component labels, S, remains the full set,

$$S = \{s_0, s_1, \ldots, s_N\}.$$
For θ_i = π0 in equation (13), θ_{\i} = ∅ and the set of component labels, S, reduces to S = s_0.
The parameters of q(π) and q(%0), respectively, are
$$\hat{\lambda}_{k,l} = \lambda_0 + \sum_{t=1}^{N} \xi_t(k, l), \qquad \hat{\lambda}_{0,k} = \lambda_0 + \gamma_0(k)$$
In the particular form of (13) for the component mean density, q(μ_k), the integration needs to be performed with respect to θ_{\i} = {a, β_k} and all S. The
parameters of the posterior component mean density update equations are then
$$\tau_k = \tau_0 + \tau_{\mathrm{data},k}, \qquad m_k = \frac{\tau_0\, m_0 + \tau_{\mathrm{data},k}\, m_{\mathrm{data},k}}{\tau_0 + \tau_{\mathrm{data},k}}$$
where, with x_t = (y_{t-1}, ..., y_{t-p})^T denoting the vector of the p previous data samples, the data-dependent values m_data,k and τ_data,k are given as
$$\tau_{\mathrm{data},k} = \left\langle \beta_k \right\rangle \sum_{t} \gamma_t(k), \qquad m_{\mathrm{data},k} = \frac{\sum_{t} \gamma_t(k)\left( y_t - \hat{a}^{T} x_t \right)}{\sum_{t} \gamma_t(k)}$$
To update the posterior of the residual component precisions, q(β_k), integration in (13) needs to be performed with respect to θ_{\i} = {a, μ_k} and all S,
resulting in the following update equations for the parameters of q(β_k)
$$\frac{1}{b_k} = \frac{1}{b_0} + \frac{1}{2} \sum_{t} \gamma_t(k) \left\langle \left( y_t - \hat{a}^{T} x_t - \mu_k \right)^{2} \right\rangle, \qquad c_k = c_0 + \frac{1}{2} \sum_{t} \gamma_t(k)$$
Computing the mean and precision parameters of the posterior q(a) is done, after integration in (13) with respect to θ_{\i} = {α, μ_k, β_k} and all S, via
$$\hat{\Lambda} = \left\langle \alpha \right\rangle I + \sum_{t} \sum_{k} \gamma_t(k) \left\langle \beta_k \right\rangle x_t x_t^{T}, \qquad \hat{a} = \hat{\Lambda}^{-1} \sum_{t} \sum_{k} \gamma_t(k) \left\langle \beta_k \right\rangle \left( y_t - m_k \right) x_t$$
where I is a p × p identity matrix.
Finally, with integration in (13) performed with respect to θ_{\i} = {a} only and S = ∅, the updates for the AR precision result in
$$\frac{1}{b_\alpha} = \frac{1}{b_{\alpha_0}} + \frac{1}{2} \left( \hat{a}^{T} \hat{a} + \mathrm{Tr}\, \hat{\Lambda}^{-1} \right), \qquad c_\alpha = c_{\alpha_0} + \frac{p}{2}$$
Thus, the autoregressive model is derived. Also in Step 4 the probability density function of the residual error between the physiological data and the derived autoregressive model is derived as follows. According to the model, the probability density function is represented by parameters of a weighted sum of Gaussian functions. In particular, the density of the residuals e is a K-component Gaussian mixture distribution
$$p(e) = \sum_{k=1}^{K} \omega_k\, \mathcal{N}\!\left(e ;\, \mu_k,\, \beta_k^{-1}\right) \qquad (36)$$
with the parameters ω_k, μ_k and β_k being the component weights, means and precisions, respectively. All parameters of the mixture (36), bar the component weights, ω_k, are readily available after model training. The weighting of each component is the equilibrium solution obtained by Eigen-decomposition of the state transition matrix π in equation (24).
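As a minimal sketch of this equilibrium-weight computation, and assuming a row-stochastic transition matrix, the stationary distribution can be read off from the eigenvector of the transposed transition matrix associated with the unit eigenvalue; the 3-state matrix below is hypothetical.

```python
# Stationary (equilibrium) distribution of a row-stochastic transition matrix,
# obtained from the eigenvector of its transpose with eigenvalue 1.
import numpy as np

def equilibrium_weights(P):
    vals, vecs = np.linalg.eig(P.T)
    k = np.argmin(np.abs(vals - 1.0))            # eigenvalue closest to 1
    w = np.real(vecs[:, k])
    return w / w.sum()

P = np.array([[0.90, 0.05, 0.05],
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])                # hypothetical transition matrix
omega = equilibrium_weights(P)
print(omega, omega @ P)                           # omega P = omega at equilibrium
```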
In Step 5, there is derived a measure of the shape of the probability density function of the residual error between the physiological data and the derived autoregressive model. Two such measures are the third or higher order moments of the probability density function and the entropy of the probability density function. The derivation and use of both of these measures will now be described, but it is possible to use only a single measure, or else further measures.
One way of describing the shape of any density is by its moments. In the case of the mixture density, these are easily computed analytically by means of the moment integral
$$m_n = \int e^{n}\, p(e)\, de \qquad (37)$$
The central moments of the first four orders, cm_1, ..., cm_4, of the mixture density are
$$\begin{aligned}
cm_1 &= \sum_{k=1}^{K} \omega_k\, \mu_k, &
cm_2 &= \sum_{k=1}^{K} \omega_k \left( v_k + m_k^{2} \right), \\
cm_3 &= \sum_{k=1}^{K} \omega_k \left( m_k^{3} + 3\, m_k v_k \right), &
cm_4 &= \sum_{k=1}^{K} \omega_k \left( m_k^{4} + 6\, m_k^{2} v_k + 3\, v_k^{2} \right)
\end{aligned} \qquad (38)$$
where, for each of the Gaussian components, m_k = μ_k − cm_1 is the centred mean, and v_k = 1/β_k is the variance. The third order and fourth order moments are normalised with respect to the second order moment to obtain a unit-less value.
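A short sketch of this analytic moment computation, using the centred means m_k and variances v_k just defined, is given below; the mixture parameters in the example are hypothetical.

```python
# Analytic central moments of a K-component Gaussian mixture, with the third
# and fourth moments normalised by the second.
import numpy as np

def mixture_central_moments(w, mu, prec):
    w, mu = np.asarray(w, dtype=float), np.asarray(mu, dtype=float)
    v = 1.0 / np.asarray(prec, dtype=float)        # component variances
    cm1 = np.sum(w * mu)
    m = mu - cm1                                    # centred component means
    cm2 = np.sum(w * (v + m**2))
    cm3 = np.sum(w * (m**3 + 3 * m * v))
    cm4 = np.sum(w * (m**4 + 6 * m**2 * v + 3 * v**2))
    return cm1, cm2, cm3 / cm2**1.5, cm4 / cm2**2   # normalised skewness and kurtosis

w, mu, prec = [0.7, 0.3], [0.0, 1.5], [4.0, 1.0]    # hypothetical mixture parameters
print(mixture_central_moments(w, mu, prec))
```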
As to entropy, the dynamic range of the moments of a distribution is likely to have serious effects on the numerical stability of any classifier. Further, four moments describe only four shape characteristics of the density. On the other hand, Shannon's entropy may be considered a descriptor of the entire probability density. Because of the summation term inside the logarithm's argument, however, the entropy of mixture densities can only be approximated. Therefore the value of the entropy is calculated empirically, through a sampling procedure. For example, 1000 samples are drawn from the mixture density
$$x_j \sim \sum_{k=1}^{K} \omega_k\, \mathcal{N}\!\left(\mu_k,\, \beta_k^{-1}\right), \qquad j = 1, \ldots, n$$
and their probabilities p(x_j) evaluated under the mixture model. The empirical Shannon entropy is then calculated according to Shannon's definition,
$$H \approx -\frac{1}{n} \sum_{j=1}^{n} \log p(x_j) \qquad (39)$$
The entropy value is a function of the sample size. The decision as to its size was based upon the convergence behaviour of equation (39) as the sample size was increased.
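The sampling-based entropy estimate of equation (39) can be sketched as follows; the mixture parameters and sample size are illustrative, and scipy's normal density is used for the per-component evaluation.

```python
# Empirical Shannon entropy of a Gaussian mixture: draw n samples, evaluate
# their densities under the mixture, and average the negative log-densities.
import numpy as np
from scipy.stats import norm

def mixture_entropy(w, mu, prec, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    w, mu = np.asarray(w, dtype=float), np.asarray(mu, dtype=float)
    sd = 1.0 / np.sqrt(np.asarray(prec, dtype=float))
    k = rng.choice(len(w), size=n, p=w)                     # pick a component per sample
    x = rng.normal(mu[k], sd[k])                            # sample from that component
    p = np.sum(w * norm.pdf(x[:, None], mu, sd), axis=1)    # mixture density at each sample
    return -np.mean(np.log(p))

print(mixture_entropy([0.7, 0.3], [0.0, 1.5], [4.0, 1.0]))  # hypothetical parameters
```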
In summary, the calculated measure of the shape of the probability distribution function of the residual error is the relative entropies between components of the probability density function.
In Step 6, a power spectrum of the derived autoregressive model is derived. This is used to account for the well-known effects anaesthetic agents have on the frequency spectrum of the EEG data. In particular, the spectral power in the β-range of the spectrum is used as an indicator of changes in the speed of the EEG waveform. Since the autoregressive coefficients, a, are already estimated in the model training procedure in Step 4, they can be used to extract the power spectrum using the well-known relation
$$P(f) = \frac{1}{\left| 1 - \sum_{j=1}^{p} a_j\, z^{-j} \right|^{2}}$$
where z is the complex number in the z-plane, z = exp(2πifΔ), Δ is the sampling interval and i = √−1. This formula implies that the residual noise power is normalised to 1. Further, we only concentrate on the β-band of the EEG spectrum, i.e. between 4 Hz and 7.5 Hz, and compute the cumulative power in this band
$$P_\beta = \sum_{f} P(f)$$
at 1000 discrete frequencies within the β-range. Step 6 is illustrated as being performed in parallel with Step 5 as it uses the results of Step 4. Of course, in practice Steps 5 and 6 may be performed in series in either order. It is also to be noted that Step 6 is optional.
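A sketch of this band-power computation, under the stated assumptions (residual noise power normalised to 1 and the AR convention y_t = Σ_j a_j y_{t−j} + e_t), evaluates the AR spectrum at 1000 discrete frequencies between 4 Hz and 7.5 Hz and sums the values; the coefficients and sampling rate below are placeholders.

```python
# Cumulative AR band power over a discretised frequency band.
import numpy as np

def ar_band_power(a, fs, f_lo=4.0, f_hi=7.5, n_freq=1000):
    delta = 1.0 / fs                                    # sampling interval
    f = np.linspace(f_lo, f_hi, n_freq)
    j = np.arange(1, len(a) + 1)
    z = np.exp(-2j * np.pi * np.outer(f, j) * delta)    # z^{-j} at each frequency
    denom = np.abs(1.0 - z @ np.asarray(a, dtype=float)) ** 2
    return np.sum(1.0 / denom)                          # cumulative power in the band

a = [1.2, -0.6]                                         # hypothetical AR(2) coefficients
print(ar_band_power(a, fs=250.0))
```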
In Step 7, the physiological state of the human or animal subject is classified. This may be discrete classification as one of a plurality of predetermined physiological states.
Alternatively, the classification may be continuous, that is, by providing a numerical measure on a continuous scale between a plurality of predetermined physiological states, for example as performed by a regressor. The numerical measure is thus a measure of the similarity of the physiological state of the subject to the predetermined physiological states. The measure may for example describe the probability of the subject being in respective ones of the predetermined physiological states. Thus the measure classifies the physiological state of the subject with respect to the predetermined physiological states, but does not go as far as making a discrete classification into one particular predetermined physiological state. Such a type of measure is often preferred by clinicians because it can provide more information in practical terms, for example by giving the clinician a better feel for the likelihood of the subject being in a given physiological state or by giving a feel for changes in the physiological state of the subject.
The classification in Step 7 is done on the basis of the measure describing the shape of the probability density function, for example either one or both of the third and higher order moments and the entropy. Optionally the classification is also done on the basis of the derived power spectrum in combination with the measure describing the shape of the probability density function.
The classification in Step 7 is performed using a feature recognition system operating on the measure describing the shape of the probability density function and optionally also the power spectrum as features.
In a preliminary training phase, the feature recognition system is trained using feature data derived using Steps 2 to 6 from a plurality of sets of physiological data measured in Step 1 from a human or animal subject having different, known physiological states. Knowledge of the physiological state of the subject used in the training phase requires the input of a clinician or the use of clinically accepted data. The training phase effectively stores relevant information about the known physiological states in the feature recognition system.
Once the training phase has been performed, the feature recognition system may be operated in a recognition phase to perform the classification of the physiological state of a human or animal subject as one of a plurality of known physiological states in Step 7. The recognition phase effectively uses the relevant information about the known physiological states obtained in the training phase. Thus it allows the physiological data from a patient in an initially unknown physiological state to be classified as one of the known physiological states.
The feature recognition system used in Step 7 may be of the discrete type, such as a neural network, or of the continuous type, such as a regressor. Discrete classification leads to a crisp classification into any of the known physiological states. Alternatively, quantification of the patient state can be on a continuous scale between two extreme states, such as wake and anaesthetised.
As an instance of the discrete kind it is possible to use the multi-layer perceptron software of Nabney, "Netlab: Algorithms for Pattern Recognition", Springer Verlag, 2002. The classifier may be trained using the evidence framework to avoid over-fitting. An example of a continuous characterisation of patient state is described in
Roberts, Rezek, Everson, Stone, Wilson & Alford, "Automated assessment of Vigilance using committees of Radial Basis Function Analysers", IEE Proceedings Science, Technology & Measurement. Vol. 147, issue 6, pp333-338, 2000.
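Since Netlab is a MATLAB toolbox, the following is only a rough stand-in sketch of the discrete feature-recognition step, written with scikit-learn; the feature matrix and labels are synthetic placeholders, with each row intended to hold the shape measures (entropy and/or higher order moments) and optionally the β-band power for one data segment, and the label being the clinically determined state.

```python
# Illustrative stand-in for the Step 7 feature-recognition system: a small
# multi-layer perceptron evaluated with 10-fold cross-validation.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))                   # placeholder feature rows
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)       # placeholder wake/anaesthetised labels

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)          # 10-fold cross-validation as in the text
print(scores.mean(), scores.std())
```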
The physiological state classified in Step 7 may be displayed on a display of the computer system 8 or output from the computer system 8 in some other way. Thus the output physiological state may be used as the basis for a clinical decision in respect of the subject, for example to select a treatment.
The validity of the classification of the present method may be understood as follows. While the autoregressive model is not that underlying the commercial BIS measure, which uses a harmonic model, it is mathematically impossible to distinguish between the two. In other words, it is essentially impossible to distinguish between the harmonic and autoregressive models on the basis of probability distributions alone. Whilst the autoregressive model encodes deviation from a Gaussian distribution in its residual error term (noise term), the harmonic model has no residual error term and encodes deviation from a Gaussian distribution as originating directly from the data. However, the autoregressive model is simply a linear function of the noise term. Linear functions are not capable of changing the skewness of the input distribution; they can merely change the distribution's centre and scale. Thus, the input probability distribution must be skewed from the outset for the resulting observations to have a skewed distribution also. The harmonic model simply abandons the concept of a residual having a skewed distribution and imposes the skewness directly on the data.
The above discussion rests on the assumption that no deviation from a Gaussian distribution is introduced anywhere else in the model, for instance by allowing the model parameters to have non-Gaussian distributions themselves. As the parameters of both models are basically fitted using least-squares estimators, the parameters' distributions are implicitly Gaussian.
The one further criterion for model selection is model complexity, i.e. the number of parameters required to explain the observations. Standard autoregressive models require 2 coefficients to produce an almost harmonic oscillation, and thus 3 in total to reproduce a signal having the same power-spectral density as that resulting from the harmonic model.
The conclusion is that the present method using an autoregressive model to assess the probability distributional characteristics of anaesthesia EEG data allows one to draw conclusions as to the properties of the bispectral part of the bispectral index. Higher order statistical properties in the autoregressive model imply the same for the harmonic model, so the present method is extracting the same information from the physiological data. However, the present method does this in a more efficient and robust manner, because the spectrum derived from the autoregressive model is much smoother and well-known for its robustness against noise, unlike the periodogram estimator used in the BIS index. Also, the present method does not require estimation of third order moments from the data. The most it requires is sample estimations of weighted mean and variance terms. Thus, it is not forced to assume a constant phase for each block of the data segment, which can then be averaged. This makes our estimation considerably faster and more robust to noise. All moments of the mixture distribution can be calculated analytically and to a much higher degree of accuracy.
The mixture model for the residual noise can approximate any distribution to arbitrary accuracy. Further, since only Gaussian densities are used for the error term, only first and second moment estimation is required. Any higher moment can be computed by simply plugging the first moments into a moment integral. This adds significantly to stability and reduces the amount of data needed for estimation. In turn, the algorithm becomes faster. The estimation method is Bayesian, which uses prior information. One can think of the prior as encoding "previously seen data" augmenting the current data set.
The efficacy of the method shown in Fig. 1 and in particular the significance of the resultant classification has been demonstrated in the case of physiological data in the form of EEG data. The method was applied to the following three data sets.
The first data set was EEG data obtained from the Dipartimento Area Critica Medico Chirurgica, Sez. di Anestesiologia e Rianimazione, of the University of
Florence. 235 patients who were to undergo abdominal surgery agreed to take part in the study. Patients were of either sex, aged between 20 and 55 years, weighed 60 kg to 75 kg, were of normal size and categorised as ASA I. Of the original population, some had to be excluded as extreme heart rate, blood pressure or oxygen saturation values required intervention. Mechanically ventilated and premedicated with 0.01 mg/kg atropine and 700 ml Ringer-lactate, anaesthesia consisted of i.v. administration of remifentanil and propofol (15 mg/kg/h and 1.5 mg/kg respectively) followed by an infusion of 10 mg/kg/h, pancuronium 0.1 mg/kg. Remifentanil and propofol were constantly monitored and varied according to concurrently monitored BIS values and the anaesthetist's judgement. Patients requiring an adjustment of more than 10% from the initial infusion rates were excluded from the study. Thus, a total of 35 patients were excluded from the original group of 235 patients.
EEG data was recorded from 2 frontal electrodes placed at either side of the outer malar bone (At1 and At2) with reference and ground electrodes placed at Fpz and Fp1, respectively. The original recordings, which had an average length of 25 minutes, were heavily contaminated. Corrupted sections led to failures in the model initialisation and had to be excluded from the analysis. The resulting recordings were on average 8 minutes long (with the shortest duration being 5 minutes and the longest being 13 minutes). The second and third sets of experimental data were obtained from the
Academic Department of Anaesthesia based at Northwick Park Hospital in Harrow, London. Fifteen patients, aged between 31 and 61, formally agreed to take part in the study. The EEG signal was recorded from the forehead to the left mastoid (with the right mastoid as common) using an 82 dB preamplifier (with a bandwidth of 0.5-400 Hz, a 1st order high-pass and 3rd order low-pass Butterworth pre-filter) and digitised to 12-bit accuracy at a 1 kHz sampling rate. Post digitising, the signal was down-sampled to 250 Hz after low-pass removal of frequencies above a 100 Hz cut-off using a finite impulse response filter (47th order).
The second data set was obtained from ten patients who, pre-medicated with 10 mg morphine and 0.04 mg atropine, underwent anaesthetic sedation through inhalation of desflurane. The muscle relaxant vecuronium was also administered by infusion to maintain neuro-muscular paralysis throughout. The sedation procedure consisted of 3 consecutive applications of various desflurane dosages (1.5, 3, and 6%). The sequence of concentrations was selected randomly in order to limit any carryover effects from one concentration to another. The end-expiratory concentration of desflurane, N2O and CO2 was measured (using a calibrated Datex Ultima gas analyser) to determine whether the patient was in a low, medium or deep state of anaesthesia. The EEG was recorded for 10 minutes while the desflurane concentration was near constant (monitored by the gas analyser). However, only the final 5 minutes of each recording period were used for experimental analysis.
The third data set was obtained from five patients who were pre-medicated with 10 mg of morphine and 0.04 mg of atropine, followed by induction of anaesthesia with thiopentone (2-4 mg/kg), and who underwent anaesthetic sedation through intravenous infusion of propofol. Seven to ten minutes after induction with thiopentone, propofol was infused in five equal 10 minute steps, starting with concentrations of 40 mg/kg/min and ending with 200 mg/kg/min. The propofol blood concentration was measured by taking venous blood measurements from the arm contralateral to the infused arm. The propofol blood concentration was used as the indicator of the patient's level of anaesthesia (levels low, three distinct medium levels, and high). These labels were also the classification given to the EEG, which was recorded concurrently with the administration. The full 11 minute long EEG recording periods, during which the agent concentration was near constant, were used for subsequent analysis.
Steps 2 to 6 of the method of Fig. 1 were performed and the performance of recognition by the feature recognition system used in Step 7 was examined. The classification rates and confusion matrices have been calculated and are presented here as the cross-validation average. The classification standard deviation, which is also reported in the results, is the sample estimate. All experiments here are the results of 10-fold cross-validation experiments using data from all subjects. As our benchmark, we first study the informativeness of the power spectrum only. Classification results using the beta-band power spectrum are shown in Table 1. It is evident from the classification rates that for all data sets the spectral characteristics are significantly affected by the anaesthetic agents. The variability of the classification rate, however, is high throughout, and particularly elevated for desflurane. For this agent in particular, the confusion matrix is disappointing, in that many more data points which should be classified as wake (bottom row in the matrix) are classified as anaesthetised (first column of bottom row in the matrix). Table 1:
[Table 1: classification results using β-band spectral power only; the table data is given as an image in the original publication.]
Classification results using only the entropy of the probability distribution as the measure of shape used for classification are shown in Table 2. Classification results using only third order moments as the measure of shape used for classification are shown in Table 3. Table 2:
[Table 2: classification results using the entropy measure only; the table data is given as an image in the original publication.]
Table 3:
[Table 3: classification results using third order moments only; the table data is given as an image in the original publication.]
Using only the entropy (Table 2), the overall classification rate is almost in the range of that using spectral information only (Table 1) for propofol, lower for remifentanil/propofol and higher for desflurane. The classification variance is also interesting, being lower in comparison to the previous case.
There seems to be little difference between the entropy and moment descriptors. The most notable differences (Table 3) are in the confusion matrices, which are somewhat more diagonal in the case of remifentanil/propofol, and in the classification error variance, which is increased. The results clearly show the strength of the model on the desflurane data. Classification rates are considerably higher using only residual information. Thus, desflurane's effect on the EEG power spectrum is minimal.
Thus, use of either one of the measures of the shape of the probability distribution, that is, entropy or higher order moments, allows classification at a high rate.
Classification results using the entropy of the probability distribution as the measure of shape used for classification in combination with the power spectrum in the beta band are shown in Table 4. Table 4:
[Table 4: classification results using the entropy measure combined with β-band spectral power; the table data is given as an image in the original publication.]
Once we take into account the information not captured by the AR part of the model, EEG classification becomes considerably more accurate and robust. The effect is a much more diagonally dominant confusion matrix as well as improved classification rates. Variations within the classification folds have also decreased for desflurane and propofol. Propofol classification has increased by 10 percentage points and the improvement for desflurane is nearly 40%. These results are obtained with significantly less data than, say, higher order spectral estimators would require.
These observations are statistically significant when tested using a 2-sided sign test. A summary of all possible pairwise comparisons is shown in Table 5. In it, the probabilities of the test statistic are shown only if the null hypothesis could not be rejected. The significance level was set to 5% throughout. A dash indicates that tests have not been performed for obvious redundancy reasons. Missing entries have not passed the test. The table re-iterates the classification results described earlier. EEG changes due to propofol alone are almost fully captured using spectral information only. It is a "text-book" anaesthetic agent, which is the reason for its preferred use in clinical studies. For the other data sets there are significant changes, as confirmed by our tests.
Table 5:
[Table 5: pairwise sign-test comparisons of the feature sets; the table data is given as an image in the original publication.]
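The two-sided sign test used for these pairwise comparisons can be sketched as follows: count the cross-validation folds in which one feature set outperforms the other and test the win count against a fair coin; the per-fold rates below are invented for illustration only.

```python
# Two-sided sign test on paired per-fold classification rates.
from scipy.stats import binomtest

rates_a = [0.81, 0.84, 0.79, 0.88, 0.83, 0.86, 0.80, 0.85, 0.82, 0.87]  # e.g. entropy + spectrum
rates_b = [0.72, 0.75, 0.70, 0.78, 0.74, 0.76, 0.73, 0.71, 0.77, 0.69]  # e.g. spectrum only
wins = sum(a > b for a, b in zip(rates_a, rates_b))
ties = sum(a == b for a, b in zip(rates_a, rates_b))
n = len(rates_a) - ties
result = binomtest(wins, n, p=0.5, alternative="two-sided")
print(result.pvalue < 0.05)                        # significant at the 5% level?
```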
On the basis of these results, the use of the power spectrum in combination with a measure of the shape of the probability distribution function of the residual error is the preferred technique.
The above results demonstrate the efficacy of the present method for classifying the cognitive state, that is the activity of the brain, of a subject on the basis of EEG data. Similar advantages are expected when the method is applied to other types of physiological data. Some examples are as follows, but the list is not exclusive.
The method may be applied to EMG data measured by performing an electromyograph. In this case, the classified physiological state may be the state of muscle activity, the state of muscle contractability and/or the state of muscle degeneration of the human or animal subject.
The method may be applied to image data measured by performing an imaging technique, including but not exclusively magnetic resonance imaging (MRI), computed tomography (CT) imaging, X-ray imaging and ultrasound imaging. In this case, the classified physiological state may be any state apparent in the derived image.
Similarly, the method may be applied to ECG data measured by performing an electrocardiogram, blood pressure data or blood oxygenation data. For all types of physiological data, the classification may be performed to monitor changes in the physiological state which occur over time either spontaneously or evoked by an external stimulus. Thus for example the method may be used to monitor anaesthesia, consciousness, sleep, cerebral and muscular ischemia, cerebral intoxication, evoked potential analysis, cognitive state evaluation, neuropathology, or muscle tremor.
It is also noted that there are envisaged numerous variations to the specific method shown in Fig. 1 and described above. Some possible variations are as follows.
The specific method described above assumes that the physiological data is 1-dimensional data. The method may however be generalised to 2-dimensional or 3-dimensional data, as for example in the case of image data.
In the method described above, the probability density function of the residual error between the physiological data and the derived autoregressive model is represented by parameters of a weighted sum of Gaussian functions, which is preferred as such a sum is known to be capable of approximating any kind of probability distribution function and hence is powerful and robust. However, in general any functions in the exponential family may be used as an alternative to Gaussian functions.

Claims
1. A method of analysing physiological data measured from a human or animal subject to classify the physiological state of the human or animal subject, the method comprising the computer-implemented steps of: performing an autoregressive analysis of the physiological data which derives an autoregressive model fitting the physiological data and derives the probability density function of the residual error between the physiological data and the derived autoregressive model, the probability density function being represented by parameters of a weighted sum of functions in the exponential family; calculating a measure describing the shape of the probability density function; and classifying the physiological state of the human or animal subject on the basis of at least the measure describing the shape of the probability density function.
2. A method according to claim 1, wherein said functions in the exponential family are Gaussian functions.
3. A method according to claim 1 or 2, wherein the measure describing the shape of the probability density function comprises at least one moment of third order or higher order.
4. A method according to any one of the preceding claims, wherein the measure describing the shape of the probability density function comprises the relative entropies between components of the probability density function.
5. A method according to any one of the preceding claims, further comprising the computer-implemented step of deriving a power spectrum of the autoregressive model derived in said step of performing an autoregressive analysis, said step of classifying the physiological state of the human or animal subject being performed on the basis of the measure describing the shape of the probability density function and the derived power spectrum.
6. A method according to any one of the preceding claims, wherein the physiological data is EEG data measured by performing an electroencephalogram.
7. A method according to claim 6, wherein the classified physiological state is the cognitive state of the human or animal subject.
8. A method according to any one of the preceding claims, wherein the step of performing an autoregressive analysis is performed to derive a plurality of autoregressive models of variable order to fit the physiological data.
9. A method according to any one of the preceding claims, wherein the step of deriving the probability density function is performed by deriving a plurality of probability density functions each represented by parameters of a weighted sum of functions of variable order to fit the physiological data.
10. A method according to any one of the preceding claims, wherein said probability density function is a probability density function of a plural number of consecutive values of said residual error.
11. A method according to any one of the preceding claims, wherein said method comprises segmenting a sequence of physiological data into blocks and performing said computer-implemented steps on each of the blocks of the physiological data.
12. A method according to any one of the preceding claims, wherein said step of classifying the physiological state of the subject comprises classifying the physiological state of the subject as one of a plurality of predetermined physiological states.
13. A method according to any one of the preceding claims, in combination with a preceding step of performing measurements on a human or animal subject to obtain the physiological data.
14. A computer program executable by a computer system, the computer program being capable, on execution by the computer system, of causing the computer system to perform a method according to any one of claims 1 to 12.
15. A storage medium storing in a form readable by a computer system a computer program according to claim 14.
PCT/GB2006/000014 2005-01-13 2006-01-05 Physiological data classification WO2006075133A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0500680.4A GB0500680D0 (en) 2005-01-13 2005-01-13 Physiological data classification
GB0500680.4 2005-01-13

Publications (1)

Publication Number Publication Date
WO2006075133A1 true WO2006075133A1 (en) 2006-07-20

Family

ID=34224552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2006/000014 WO2006075133A1 (en) 2005-01-13 2006-01-05 Physiological data classification

Country Status (2)

Country Link
GB (1) GB0500680D0 (en)
WO (1) WO2006075133A1 (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5010891A (en) * 1987-10-09 1991-04-30 Biometrak Corporation Cerebral biopotential analysis system and method
US20040082876A1 (en) * 2000-10-16 2004-04-29 Viertio-Oja Hanna E. Method and apparatus for determining the cerebral state of a patient with fast response

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PENNY W D ET AL: "Variational bayes for non-gaussian autoregressive models", NEURAL NETWORKS FOR SIGNAL PROCESSING X, 2000. PROCEEDINGS OF THE 2000 IEEE SIGNAL PROCESSING SOCIETY WORKSHOP DECEMBER 11-13, 2000, PISCATAWAY, NJ, USA,IEEE, vol. 1, 11 December 2000 (2000-12-11), pages 135 - 144, XP010526358, ISBN: 0-7803-6278-0 *
REZEK I, ROBERTS J, SIVA E: "Depth of Anaesthesia Assessment with Generative Polyspectral Models", INTERNET CITATION. PASCAL NETWORK PUBLICATIONS., no. 1122, 13 October 2005 (2005-10-13), pages 1 - 6, XP002373205, Retrieved from the Internet <URL:http://eprints.pascal-network.org/archive/00001122/> [retrieved on 20060316] *
ROBERTS STEPHEN J ET AL: "Variational bayes for generalized autoregressive models", IEEE TRANS SIGNAL PROCESS; IEEE TRANSACTIONS ON SIGNAL PROCESSING SEPTEMBER 2002, vol. 50, no. 9, September 2002 (2002-09-01), pages 2245 - 2257, XP011080238 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104434126A (en) * 2014-12-10 2015-03-25 中国科学院苏州生物医学工程技术研究所 Recording device for upper limp tremor during sleeping
CN110472595A (en) * 2019-08-20 2019-11-19 郑州大学 Identification model construction method, device and the recognition methods of EEG signals, device
CN111783848A (en) * 2020-06-15 2020-10-16 辽宁师范大学 Image classification method based on probability density distribution dictionary and Markov transfer characteristics
CN111783848B (en) * 2020-06-15 2023-05-23 辽宁师范大学 Image classification method based on probability density distribution dictionary and Markov transfer characteristics
US20220233152A1 (en) * 2021-01-25 2022-07-28 Dexcom, Inc. Bayesian framework for personalized model identification and prediction of future blood glucose in type 1 diabetes using easily accessible patient data
CN116172522A (en) * 2023-05-04 2023-05-30 江南大学附属医院 Anesthesia depth monitoring method based on neural network
CN116172522B (en) * 2023-05-04 2023-06-30 江南大学附属医院 Anesthesia depth monitoring method based on neural network

Also Published As

Publication number Publication date
GB0500680D0 (en) 2005-02-23

Similar Documents

Publication Publication Date Title
Güler et al. Classification of EMG signals using PCA and FFT
Saint‐Pierre et al. The analysis of asthma control under a Markov assumption with use of covariates
Casaseca-de-la-Higuera et al. Weaning from mechanical ventilation: a retrospective analysis leading to a multimodal perspective
Kuhlmann et al. Neural mass model-based tracking of anesthetic brain states
Nagaraj et al. Neonatal seizure detection using atomic decomposition with a novel dictionary
Xie et al. Classification of ventricular tachycardia and fibrillation using fuzzy similarity-based approximate entropy
Namazi et al. Age-based analysis of heart rate variability (HRV) for patients with congestive heart failure
CN110840443B (en) Electrocardiosignal processing method, electrocardiosignal processing device and electronic equipment
Tajmirriahi et al. Modeling of seizure and seizure-free EEG signals based on stochastic differential equations
WO2006075133A1 (en) Physiological data classification
Wang et al. Inference of brain states under anesthesia with meta learning based deep learning models
Quintero-Rincón et al. A quadratic linear-parabolic model-based EEG classification to detect epileptic seizures
Kuhlmann et al. Tracking electroencephalographic changes using distributions of linear models: application to propofol-based depth of anesthesia monitoring
US10576224B2 (en) Method for detecting at least one anomaly in an observed signal, corresponding computer program product and device
Mohan et al. Automatic epileptic seizure prediction in scalp EEG
Wong et al. Probabilistic detection of vital sign abnormality with Gaussian process regression
Hosseini A computational framework to discriminate different anesthesia states from EEG signal
CN111466877B (en) LSTM network-based oxygen reduction state prediction method
Shuzan et al. Machine Learning-Based Respiration Rate and Blood Oxygen Saturation Estimation Using Photoplethysmogram Signals. Bioengineering 2023, 10, 167
Lee et al. Multi-phases and various feature extraction and selection methodology for ensemble gradient boosting in estimating respiratory rate
EP3014501B1 (en) Measuring respiration
Jovic et al. Classification of biological signals based on nonlinear features
Ni et al. Dynamic multivariate multiscale entropy based analysis on brain death diagnosis
CN115098832B (en) Anesthesia depth estimation method based on thalamus cortex model
Carvajal et al. Dynamical non-linear analysis of heart rate variability in patients with aortic stenosis

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06700185

Country of ref document: EP

Kind code of ref document: A1

WWW Wipo information: withdrawn in national office

Ref document number: 6700185

Country of ref document: EP