WO2018109826A1 - Analysis device, analysis program, and analysis method - Google Patents


Info

Publication number
WO2018109826A1
Authority
WO
WIPO (PCT)
Prior art keywords
parameter
correlation
cell
calculation unit
unit
Prior art date
Application number
PCT/JP2016/087021
Other languages
English (en)
Japanese (ja)
Inventor
伸一 古田
Original Assignee
株式会社ニコン (Nikon Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社ニコン (Nikon Corporation)
Priority to PCT/JP2016/087021 (WO2018109826A1)
Priority to JP2018556058A (JPWO2018109826A1)
Publication of WO2018109826A1

Classifications

    • C: CHEMISTRY; METALLURGY
    • C12: BIOCHEMISTRY; BEER; SPIRITS; WINE; VINEGAR; MICROBIOLOGY; ENZYMOLOGY; MUTATION OR GENETIC ENGINEERING
    • C12M: APPARATUS FOR ENZYMOLOGY OR MICROBIOLOGY; APPARATUS FOR CULTURING MICROORGANISMS FOR PRODUCING BIOMASS, FOR GROWING CELLS OR FOR OBTAINING FERMENTATION OR METABOLIC PRODUCTS, i.e. BIOREACTORS OR FERMENTERS
    • C12M1/00: Apparatus for enzymology or microbiology
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00: Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/48: Biological material, e.g. blood, urine; Haemocytometers
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing

Definitions

  • the present invention relates to an analysis apparatus, an analysis program, and an analysis method.
  • one aspect is an analysis apparatus that analyzes a correlation between feature quantities in a cell, comprising: a feature quantity calculation unit that calculates feature quantities of components constituting the cell from a cell image in which the cell is captured; a parameter selection unit that selects parameters used to calculate a correlation between feature quantities; a parameter correlation calculation unit that obtains a first correlation using a first parameter selected by the parameter selection unit and a second correlation using a second parameter selected by the parameter selection unit; and a correlation calculation unit that calculates a third correlation based on the first and second correlations calculated by the parameter correlation calculation unit.
  • another aspect is an analysis program that causes the computer of such an analysis apparatus to calculate the feature quantities of the components constituting the cell from a cell image in which the cell is captured, and to analyze the correlation between the feature quantities.
  • another aspect is an analysis method for analyzing a correlation between feature quantities in a cell, including calculating feature quantities of the components constituting the cell from a cell image obtained by imaging the cell.
  • FIG. 1 is a diagram illustrating an example of a configuration of a microscope observation system 1 according to an embodiment of the present invention.
  • the microscope observation system 1 performs image processing on an image acquired by imaging a cell or the like.
  • an image acquired by imaging a cell or the like is also simply referred to as a cell image.
  • the microscope observation system 1 includes an analysis device 10, a microscope device 20, and a display unit 30.
  • the microscope apparatus 20 is a biological microscope and includes an electric stage 21 and an imaging unit 22.
  • the electric stage 21 can move the imaging target to an arbitrary position in a predetermined direction (for example, a certain direction in a two-dimensional horizontal plane).
  • the imaging unit 22 includes an imaging element such as a charge-coupled device (CCD) or a complementary MOS (CMOS) sensor, and images the imaging target on the electric stage 21.
  • the microscope apparatus 20 need not include the electric stage 21; it may instead have a stage that does not move in a predetermined direction.
  • the microscope apparatus 20 functions as, for example, a differential interference contrast (DIC) microscope, a phase contrast microscope, a fluorescence microscope, a confocal microscope, a super-resolution microscope, a two-photon excitation fluorescence microscope, a light sheet microscope, or a light field microscope.
  • the microscope apparatus 20 images the culture vessel placed on the electric stage 21. Examples of the culture container include a well plate WP and a slide chamber.
  • the microscope apparatus 20 captures transmitted light that has passed through the cells as an image of the cells by irradiating the cells cultured in the many wells W of the well plate WP with light.
  • the microscope apparatus 20 can acquire images such as a transmission DIC image of a cell, a phase difference image, a dark field image, and a bright field image. Furthermore, by irradiating the cell with excitation light that excites the fluorescent substance, the microscope apparatus 20 captures fluorescence emitted from the biological substance as an image of the cell.
  • cells are dyed while they are alive, and time-lapse imaging is performed to acquire a cell change image after cell stimulation.
  • a cell image is obtained by expressing a fluorescent fusion protein or staining a cell with a chemical reagent or the like while alive.
  • the cells are fixed and stained to obtain a cell image.
  • the fixed cells stop metabolizing. Therefore, in order to observe changes over time in fixed cells after stimulating them, it is necessary to prepare a plurality of cell culture containers seeded with the cells. For example, one may wish to observe the change of the cells at a first time after stimulation and at a second time different from the first time. In this case, after stimulating the cells and waiting until the first time, the cells are fixed and stained to obtain a cell image.
  • a cell culture container different from the one used for the observation at the first time is prepared, and after stimulating the cells and waiting until the second time, the cells are fixed and stained to obtain a cell image.
  • the change over time in a cell can be estimated by observing the change of the cells at the first time and the change of the cells at the second time.
  • the number of cells used for observing the intracellular change at the first time and the second time is not limited to one. Images of a plurality of cells are therefore acquired at each of the first time and the second time. For example, if 1000 cells are needed to observe the changes, 2000 cells are photographed across the first time and the second time. Therefore, in order to capture the details of how cells change in response to a stimulus, a plurality of cell images is required at each imaging timing after the stimulus, and a large number of cell images is acquired.
  • the microscope apparatus 20 may capture, as the above-described cell image, luminescence or fluorescence from a coloring material taken into the biological material, or luminescence or fluorescence generated when a substance having a chromophore binds to the biological material.
  • the microscope observation system 1 can acquire a fluorescence image, a confocal image, a super-resolution image, and a two-photon excitation fluorescence microscope image.
  • the method of acquiring the cell image is not limited to the optical microscope.
  • an electron microscope may be used as a method for acquiring a cell image.
  • an image obtained by a different method may be used to acquire the correlation. That is, the type of cell image may be selected as appropriate.
  • the cells in this embodiment are, for example, primary culture cells, established culture cells, tissue section cells, and the like.
  • the sample to be observed may be observed using an aggregate of cells, a tissue sample, an organ, an individual (animal, etc.), and an image containing the cells may be acquired.
  • the state of the cell is not particularly limited, and may be a living state or a fixed state.
  • the state of the cell may be "in vitro". Of course, information from the living state and information from the fixed state may be combined.
  • the cells may be treated with chemiluminescent or fluorescent protein (for example, chemiluminescent or fluorescent protein expressed from an introduced gene (such as green fluorescent protein (GFP))) and observed.
  • the cells may be observed using immunostaining or staining with chemical reagents, or a combination of these. For example, the photoprotein to be used can be selected according to the type of intracellular structure to be discriminated (e.g., the Golgi apparatus).
  • pretreatment for correlation analysis, such as the means for observing these cells and the method for staining them, may be selected as appropriate according to the purpose.
  • for example, dynamic information on a cell is obtained by the method most suitable for capturing the dynamic behavior of the cell, while information on intracellular signal transduction is obtained by the method most suitable for capturing that signal transduction.
  • the pretreatments selected according to these purposes may differ from one another.
  • the well plate WP has one or a plurality of wells W.
  • the well plate WP has 96 wells W arranged 8 × 12, as shown in FIG. 1.
  • the number of wells is not limited to this; for example, 54 wells arranged 6 × 9 may be provided.
  • Cells are cultured in the wells W under specific experimental conditions. Specific experimental conditions include temperature, humidity, culture period, elapsed time since a stimulus was applied, the type, intensity, concentration, and amount of the applied stimulus, the presence or absence of stimulation, induction of biological characteristics, and so on.
  • the stimulus is, for example, a physical stimulus such as electricity, sound wave, magnetism, or light, or a chemical stimulus caused by administration of a substance or a drug.
  • Biological characteristics are characteristics indicating, for example, the stage of cell differentiation, morphology, the number of cells, the behavior of molecules in the cell, the morphology and behavior of organelles, the structure of the nucleus, and the behavior of DNA molecules.
  • FIG. 2 is a block diagram illustrating an example of a functional configuration of each unit included in the analysis apparatus 10 of the present embodiment.
  • the analysis device 10 is a computer device that analyzes an image acquired by the microscope device 20.
  • the analysis device 10 includes a calculation unit 100, a storage unit 200, and a result output unit 300.
  • the image processed by the analysis apparatus 10 is not limited to images captured by the microscope apparatus 20; it may be, for example, an image stored in advance in the storage unit 200 included in the analysis apparatus 10, or an image stored in advance in an external storage device (not illustrated).
  • the calculation unit 100 functions when the processor executes a program stored in the storage unit 200. Some or all of the functional units of the calculation unit 100 may be configured by hardware such as an LSI (Large Scale Integration) or an ASIC (Application Specific Integrated Circuit).
  • the calculation unit 100 includes a cell image acquisition unit 101, a feature amount calculation unit 102, a parameter selection unit 103, a parameter correlation calculation unit 104, an edge weighting unit 105, and a correlation calculation unit 106.
  • the cell image acquisition unit 101 acquires the cell image captured by the imaging unit 22 and supplies the acquired cell image to the feature amount calculation unit 102.
  • the cell image acquired by the cell image acquisition unit 101 includes a plurality of images in which the cell culture state is captured in time series, and a plurality of images in which cells are cultured under various experimental conditions.
  • the feature amount calculation unit 102 calculates a plurality of types of feature amounts of the cell image supplied by the cell image acquisition unit 101. This feature amount includes the brightness, area, variance, and the like of the cell image.
  • for example, the luminance distribution in the acquired image is calculated.
  • the feature amount may be derived from the change in the calculated luminance distribution over a predetermined time, using a plurality of images that differ in time series, or from the change in the luminance distribution accompanying a cell state change such as differentiation.
  • the images used are not limited to a change over time; a plurality of images having different cell state changes such as differentiation may be used. Position information indicating where the luminance changes differently may also be used as the feature amount.
  • the feature amount may also be the behavior of a cell within a predetermined time or the behavior accompanying a cell state change such as differentiation, or the change of the cell shape within a predetermined time or accompanying such a state change. If no change within the predetermined time and no change accompanying a cell state change such as differentiation is recognized in the captured cell image, the absence of change may itself be treated as the feature amount.
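To make the luminance-based feature quantities above concrete, the following sketch computes a few of them (total luminance, area, and luminance variance) from a single segmented cell image. This is an illustration only: the synthetic array, the threshold-based mask, and the `cell_features` helper are assumptions, not the device's actual segmentation or feature set.

```python
import numpy as np

def cell_features(img, threshold=0.5):
    """Illustrative feature quantities for one cell image.

    `img` is a 2-D grayscale array with values in [0, 1]; the simple
    threshold mask stands in for whatever segmentation the device uses.
    """
    mask = img > threshold                    # pixels belonging to the cell
    region = img[mask]
    return {
        "total_luminance": float(region.sum()),  # summed brightness
        "area": int(mask.sum()),                 # pixel count of the region
        "variance": float(region.var()),         # luminance dispersion
    }

# A synthetic 4x4 "cell image" just to show the shapes involved.
img = np.array([[0.0, 0.2, 0.1, 0.0],
                [0.1, 0.9, 0.8, 0.1],
                [0.0, 0.7, 0.6, 0.2],
                [0.0, 0.1, 0.0, 0.0]])
feats = cell_features(img)
```

Computing such a dictionary per cell, per time point, and per feature type yields the multi-axis feature data described below.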
  • the parameter selection unit 103 stores predetermined parameters, reference values for selecting parameters, and the like.
  • the parameter selection unit 103 selects parameters used for calculating edges between feature amounts.
  • a parameter is information for adjusting a value used by the parameter correlation calculation unit 104 to calculate an edge; when the parameter changes, the density of the calculated network changes.
  • parameters selected by the parameter selection unit 103 include a criterion for selecting the feature amounts used for edge calculation and a regularization parameter used for edge calculation. Therefore, when the parameter selected by the parameter selection unit 103 is changed, the number of edges included in the calculated network changes.
  • the parameter selected by the parameter selection unit 103 may be determined by an operation of a user who operates the analysis device 10.
  • the value used for the regularization parameter may be input by a user operation.
  • the feature amounts used for edge calculation may likewise be designated by a user operation.
  • the parameter selection unit 103 supplies the selected parameter to the parameter correlation calculation unit 104.
  • the parameter correlation calculation unit 104 acquires parameters from the parameter selection unit 103.
  • the parameter correlation calculation unit 104 calculates an edge between feature amounts based on the parameters acquired from the parameter selection unit 103.
  • the parameter correlation calculation unit 104 can calculate a network by calculating edges between feature amounts. Network elements include nodes, edges, and the like.
  • the parameter correlation calculation unit 104 supplies the calculated edge between the feature amounts to the edge weighting unit 105 and the correlation calculation unit 106.
  • the parameter selection unit 103 can select a plurality of parameters. In this case, the parameter correlation calculation unit 104 calculates, for each of the plurality of parameters acquired from the parameter selection unit 103, a network including the edges between feature amounts, and supplies the networks to the edge weighting unit 105 and the correlation calculation unit 106.
  • the edge weighting unit 105 acquires from the parameter correlation calculation unit 104 an edge between feature amounts and a plurality of parameters used for the calculation of the edge.
  • the edge weighting unit 105 weights the edges of the feature amounts based on the plurality of networks acquired from the parameter correlation calculation unit 104 and the respective parameters used for calculation of the network. For each of the plurality of parameters selected by the parameter selection unit 103, each network is calculated by the parameter correlation calculation unit 104.
  • the edge weighting unit 105 weights edges using a plurality of calculated networks.
  • the weighted network is supplied to the correlation calculation unit 106.
  • the correlation calculation unit 106 acquires a network from the parameter correlation calculation unit 104 or the edge weighting unit 105.
  • the correlation calculation unit 106 calculates a network to be supplied to the result output unit 300 from the acquired networks. That is, the correlation calculation unit 106 selects a network to be supplied to the result output unit 300 from the acquired networks.
  • the result output unit 300 outputs the calculation result by the calculation unit 100 to the display unit 30.
  • the result output unit 300 may output the calculation result by the calculation unit 100 to an output device other than the display unit 30, a storage device, or the like.
  • the display unit 30 displays the calculation result output by the result output unit 300.
  • FIG. 3 is a flowchart illustrating an example of a calculation procedure of the calculation unit 100 according to the present embodiment. Note that the calculation procedure shown here is an example, and the calculation procedure may be omitted or added.
  • the cell image acquisition unit 101 acquires a cell image (step S10).
  • This cell image includes images of a plurality of types of biological structures of different scales, such as genes, proteins, and organelles.
  • the cell image includes cell shape information. Since cell images contain information on phenotypes, metabolites, proteins, and genes, the correlation between them can be acquired.
  • the feature amount calculation unit 102 extracts an image of each cell from the cell image acquired in step S10 (step S20).
  • the feature amount calculation unit 102 extracts the image of a cell by performing image processing on the cell image, such as contour extraction and pattern matching.
  • the feature quantity calculation unit 102 determines the type of cell for the cell image extracted in step S20 (step S30). Further, the feature amount calculation unit 102 determines the constituent elements of the cells included in the cell image extracted in step S20 based on the determination result in step S30 (step S40).
  • the cell components include organelles such as the cell nucleus, lysosomes, the Golgi apparatus, and mitochondria, as well as the proteins constituting those organelles.
  • in this example the cell type is determined, but it need not be. If the type of cell introduced is determined in advance, that information may be used instead. Of course, the cell type need not be specified at all.
  • the feature quantity calculation unit 102 calculates the feature quantity of the image for each cell component determined in step S40 (step S50).
  • the feature amount includes a luminance value of the pixel, an area of a certain area in the image, a variance value of the luminance of the pixel, and the like. Further, there are a plurality of types of feature amounts according to the constituent elements of the cells.
  • the feature amount of the image of the cell nucleus includes the total luminance value in the nucleus, the area of the nucleus, and the like.
  • the feature amount of the cytoplasm image includes the total luminance value in the cytoplasm, the area of the cytoplasm, and the like.
  • the feature amount of the image of the whole cell includes the total luminance value in the cell, the area of the cell, and the like.
  • the feature amount of the mitochondrial image includes the fragmentation rate. Note that the feature amount calculation unit 102 may calculate the feature amount by normalizing it to a value between 0 (zero) and 1, for example.
  • the feature amount calculation unit 102 may calculate the feature amount based on information on the experimental conditions for the cell associated with the cell image. For example, in the case of a cell image captured when an antibody is reacted with cells, the feature amount calculation unit 102 may calculate a feature amount specific to the antibody reaction. Likewise, in the case of a cell image captured when cells are stained or when a fluorescent protein is added to cells, the feature amount calculation unit 102 may calculate a feature amount specific to the staining or to the added fluorescent protein.
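As a concrete reading of the normalization mentioned above (scaling a feature amount to a value between 0 and 1), a simple min-max rescaling over the observed values could look like the following sketch. The patent does not specify the exact normalization used, so `minmax_normalize` and the sample values are assumptions for illustration only.

```python
import numpy as np

def minmax_normalize(values):
    """Rescale raw feature values into [0, 1].

    One plausible reading of "normalizing to a value between 0 and 1";
    the exact normalization in the device is not specified.
    """
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    if span == 0:                      # constant feature: map everything to 0
        return np.zeros_like(v)
    return (v - v.min()) / span

# Invented raw "area" feature values for four cells.
areas = [120.0, 80.0, 200.0, 160.0]
norm = minmax_normalize(areas)
```

After normalization all feature quantities share a common scale, which keeps any one feature from dominating the later correlation calculation.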
  • the storage unit 200 may include an experimental condition storage unit 202.
  • the experimental condition storage unit 202 stores information on experimental conditions for cells associated with cell images for each cell image.
  • FIG. 4 is a diagram illustrating an example of a feature amount calculation result by the feature amount calculation unit 102 of the present embodiment.
  • the feature amount calculation unit 102 calculates a plurality of feature amounts for the protein 1 for each cell and for each time. In this example, the feature amount calculation unit 102 calculates feature amounts for N cells from cell 1 to cell N. In this example, the feature amount calculation unit 102 calculates feature amounts for seven times from time 1 to time 7. In this example, the feature amount calculation unit 102 calculates K types of feature amounts from the feature amount k1 to the feature amount kK.
  • the feature amount calculation unit 102 calculates the feature amounts along three axes: the axis in the cell direction (axis Nc), the axis in the time direction (axis N), and the axis in the feature quantity direction (axis d1).
  • the K types of feature quantities from feature quantity k1 to feature quantity kK are the combination of feature quantities for protein 1. For other proteins, the types and combinations of feature amounts may differ.
  • the feature amount calculation unit 102 supplies the feature amount calculated in step S50 to the parameter selection unit 103.
  • the parameter selection unit 103 selects a parameter used for calculating the correlation between the feature amounts calculated in step S50 (step S60).
  • the parameter selection unit 103 supplies the selected parameter to the parameter correlation calculation unit 104.
  • the parameter correlation calculation unit 104 calculates a partial correlation coefficient between feature amounts using the parameter selected in step S60 (step S70).
  • the correlation calculation unit 106 uses the partial correlation coefficients calculated in step S70 to calculate a network representing the correlation between the constituent elements of the cell (step S80).
  • FIG. 5 is a diagram illustrating an example of a feature amount for each cell according to the present embodiment.
  • FIG. 5 shows a matrix X of feature amounts that summarizes the feature amounts for each cell.
  • the matrix X is a matrix having an axis N in the row direction and an axis d in the column direction.
  • each element of the matrix X is the average value over the cell population, but a statistic such as the median or the mode can also be used.
  • alternatively, a matrix X holding the feature values of each individual cell may be used.
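A minimal sketch of how one row of the matrix X might be assembled, assuming the mean over the cell population is used for each element as described above (with the median as the alternative statistic the text mentions). The per-cell numbers are invented for illustration.

```python
import numpy as np

# Feature values per cell: rows = cells, columns = feature quantities
# (k1, k2, k3). Aggregating over the cell population gives one row of
# the matrix X; the mean is used here, but the median or mode could be
# substituted, as the text notes.
per_cell = np.array([[0.2, 0.5, 0.9],    # cell 1
                     [0.4, 0.7, 0.5],    # cell 2
                     [0.6, 0.6, 0.7]])   # cell 3

row_mean = per_cell.mean(axis=0)         # average over the cell population
row_median = np.median(per_cell, axis=0) # alternative statistic
```

Stacking one such row per observation (per time point, or per cell when individual cells are used) yields the matrix X with axis N in the row direction and the feature axis in the column direction.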
  • FIG. 6 is a diagram illustrating an example of a network according to the present embodiment.
  • the network is represented by a plurality of nodes ND and a plurality of edges ED connecting the nodes.
  • the node indicates a feature amount used for edge calculation.
  • the edge indicates the correlation between the nodes connected by the edge.
  • the regularization parameter is used when the parameter correlation calculation unit 104 calculates the correlation.
  • the regularization parameter is a parameter λ representing the strength with which the elements of the matrix X described above are regularized.
  • the larger the regularization parameter λ, the sparser the components of the precision matrix tend to be, and the sparser the calculated network. That is, the larger λ is, the smaller the number of edges included in the network.
  • conversely, the smaller λ is, the greater the number of edges included in the network. The network therefore becomes sparser or denser according to the value of λ.
  • in the present embodiment, the regularization parameter is the one used in the Graphical Lasso method. That is, the regularization parameter λ used in the Graphical Lasso method is hereinafter described simply as the regularization parameter.
  • the Graphical Lasso method is an efficient algorithm for estimating a precision matrix from a Gaussian model with L1 regularization. It is described, for example, in Jerome Friedman, Trevor Hastie, and Robert Tibshirani, "Sparse inverse covariance estimation with the graphical lasso", Biostatistics (2008) 9(3), 432-441.
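As an illustration of how the regularization parameter λ controls network sparsity, the following sketch uses scikit-learn's `GraphicalLasso` estimator as a stand-in for the Graphical Lasso computation described here. The synthetic data matrix and the `edge_count` helper are assumptions, not part of the patent; `alpha` plays the role of λ.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Synthetic data matrix X: 200 observations (rows, axis N) x 4 feature
# quantities (columns). Features 0 and 1 share a common factor, so they
# are correlated; features 2 and 3 are independent noise.
base = rng.normal(size=(200, 1))
X = np.hstack([base + 0.5 * rng.normal(size=(200, 1)),
               base + 0.5 * rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1)),
               rng.normal(size=(200, 1))])

# Larger alpha (the regularization parameter) -> sparser precision
# matrix -> fewer edges in the network, as the text describes.
sparse = GraphicalLasso(alpha=0.5).fit(X)
dense = GraphicalLasso(alpha=0.01).fit(X)

def edge_count(model, tol=1e-6):
    """Number of non-zero off-diagonal precision entries, i.e. edges."""
    P = model.precision_
    off = P[np.triu_indices_from(P, k=1)]
    return int((np.abs(off) > tol).sum())
```

Comparing `edge_count(sparse)` with `edge_count(dense)` shows the sparse/dense behavior: the strongly correlated pair survives heavy regularization while weaker, noise-driven edges drop out first.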
  • FIG. 7 is a flowchart showing an example of detailed processing from step S60 to step S80 shown in FIG.
  • the parameter selection unit 103 selects a plurality of regularization parameters of different magnitudes (step S110).
  • the parameter correlation calculation unit 104 acquires the matrix X whose elements are the feature amounts calculated by the feature amount calculation unit 102 (step S120).
  • the parameter correlation calculation unit 104 acquires the regularization parameters λ selected by the parameter selection unit 103 (step S130).
  • the parameter correlation calculation unit 104 calculates the partial correlation coefficients of the matrix X by the Graphical Lasso method using the regularization parameters acquired from the parameter selection unit 103 (step S140).
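The partial correlation coefficients of step S140 can be read off an estimated precision matrix by the standard relation rho_ij = -P_ij / sqrt(P_ii * P_jj). The sketch below, with an invented 3x3 precision matrix, illustrates that a zero precision entry corresponds to a missing edge, as in the FIG. 8 examples.

```python
import numpy as np

def partial_correlations(precision):
    """Partial correlation coefficients from a precision matrix P:
    rho_ij = -P_ij / sqrt(P_ii * P_jj), with 1s on the diagonal.
    A zero off-diagonal entry of P means no edge between those nodes."""
    d = np.sqrt(np.diag(precision))
    rho = -precision / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# An invented precision matrix for a 3-node network; the (0, 2) entry
# is zero, so there is no edge between node 0 and node 2.
P = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -0.5],
              [ 0.0, -0.5,  1.0]])
rho = partial_correlations(P)
```

Each non-zero off-diagonal value of `rho` is the partial correlation coefficient labeling the corresponding edge of the network.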
  • FIG. 8 is a diagram illustrating an example of a network and a partial correlation coefficient calculated by the parameter correlation calculation unit 104.
  • the network shown in FIG. 8 includes node A, node B, node C, and node D, and an edge connecting the respective nodes.
  • the network shown in FIG. 8A is calculated based on the partial correlation coefficients calculated from the matrix X when the regularization parameter λ is 0.1.
  • the value of the partial correlation coefficient indicating the edge A- (1) connecting the node A and the node B is 0.7.
  • the value of the partial correlation coefficient indicating the edge A- (4) connecting the node A and the node C is 0.3.
  • the value of the partial correlation coefficient between node A and node D is 0, and there is no edge A- (2).
  • the value of the partial correlation coefficient indicating the edge A- (3) connecting the node B and the node C is 0.3.
  • the value of the partial correlation coefficient between the node D, the node B, and the node C is 0, and there is no edge.
  • the network shown in FIG. 8B is calculated based on the partial correlation coefficients calculated from the matrix X when the regularization parameter λ is 0.5.
  • the value of the partial correlation coefficient indicating edge B- (1) connecting node A and node B is 0.6.
  • the value of the partial correlation coefficient indicating the edge B- (4) connecting the node A and the node C is 0, and the edge B- (4) does not exist.
  • the value of the partial correlation coefficient indicating the edge B- (2) connecting the node A and the node D is 0.2.
  • the value of the partial correlation coefficient indicating the edge B- (3) connecting the node B and the node C is 0.3.
  • the network shown in FIG. 8C is calculated based on the partial correlation coefficients calculated from the matrix X when the regularization parameter λ is 0.9.
  • the value of the partial correlation coefficient indicating the edge C- (1) connecting the node A and the node B is 0.5.
  • an edge that remains when the value of the regularization parameter λ is large indicates a strong correlation between the feature quantities forming that edge. For example, edge C- (1) shown in FIG. 8C indicates a strong correlation.
  • the parameter correlation calculation unit 104 may acquire a plurality of regularization parameters at a time from the parameter selection unit 103, and may calculate a plurality of partial correlation coefficients for each acquired regularization parameter.
  • the parameter correlation calculation unit 104 may acquire regularization parameters one by one and calculate a partial correlation coefficient each time the parameters are acquired.
  • the parameter correlation calculation unit 104 may compare the calculated networks (step S150) to determine the magnitudes of the regularization parameters acquired from the parameter selection unit 103. Note that step S150 is not essential and may be omitted.
  • the present invention is not limited to this.
  • as the regularization parameter λ, any value may be selected as long as it is within the range of values that can be set as the regularization parameter.
  • the parameter correlation calculation unit 104 supplies the calculated plurality of partial correlation coefficients to the edge weighting unit 105.
  • the edge weighting unit 105 acquires a plurality of partial correlation coefficients from the parameter correlation calculation unit 104.
  • the edge weighting unit 105 weights the edges based on the regularization parameters used for calculating the acquired partial correlation coefficients (step S160).
  • FIG. 9 is a diagram illustrating an example of a result obtained by weighting the network illustrated in FIG. 8 by the edge weighting unit 105.
  • in this example, the weighting uses the partial correlation coefficients calculated with the regularization parameters λ = 0.1, 0.5, and 0.9 shown in FIG. 8.
  • the edge weighting unit 105 weights the acquired edges where the partial correlation coefficient is not 0.
  • for example, for the edges from node A to node B shown in FIGS. 8A to 8C whose partial correlation coefficient is not 0, the edge weighting unit 105 calculates the weighted edge by adding up the values of the regularization parameters used to calculate those partial correlation coefficients.
  • the edge weighting unit 105 calculates the weighted network and supplies it to the correlation calculation unit 106.
  • the correlation calculation unit 106 acquires the weighted network from the edge weighting unit 105.
  • the correlation calculation unit 106 supplies the weighted correlation to the result output unit 300 (step S170).
  • the edge weighting unit 105 calculates a new correlation based on the regularization parameters λ used to calculate the partial correlation coefficients. This is because, in the process of calculating the partial correlation coefficients by the Graphical Lasso method, the presence or absence of a correlation between feature quantities changes depending on the value of the regularization parameter λ. As described above, when the regularization parameter λ is a small value such as 0.1, even weak correlations are calculated; when λ is a large value such as 0.9, only stronger correlations remain.
  • by weighting, the edge weighting unit 105 can calculate a new correlation that takes into account the features of the plurality of correlations calculated by the parameter correlation calculation unit 104.
  • the correlation can also be calculated in consideration of edges whose appearance is unstable. Conventionally, it has been difficult to perform analysis while calculating the correlation between feature quantities within or between cells and adjusting the density of the network.
  • with the analysis apparatus 10, it is possible to calculate a plurality of networks having different densities, weight them, and obtain a single weighted network. Since the weighted network is determined uniquely, the user is saved the trouble of comparing a plurality of density-adjusted networks.
  • FIG. 10 is a diagram illustrating an example of the correlation and partial correlation coefficient calculated by the parameter correlation calculation unit 104.
  • the network shown in FIG. 10 includes node A, node B, node C, and node D, and an edge connecting the respective nodes.
  • the network shown in FIG. 10A is calculated based on the partial correlation coefficients calculated from the matrix X when the regularization parameter λ is 0.2.
  • the value of the partial correlation coefficient indicating the edge A-(1) connecting the node A and the node B is 0.4.
  • the value of the partial correlation coefficient indicating the edge A-(4) connecting the node A and the node C is 0.1.
  • the value of the partial correlation coefficient between node A and node D is 0, and no edge exists.
  • the value of the partial correlation coefficient indicating the edge A-(2) connecting the node B and the node C is 0.3.
  • the value of the partial correlation coefficient between node B and node D is 0, and no edge exists.
  • the value of the partial correlation coefficient indicating the edge A-(3) connecting the node C and the node D is 0.2.
  • the network shown in FIG. 10B is calculated based on the partial correlation coefficients calculated from the matrix X when the regularization parameter λ is 0.7.
  • the value of the partial correlation coefficient indicating the edge B-(1) connecting the node A and the node B is 0.3.
  • the value of the partial correlation coefficient indicating the edge B-(5) connecting the node B and the node D is 0.1.
  • the value of the partial correlation coefficient between other nodes is 0, and there is no edge.
  • the network shown in FIG. 10C is calculated based on the partial correlation coefficients calculated from the matrix X when the regularization parameter λ is 0.9.
  • the value of the partial correlation coefficient indicating the edge C-(1) connecting the node A and the node B is 0.2.
  • the value of the partial correlation coefficient indicating the edge C-(2) connecting the node B and the node C is 0.1.
  • the value of the partial correlation coefficient between other nodes is 0, and there is no edge.
  • FIG. 11 is a diagram illustrating an example of a result of weighting by the edge weighting unit 105 on the networks illustrated in FIG. 10.
  • the edge weighting unit 105 multiplies each acquired partial correlation coefficient value by the value of the regularization parameter used to calculate the partial correlation coefficient, and adds the values for each edge.
  • this makes it possible to take into account differences in the magnitudes of the partial correlation coefficients calculated under the same regularization parameter.
  • the edge weighting unit 105 calculates the weight of the edge ED2-(1) connecting the node A and the node B as 0.47.
  • the edge weighting unit 105 calculates the weight of the edge ED2-(2) connecting the node B and the node C as 0.15.
  • the edge weighting unit 105 calculates the weight of the edge ED2-(3) connecting the node C and the node D as 0.04.
  • the edge weighting unit 105 calculates the weight of the edge ED2-(4) connecting the node A and the node C as 0.02.
  • the edge weighting unit 105 calculates the weight of the edge ED2-(5) connecting the node B and the node D as 0.07.
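The weighting rule above can be reproduced in a few lines: each partial correlation coefficient is multiplied by the regularization parameter λ under which it was obtained, and the products are summed per edge. This is a sketch using the edge values of FIGS. 10A to 10C:

```python
# Partial-correlation networks of FIGS. 10A-10C, keyed by regularization lambda.
networks = {
    0.2: {("A", "B"): 0.4, ("A", "C"): 0.1, ("B", "C"): 0.3, ("C", "D"): 0.2},
    0.7: {("A", "B"): 0.3, ("B", "D"): 0.1},
    0.9: {("A", "B"): 0.2, ("B", "C"): 0.1},
}

# Weight of an edge = sum over networks of (lambda * partial correlation).
weighted = {}
for lam, edges in networks.items():
    for edge, pcor in edges.items():
        weighted[edge] = weighted.get(edge, 0.0) + lam * pcor

# e.g. edge A-B: 0.2*0.4 + 0.7*0.3 + 0.9*0.2 = 0.47, matching FIG. 11.
```

An edge that appears in only one network (such as C-D) still receives a small weight, which is how unstable edges remain visible in the combined network instead of being discarded.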
  • the edge weighting unit 105 can calculate a new correlation in which the features of the plurality of correlations calculated by the parameter correlation calculation unit 104 are emphasized by weighting. Further, as shown in FIGS. 10A to 10C, when, for example, the presence or absence of an edge between node B and node D changes according to the value of the regularization parameter λ, the correlation can be calculated in consideration of such an edge whose appearance is unstable.
  • the edge weighting unit 105 of the present embodiment multiplies the value of the partial correlation coefficient by the value of the regularization parameter used to calculate the partial correlation coefficient.
  • the edge weighting unit 105 performs weighting by adding the values multiplied for each edge. Thereby, the analysis apparatus 10 can calculate the correlation according to the strength of the edge.
  • the partial correlation coefficients to be multiplied by the regularization parameter value may first be normalized so that their maximum value is 1.0.
  • the parameter selected by the parameter selection unit 103 may be any parameter that changes the value of the feature amount included in the matrix X when calculating the partial correlation coefficient.
  • the parameter selection unit 103 selects, from among the feature amounts calculated by the feature amount calculation unit 102, the feature amount used for correlation calculation by the parameter correlation calculation unit 104.
  • the parameter selection unit 103 selects a feature amount used for correlation calculation based on a feature amount that changes due to a stimulus applied to a cell. Specifically, the parameter selection unit 103 selects a feature amount used for correlation calculation according to the magnitude of the feature amount change calculated by the feature amount calculation unit 102.
  • FIG. 12 is a flowchart illustrating an example of detailed processing when the parameter selection unit 103 selects an element of the matrix X.
  • the flow shown in FIG. 12 is an example of a detailed flow of processing from steps S60 to S80 of the flow shown in FIG.
  • the parameter selection unit 103 determines a reference value for determining whether or not to use the feature quantity acquired from the feature quantity calculation unit 102 for correlation calculation (step S310).
  • the reference value is a predetermined value.
  • the reference value may be selected by the user.
  • the parameter selection unit 103 calculates an evaluation value based on the temporal change of the feature amount acquired from the feature amount calculation unit 102.
  • the parameter selection unit 103 compares the determined reference value with the evaluation value, and selects a feature amount used for correlation calculation (step S320).
  • the parameter selection unit 103 supplies the selected feature quantities used for correlation calculation to the parameter correlation calculation unit 104.
  • the parameter correlation calculation unit 104 acquires a feature amount used for correlation calculation from the parameter selection unit 103.
  • the parameter correlation calculation unit 104 calculates partial correlation coefficients using the feature amounts acquired from the parameter selection unit 103 as elements of the matrix X (step S330).
  • the parameter correlation calculation unit 104 calculates partial correlation coefficients by the Graphical Lasso method.
  • the method by which the parameter correlation calculation unit 104 calculates the partial correlation coefficients is not limited to the Graphical Lasso method, and any calculation method may be used.
  • the parameter correlation calculation unit 104 supplies the calculated partial correlation coefficient to the correlation calculation unit 106.
  • the correlation calculation unit 106 calculates a correlation based on the partial correlation coefficient acquired from the parameter correlation calculation unit 104, and outputs the correlation to the result output unit 300 (step S340).
  • FIG. 13 is a diagram illustrating an example of a change in feature amount over time of a stimulated cell.
  • FIG. 13A is a graph in which the temporal changes of the values of the feature quantities D1 and D2 are plotted as lines L1 and L2.
  • the feature quantity D1 corresponds to the elements x1(1) to x1(N) in the vertical direction of the matrix X shown in FIG.
  • the feature amount D2 corresponds to the elements x2(1) to x2(N) in the vertical direction of the matrix X shown in FIG.
  • the parameter selection unit 103 calculates an evaluation value based on the feature amount.
  • the evaluation value is, for example, a value obtained by dividing the difference between the maximum and minimum values of the temporal change of the feature quantity by the average value of that temporal change.
  • the evaluation value of the feature quantity D1 shown in FIG. 13(A) is 0.96.
  • the evaluation value of the feature amount D2 is 0.33.
  • the parameter selection unit 103 compares the calculated evaluation value with the reference value, and supplies a feature amount indicating an evaluation value exceeding the reference value to the parameter correlation calculation unit 104 as a feature amount used for correlation calculation. As an example, when the reference value is 0.25, the parameter selection unit 103 outputs a feature value D1 and a feature value D2 indicating evaluation values exceeding the reference value to the parameter correlation calculation unit 104. Further, when the reference value is 0.5, the parameter selection unit 103 outputs a feature quantity D1 indicating an evaluation value exceeding the reference value to the parameter correlation calculation unit 104.
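The selection rule above can be sketched as follows: the evaluation value is (max − min) / mean of a feature quantity's temporal change, and only features whose evaluation value exceeds the reference value are passed on. The time series here are hypothetical, since the values behind FIG. 13 are not given:

```python
def evaluation_value(series):
    """(max - min) / mean of a feature quantity's temporal change."""
    return (max(series) - min(series)) / (sum(series) / len(series))

# Hypothetical time series: D1 changes strongly after the stimulus, D2 less so.
time_series = {
    "D1": [1.0, 1.4, 2.1, 2.6, 2.9],
    "D2": [1.0, 1.2, 1.35, 1.2, 1.0],
}

def select_features(series_by_name, reference):
    """Keep only features whose evaluation value exceeds the reference value."""
    return [name for name, s in series_by_name.items()
            if evaluation_value(s) > reference]
```

With these series, a reference value of 0.25 passes both D1 and D2, while raising it to 0.5 passes only D1, mirroring how the reference value controls the density of the resulting network.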
  • the parameter correlation calculation unit 104 acquires the feature amount used for calculating the partial correlation coefficient from the parameter selection unit 103, and calculates the correlation.
  • FIG. 14 is a diagram illustrating an example of the correlation calculated by the parameter correlation calculation unit 104.
  • the correlation shown in FIG. 14A is a correlation when the parameter selection unit 103 selects the feature amount D.
  • the correlation shown in FIG. 14B is a correlation when the parameter selection unit 103 does not select the feature amount D.
  • the edge ED15 connecting the node A and the node D (feature amount D) illustrated in FIG. 14A does not appear in FIG. 14B. That is, the density of the network indicating the correlation can be changed according to the feature amount selected by the parameter selection unit 103.
  • the parameter selection unit 103 can change the feature quantities used for calculating the correlation by changing the reference value for selecting them, and can thereby change the density of the network calculated by the parameter correlation calculation unit 104.
  • the edge weighting unit 105 can also perform edge weighting by using the fact that the correlation calculated by the parameter correlation calculation unit 104 changes as the parameter selection unit 103 changes the selection criterion for the feature quantities. Consider the case where 0.25 and 0.5 are used as reference values, and assume that the network obtained with the reference value 0.25 is that of FIG. 14A and the network obtained with the reference value 0.5 is that of FIG. 14B. In this case, the edge weighting unit 105 multiplies the edges of FIG. 14A by 0.25, multiplies the edges of FIG. 14B by 0.5, and adds the results for each edge to obtain a weighted network.
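This variant follows the same pattern as the λ-based weighting: each network's edges are multiplied by the reference value that produced it, and the products are summed per edge. The edge values below are hypothetical, since FIG. 14 is not reproduced here:

```python
# Hypothetical networks: reference 0.25 yields the denser FIG. 14A-like network
# (including edge A-D), reference 0.5 the sparser FIG. 14B-like one.
networks_by_reference = {
    0.25: {("A", "B"): 0.4, ("A", "D"): 0.2},
    0.5:  {("A", "B"): 0.3},
}

# Weight of an edge = sum over networks of (reference value * edge value).
combined = {}
for ref, edges in networks_by_reference.items():
    for edge, value in edges.items():
        combined[edge] = combined.get(edge, 0.0) + ref * value

# Edge A-B: 0.25*0.4 + 0.5*0.3 = 0.25; edge A-D: 0.25*0.2 = 0.05.
```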
  • the analysis apparatus 10 includes the parameter selection unit 103 and the parameter correlation calculation unit 104.
  • the parameter selection unit 103 selects a feature amount used for correlation calculation based on the reference value.
  • the parameter correlation calculation unit 104 calculates the correlation between the feature amounts based on the feature amounts selected by the parameter selection unit 103. Thereby, the analysis apparatus 10 can adjust the density of the calculated correlation. Moreover, since the analysis apparatus 10 can calculate the correlation between feature quantities having a large temporal change, it can calculate correlations only for feature quantities on which the stimulus has a large influence.
  • in the above description, the correlation is calculated using stimulated cells, but the present invention is not limited to this; the correlation may also be calculated for cells that have not received a stimulus.
  • the feature amount is calculated using the cell image supplied by the image acquisition unit 101, but the cell image to be used is not limited to this.
  • the feature amount may be calculated using a cell image that has already been acquired and stored.
  • the feature amount may be calculated using a cell image acquired by an image acquisition unit that is not directly connected to the analysis apparatus 10. In this case, as long as a cell image is supplied from the outside, the analysis apparatus 10 can calculate the correlation from the cell image.
  • a program for executing each process of the analysis apparatus 10 according to the embodiment of the present invention may be recorded on a computer-readable recording medium, and the various processes described above may be performed by reading the program recorded on the recording medium into a computer system and executing it.
  • the “computer system” referred to here includes an OS and hardware such as peripheral devices. The “computer system” also includes a homepage providing environment (or display environment) when a WWW system is used.
  • the “computer-readable recording medium” means a writable nonvolatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory; a portable medium such as a CD-ROM; or a storage device such as a hard disk built into a computer system.
  • the “computer-readable recording medium” also includes a medium that holds the program for a certain period of time, such as a volatile memory (for example, DRAM (Dynamic Random Access Memory)) in a computer system serving as a server or a client when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
  • the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
  • the program may realize only a part of the functions described above. Furthermore, the program may be a so-called differential file (differential program) that realizes the functions described above in combination with a program already recorded in the computer system.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Zoology (AREA)
  • Wood Science & Technology (AREA)
  • Organic Chemistry (AREA)
  • Biotechnology (AREA)
  • Medicinal Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Biochemistry (AREA)
  • Hematology (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Analytical Chemistry (AREA)
  • Food Science & Technology (AREA)
  • Microbiology (AREA)
  • Sustainable Development (AREA)
  • Urology & Nephrology (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Genetics & Genomics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Apparatus Associated With Microorganisms And Enzymes (AREA)

Abstract

An analysis device according to the invention analyzes the correlation between intracellular feature quantities, the analysis device comprising: a feature quantity calculation unit that calculates, from a cell image in which a cell is imaged, a feature quantity of a constituent element constituting part of the cell; a parameter selection unit that selects a parameter to be used in calculating the correlation between feature quantities; a parameter correlation calculation unit that determines a first correlation using a first parameter selected by the parameter selection unit and a second correlation using a second parameter selected by the parameter selection unit; and a correlation calculation unit that calculates a third correlation on the basis of the first and second correlations calculated by the parameter correlation calculation unit.
PCT/JP2016/087021 2016-12-13 2016-12-13 Dispositif d'analyse, programme d'analyse et procédé d'analyse WO2018109826A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2016/087021 WO2018109826A1 (fr) 2016-12-13 2016-12-13 Dispositif d'analyse, programme d'analyse et procédé d'analyse
JP2018556058A JPWO2018109826A1 (ja) 2016-12-13 2016-12-13 算出装置、算出プログラム、及び算出方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2016/087021 WO2018109826A1 (fr) 2016-12-13 2016-12-13 Dispositif d'analyse, programme d'analyse et procédé d'analyse

Publications (1)

Publication Number Publication Date
WO2018109826A1 true WO2018109826A1 (fr) 2018-06-21

Family

ID=62559431

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/087021 WO2018109826A1 (fr) 2016-12-13 2016-12-13 Dispositif d'analyse, programme d'analyse et procédé d'analyse

Country Status (2)

Country Link
JP (1) JPWO2018109826A1 (fr)
WO (1) WO2018109826A1 (fr)

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ATSUSHI NODA ET AL.: "Random Walk ni Motozuita Graph Kozo Modeling", IPSJ SIG NOTES, 2012, pages 1 - 6 *
KIM, H. ET AL.: "Detection of common pathways activated by anticancer drugs using regularized canonical correlation analysis", IPSJ SIG NOTES, 2010, pages 1 - 6 *
NAOTO YAMAOKA ET AL.: "Structure Estimation of Point Distribution Model using Procrustes Analysis and Graphical LASSO", IEICE TECHNICAL REPORT, vol. 111, 2012, pages 109 - 114 *
RYO YOSHIDA ET AL.: "Graphical modeling and statistical inferences of transcription regulatory networks using hybrid functional Petri net", IPSJ SIG NOTES, vol. 2008, no. 58, 2008, pages 9 - 11 *
TAKU YOSHIOKA ET AL.: "Prediction of protein localization sites by a support vector machine", IEICE TECHNICAL REPORT, vol. 101, 2001, pages 63 - 70 *
YUKI SHINMURA ET AL.: "Omomi-tsuki Lasso Path Tsuiseki to Graphical Model eno Oyo", INFORMATION PROCESSING SOCIETY OF JAPAN DAI 74 KAI ZENKOKU TAIKAI, 2012, pages 2 - 309 , 2-310 *

Also Published As

Publication number Publication date
JPWO2018109826A1 (ja) 2019-06-24

Similar Documents

Publication Publication Date Title
US11321836B2 (en) Image-processing device, image-processing method, and image-processing program for setting cell analysis area based on captured image
JP6756339B2 (ja) 画像処理装置、及び画像処理方法
JP2019530847A5 (fr)
JP2022105045A (ja) 解析装置
JP6818041B2 (ja) 解析装置、解析方法、及びプログラム
WO2018193612A1 (fr) Dispositif de calcul de corrélation, procédé de calcul de corrélation, et programme de calcul de corrélation
US20200372652A1 (en) Calculation device, calculation program, and calculation method
JP6777147B2 (ja) 画像選択装置、画像選択プログラム、演算装置、及び表示装置
WO2018066039A1 (fr) Dispositif d'analyse, procédé d'analyse et programme
WO2018109826A1 (fr) Dispositif d'analyse, programme d'analyse et procédé d'analyse
JP6897665B2 (ja) 画像処理装置、観察装置、及びプログラム
Brezovec et al. BIFROST: a method for registering diverse imaging datasets of the Drosophila brain
WO2018122908A1 (fr) Dispositif d'analyse, programme d'analyse et procédé d'analyse
JP6999118B2 (ja) 画像処理装置
WO2019159247A1 (fr) Dispositif de calcul, programme d'analyse et procédé d'analyse
WO2020090089A1 (fr) Dispositif, procédé et programme de détermination
WO2018142570A1 (fr) Dispositif de traitement d'image, dispositif d'analyse, procédé de traitement d'image, programme de traitement d'image et dispositif d'affichage
WO2020070885A1 (fr) Dispositif de détermination, programme de détermination et procédé de détermination

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16923899

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018556058

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16923899

Country of ref document: EP

Kind code of ref document: A1