CN107064019A - Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections - Google Patents

Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Info

Publication number
CN107064019A
CN107064019A CN201710353158.0A CN201710353158A
Authority
CN
China
Prior art keywords
layer
parameter
image
output
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710353158.0A
Other languages
Chinese (zh)
Other versions
CN107064019B (en)
Inventor
张镇西
王森豪
王晶
陈韵竹
张璐薇
姚翠萍
王斯佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University filed Critical Xian Jiaotong University
Priority to CN201710353158.0A priority Critical patent/CN107064019B/en
Publication of CN107064019A publication Critical patent/CN107064019A/en
Application granted granted Critical
Publication of CN107064019B publication Critical patent/CN107064019B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17: Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25: Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/01: Arrangements or apparatus for facilitating the optical investigation
    • G01N2021/0181: Memory or computer-assisted visual determination

Landscapes

  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A device and method for acquiring and segmenting hyperspectral images of unstained pathological sections. A slide sample stage is mounted at the middle of the support, and a computer automatically acquires and processes hyperspectral images of the unstained pathological section to obtain a segmentation result for the lesion region. Based on the spectral differences caused by diseased tissue, the invention uses a PC to synchronously control the relevant modules, acquires a spectral image sequence of the unstained tissue section, preprocesses and stacks the images to generate the corresponding three-dimensional hyperspectral data, and, building on current neural-network classification ideas, develops a spectral classification algorithm that identifies and segments lesion regions from these data. This speeds up the recognition of tissue sections, avoids the human error that staining may introduce, shortens the time required to prepare sections, and, by using an automatic machine algorithm for discrimination, reduces the subjectivity of manual interpretation, so the invention can provide useful assistance to pathologists examining pathological sections.

Description

Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections
Technical field
The invention belongs to the field of detection devices, and in particular relates to a device and method for the rapid acquisition and segmentation of unstained pathological sections based on hyperspectral data.
Background technology
Pathological examination has been widely used in clinical work and scientific research. In the clinic it mainly takes the form of autopsy examination and surgical pathology examination. The purposes of surgical pathology examination are, first, to establish a clear diagnosis and verify the preoperative diagnosis, thereby improving the level of clinical diagnosis; and second, once the diagnosis is clear, to determine the next step of treatment and estimate the prognosis, thereby improving the level of clinical treatment. Clinicopathological analysis also yields a large amount of extremely valuable scientific research data.
A pathological section is one kind of pathological specimen. During preparation, tissue or organs containing the lesion are treated with various chemicals and embedding procedures so that they are fixed and hardened, cut into thin slices on a microtome, mounted on glass slides, and stained with various dyes for examination under the microscope, so that pathological changes can be observed, a pathological diagnosis made, and help provided for clinical diagnosis and treatment. Routine section preparation is the foundation of pathological diagnosis, and section quality is an important guarantee that guides the physician toward an accurate diagnosis. Hematoxylin-eosin staining (the HE staining method) is the most widely used staining method in biological and medical cytology and histology. In practice, because some pathology technicians have not received strict systematic training and lack experience, problems such as sections detaching due to improper handling, uneven staining, blurring, folds, and poor nucleus-cytoplasm contrast often occur and affect diagnosis. Simplifying the section preparation process would therefore be very significant for diagnosis assisted by pathological sections.
The pathological process of a tissue is usually accompanied by changes in tissue structure at the cellular and subcellular levels, which cause shifts in the tissue spectrum. By organically combining traditional two-dimensional imaging with spectroscopy, the two-dimensional spatial information and one-dimensional spectral information of the observed target can be provided simultaneously, and the morphological structure and chemical composition of the tissue can be obtained through spectral data analysis. Applying hyperspectral imaging to the auxiliary detection of pathological sections can effectively reduce the experience required of pathologists while reducing the subjective factors of pathological examination, thereby improving its efficiency.
At present, Siddiqi Anwer M et al. have used a microscopic spectral imager to acquire hyperspectral data of stained cervical cancer sections and, by training a least-squares support vector machine algorithm, performed machine recognition of the sections with sensitivity and specificity both above 90%. In China, the Qingdao Academy for Opto-electronics Engineering has used a similar approach to identify liver cancer sections and likewise achieved good recognition results. However, both of these methods require the cancer sections to be stained, which still places relatively high demands on the pathologist's experimental skill and takes considerable time.
Content of the invention
To overcome the above deficiencies of the prior art, the object of the present invention is to provide a device and method for acquiring and segmenting hyperspectral images of unstained pathological sections, which can automatically acquire hyperspectral images of pathological sections, train an artificial neural network algorithm on the three-dimensional hyperspectral data of sample sections, and recognize unstained liver cancer section images, thereby reducing the complexity of traditional stained-section examination and the subjective factors in the interpretation process and providing a useful reference for the pathologist's diagnosis.
To achieve the above object, the technical scheme of the invention is as follows:
A device for acquiring and segmenting hyperspectral images of unstained pathological sections comprises an alloy steel base plate 104. A xenon light source 105 is mounted on the alloy steel base plate 104, and a fixed support 101 is also mounted vertically on it; from top to bottom, the fixed support 101 carries a hyperspectral image acquisition module 102 and a sample stage 103; the hyperspectral image acquisition module 102, the sample stage 103 and the xenon light source 105 are coaxial, and the hyperspectral image acquisition module 102 is connected to an external computer 106;
the hyperspectral image acquisition module 102 comprises a CCD camera 110 connected to the external computer 106 through a USB interface on its side; the CCD camera 110 is connected, and kept coaxial, with a relay lens 112, a relay lens adjustment ring 113 and a liquid crystal tunable filter 114; the liquid crystal tunable filter 114 is connected to the computer 106 through a side USB interface 115, and the bottom of the liquid crystal tunable filter 114 is connected to an objective lens 118 through a C-mount focusing ring 116 and an aperture 117.
The method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:
Step 1, device setup: place the unstained section on the slide sample stage 104 and adjust the illumination range of the xenon lamp 103 so that the section is evenly illuminated; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image grayscale is clear, and at the same time adjust the relay lens adjustment ring 113, the C-mount focusing ring 116 and the aperture 117 so that the section coincides with the imaging focal plane.
Step 2, parameter setting and acquisition of training sample data: on the computer 106, set the image acquisition parameters of the hyperspectral image acquisition module 102, including the acquisition mode, the start and stop wavelengths, the wavelength resolution and the exposure time; start acquisition to obtain two-dimensional image data of the unstained sample section at each spectral band; after acquisition is complete, keep the same acquisition parameters and separately acquire the images at each band of a completely dark background with no illumination and of a completely blank field of view under the illumination parameters, as background reference information.
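For illustration only (this sketch is not part of the patent text), the band-by-band acquisition of Step 2 can be expressed as a simple control loop. The `CCDCamera`-like and LCTF driver objects and their methods below are hypothetical stand-ins for the vendor SDKs of the actual camera and liquid crystal tunable filter.

```python
# Hypothetical sketch of the Step 2 acquisition loop; camera/lctf methods are
# placeholders for a vendor SDK, not a real API.
import numpy as np

def acquire_spectral_stack(camera, lctf, start_nm, stop_nm, step_nm, exposure_ms):
    """Step through the wavelength range and grab one 2-D frame per band."""
    camera.set_exposure(exposure_ms)                # hypothetical SDK call
    wavelengths = np.arange(start_nm, stop_nm + step_nm, step_nm)
    frames = []
    for wl in wavelengths:
        lctf.set_wavelength(wl)                     # hypothetical SDK call
        frames.append(camera.grab_frame())          # 2-D grayscale array per band
    return wavelengths, np.stack(frames, axis=-1)   # cube of shape (rows, cols, bands)
```

The same routine would be run three times with identical parameters: once on the section, once with the illumination off (dark reference), and once on a blank field of view (white reference).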
Step 3, data preprocessing: preprocess the image at each band with the black/white reference information obtained in Step 2, then stack the bands to obtain a three-dimensional hyperspectral matrix whose z-axis corresponds to the spectral information at each spatial point; apply wide-interval derivative processing to the spectral curve of each point to highlight its features.
Step 4, training of the discrimination neural network: according to the result of the stained section, select from the training sample obtained in Step 3 the wide-interval spectral curves of lesion/non-lesion regions of the section as training data, and train the spectral-data discrimination network.
Step 5, hyperspectral data acquisition and region recognition for the unstained section: acquire the hyperspectral data of the unstained section in the manner of Step 2, apply the same preprocessing as in Step 3, and input the result into the discrimination neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the decisions of all points yields the recognition result for the unstained section.
The specific scheme of Step 3 is as follows:
The spectral images are preprocessed with a pixel correction method. Under the same conditions as the acquisition of the section hyperspectral images, a set of hyperspectral data of complete darkness with no illumination and a set of hyperspectral images of a completely blank field of view are used, and the difference between the hyperspectral image and the dark image at the corresponding wavelength is divided by the difference between the blank image and the dark image. The preprocessing formula is shown as Formula 1:
R = (I_im - I_bl) / (I_wh - I_bl)   (Formula 1)
where R is the optical transmission spectral value obtained by the preprocessing conversion, I_im is the grayscale value of the original hyperspectral image at each wavelength, I_bl is the grayscale value of the image at each wavelength under complete darkness, and I_wh is the grayscale value of the image at each wavelength of the blank field of view under illumination. A wide-interval derivative method is then applied, that is, the independent-variable interval is enlarged before differencing, as shown in Formula 2:
R′(λ) = [R(λ + Δλ) - R(λ)] / Δλ   (Formula 2)
where λ denotes the wavelength and Δλ denotes the wavelength interval.
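As an illustration of Formulas 1 and 2, the following minimal sketch (not part of the patent text) assumes the hyperspectral cube is stored as a NumPy array of shape (rows, cols, bands) with equally spaced wavelengths, and adds a small epsilon to avoid division by zero.

```python
import numpy as np

def pixel_correction(I_im, I_bl, I_wh, eps=1e-12):
    """Formula 1: R = (I_im - I_bl) / (I_wh - I_bl), computed band by band."""
    return (I_im.astype(float) - I_bl) / (I_wh.astype(float) - I_bl + eps)

def wide_interval_derivative(R, wavelengths, delta_lambda):
    """Formula 2: difference quotient of R over a wide wavelength interval delta_lambda."""
    step = wavelengths[1] - wavelengths[0]          # assumes equally spaced bands
    k = int(round(delta_lambda / step))             # interval expressed in bands
    return (R[..., k:] - R[..., :-k]) / float(delta_lambda)
```

For example, with the 400-718 nm range sampled every 3 nm used in the embodiment below, a 177 nm interval corresponds to k = 59 bands.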
The detailed process of Step 4 is as follows:
(1) Establish the discrimination neural network model
The input represents the spectral information of one pixel; moving to the right, convolutional layers and max pooling layers are used to compute a series of feature maps, and the output layer is obtained after the feature maps are classified. The whole network comprises the input layer, convolutional layer C1, max pooling layer S1, convolutional layer C2, max pooling layer S2, fully connected layer F and the output layer. The sample size of the input layer is (n1, 1), where n1 is the number of spectral bands. The first hidden convolutional layer C1 filters the n1 × 1 input data with 24 kernels of size k1 × 1. C1 contains 24 × n2 × 1 nodes, where n2 = n1 - k1 + 1, and there are 24 × (k1 + 1) trainable parameters between the input layer and C1. The max pooling layer S1 is the second hidden layer; its kernel size is (k2, 1), it contains 24 × n3 × 1 nodes, where n3 = n2 / k2, and this layer has no parameters. Convolutional layer C2 contains 24 × n4 × 1 nodes with kernel size (k3, 1), where n4 = n3 - k3 + 1, and there are 24 × (k3 + 1) trainable parameters between S1 and C2. Max pooling layer S2 contains 24 × n5 × 1 nodes with kernel size (k4, 1), where n5 = n4 / k4, and this layer has no parameters. The fully connected layer F contains n6 nodes, and there are (24 × n6 + 1) trainable parameters between this layer and S2. Finally, the output layer contains n7 nodes; between it and the fully connected layer F there are 24 × n6 × 1 nodes and 24 × n6 × 1 × n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to distinguish hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output data categories, n2, n3, n4 and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer.
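The sketch below (not part of the patent; tf.keras is used purely for illustration, and the default kernel sizes k1 to k4 and the fully connected width n6 are placeholder values) shows one way to assemble a per-pixel spectral classifier with the layer sequence C1-S1-C2-S2-F-output described above.

```python
import tensorflow as tf

def build_spectral_cnn(n1, n7, k1=9, k2=2, k3=5, k4=2, n6=100):
    """1-D CNN over a single pixel spectrum of length n1, with n7 output classes."""
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n1, 1)),                       # one spectrum per pixel
        tf.keras.layers.Conv1D(24, k1, activation="tanh"),   # C1: n2 = n1 - k1 + 1
        tf.keras.layers.MaxPooling1D(k2),                    # S1: n3 = n2 / k2
        tf.keras.layers.Conv1D(24, k3, activation="tanh"),   # C2: n4 = n3 - k3 + 1
        tf.keras.layers.MaxPooling1D(k4),                    # S2: n5 = n4 / k4
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n6, activation="tanh"),        # fully connected layer F
        tf.keras.layers.Dense(n7, activation="softmax"),     # n7 class probabilities
    ])
```

With a single input channel, Keras reports 24 × (k1 + 1) trainable parameters for C1 (k1 weights plus one bias per filter), which matches the count stated above.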
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure; counting the input and output layers, it can also be regarded as having 7 layers, denoted as (L + 1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and F. Let x_i be the input of layer i, i.e. the output of layer (i - 1); then x_(i+1) is computed as:
x_(i+1) = f_i(u_i)   (Formula 3)
where
u_i = W_i^T · x_i + b_i
W_i^T is the weight matrix that layer i applies to its input data, b_i is the additive bias vector of layer i, and f_i(·) is the activation function of layer i. The hyperbolic tangent tanh(u) is chosen as the activation function of the convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) is taken as the activation function of the max pooling layers S1 and S2. The classifier performs multi-class classification of the data with n7 output categories; the n7-category regression model is defined as:
y_j = exp(u_j) / Σ_(k=1)^(n7) exp(u_k),  j = 1, ..., n7
where u denotes the input of the output layer. The output vector y = x_(L+1) of the output layer represents the probabilities of all categories in the current iteration.
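The forward pass of Formula 3 and the multi-class output can be written compactly as below. This is an illustrative NumPy sketch, not the patent's implementation; it covers only fully connected layers, with the softmax-style output used for the n7 categories.

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max())             # subtract the maximum for numerical stability
    return e / e.sum()

def forward(x, weights, biases):
    """Each layer computes u_i = W_i^T x_i + b_i; tanh on hidden layers, softmax on output."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W.T @ x + b)        # Formula 3 with f_i = tanh
    W, b = weights[-1], biases[-1]
    return softmax(W.T @ x + b)         # output vector y = x_(L+1): class probabilities
```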
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing the cost function and computing the partial derivatives of the cost function. The loss function used is defined as:
J(θ) = -(1/m) · Σ_(i=1)^(m) Σ_(j=1)^(n7) 1{j = Y^(i)} · log y_j^(i)
where m is the number of training samples and Y is the output quantity. y_j^(i) is the j-th component of the actual output y^(i) of the i-th training sample, a vector of dimension n7. In the desired output Y^(i) of the i-th sample, the probability value of the true label is 1 and that of every other category is 0. 1{j = Y^(i)} means that if j equals the expected category of the i-th training sample its value is 1; otherwise its value is 0. The negative sign is added before J(θ) to make the computation more convenient.
Taking the partial derivative of the loss function with respect to u_i gives the error term
δ_i = ∂J/∂u_i = (W_(i+1) · δ_(i+1)) ∘ f′_i(u_i)
where ∘ denotes element-wise multiplication. For the tanh layers, f′(u_i) can be expressed simply as
f′(u_i) = 1 - f(u_i)^2
Therefore, in each iteration the following update is performed:
θ_i ← θ_i - α · ∂J(θ)/∂θ_i
where θ_i denotes the parameters to be trained and α is the learning factor (α = 0.01). Since θ_i comprises W_i and b_i, the corresponding gradients are
∂J/∂W_i = x_i · δ_i^T and ∂J/∂b_i = δ_i
where δ_i is the error term of layer i defined above.
Through repeated iterative training, the decrease of the cost function becomes smaller and smaller, which means that the actual output and the desired output become closer and closer; when the difference between the actual output and the desired output is sufficiently small, the iteration stops, and the trained deep convolutional neural network model can finally be used for hyperspectral image classification.
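The cross-entropy cost and the update rule above can be illustrated by the short sketch below (not the patent's code; the gradients in `grads` are assumed to have already been computed by backpropagation as described).

```python
import numpy as np

def cross_entropy(y_pred, label, n7):
    """J for one sample: -sum_j 1{j = Y} * log(y_j), with a one-hot expected output."""
    onehot = np.eye(n7)[label]
    return -float(np.sum(onehot * np.log(y_pred + 1e-12)))

def gradient_step(params, grads, alpha=0.01):
    """One iteration of theta_i <- theta_i - alpha * dJ/dtheta_i for every (W_i, b_i)."""
    return [(W - alpha * dW, b - alpha * db)
            for (W, b), (dW, db) in zip(params, grads)]
```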
Compared with the prior art, the present invention, based on the spectral differences caused by diseased tissue, uses a PC to synchronously control the relevant modules, acquires the spectral image sequence of the unstained tissue section, preprocesses and stacks it to generate the corresponding three-dimensional hyperspectral data, and, based on these data and on current neural-network classification ideas, develops a spectral classification algorithm that identifies and segments lesion regions, thereby speeding up the recognition of tissue sections. With this device, pathologists no longer need the traditional HE staining process when preparing pathological sections, which avoids the human error that staining may introduce and shortens the time required for section preparation; the use of an automatic machine algorithm for discrimination reduces the subjectivity of manual interpretation, so the invention can provide useful assistance to pathologists examining pathological sections.
Brief description of the drawings
Fig. 1 is a schematic diagram of the overall structure of the device of the present invention.
Fig. 2 is a schematic diagram of the structure of the hyperspectral image acquisition module of the present invention.
Fig. 3 is a photograph of the device of the present invention.
Fig. 4 is a screenshot of the accompanying acquisition software of the present invention.
Fig. 5 is a flow chart of the section spectral-data recognition algorithm of the present invention.
Fig. 6 is a schematic diagram of the structure of the spectral-data discrimination neural network used by the present invention.
Fig. 7a shows original spectra extracted at multiple points of a tissue section; Fig. 7b shows the spectral curves after wide-interval derivative processing.
Fig. 8 compares the prediction image of the unstained-section classification model of the present invention with the segmentation result of the stained section at the same position.
Fig. 9 compares the prediction image of the unstained-section classification model of the present invention with the recognition result of the support vector machine algorithm.
Embodiment
The present invention will be further described below in conjunction with the accompanying drawings.
Referring to Fig. 1 to Fig. 3, the device for acquiring and segmenting hyperspectral images of unstained pathological sections comprises an alloy steel base plate 104. A xenon light source 105 is mounted on the alloy steel base plate 104, and a fixed support 101 is also mounted vertically on it; from top to bottom, the fixed support 101 carries a hyperspectral image acquisition module 102 and a sample stage 103; the hyperspectral image acquisition module 102, the sample stage 103 and the xenon light source 105 are coaxial, and the hyperspectral image acquisition module 102 is connected to an external computer 106;
the hyperspectral image acquisition module 102 comprises a CCD camera 110 connected to the external computer 106 through a USB interface on its side; the CCD camera 110 is connected, and kept coaxial, with a relay lens 112, a relay lens adjustment ring 113 and a liquid crystal tunable filter 114; the liquid crystal tunable filter 114 is connected to the computer 106 through a side USB interface 115, which enables synchronous control with the CCD camera 110, and its bottom is connected to an objective lens 118 through a C-mount focusing ring 116 and an aperture 117. The relay lens 112 and the objective lens 118 are provided with focusing rings for adjusting the focal plane and an aperture for controlling the amount of incoming light; by adjusting the relay lens adjustment ring 113, the C-mount focusing ring 116 and the aperture 117, accurate focusing and proper exposure of the slide sample placed on the sample stage 104 can be achieved. A xenon lamp optical lens group is arranged beside the sample stage and is controlled by the xenon source drive circuit; the liquid crystal tunable filter and the scientific camera are connected to the computer.
The method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:
Step 1, device setup: place the unstained section on the slide sample stage 104 and adjust the illumination range of the xenon lamp 103 so that the section is evenly illuminated; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image grayscale is clear, and at the same time adjust the relay lens adjustment ring 113, the C-mount focusing ring 116 and the aperture 117 so that the section coincides with the imaging focal plane.
Step 2, referring to Fig. 4, parameter setting and acquisition of training sample data: on the computer 106, set the image acquisition parameters of the hyperspectral image acquisition module 102, including the acquisition mode, the start and stop wavelengths, the wavelength resolution and the exposure time; start acquisition to obtain two-dimensional image data of the unstained sample section at each spectral band; after acquisition is complete, keep the same acquisition parameters and separately acquire the images at each band of a completely dark background with no illumination and of a completely blank field of view under the illumination parameters, as background reference information.
Step 3, data preprocessing: preprocess the image at each band with the black/white reference information obtained in Step 2, then stack the bands to obtain a three-dimensional hyperspectral matrix whose z-axis corresponds to the spectral information at each spatial point; apply wide-interval derivative processing to the spectral curve of each point to highlight its features.
The specific implementation of Step 3 is as follows:
The spectral images are preprocessed with a pixel correction method. Under the same conditions as the acquisition of the section hyperspectral images, a set of hyperspectral data of complete darkness with no illumination and a set of hyperspectral images of a completely blank field of view are used, and the difference between the hyperspectral image and the dark image at the corresponding wavelength is divided by the difference between the blank image and the dark image. The preprocessing formula is shown as Formula 1:
R = (I_im - I_bl) / (I_wh - I_bl)   (Formula 1)
where R is the optical transmission spectral value obtained by the preprocessing conversion, I_im is the grayscale value of the original hyperspectral image at each wavelength, I_bl is the grayscale value of the image at each wavelength under complete darkness, and I_wh is the grayscale value of the image at each wavelength of the blank field of view under illumination. A wide-interval derivative method is then applied, that is, the independent-variable interval is enlarged before differencing, as shown in Formula 2:
R′(λ) = [R(λ + Δλ) - R(λ)] / Δλ   (Formula 2)
where λ denotes the wavelength and Δλ denotes the wavelength interval.
Step 4, training of the discrimination neural network: according to the result of the stained section, select from the training sample obtained in Step 3 the wide-interval spectral curves of lesion/non-lesion regions of the section as training data, and train the spectral-data discrimination network. The network parameter training algorithm is shown in Fig. 5; the detailed process is as follows:
(1) Establish the discrimination neural network model shown in Fig. 6.
The input represents the spectral information of one pixel; moving to the right, convolutional layers and max pooling layers are used to compute a series of feature maps, and the output layer is obtained after the feature maps are classified. The whole network comprises the input layer, convolutional layer C1, max pooling layer S1, convolutional layer C2, max pooling layer S2, fully connected layer F and the output layer. The sample size of the input layer is (n1, 1), where n1 is the number of spectral bands. The first hidden convolutional layer C1 filters the n1 × 1 input data with 24 kernels of size k1 × 1. C1 contains 24 × n2 × 1 nodes, where n2 = n1 - k1 + 1, and there are 24 × (k1 + 1) trainable parameters between the input layer and C1. The max pooling layer S1 is the second hidden layer; its kernel size is (k2, 1), it contains 24 × n3 × 1 nodes, where n3 = n2 / k2, and this layer has no parameters. Convolutional layer C2 contains 24 × n4 × 1 nodes with kernel size (k3, 1), where n4 = n3 - k3 + 1, and there are 24 × (k3 + 1) trainable parameters between S1 and C2. Max pooling layer S2 contains 24 × n5 × 1 nodes with kernel size (k4, 1), where n5 = n4 / k4, and this layer has no parameters. The fully connected layer F contains n6 nodes, and there are (24 × n6 + 1) trainable parameters between this layer and S2. Finally, the output layer contains n7 nodes; between it and the fully connected layer F there are 24 × n6 × 1 nodes and 24 × n6 × 1 × n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to distinguish hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output data categories, n2, n3, n4 and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer.
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure; counting the input and output layers, it can also be regarded as having 7 layers, denoted as (L + 1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and F. Let x_i be the input of layer i, i.e. the output of layer (i - 1); then x_(i+1) is computed as:
x_(i+1) = f_i(u_i)   (Formula 3)
where
u_i = W_i^T · x_i + b_i
W_i^T is the weight matrix that layer i applies to its input data, b_i is the additive bias vector of layer i, and f_i(·) is the activation function of layer i. The hyperbolic tangent tanh(u) is chosen as the activation function of the convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) is taken as the activation function of the max pooling layers S1 and S2. The classifier performs multi-class classification of the data with n7 output categories; the n7-category regression model is defined as:
y_j = exp(u_j) / Σ_(k=1)^(n7) exp(u_k),  j = 1, ..., n7
where u denotes the input of the output layer. The output vector y = x_(L+1) of the output layer represents the probabilities of all categories in the current iteration.
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing the cost function and computing the partial derivatives of the cost function. The loss function used is defined as:
J(θ) = -(1/m) · Σ_(i=1)^(m) Σ_(j=1)^(n7) 1{j = Y^(i)} · log y_j^(i)
where m is the number of training samples and Y is the output quantity. y_j^(i) is the j-th component of the actual output y^(i) of the i-th training sample, a vector of dimension n7. In the desired output Y^(i) of the i-th sample, the probability value of the true label is 1 and that of every other category is 0. 1{j = Y^(i)} means that if j equals the expected category of the i-th training sample its value is 1; otherwise its value is 0. The negative sign is added before J(θ) to make the computation more convenient.
Taking the partial derivative of the loss function with respect to u_i gives the error term
δ_i = ∂J/∂u_i = (W_(i+1) · δ_(i+1)) ∘ f′_i(u_i)
where ∘ denotes element-wise multiplication. For the tanh layers, f′(u_i) can be expressed simply as
f′(u_i) = 1 - f(u_i)^2
Therefore, in each iteration the following update is performed:
θ_i ← θ_i - α · ∂J(θ)/∂θ_i
where θ_i denotes the parameters to be trained and α is the learning factor (α = 0.01). Since θ_i comprises W_i and b_i, the corresponding gradients are
∂J/∂W_i = x_i · δ_i^T and ∂J/∂b_i = δ_i
where δ_i is the error term of layer i defined above.
Through repeated iterative training, the decrease of the cost function becomes smaller and smaller, which means that the actual output and the desired output become closer and closer; when the difference between the actual output and the desired output is sufficiently small, the iteration stops, and the trained deep convolutional neural network model can finally be used for hyperspectral image classification.
Step 5, hyperspectral data acquisition and region recognition for the unstained section: acquire the hyperspectral data of the unstained section in the manner of Step 2, apply the same preprocessing as in Step 3, and input the result into the discrimination neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the decisions of all points yields the recognition result for the unstained section.
Verification of the device and method using stained/unstained sections of the same position
To verify the feasibility of this device and method, taking liver cancer sections as an example, we carried out system verification using stained/unstained sections of the same position provided by Mengchao Hepatobiliary Hospital of Fujian Medical University. The sections were prepared as follows: pathological tissue was obtained from the body of a liver cancer patient, immersed in formalin solution for fixation, and embedded in paraffin after complete dehydration; two flat slices of identical thickness were cut from the same tissue with a microtome. One slice was stained with the conventional hematoxylin-eosin method to produce an H&E-stained section (used as the theoretical control), while the other was placed directly on a glass slide and dewaxed with xylene to produce an unstained tissue section. The hyperspectral image acquisition conditions were: xenon light source at an intensity of 30 mW/cm², wavelength range 400-718 nm, wavelength interval 3 nm.
For this batch of samples, we chose a wavelength interval of 177 nm for the derivative spectra used to distinguish the components. The original spectra are shown in Fig. 7a and the curves after wide-interval derivative processing in Fig. 7b. The processed standard data were used as the input of the neural network classification algorithm to train the network parameters, after which segmentation verification was carried out on the standard sections; the prediction results are shown in Fig. 8. The segmentation results were also compared with those of the traditional support vector machine segmentation algorithm, as shown in Fig. 9. Using the precision calculation formula shown in Formula 13, the cancer-region segmentation precision of the two algorithms was calculated; the results are given in Table 1.
Table 1. Classification accuracy of the deep convolutional neural network model compared with the support vector machine classifier
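Formula 13 itself is not reproduced in the text above. As one common choice, a pixel-wise segmentation precision can be computed as the fraction of pixels whose predicted label matches the reference; the sketch below is illustrative only and is not the patent's formula.

```python
import numpy as np

def segmentation_accuracy(pred_mask, ref_mask):
    """Fraction of pixels whose predicted label equals the reference label."""
    pred = np.asarray(pred_mask)
    ref = np.asarray(ref_mask)
    return float((pred == ref).mean())
```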
The experimental results show that the device can effectively acquire hyperspectral images of unstained liver cancer sections and that the accompanying software algorithm segments the cancer region well. Compared with the traditional hematoxylin-eosin-stained pathological section method, it eliminates the microscope and the complicated staining process and reduces the artifact interference of the conventional method, so it can provide useful diagnostic assistance for pathologists.

Claims (4)

1. A device for acquiring and segmenting hyperspectral images of unstained pathological sections, characterized in that it comprises an alloy steel base plate (104); a xenon light source (105) is mounted on the alloy steel base plate (104), and a fixed support (101) is also mounted vertically on the alloy steel base plate (104); the fixed support (101) carries, from top to bottom, a hyperspectral image acquisition module (102) and a sample stage (103); the hyperspectral image acquisition module (102), the sample stage (103) and the xenon light source (105) are coaxial, and the hyperspectral image acquisition module (102) is connected to an external computer (106);
the hyperspectral image acquisition module (102) comprises a CCD camera (110) connected to the external computer (106) through a USB interface on the side of the CCD camera (110); the CCD camera (110) is connected, and kept coaxial, with a relay lens (112), a relay lens adjustment ring (113) and a liquid crystal tunable filter (114); the liquid crystal tunable filter (114) is connected to the computer (106) through a side USB interface (115); and the bottom of the liquid crystal tunable filter (114) is connected to an objective lens (118) through a C-mount focusing ring (116) and an aperture (117).
2. A segmentation method based on the device for acquiring and segmenting hyperspectral images of unstained pathological sections according to claim 1, characterized in that it comprises the following steps:
Step 1, device setup: place the unstained section on the slide sample stage (104) and adjust the illumination range of the xenon lamp (103) so that the section is evenly illuminated; set the parameters of the CCD camera (110) and the liquid crystal tunable filter (114) through the computer (106) so that the image grayscale is clear, and at the same time adjust the relay lens adjustment ring (113), the C-mount focusing ring (116) and the aperture (117) so that the section coincides with the imaging focal plane;
Step 2, parameter setting and acquisition of training sample data: on the computer (106), set the image acquisition parameters of the hyperspectral image acquisition module (102), including the acquisition mode, the start and stop wavelengths, the wavelength resolution and the exposure time; start acquisition to obtain two-dimensional image data of the unstained sample section at each spectral band; after acquisition is complete, keep the same acquisition parameters and separately acquire the images at each band of a completely dark background with no illumination and of a completely blank field of view under the illumination parameters, as background reference information;
Step 3, data preprocessing: preprocess the image at each band with the black/white reference information obtained in Step 2, then stack the bands to obtain a three-dimensional hyperspectral matrix whose z-axis corresponds to the spectral information at each spatial point; apply wide-interval derivative processing to the spectral curve of each point to highlight its features;
Step 4, training of the discrimination neural network: according to the result of the stained section, select from the training sample obtained in Step 3 the wide-interval spectral curves of lesion/non-lesion regions of the section as training data, and train the spectral-data discrimination network;
Step 5, hyperspectral data acquisition and region recognition for the unstained section: acquire the hyperspectral data of the unstained section in the manner of Step 2, apply the same preprocessing as in Step 3, and input the result into the discrimination neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the decisions of all points yields the recognition result for the unstained section.
3. The segmentation method according to claim 2, characterized in that the specific scheme of Step 3 is as follows:
the spectral images are preprocessed with a pixel correction method; under the same conditions as the acquisition of the section hyperspectral images, a set of hyperspectral data of complete darkness with no illumination and a set of hyperspectral images of a completely blank field of view are used, and the difference between the hyperspectral image and the dark image at the corresponding wavelength is divided by the difference between the blank image and the dark image; the preprocessing formula is shown as Formula 1:
R = (I_im - I_bl) / (I_wh - I_bl)   (Formula 1)
where R is the optical transmission spectral value obtained by the preprocessing conversion, I_im is the grayscale value of the original hyperspectral image at each wavelength, I_bl is the grayscale value of the image at each wavelength under complete darkness, and I_wh is the grayscale value of the image at each wavelength of the blank field of view under illumination; a wide-interval derivative method is then applied, that is, the independent-variable interval is enlarged before differencing, as shown in Formula 2:
R′(λ) = [R(λ + Δλ) - R(λ)] / Δλ   (Formula 2)
where λ denotes the wavelength and Δλ denotes the wavelength interval.
4. The segmentation method according to claim 2, characterized in that the detailed process of Step 4 is as follows:
(1) Establish the discrimination neural network model
The input represents the spectral information of one pixel; moving to the right, convolutional layers and max pooling layers are used to compute a series of feature maps, and the output layer is obtained after the feature maps are classified; the whole network comprises the input layer, convolutional layer C1, max pooling layer S1, convolutional layer C2, max pooling layer S2, fully connected layer F and the output layer; the sample size of the input layer is (n1, 1), where n1 is the number of spectral bands; the first hidden convolutional layer C1 filters the n1 × 1 input data with 24 kernels of size k1 × 1; C1 contains 24 × n2 × 1 nodes, where n2 = n1 - k1 + 1, and there are 24 × (k1 + 1) trainable parameters between the input layer and C1; the max pooling layer S1 is the second hidden layer, its kernel size is (k2, 1), it contains 24 × n3 × 1 nodes, where n3 = n2 / k2, and this layer has no parameters; convolutional layer C2 contains 24 × n4 × 1 nodes with kernel size (k3, 1), where n4 = n3 - k3 + 1, and there are 24 × (k3 + 1) trainable parameters between S1 and C2; max pooling layer S2 contains 24 × n5 × 1 nodes with kernel size (k4, 1), where n5 = n4 / k4, and this layer has no parameters; the fully connected layer F contains n6 nodes, and there are (24 × n6 + 1) trainable parameters between this layer and S2; finally, the output layer contains n7 nodes, and between it and the fully connected layer F there are 24 × n6 × 1 nodes and 24 × n6 × 1 × n7 trainable parameters; a convolutional neural network classifier with the above parameters is built to distinguish hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output data categories, n2, n3, n4 and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer;
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure; counting the input and output layers, it can also be regarded as having 7 layers, denoted as (L + 1) layers with L = 6; the input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and F; let x_i be the input of layer i, i.e. the output of layer (i - 1); then x_(i+1) is computed as:
x_(i+1) = f_i(u_i)   (Formula 3)
where
u_i = W_i^T · x_i + b_i
W_i^T is the weight matrix that layer i applies to its input data, b_i is the additive bias vector of layer i, and f_i(·) is the activation function of layer i; the hyperbolic tangent tanh(u) is chosen as the activation function of the convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) is taken as the activation function of the max pooling layers S1 and S2; the classifier performs multi-class classification of the data with n7 output categories, and the n7-category regression model is defined as:
y_j = exp(u_j) / Σ_(k=1)^(n7) exp(u_k),  j = 1, ..., n7
where u denotes the input of the output layer; the output vector y = x_(L+1) of the output layer represents the probabilities of all categories in the current iteration;
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing the cost function and computing the partial derivatives of the cost function, and the loss function used is defined as:
J(θ) = -(1/m) · Σ_(i=1)^(m) Σ_(j=1)^(n7) 1{j = Y^(i)} · log y_j^(i)
where m is the number of training samples and Y is the output quantity; y_j^(i) is the j-th component of the actual output y^(i) of the i-th training sample, a vector of dimension n7; in the desired output Y^(i) of the i-th sample, the probability value of the true label is 1 and that of every other category is 0; 1{j = Y^(i)} means that if j equals the expected category of the i-th training sample its value is 1, and otherwise its value is 0; the negative sign is added before J(θ) to make the computation more convenient;
taking the partial derivative of the loss function with respect to u_i gives the error term
δ_i = ∂J/∂u_i = (W_(i+1) · δ_(i+1)) ∘ f′_i(u_i)
where ∘ denotes element-wise multiplication; for the tanh layers, f′(u_i) can be expressed simply as
f′(u_i) = 1 - f(u_i)^2
therefore, in each iteration the following update is performed:
θ_i ← θ_i - α · ∂J(θ)/∂θ_i
where θ_i denotes the parameters to be trained and α is the learning factor (α = 0.01); since θ_i comprises W_i and b_i, the corresponding gradients are
∂J/∂W_i = x_i · δ_i^T and ∂J/∂b_i = δ_i
where δ_i is the error term of layer i defined above;
through repeated iterative training, the decrease of the cost function becomes smaller and smaller, which means that the actual output and the desired output become closer and closer; when the difference between the actual output and the desired output is sufficiently small, the iteration stops, and the trained deep convolutional neural network model can finally be used for hyperspectral image classification.
CN201710353158.0A 2017-05-18 2017-05-18 Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections Active CN107064019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710353158.0A CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710353158.0A CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Publications (2)

Publication Number Publication Date
CN107064019A true CN107064019A (en) 2017-08-18
CN107064019B CN107064019B (en) 2019-11-26

Family

ID=59610992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710353158.0A Active CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections

Country Status (1)

Country Link
CN (1) CN107064019B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843593A (en) * 2017-10-13 2018-03-27 上海工程技术大学 Textile material recognition method and system based on hyperspectral imaging technology
CN109034208A (en) * 2018-07-03 2018-12-18 怀光智能科技(武汉)有限公司 Cervical cell pathological section classification method combining high and low resolution
CN109272492A (en) * 2018-08-24 2019-01-25 深思考人工智能机器人科技(北京)有限公司 Method and system for processing cytopathology smears
CN109489816A (en) * 2018-10-23 2019-03-19 华东师范大学 Microscopic hyperspectral imaging platform and method for large-area data cube acquisition
CN109815945A (en) * 2019-04-01 2019-05-28 上海徒数科技有限公司 Respiratory tract examination result interpretation system and method based on image recognition
CN110008836A (en) * 2019-03-06 2019-07-12 华东师范大学 Feature extraction method for hyperspectral images of pathological tissue sections
CN110517258A (en) * 2019-08-30 2019-11-29 山东大学 Cervical cancer image recognition device and system based on hyperspectral imaging technology
CN110600106A (en) * 2019-08-28 2019-12-20 上海联影智能医疗科技有限公司 Pathological section processing method, computer device and storage medium
CN111325757A (en) * 2020-02-18 2020-06-23 西北工业大学 Point cloud identification and segmentation method based on Bayesian neural network
CN111727371A (en) * 2018-02-15 2020-09-29 国立大学法人新潟大学 System, program and method for discriminating hypermutated cancer
CN112712877A (en) * 2020-12-07 2021-04-27 西安电子科技大学 Large-field-of-view, high-throughput, high-resolution pathological section analyzer
CN112862743A (en) * 2019-11-27 2021-05-28 静宜大学 Artificial intelligence cell detection method and system using hyperspectral data analysis
CN113065403A (en) * 2021-03-05 2021-07-02 浙江大学 Hyperspectral imaging-based machine learning cell classification method and device
CN113450305A (en) * 2020-03-26 2021-09-28 太原理工大学 Medical image processing method, system, equipment and readable storage medium
CN114207675A (en) * 2019-05-28 2022-03-18 佩治人工智能公司 Systems and methods for processing images to prepare slides for processed images for digital pathology
CN114820502A (en) * 2022-04-21 2022-07-29 济宁医学院附属医院 Staining detection method for protein kinase CK2 in intestinal mucosa tissue
CN115236015A (en) * 2022-07-21 2022-10-25 华东师范大学 Puncture sample pathology analysis system and method based on hyperspectral imaging technology
CN115728236A (en) * 2022-11-21 2023-03-03 山东大学 Hyperspectral image acquisition and processing system and working method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011089895A (en) * 2009-10-22 2011-05-06 Arata Satori Device and method of hyperspectral imaging
CN104316473A (en) * 2014-10-28 2015-01-28 南京农业大学 Gender determination method for early embryos in incubated chicken hatching eggs based on hyperspectral images
US9345428B2 * 2004-11-29 2016-05-24 Hypermed Imaging, Inc. Hyperspectral imaging of angiogenesis
CN106097355A (en) * 2016-06-14 2016-11-09 山东大学 Microscopic hyperspectral image processing method for gastrointestinal tumors based on convolutional neural networks
CN106226247A (en) * 2016-07-15 2016-12-14 暨南大学 Cell detection method based on hyperspectral microscopic imaging technology

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9345428B2 (en) * 2004-11-29 2016-05-24 Hypermed Imaging, Inc. Hyperspectral imaging of angiogenesis
JP2011089895A (en) * 2009-10-22 2011-05-06 Arata Satori Device and method of hyperspectral imaging
CN104316473A (en) * 2014-10-28 2015-01-28 南京农业大学 Gender determination method for chicken hatching egg incubation early embryo based on hyperspectral image
CN106097355A (en) * 2016-06-14 2016-11-09 山东大学 The micro-Hyperspectral imagery processing method of gastroenteric tumor based on convolutional neural networks
CN106226247A (en) * 2016-07-15 2016-12-14 暨南大学 A kind of cell detection method based on EO-1 hyperion micro-imaging technique

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
于翠荣 et al.: "Hyperspectral imaging technology for extracting cancer information from liver sections" (高光谱成像技术用于肝切片癌变信息的提取), 《科学技术与工程》 (Science Technology and Engineering) *
周湘连: "Research on detection of basal cell carcinoma and liver cancer pathological sections based on hyperspectral imaging" (基于高光谱成像的基底细胞癌和肝癌病理切片检测研究), 《西安交通大学机构知识库》 (Xi'an Jiaotong University Institutional Repository) *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843593A (en) * 2017-10-13 2018-03-27 上海工程技术大学 Textile material recognition method and system based on hyperspectral imaging technology
US11295447B2 (en) 2018-02-15 2022-04-05 Niigata University System, program, and method for determining hypermutated tumor
CN111727371B (en) * 2018-02-15 2022-11-04 国立大学法人新潟大学 System, program and method for discriminating hypermutated cancer
CN111727371A (en) * 2018-02-15 2020-09-29 国立大学法人新潟大学 System, program and method for discriminating hypermutated cancer
CN109034208A (en) * 2018-07-03 2018-12-18 怀光智能科技(武汉)有限公司 Cervical cell pathological section classification method combining high and low resolution
CN109034208B (en) * 2018-07-03 2020-10-23 怀光智能科技(武汉)有限公司 High-low resolution combined cervical cell slice image classification system
CN109272492B (en) * 2018-08-24 2022-02-15 深思考人工智能机器人科技(北京)有限公司 Method and system for processing cytopathology smear
CN109272492A (en) * 2018-08-24 2019-01-25 深思考人工智能机器人科技(北京)有限公司 Method and system for processing cytopathology smears
CN109489816A (en) * 2018-10-23 2019-03-19 华东师范大学 Microscopic hyperspectral imaging platform and method for large-area data cube acquisition
CN110008836B (en) * 2019-03-06 2023-04-25 华东师范大学 Feature extraction method of hyperspectral image of pathological tissue slice
CN110008836A (en) * 2019-03-06 2019-07-12 华东师范大学 Feature extraction method for hyperspectral images of pathological tissue sections
CN109815945A (en) * 2019-04-01 2019-05-28 上海徒数科技有限公司 Respiratory tract examination result interpretation system and method based on image recognition
CN109815945B (en) * 2019-04-01 2024-04-30 上海徒数科技有限公司 Respiratory tract examination result interpretation system and method based on image recognition
US11869185B2 (en) 2019-05-28 2024-01-09 PAIGE.AI, Inc. Systems and methods for processing images to prepare slides for processed images for digital pathology
US11676274B2 (en) 2019-05-28 2023-06-13 PAIGE.AI, Inc. Systems and methods for processing images to prepare slides for processed images for digital pathology
CN114207675A (en) * 2019-05-28 2022-03-18 佩治人工智能公司 System and method for processing images to prepare slides for processed images for digital pathology
CN110600106A (en) * 2019-08-28 2019-12-20 上海联影智能医疗科技有限公司 Pathological section processing method, computer device and storage medium
CN110600106B (en) * 2019-08-28 2022-07-05 上海联影智能医疗科技有限公司 Pathological section processing method, computer device and storage medium
CN110517258A (en) * 2019-08-30 2019-11-29 山东大学 Cervical cancer image recognition device and system based on hyperspectral imaging technology
CN112862743A (en) * 2019-11-27 2021-05-28 静宜大学 Artificial intelligence cell detection method and system using hyperspectral data analysis
CN111325757A (en) * 2020-02-18 2020-06-23 西北工业大学 Point cloud identification and segmentation method based on Bayesian neural network
CN111325757B (en) * 2020-02-18 2022-12-23 西北工业大学 Point cloud identification and segmentation method based on Bayesian neural network
CN113450305B (en) * 2020-03-26 2023-01-24 太原理工大学 Medical image processing method, system, equipment and readable storage medium
CN113450305A (en) * 2020-03-26 2021-09-28 太原理工大学 Medical image processing method, system, equipment and readable storage medium
WO2022121284A1 (en) * 2020-12-07 2022-06-16 西安电子科技大学 Pathological section analyzer with large field of view, high throughput and high resolution
CN112712877B (en) * 2020-12-07 2024-02-09 西安电子科技大学 Large-field-of-view, high-throughput, high-resolution pathological section analyzer
CN112712877A (en) * 2020-12-07 2021-04-27 西安电子科技大学 Large-field-of-view, high-throughput, high-resolution pathological section analyzer
CN113065403A (en) * 2021-03-05 2021-07-02 浙江大学 Hyperspectral imaging-based machine learning cell classification method and device
CN114820502A (en) * 2022-04-21 2022-07-29 济宁医学院附属医院 Staining detection method for protein kinase CK2 in intestinal mucosa tissue
CN114820502B (en) * 2022-04-21 2023-10-24 济宁医学院附属医院 Staining detection method for protein kinase CK2 in intestinal mucosa tissue
CN115236015A (en) * 2022-07-21 2022-10-25 华东师范大学 Puncture sample pathological analysis system and method based on hyperspectral imaging technology
CN115236015B (en) * 2022-07-21 2024-05-03 华东师范大学 Puncture sample pathology analysis system and method based on hyperspectral imaging technology
CN115728236A (en) * 2022-11-21 2023-03-03 山东大学 Hyperspectral image acquisition and processing system and working method thereof

Also Published As

Publication number Publication date
CN107064019B (en) 2019-11-26

Similar Documents

Publication Publication Date Title
CN107064019B (en) Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections
US11893739B2 (en) Method and system for digital staining of label-free fluorescence images using deep learning
US10083340B2 (en) Automated cell segmentation quality control
US6453060B1 (en) Method and apparatus for deriving separate images from multiple chromogens in a branched image analysis system
ES2301706T3 (en) METHOD OF QUANTITATIVE VIDEOMICROSCOPY AND ASSOCIATED SYSTEM AS WELL AS THE SOFWARE INFORMATION PROGRAM PRODUCT.
CN110120047A (en) Image Segmentation Model training method, image partition method, device, equipment and medium
CN107111874A (en) System and method for the coexpression analysis during fraction is calculated to be immunized
CN109903284A (en) A kind of other method and system of HER2 immunohistochemistry image automatic judging
CN109903280A (en) Tumour determines system, method and storage medium
US20230368379A1 (en) Image processing method and apparatus
AU2015372563A1 (en) Vessel analysis in multiplexed images
Wang et al. A deep learning framework design for automatic blastocyst evaluation with multifocal images
WO2021198247A1 (en) Optimal co-design of hardware and software for virtual staining of unlabeled tissue
CN106706643A (en) Liver cancer comparison section detection device and detection method
US20040014165A1 (en) System and automated and remote histological analysis and new drug assessment
JP2010261762A (en) Specimen preparing device and specimen preparing method
WO2021198252A1 (en) Virtual staining logic
WO2018128091A1 (en) Image analysis program and image analysis method
US20110262907A1 (en) Procecure for preparing a processed virtual analysis image
US20200074628A1 (en) Image processing apparatus, imaging system, image processing method and computer readable recoding medium
US8744827B2 (en) Method for preparing a processed virtual analysis plate
CN112883770A (en) PD-1/PD-L1 pathological picture identification method and device based on deep learning
US20210174147A1 (en) Operating method of image processing apparatus, image processing apparatus, and computer-readable recording medium
KR20060127403A (en) Automatic analysis of cellular samples
Chen et al. High-throughput strategy for profiling sequential section with multiplex staining of mouse brain

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant