CN107064019B - Device and method for acquiring and segmenting hyperspectral images of unstained (dye-free) pathological sections - Google Patents


Info

Publication number
CN107064019B
CN107064019B (application CN201710353158.0A / CN201710353158A)
Authority
CN
China
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710353158.0A
Other languages
Chinese (zh)
Other versions
CN107064019A (en)
Inventor
张镇西
王森豪
王晶
陈韵竹
张璐薇
姚翠萍
王斯佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN201710353158.0A
Publication of CN107064019A
Application granted
Publication of CN107064019B
Status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01 Arrangements or apparatus for facilitating the optical investigation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/01 Arrangements or apparatus for facilitating the optical investigation
    • G01N2021/0181 Memory or computer-assisted visual determination

Abstract

A device and method for acquiring and segmenting hyperspectral images of unstained (dye-free) pathological sections. A section sample platform is mounted on the middle of the support, and a computer automatically acquires and processes hyperspectral images of unstained pathological sections to obtain lesion-region segmentation results. Based on the spectral differences caused by lesioned tissue, the invention uses a PC to synchronously control the relevant modules, acquires the spectral image sequence of an unstained pathological section, and preprocesses and stacks it to generate the corresponding three-dimensional hyperspectral data; on the basis of this data, a spectral classification algorithm incorporating current neural-network classification ideas is developed to identify and segment lesion regions. This accelerates the recognition speed and efficiency of pathological sections, avoids the human error that may be introduced during staining, shortens the time required for section preparation, and, by using automatic machine discrimination, reduces the subjectivity of manual identification, providing useful assistance to pathologists in examining pathological sections.

Description

Device and method for acquiring and segmenting hyperspectral images of unstained pathological sections
Technical field
The invention belongs to the field of detection devices, and in particular relates to a device and method for rapid acquisition and segmentation of unstained (dye-free) pathological sections based on hyperspectral data.
Background technique
Pathological examination has been widely used in clinical work and scientific research. Clinically, it mainly takes the form of autopsy and surgical pathology. The purpose of surgical pathology is, first, to clarify the diagnosis and verify the preoperative diagnosis, thereby improving the level of clinical diagnosis; and second, once the diagnosis is clear, to determine the next treatment plan and estimate the prognosis, thereby improving the level of clinical treatment. Clinicopathological analysis also yields a large amount of extremely valuable research data.
A pathological section is one kind of pathological specimen. A piece of diseased tissue or organ is treated with various chemicals and embedding media so that it is fixed and hardened, cut into thin slices on a microtome, mounted on a glass slide, and stained with various dyes for examination under a microscope, so that pathological changes can be observed and a pathological diagnosis made, providing help for clinical diagnosis and treatment. Routine section preparation is the foundation of pathological diagnosis, and section quality is an important guarantee for guiding physicians toward an accurate diagnosis. Hematoxylin-eosin staining, the HE staining method, is the most widely used staining method in biological and medical histology. In actual preparation, because some pathology technicians lack strict systematic training and have insufficient working experience, improper operation often causes sections to detach, stain unevenly, appear blurred or wrinkled, or show poor nuclear-cytoplasmic contrast, all of which affect diagnosis. Simplifying the section preparation process would therefore be very significant for diagnosis assisted by pathological sections.
The pathological process of a tissue is usually accompanied by changes in tissue structure at the cellular and subcellular levels, which cause shifts in the tissue's spectrum. By organically combining traditional two-dimensional imaging with spectroscopy, the two-dimensional spatial information and one-dimensional spectral information of the observed target can be obtained simultaneously, and the morphological structure and chemical composition of the tissue can be derived through spectral data analysis. Applying hyperspectral imaging to the auxiliary examination of pathological sections can effectively reduce the experience required of pathologists while reducing the subjective factors in pathological examination, thereby improving its efficiency.
Internationally, Siddiqi, Anwer M. et al. used a microscopic spectral imager to acquire hyperspectral data of stained cervical cancer sections and performed machine recognition on the sections by training a least-squares support vector machine, reaching sensitivity and specificity above 90%. In China, the Qingdao Academy for Opto-electronics Engineering used a similar approach to identify liver cancer sections and likewise achieved good recognition results. However, both methods require the cancer sections to be stained, which still places high demands on the pathologist's experimental skill and is time-consuming.
Summary of the invention
To overcome the above deficiencies of the prior art, the object of the present invention is to provide a device and method for acquiring and segmenting hyperspectral images of unstained pathological sections. The device can automatically acquire hyperspectral images of pathological sections and train an artificial neural network algorithm on the three-dimensional hyperspectral data of sample sections to recognize unstained liver cancer section images, reducing the complexity of traditional stained-section examination and the subjective factors in discrimination, and providing a useful reference for pathologists' diagnosis.
In order to achieve the above object, the technical solution of the present invention is as follows:
The device for acquiring and segmenting hyperspectral images of unstained pathological sections includes an alloy steel substrate 104. A xenon light source 105 is mounted on the alloy steel substrate 104, and a fixed support 101 is also mounted vertically on the alloy steel substrate 104. From top to bottom, the fixed support 101 carries a hyperspectral image acquisition module 102 and a sample platform 103; the hyperspectral image acquisition module 102, the sample platform 103 and the xenon light source 105 are concentric, and the hyperspectral image acquisition module 102 is externally connected to a computer 106;
The hyperspectral image acquisition module 102 includes a CCD camera 110, which is externally connected to the computer 106 through a USB interface on its side. The CCD camera 110 is connected to, and kept concentric with, a liquid crystal tunable filter 114 through a relay lens 112 and a relay lens adjusting ring 113; the liquid crystal tunable filter 114 is connected to the computer 106 through a side USB interface 115, and its lower part is connected to an objective lens 118 through a C-mount focusing ring 116 and an aperture 117.
The method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:
Step 1, device setup: place the unstained section on the sample platform 103 and adjust the illumination range of the xenon lamp 105 so that the section is evenly illuminated; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image grayscale is clear; and at the same time adjust the relay lens adjusting ring 113, the C-mount focusing ring 116 and the aperture 117 so that the section coincides with the imaging focal plane;
Step 2, parameter setting and acquisition of training sample data: set the image acquisition parameters of the hyperspectral image acquisition module 102 on the computer 106, including acquisition mode, start and stop wavelengths, wavelength resolution and exposure time; start acquisition to obtain the two-dimensional image of the unstained sample section at each wavelength; after acquisition, keeping the same acquisition parameters, separately acquire the images at each wavelength of a completely dark background with no illumination and of a completely blank field of view under illumination, as background references;
Step 3, data preprocessing: preprocess the image at each wavelength with the black/white reference information obtained in Step 2, then stack the images by spectral band to obtain a three-dimensional hyperspectral matrix whose z-axis direction corresponds to the spectral information at each spatial point; apply wide-interval derivative processing to the spectral curve of each point to highlight its features;
Step 4, training of the discrimination neural network: according to the results on the stained section, select the wide-interval spectral curves of lesioned/non-lesioned regions from the training sample obtained in Step 3 as training data, and train the spectral-data discrimination network;
Step 5, acquisition and region recognition of hyperspectral data of unstained sections: acquire the hyperspectral data of an unstained section in the manner of Step 2, apply the same preprocessing in the manner of Step 3, and feed the result into the discrimination neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the decisions of all points produces the recognition result for the unstained section.
The specific scheme of Step 3 is as follows:
The spectral images are preprocessed by pixel calibration. Under the same conditions used to acquire the hyperspectral images of the section, a set of hyperspectral data of complete darkness with no illumination and a set of hyperspectral images of a completely blank field of view are used; the dark image is subtracted from the hyperspectral image at the corresponding wavelength, and the difference is divided by the difference between the blank image and the dark image. The preprocessing formula is shown in Formula 1:
R = (I_im - I_bl) / (I_wh - I_bl)   (Formula 1)
Here R is the optical transmission spectrum value obtained by the preprocessing conversion, I_im is the gray value of the original hyperspectral image at each wavelength, I_bl is the gray value of the image at each wavelength under complete darkness, and I_wh is the gray value of the image at each wavelength in the illuminated blank field of view. A wide-interval derivative method, i.e. differentiation with an enlarged independent-variable interval, is then applied, as shown in Formula 2:
R'(λ) = [R(λ + Δλ) - R(λ)] / Δλ   (Formula 2)
where λ denotes wavelength and Δλ denotes the wavelength interval;
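Assuming the per-wavelength images are stacked into a NumPy array with the spectral axis last, the calibration of Formula 1 and the wide-interval derivative of Formula 2 can be sketched as follows; the array names, shapes, and synthetic gray values are illustrative, not from the patent:

```python
import numpy as np

def calibrate(I_im, I_bl, I_wh):
    """Formula 1: convert raw gray values to transmission R using the
    dark (I_bl) and blank (I_wh) reference images at each wavelength."""
    return (I_im - I_bl) / (I_wh - I_bl)

def wide_interval_derivative(R, wavelengths, delta_idx):
    """Formula 2 with an enlarged interval: difference the spectrum
    delta_idx bands apart along the last (spectral) axis."""
    d_lambda = wavelengths[delta_idx] - wavelengths[0]
    return (R[..., delta_idx:] - R[..., :-delta_idx]) / d_lambda

# toy cube: 4x4 pixels, 107 bands (400-718 nm in 3 nm steps)
wl = np.arange(400, 719, 3, dtype=float)
rng = np.random.default_rng(0)
I_bl = np.full((4, 4, wl.size), 10.0)                  # dark reference
I_wh = np.full((4, 4, wl.size), 210.0)                 # blank reference
I_im = I_bl + rng.uniform(50, 150, (4, 4, wl.size))    # raw sample image

R = calibrate(I_im, I_bl, I_wh)
dR = wide_interval_derivative(R, wl, delta_idx=59)     # 59 * 3 nm = 177 nm
print(R.shape, dR.shape)  # (4, 4, 107) (4, 4, 48)
```

The derivative shortens the spectral axis by the interval width, so the network's input band count must be taken after this step.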
The detailed process of Step 4 is as follows:
(1) Establishing the discrimination neural network model
The input represents the spectral information of one pixel. Moving to the right, alternating convolutional and max-pooling layers compute a series of feature maps, and the output layer is obtained after the feature maps are classified. The whole network consists of the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F and the output layer. The sample size of the input layer is (n1, 1), where n1 is the number of wavelength bands. The first hidden convolutional layer C1 filters the n1 × 1 input data with 24 kernels of size k1 × 1 and contains 24 × n2 × 1 nodes, where n2 = n1 - k1 + 1; there are 24 × (k1 + 1) trainable parameters between the input layer and C1. The max-pooling layer S1 is the second hidden layer; its kernel size is (k2, 1), and it contains 24 × n3 × 1 nodes, where n3 = n2 / k2; this layer has no parameters. Convolutional layer C2 contains 24 × n4 × 1 nodes with kernel size (k3, 1), where n4 = n3 - k3 + 1; there are 24 × (k3 + 1) trainable parameters between S1 and C2. Max-pooling layer S2 contains 24 × n5 × 1 nodes with kernel size (k4, 1), where n5 = n4 / k4; this layer has no parameters. The fully connected layer F contains n6 nodes, with (24 × n6 + 1) trainable parameters between it and S2. Finally, the output layer contains n7 nodes; between it and F there are 24 × n6 × 1 nodes and 24 × n6 × 1 × n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to discriminate hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output data categories, n2, n3, n4 and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer.
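As a quick sanity check of the dimension bookkeeping above, the chain n2 = n1 - k1 + 1, n3 = n2 / k2, n4 = n3 - k3 + 1, n5 = n4 / k4 can be computed directly. The patent does not fix k1 through k4; the kernel sizes below are assumptions chosen so that each pooling divides evenly:

```python
def layer_dims(n1, k1, k2, k3, k4):
    """Feature-map lengths of the 1-D CNN: conv C1, pool S1, conv C2, pool S2."""
    n2 = n1 - k1 + 1                        # C1: valid convolution
    assert n2 % k2 == 0, "pool S1 must divide evenly"
    n3 = n2 // k2                           # S1: non-overlapping max pooling
    n4 = n3 - k3 + 1                        # C2: valid convolution
    assert n4 % k4 == 0, "pool S2 must divide evenly"
    n5 = n4 // k4                           # S2: non-overlapping max pooling
    return n2, n3, n4, n5

# e.g. 107 spectral bands with hypothetical kernel sizes
print(layer_dims(107, k1=8, k2=2, k3=6, k4=5))  # (100, 50, 45, 9)
```

Running this before training catches kernel/band combinations that would leave a pooling layer with a fractional size.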
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure, or 7 layers if the input and output layers are counted, expressed as (L + 1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and F. Let x_i be the input of layer i, i.e. the output of layer (i - 1); then x_{i+1} can be computed as:
x_{i+1} = f_i(u_i)   (Formula 3)
where
u_i = W_i^T x_i + b_i   (Formula 4)
Here W_i is the weight matrix of layer i acting on its input data, b_i is the additive bias vector of layer i, and f_i(·) is the activation function of layer i. The hyperbolic tangent tanh(u) is chosen as the activation function of convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) as the activation function of max-pooling layers S1 and S2. The classifier performs multi-class discrimination of the data with n7 output categories; the n7-category (softmax) regression model is defined as:
y_j = exp(u_{L,j}) / Σ_k exp(u_{L,k}),  j = 1, ..., n7   (Formula 5)
The output vector of the output layer, y = x_{L+1}, represents the probabilities of all categories in the current iteration.
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing a cost function and computing the partial derivatives of the cost function. The loss function used is defined as:
J(θ) = -(1/m) Σ_{i=1..m} Σ_{j=1..n7} 1{j = Y^(i)} · log y_j^(i)   (Formula 6)
where m is the number of training samples and Y is the desired output. y_j^(i) is the j-th value of the actual output y^(i) of the i-th training sample, a vector of dimension n7. For the desired output Y^(i) of the i-th sample, the probability value of the labelled category is 1, and that of any other category is 0. The indicator 1{j = Y^(i)} means: if j equals the desired category of the i-th training sample, its value is 1; otherwise its value is 0. A minus sign is placed in front of J(θ) to make the calculation more convenient.
Taking the partial derivative of the loss function with respect to u_i gives the error term:
δ_i = ∂J/∂u_i = (W_{i+1} δ_{i+1}) ∘ f'_i(u_i)   (Formula 7)
where ∘ denotes element-wise multiplication. For the tanh activation, f'(u_i) can be expressed simply as:
f'(u_i) = 1 - f(u_i)^2   (Formula 8)
Therefore, in each iteration, the update
θ_i ← θ_i - α · ∂J/∂θ_i   (Formula 9)
is executed for the discrimination-network training parameters, where α is the learning factor (α = 0.01). Since θ_i comprises W_i and b_i,
∂J/∂W_i = x_i δ_i^T,  ∂J/∂b_i = δ_i   (Formula 10)
where δ_i is the error term of layer i defined above.
Through many training iterations, the value of the cost function becomes smaller and smaller, which means the actual output gets closer and closer to the desired output; when the difference between the actual and desired outputs is sufficiently small, the iteration stops. Finally, the trained deep convolutional neural network model can be used for hyperspectral image classification.
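The forward pass (Formulas 3 to 5), the loss (Formula 6) and the gradient-descent update (Formula 9) can be exercised end to end on a toy two-class problem. This is a minimal dense-layer sketch of the training rule, not the patent's full C1-S2-F architecture; all sizes, the learning rate, and the synthetic data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(u):
    e = np.exp(u - u.max(axis=1, keepdims=True))   # Formula 5, stabilized
    return e / e.sum(axis=1, keepdims=True)

# toy data: 200 "spectra" of 20 bands, two separable classes
m, n_in, n_hid, n_out = 200, 20, 8, 2
X = rng.normal(size=(m, n_in))
labels = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(n_out)[labels]                          # one-hot desired output

W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_out)); b2 = np.zeros(n_out)
alpha = 0.1                                        # learning factor (patent uses 0.01)

def loss(p):                                       # Formula 6, averaged over the batch
    return -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))

for _ in range(300):
    u1 = X @ W1 + b1; x1 = np.tanh(u1)             # Formulas 3-4, tanh activation
    p = softmax(x1 @ W2 + b2)                      # output probabilities
    d2 = (p - Y) / m                               # dJ/du at the output layer
    d1 = (d2 @ W2.T) * (1 - x1 ** 2)               # Formulas 7-8, tanh derivative
    W2 -= alpha * x1.T @ d2; b2 -= alpha * d2.sum(0)   # Formula 9 updates
    W1 -= alpha * X.T @ d1;  b1 -= alpha * d1.sum(0)

acc = np.mean(p.argmax(axis=1) == labels)
print(round(loss(p), 3), acc)                      # loss shrinks, accuracy near 1.0
```

The same update rule applies per layer in the convolutional network; only the weight sharing of the kernels changes how the gradients are accumulated.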
Compared with the prior art, the present invention, based on the spectral differences caused by lesioned tissue, uses a PC to synchronously control the relevant modules, acquires the spectral image sequence of an unstained pathological section, and preprocesses and stacks it to generate the corresponding three-dimensional hyperspectral data, on the basis of which a spectral classification algorithm incorporating current neural-network classification ideas performs lesion-region recognition and segmentation, accelerating the recognition speed and efficiency of pathological sections. With this device, pathologists no longer need the traditional HE staining process when preparing pathological sections, which avoids the human error that may be introduced during staining, shortens the time required for section preparation, and, through automatic machine discrimination, reduces the subjectivity of manual identification, providing useful assistance to pathologists in examining pathological sections.
Detailed description of the invention
Fig. 1 is a schematic diagram of the overall structure of the device of the present invention.
Fig. 2 is a schematic diagram of the structure of the hyperspectral image acquisition module of the present invention.
Fig. 3 is a photograph of the device of the present invention.
Fig. 4 is a screenshot of the companion acquisition software of the present invention.
Fig. 5 is a flow chart of the section spectral-data recognition algorithm of the present invention.
Fig. 6 is a schematic diagram of the spectral-data discrimination neural network structure used by the present invention.
Fig. 7a shows original spectra extracted at multiple points of a tissue section by the present invention; Fig. 7b shows the spectral curves after wide-interval derivative processing.
Fig. 8 compares the prediction image of the unstained-section classification model of the present invention with the segmentation result of a stained section at the same position.
Fig. 9 compares the prediction image of the unstained-section classification model of the present invention with the recognition result of a support vector machine algorithm.
Specific embodiment
The present invention will be further described with reference to the accompanying drawing.
Referring to Fig. 1 to Fig. 3, the device for acquiring and segmenting hyperspectral images of unstained pathological sections includes an alloy steel substrate 104. A xenon light source 105 is mounted on the alloy steel substrate 104, and a fixed support 101 is also mounted vertically on the alloy steel substrate 104. From top to bottom, the fixed support 101 carries a hyperspectral image acquisition module 102 and a sample platform 103; the hyperspectral image acquisition module 102, the sample platform 103 and the xenon light source 105 are concentric, and the hyperspectral image acquisition module 102 is externally connected to a computer 106;
The hyperspectral image acquisition module 102 includes a CCD camera 110, which is externally connected to the computer 106 through a USB interface on its side. The CCD camera 110 is connected to, and kept concentric with, a liquid crystal tunable filter 114 through a relay lens 112 and a relay lens adjusting ring 113. The liquid crystal tunable filter 114 is connected to the computer 106 through a side USB interface 115, which enables synchronous control with the CCD camera 110; its lower part is connected to an objective lens 118 through a C-mount focusing ring 116 and an aperture 117. The relay lens 112 and the objective lens 118 are provided with focusing rings for adjusting the focal plane and an aperture for controlling the amount of incoming light; by adjusting the relay lens adjusting ring 113, the C-mount focusing ring 116 and the aperture 117, accurate focusing and proper exposure can be achieved for the section sample placed on the sample platform. A xenon-lamp optical path lens group is arranged at the side of the sample stage and is controlled by the xenon source driver circuit; the liquid crystal tunable filter and the scientific camera are connected to the computer.
The method for acquiring and segmenting hyperspectral images of unstained pathological sections comprises the following steps:
Step 1, device setup: place the unstained section on the sample platform 103 and adjust the illumination range of the xenon lamp 105 so that the section is evenly illuminated; set the parameters of the CCD camera 110 and the liquid crystal tunable filter 114 through the computer 106 so that the image grayscale is clear; and at the same time adjust the relay lens adjusting ring 113, the C-mount focusing ring 116 and the aperture 117 so that the section coincides with the imaging focal plane;
Step 2, parameter setting and acquisition of training sample data, referring to Fig. 4: set the image acquisition parameters of the hyperspectral image acquisition module 102 on the computer 106, including acquisition mode, start and stop wavelengths, wavelength resolution and exposure time; start acquisition to obtain the two-dimensional image of the unstained sample section at each wavelength; after acquisition, keeping the same acquisition parameters, separately acquire the images at each wavelength of a completely dark background with no illumination and of a completely blank field of view under illumination, as background references;
Step 3, data preprocessing: preprocess the image at each wavelength with the black/white reference information obtained in Step 2, then stack the images by spectral band to obtain a three-dimensional hyperspectral matrix whose z-axis direction corresponds to the spectral information at each spatial point; apply wide-interval derivative processing to the spectral curve of each point to highlight its features;
The specific embodiment of Step 3 is as follows:
The spectral images are preprocessed by pixel calibration. Under the same conditions used to acquire the hyperspectral images of the section, a set of hyperspectral data of complete darkness with no illumination and a set of hyperspectral images of a completely blank field of view are used; the dark image is subtracted from the hyperspectral image at the corresponding wavelength, and the difference is divided by the difference between the blank image and the dark image. The preprocessing formula is shown in Formula 1:
R = (I_im - I_bl) / (I_wh - I_bl)   (Formula 1)
Here R is the optical transmission spectrum value obtained by the preprocessing conversion, I_im is the gray value of the original hyperspectral image at each wavelength, I_bl is the gray value of the image at each wavelength under complete darkness, and I_wh is the gray value of the image at each wavelength in the illuminated blank field of view. A wide-interval derivative method, i.e. differentiation with an enlarged independent-variable interval, is then applied, as shown in Formula 2:
R'(λ) = [R(λ + Δλ) - R(λ)] / Δλ   (Formula 2)
where λ denotes wavelength and Δλ denotes the wavelength interval;
Step 4, training of the discrimination neural network: according to the results on the stained section, select the wide-interval spectral curves of lesioned/non-lesioned regions from the training sample obtained in Step 3 as training data, and train the spectral-data discrimination network. The network parameter training algorithm is shown in Fig. 5, and the detailed process is as follows:
(1) Establishing the discrimination neural network model, as shown in Fig. 6:
The input represents the spectral information of one pixel. Moving to the right, alternating convolutional and max-pooling layers compute a series of feature maps, and the output layer is obtained after the feature maps are classified. The whole network consists of the input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F and the output layer. The sample size of the input layer is (n1, 1), where n1 is the number of wavelength bands. The first hidden convolutional layer C1 filters the n1 × 1 input data with 24 kernels of size k1 × 1 and contains 24 × n2 × 1 nodes, where n2 = n1 - k1 + 1; there are 24 × (k1 + 1) trainable parameters between the input layer and C1. The max-pooling layer S1 is the second hidden layer; its kernel size is (k2, 1), and it contains 24 × n3 × 1 nodes, where n3 = n2 / k2; this layer has no parameters. Convolutional layer C2 contains 24 × n4 × 1 nodes with kernel size (k3, 1), where n4 = n3 - k3 + 1; there are 24 × (k3 + 1) trainable parameters between S1 and C2. Max-pooling layer S2 contains 24 × n5 × 1 nodes with kernel size (k4, 1), where n5 = n4 / k4; this layer has no parameters. The fully connected layer F contains n6 nodes, with (24 × n6 + 1) trainable parameters between it and S2. Finally, the output layer contains n7 nodes; between it and F there are 24 × n6 × 1 nodes and 24 × n6 × 1 × n7 trainable parameters. A convolutional neural network classifier with the above parameters is built to discriminate hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output data categories, n2, n3, n4 and n5 are the dimensions of the feature maps, and n6 is the dimension of the fully connected layer.
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure, or 7 layers if the input and output layers are counted, expressed as (L + 1) layers with L = 6. The input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and F. Let x_i be the input of layer i, i.e. the output of layer (i - 1); then x_{i+1} can be computed as:
x_{i+1} = f_i(u_i)   (Formula 3)
where
u_i = W_i^T x_i + b_i   (Formula 4)
Here W_i is the weight matrix of layer i acting on its input data, b_i is the additive bias vector of layer i, and f_i(·) is the activation function of layer i. The hyperbolic tangent tanh(u) is chosen as the activation function of convolutional layers C1 and C2 and of the fully connected layer F, and the maximum function max(u) as the activation function of max-pooling layers S1 and S2. The classifier performs multi-class discrimination of the data with n7 output categories; the n7-category (softmax) regression model is defined as:
y_j = exp(u_{L,j}) / Σ_k exp(u_{L,k}),  j = 1, ..., n7   (Formula 5)
The output vector of the output layer, y = x_{L+1}, represents the probabilities of all categories in the current iteration.
(3) Backpropagation
In the backpropagation stage, the trained parameters are adjusted and updated by gradient descent; each parameter is determined by minimizing a cost function and computing the partial derivatives of the cost function. The loss function used is defined as:
J(θ) = -(1/m) Σ_{i=1..m} Σ_{j=1..n7} 1{j = Y^(i)} · log y_j^(i)   (Formula 6)
where m is the number of training samples and Y is the desired output. y_j^(i) is the j-th value of the actual output y^(i) of the i-th training sample, a vector of dimension n7. For the desired output Y^(i) of the i-th sample, the probability value of the labelled category is 1, and that of any other category is 0. The indicator 1{j = Y^(i)} means: if j equals the desired category of the i-th training sample, its value is 1; otherwise its value is 0. A minus sign is placed in front of J(θ) to make the calculation more convenient.
Taking the partial derivative of the loss function with respect to u_i gives the error term:
δ_i = ∂J/∂u_i = (W_{i+1} δ_{i+1}) ∘ f'_i(u_i)   (Formula 7)
where ∘ denotes element-wise multiplication. For the tanh activation, f'(u_i) can be expressed simply as:
f'(u_i) = 1 - f(u_i)^2   (Formula 8)
Therefore, in each iteration, the update
θ_i ← θ_i - α · ∂J/∂θ_i   (Formula 9)
is executed for the discrimination-network training parameters, where α is the learning factor (α = 0.01). Since θ_i comprises W_i and b_i,
∂J/∂W_i = x_i δ_i^T,  ∂J/∂b_i = δ_i   (Formula 10)
where δ_i is the error term of layer i defined above.
Through many training iterations, the value of the cost function becomes smaller and smaller, which means the actual output gets closer and closer to the desired output; when the difference between the actual and desired outputs is sufficiently small, the iteration stops. Finally, the trained deep convolutional neural network model can be used for hyperspectral image classification.
Step 5, acquisition and region recognition of hyperspectral data of unstained sections: acquire the hyperspectral data of an unstained section in the manner of Step 2, apply the same preprocessing in the manner of Step 3, and feed the result into the discrimination neural network trained in Step 4 to obtain a lesion/non-lesion decision for each point; combining the decisions of all points produces the recognition result for the unstained section.
Verification of the device and method with stained/unstained sections of the same site
To verify the feasibility of the device and method, taking liver cancer sections as an example, we performed a system verification with stained/unstained section pairs of the same sites provided by Mengchao Hepatobiliary Hospital of Fujian Medical University. Section preparation: pathological tissue was obtained from the body of a liver cancer patient, immersed and fixed in formalin solution, embedded in paraffin after complete dehydration, and cut on a microtome; two slices of the same thickness were cut from the same tissue block, one stained with the conventional hematoxylin-eosin method to make an H&E stained section (as the theoretical control), and the other placed directly on a glass slide and dewaxed with xylene to make an unstained tissue section. Hyperspectral acquisition conditions: xenon source intensity 30 mW/cm2, wavelength range 400 to 718 nm, wavelength interval 3 nm.
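From the stated acquisition settings (400 to 718 nm in 3 nm steps), the number of spectral bands and the band offset corresponding to the 177 nm derivative interval used below follow directly; a small sketch:

```python
import numpy as np

start_nm, stop_nm, step_nm = 400, 718, 3
wavelengths = np.arange(start_nm, stop_nm + step_nm, step_nm)

n_bands = wavelengths.size      # number of spectral channels n1
offset = 177 // step_nm         # bands spanned by a 177 nm derivative interval
print(n_bands, offset)          # 107 59
```

So each pixel's raw spectrum has 107 samples, and the wide-interval derivative pairs bands 59 positions apart.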
For this batch of samples, we chose a wavelength interval of 177 nm for the derivative spectra used for further component discrimination. The original spectra are shown as the curves in Fig. 7a, and the curves after wide-interval derivative processing in Fig. 7b. The processed standard data were used as the training input of the neural network classification algorithm to train the network parameters; the standard slices were then segmented for verification, and the prediction results are shown in Fig. 8. The segmentation results were also compared with those of the traditional support vector machine algorithm, with the comparison shown in Fig. 9. Using the precision calculation formula shown in formula 13, the cancerous-area segmentation precision of the two algorithms was calculated; the results are shown in Table 1.
Table 1. Comparison of classification accuracy between the deep convolutional neural network model and support vector machine classification
According to the experimental results, the present apparatus can effectively acquire hyperspectral images of unstained liver cancer slices, and the software algorithm segments the cancerous area well. Compared with the traditional hematoxylin-eosin stained pathological section method, it eliminates the microscope and the complicated staining process, reduces the artifact interference of the conventional method, and can provide useful diagnostic assistance for pathologists.
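The precision formula (formula 13) is not reproduced in this excerpt; as a minimal sketch, a pixel-wise segmentation accuracy against the H&E ground truth could be computed as follows (the function name and the example masks are illustrative assumptions, not values from the patent):

```python
import numpy as np

def segmentation_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground-truth label."""
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    return float((pred == truth).mean())

# Illustrative 4x4 lesion masks (1 = cancerous, 0 = normal)
truth = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 1, 0]])
print(segmentation_accuracy(pred, truth))  # 14 of 16 pixels match -> 0.875
```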

Claims (2)

1. A method for image acquisition and segmentation using an apparatus for acquiring and segmenting hyperspectral images of unstained pathological sections, characterized in that the apparatus for acquiring and segmenting hyperspectral images of unstained pathological sections comprises an alloy steel base plate (104); a xenon light source (105) is mounted on the alloy steel base plate (104), and a fixed bracket (101) is also mounted vertically on the alloy steel base plate (104); the fixed bracket (101) is provided, from top to bottom, with a hyperspectral image acquisition module (102) and a sample platform (103); the hyperspectral image acquisition module (102), the sample platform (103) and the xenon light source (105) are concentric, and the hyperspectral image acquisition module (102) is externally connected to a computer (106);
The hyperspectral image acquisition module (102) comprises a CCD camera (110), which is externally connected to the computer (106) through a USB interface on the side of the CCD camera (110); the CCD camera (110) is connected to, and kept concentric with, a liquid crystal tunable filter (114) through a relay lens (112) and a relay lens adjusting ring (113); the liquid crystal tunable filter (114) is connected to the computer (106) through a side USB interface (115); and the lower part of the liquid crystal tunable filter (114) is connected to an objective lens (118) through a C-mount focusing ring (116) and an aperture (117);
The method comprises the following steps:
Step 1, setting up the apparatus: place the unstained slice on the sample platform (103) and adjust the illumination range of the xenon light source (105) so that the slice is uniformly illuminated; set the parameters of the CCD camera (110) and the liquid crystal tunable filter (114) through the computer (106) so that the image gray levels are clear; at the same time adjust the relay lens adjusting ring (113), the C-mount focusing ring (116) and the aperture (117) so that the slice coincides with the imaging focal plane;
Step 2, parameter setting and acquisition of training sample data: set on the computer (106) the image acquisition parameters of the hyperspectral image acquisition module (102), including the acquisition mode, start and stop wavelengths, wavelength resolution and exposure time; start acquiring the two-dimensional image of the unstained slice sample at each spectral band; after the acquisition is completed, keeping the same acquisition parameters, acquire a completely dark background image with no light and an image of a completely blank field of view under illumination at each spectral band as the background reference information;
Step 3, data preprocessing: after each spectral image is preprocessed using the black/white reference information obtained in step 2, the spectral bands are stacked to obtain a three-dimensional hyperspectral matrix; the z-axis direction of this matrix corresponds to the spectral information of each point, and each point's spectral curve is processed with the wide-interval derivative method to emphasize its features;
Step 4, training the discriminating neural network: the wide-interval spectral curves of the lesion/non-lesion regions of the training sample slices obtained in step 3 are selected as training data to train the spectral-data discriminating network; the detailed process is as follows:
(1) Establishing the discriminating neural network model
The input information represents the spectral information of one pixel; moving to the right, convolutional layers and max-pooling layers compute a series of feature maps, and the output layer is obtained after classifying the feature maps; the whole network comprises an input layer, convolutional layer C1, max-pooling layer S1, convolutional layer C2, max-pooling layer S2, fully connected layer F and an output layer; the sample size of the input layer is (n1, 1), where n1 is the number of spectral channels; convolutional layer C1 is the first hidden layer and filters the n1 × 1 input data with 24 kernels of size k1 × 1; C1 contains 24 × n2 × 1 nodes, where n2 = n1 − k1 + 1, and there are 24 × (k1 + 1) trainable parameters between the input layer and C1; max-pooling layer S1 is the second hidden layer, its kernel size is (k2, 1), it contains 24 × n3 × 1 nodes, where n3 = n2/k2, and this layer has no parameters; convolutional layer C2 contains 24 × n4 × 1 nodes with kernel size (k3, 1), where n4 = n3 − k3 + 1, and there are 24 × (k3 + 1) trainable parameters between S1 and C2; max-pooling layer S2 contains 24 × n5 × 1 nodes with kernel size (k4, 1), where n5 = n4/k4, and this layer has no parameters; fully connected layer F contains n6 nodes, with (24 × n6 + 1) trainable parameters between this layer and S2; the final output layer contains n7 nodes, with 24 × n6 × 1 × n7 trainable parameters between this layer and the fully connected layer F; a convolutional neural network classifier with the above parameters is established to discriminate hyperspectral pixels, where n1 is the number of spectral channels, n7 is the number of output data categories, n2, n3, n4 and n5 are the dimensions of the respective feature maps, and n6 is the dimension of the fully connected layer;
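The layer-size bookkeeping above can be checked with a short script. The kernel sizes k1–k4 below are illustrative assumptions; only the recurrences n2 = n1 − k1 + 1, n3 = n2/k2, n4 = n3 − k3 + 1, n5 = n4/k4 come from the text, and n1 = 107 channels matches a 400–718 nm range sampled at 3 nm steps:

```python
def layer_dims(n1, k1, k2, k3, k4):
    """Feature-map lengths of C1, S1, C2, S2 per the recurrences in the claim."""
    n2 = n1 - k1 + 1                  # convolutional layer C1
    assert n2 % k2 == 0, "pooling kernel must divide the feature length"
    n3 = n2 // k2                     # max-pooling layer S1
    n4 = n3 - k3 + 1                  # convolutional layer C2
    assert n4 % k4 == 0, "pooling kernel must divide the feature length"
    n5 = n4 // k4                     # max-pooling layer S2
    return n2, n3, n4, n5

# Illustrative kernels (assumed): k1=8, k2=2, k3=7, k4=2 with n1=107 channels
n2, n3, n4, n5 = layer_dims(107, k1=8, k2=2, k3=7, k4=2)
print(n2, n3, n4, n5)  # 100 50 44 22
```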
(2) Forward propagation
The deep convolutional neural network used has a 5-layer structure, which can also be regarded as 7 layers when the input and output layers are included, and is denoted as having (L+1) layers with L = 6; the input layer contains n1 input units, the output layer contains n7 output units, and the hidden layers are C1, S1, C2, S2 and the F layer; let xi be the input of the i-th layer, i.e. the output of layer (i−1); then xi+1 can be calculated as:
xi+1=fi(ui) (formula 3)
where

ui = Wi^T xi + bi (formula 4)

Wi is the weight matrix of the i-th layer acting on the input data, bi is the additive bias vector of the i-th layer, and fi(·) is the activation function of the i-th layer; the hyperbolic tangent tanh(u) is chosen as the activation function of the convolutional layers C1 and C2 and the fully connected layer F, and the maximum function max(u) as the activation function of the max-pooling layers S1 and S2; a softmax classifier performs the multi-class classification of the data, the number of output classes is n7, and the n7-class regression model is defined as follows:

yj = exp(uL,j) / Σk exp(uL,k), j = 1, …, n7 (formula 5)

The output vector y = xL+1 of the output layer represents the probabilities of all categories in the current iteration;
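The forward computation of a tanh hidden layer followed by a softmax output can be sketched numerically as follows (the weights are random placeholders rather than trained values, and the sizes n6 = 10, n7 = 2 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_tanh(x, W, b):
    """Hidden layer: u = W^T x + b, activated with tanh."""
    return np.tanh(W.T @ x + b)

def softmax(u):
    """Output layer: class probabilities that sum to 1."""
    e = np.exp(u - u.max())          # subtract the max for numerical stability
    return e / e.sum()

n6, n7 = 10, 2                       # illustrative layer sizes (lesion / non-lesion)
x = rng.normal(size=n6)              # stand-in spectral feature vector
W1 = rng.normal(size=(n6, n6))       # placeholder (untrained) weights
W2 = rng.normal(size=(n6, n7))
h = dense_tanh(x, W1, np.zeros(n6))
y = softmax(W2.T @ h + np.zeros(n7))
print(y.shape)  # (2,)
```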
(3) Back-propagation
In the back-propagation stage, the trained parameters are adjusted and updated by gradient descent: each parameter is determined by minimizing the cost function and calculating its partial derivatives; the loss function used is defined as follows:

J(θ) = −(1/m) Σi=1..m Σj=1..n7 1{j = Y(i)} · log yj(i) (formula 6)

where m is the number of training samples and Y is the desired output; yj(i) is the j-th value of the actual output y(i) of the i-th training sample, whose dimension is n7; in the desired output Y(i) of the i-th sample, the probability value of the label class is 1, and the probability value of any other class is 0; 1{j = Y(i)} means that the value is 1 if j equals the expected category of the i-th training sample, and 0 otherwise; the minus sign in front of J(θ) makes the calculation more convenient;
The error term of each layer is obtained by taking the partial derivative of the loss function with respect to ui:

δi = ∂J(θ)/∂ui = (Wi+1 δi+1) ∘ f′(ui) (formula 7)

where ∘ denotes element-wise multiplication; for the tanh layers, f′(ui) can be expressed simply as

f′(ui) = 1 − f(ui) ∘ f(ui) (formula 8)
Therefore, an update is executed in each iteration; θ is the set of all parameters in the neural network, and θi comprises Wi and bi:

θi = θi − α · ∂J(θ)/∂θi (formula 9)

where θ is the set of trainable parameters and α is the learning factor, α = 0.01, and

∂J(θ)/∂θi = [∂J(θ)/∂Wi, ∂J(θ)/∂bi] (formula 10)

Since θi comprises Wi and bi, the two components are updated as

Wi = Wi − α · ∂J(θ)/∂Wi, bi = bi − α · ∂J(θ)/∂bi (formula 11)

where:

∂J(θ)/∂Wi = xi δi^T, ∂J(θ)/∂bi = δi (formula 12)
Through repeated training iterations, the value of the cost function decreases steadily, which means that the actual output becomes closer and closer to the desired output; when the difference between the actual output and the desired output is sufficiently small, the iteration stops; finally, the trained deep convolutional neural network model can be used for the classification of the hyperspectral image;
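The gradient-descent update described above, with learning factor α = 0.01, can be sketched for a single softmax output layer as follows; the data and shapes are illustrative, and the closed-form softmax cross-entropy gradient y − t is a standard result assumed here rather than quoted from the text:

```python
import numpy as np

alpha = 0.01                           # learning factor from the claim

def softmax(u):
    e = np.exp(u - u.max())
    return e / e.sum()

def sgd_step(W, b, x, label, n_classes):
    """One in-place gradient-descent update of an output layer on one sample."""
    y = softmax(W.T @ x + b)           # forward pass: u = W^T x + b, then softmax
    t = np.eye(n_classes)[label]       # one-hot desired output
    delta = y - t                      # dJ/du for softmax cross-entropy
    W -= alpha * np.outer(x, delta)    # dJ/dW = x delta^T
    b -= alpha * delta                 # dJ/db = delta
    return float(-np.log(y[label]))    # loss before this update

rng = np.random.default_rng(1)
W, b = rng.normal(size=(5, 2)), np.zeros(2)
x = rng.normal(size=5)
losses = [sgd_step(W, b, x, label=1, n_classes=2) for _ in range(50)]
assert losses[-1] < losses[0]          # the cost shrinks as iterations repeat
```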
Step 5, acquisition of hyperspectral data of the unstained slice and region recognition: the hyperspectral data of the unstained slice are acquired in the manner of step 2 and given the same preprocessing in the manner of step 3, then input into the discriminating neural network trained in step 4 to obtain a lesion/non-lesion decision for each point; combining the per-point decisions produces the recognition result of the unstained slice.
2. The method for image acquisition and segmentation using the apparatus for acquiring and segmenting hyperspectral images of unstained pathological sections according to claim 1, characterized in that the concrete scheme of step 3 is as follows:
The preprocessing of the spectral images uses a pixel correction method: for slice hyperspectral images acquired under the same acquisition conditions, using one set of hyperspectral data of complete darkness with no light and one set of hyperspectral images of a completely blank field of view, the dark image is subtracted from the hyperspectral image at the corresponding wavelength and the difference is divided by the difference between the blank image and the dark image; the preprocessing formula is shown as formula 1:

R = (Iim − Ibl) / (Iwh − Ibl) (formula 1)

where R is the optical transmission spectrum value obtained by the preprocessing conversion, Iim is the gray value of the original hyperspectral image at each wavelength, Ibl is the gray value of the image at each wavelength under complete darkness, and Iwh is the gray value of the image at each wavelength in the illuminated blank field of view; afterwards a wide-interval derivative method, i.e. differentiation with an enlarged independent-variable interval, is used, as shown in formula 2:

D(λ) = [R(λ + Δλ) − R(λ)] / Δλ (formula 2)

where λ denotes the wavelength and Δλ the wavelength interval.
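As an illustrative sketch, formulas 1 and 2 translate into a short routine. The 107-band grid follows from the 400–718 nm range at 3 nm steps and the 177 nm interval from the verification section; the 4 × 4 spatial size, the channel-last layout and the synthetic gray values are assumptions:

```python
import numpy as np

def calibrate(I_im, I_bl, I_wh):
    """Formula 1: R = (I_im - I_bl) / (I_wh - I_bl), per pixel and wavelength."""
    return (I_im - I_bl) / (I_wh - I_bl)

def wide_interval_derivative(R, step, d_lambda):
    """Formula 2 with an enlarged interval: (R(l + step*dl) - R(l)) / (step*dl)."""
    return (R[..., step:] - R[..., :-step]) / (step * d_lambda)

# Illustrative cube: 4x4 pixels, 107 bands (400-718 nm at 3 nm intervals)
rng = np.random.default_rng(2)
I_bl = rng.uniform(0.00, 0.05, size=(4, 4, 107))   # dark reference
I_wh = rng.uniform(0.90, 1.00, size=(4, 4, 107))   # blank-field reference
I_im = I_bl + 0.5 * (I_wh - I_bl)                  # mid-gray sample for the check

R = calibrate(I_im, I_bl, I_wh)                    # every value is exactly 0.5
D = wide_interval_derivative(R, step=59, d_lambda=3.0)  # 59 * 3 nm = 177 nm
print(R.shape, D.shape)  # (4, 4, 107) (4, 4, 48)
```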
CN201710353158.0A 2017-05-18 2017-05-18 The device and method for acquiring and dividing for dye-free pathological section high spectrum image Active CN107064019B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710353158.0A CN107064019B (en) 2017-05-18 2017-05-18 The device and method for acquiring and dividing for dye-free pathological section high spectrum image


Publications (2)

Publication Number Publication Date
CN107064019A CN107064019A (en) 2017-08-18
CN107064019B true CN107064019B (en) 2019-11-26

Family

ID=59610992

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710353158.0A Active CN107064019B (en) 2017-05-18 2017-05-18 The device and method for acquiring and dividing for dye-free pathological section high spectrum image

Country Status (1)

Country Link
CN (1) CN107064019B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107843593A (en) * 2017-10-13 2018-03-27 上海工程技术大学 A kind of textile material recognition methods and system based on high light spectrum image-forming technology
EP4235594A3 (en) 2018-02-15 2023-10-25 Denka Company Limited System, program, and method for determining hypermutated tumor
CN109034208B (en) * 2018-07-03 2020-10-23 怀光智能科技(武汉)有限公司 High-low resolution combined cervical cell slice image classification system
CN109272492B (en) * 2018-08-24 2022-02-15 深思考人工智能机器人科技(北京)有限公司 Method and system for processing cytopathology smear
CN109489816B (en) * 2018-10-23 2021-02-26 华东师范大学 Microscopic hyperspectral imaging platform and large-area data cube acquisition method
CN110008836B (en) * 2019-03-06 2023-04-25 华东师范大学 Feature extraction method of hyperspectral image of pathological tissue slice
CN109815945B (en) * 2019-04-01 2024-04-30 上海徒数科技有限公司 Respiratory tract examination result interpretation system and method based on image recognition
EP3977402A1 (en) 2019-05-28 2022-04-06 PAIGE.AI, Inc. Systems and methods for processing images to prepare slides for processed images for digital pathology
CN110600106B (en) * 2019-08-28 2022-07-05 上海联影智能医疗科技有限公司 Pathological section processing method, computer device and storage medium
CN110517258A (en) * 2019-08-30 2019-11-29 山东大学 A kind of cervical carcinoma pattern recognition device and system based on high light spectrum image-forming technology
TWI781408B (en) * 2019-11-27 2022-10-21 靜宜大學 Artificial intelligence based cell detection method by using hyperspectral data analysis technology
CN111325757B (en) * 2020-02-18 2022-12-23 西北工业大学 Point cloud identification and segmentation method based on Bayesian neural network
CN113450305B (en) * 2020-03-26 2023-01-24 太原理工大学 Medical image processing method, system, equipment and readable storage medium
CN112712877B (en) * 2020-12-07 2024-02-09 西安电子科技大学 Large-view-field high-flux high-resolution pathological section analyzer
CN113065403A (en) * 2021-03-05 2021-07-02 浙江大学 Hyperspectral imaging-based machine learning cell classification method and device
CN114820502B (en) * 2022-04-21 2023-10-24 济宁医学院附属医院 Coloring detection method for protein kinase CK2 in intestinal mucosa tissue
CN115236015B (en) * 2022-07-21 2024-05-03 华东师范大学 Puncture sample pathology analysis system and method based on hyperspectral imaging technology

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2011089895A (en) * 2009-10-22 2011-05-06 Arata Satori Device and method of hyperspectral imaging
CN104316473A (en) * 2014-10-28 2015-01-28 南京农业大学 Gender determination method for chicken hatching egg incubation early embryo based on hyperspectral image
US9345428B2 (en) * 2004-11-29 2016-05-24 Hypermed Imaging, Inc. Hyperspectral imaging of angiogenesis
CN106097355A (en) * 2016-06-14 2016-11-09 山东大学 The micro-Hyperspectral imagery processing method of gastroenteric tumor based on convolutional neural networks
CN106226247A (en) * 2016-07-15 2016-12-14 暨南大学 A kind of cell detection method based on EO-1 hyperion micro-imaging technique


Non-Patent Citations (2)

Title
Study on the detection of basal cell carcinoma and liver cancer pathological sections based on hyperspectral imaging; Zhou Xianglian; Xi'an Jiaotong University Institutional Repository; 20161231; p. 1 *
Hyperspectral imaging technology for extracting canceration information from liver slices; Yu Cuirong et al.; Science Technology and Engineering; 20150930; vol. 15, no. 27; pp. 106-107, figs. 1-2 *

Also Published As

Publication number Publication date
CN107064019A (en) 2017-08-18

Similar Documents

Publication Publication Date Title
CN107064019B (en) The device and method for acquiring and dividing for dye-free pathological section high spectrum image
Rana et al. Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks
CN105182514B (en) Based on LED light source without lens microscope and its image reconstructing method
US10083340B2 (en) Automated cell segmentation quality control
ES2301706T3 (en) METHOD OF QUANTITATIVE VIDEOMICROSCOPY AND ASSOCIATED SYSTEM AS WELL AS THE SOFWARE INFORMATION PROGRAM PRODUCT.
KR20200140301A (en) Method and system for digital staining of label-free fluorescent images using deep learning
KR20220119669A (en) Method and system for digital staining of microscopic images using deep learning
CN107111874A (en) System and method for the coexpression analysis during fraction is calculated to be immunized
JP2016526185A (en) Microscopic observation of tissue samples using structured illumination
CN113781455B (en) Cervical cell image anomaly detection method, device, equipment and medium
Chen et al. Deep-learning-assisted microscopy with ultraviolet surface excitation for rapid slide-free histological imaging
Lovell et al. International contest on pattern recognition techniques for indirect immunofluorescence images analysis
CN106706643A (en) Liver cancer comparison section detection device and detection method
CN115032196A (en) Full-scribing high-flux color pathological imaging analysis instrument and method
US20040014165A1 (en) System and automated and remote histological analysis and new drug assessment
CN111380843A (en) Device design and imaging method of high-sensitivity visible-near infrared double-channel laser fluorescence microscope
Tsafas et al. Application of a deep-learning technique to non-linear images from human tissue biopsies for shedding new light on breast cancer diagnosis
US8744827B2 (en) Method for preparing a processed virtual analysis plate
CN115629072A (en) Bone marrow smear image analysis and diagnosis method and pathological section scanner device
US20210174147A1 (en) Operating method of image processing apparatus, image processing apparatus, and computer-readable recording medium
CN114518362A (en) Sperm quality analysis device, system, method and readable storage medium
Vladimirov et al. The Benchtop mesoSPIM: a next-generation open-source light-sheet microscope for large cleared samples
Chen et al. High-throughput strategy for profiling sequential section with multiplex staining of mouse brain
US20200074628A1 (en) Image processing apparatus, imaging system, image processing method and computer readable recoding medium
WO2021198247A1 (en) Optimal co-design of hardware and software for virtual staining of unlabeled tissue

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant