CN113313673B - TB-level cranial nerve fiber data reduction method and system based on deep learning - Google Patents


Info

Publication number
CN113313673B
CN113313673B (application number CN202110501547.XA)
Authority
CN
China
Prior art keywords
image
nerve fiber
tested
dimensional
nerve
Prior art date
Legal status
Active
Application number
CN202110501547.XA
Other languages
Chinese (zh)
Other versions
CN113313673A (en)
Inventor
全廷伟
黄青
刘世杰
骆清铭
曾绍群
Current Assignee
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202110501547.XA priority Critical patent/CN113313673B/en
Publication of CN113313673A publication Critical patent/CN113313673A/en
Application granted granted Critical
Publication of CN113313673B publication Critical patent/CN113313673B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/20 - Image enhancement or restoration using local operators
    • G06T5/30 - Erosion or dilatation, e.g. thinning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/13 - Edge detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10004 - Still image; Photographic image
    • G06T2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30016 - Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The invention belongs to the field of image processing and discloses a deep-learning-based TB-level 3D cranial nerve fiber data reduction method and system, wherein the method comprises the following steps: S1: establishing a test data set; S2: establishing and training a segmentation model; S3: processing each image to be tested in the test data set with the segmentation model to obtain the nerve fiber distribution information corresponding to each image to be tested; S4: optimizing the nerve fiber distribution information by morphological operation, so that the maximum connected domain corresponding to the region where the nerve fibers are located in the three-dimensional nerve fiber contour map is optimized, finally obtaining the TB-level sparse data reduction result for the whole-brain nerve fibers. The invention can quickly, accurately and effectively reduce cranial nerve data sets of TB magnitude and above, greatly reduce the data volume of subsequent neuron reconstruction and improve reconstruction efficiency.

Description

TB-level cranial nerve fiber data reduction method and system based on deep learning
Technical Field
The invention belongs to the field of image processing, and particularly relates to a TB-level brain nerve fiber data reduction method and system based on deep learning.
Background
In recent years, a series of breakthroughs in molecular labeling and imaging technologies have enabled whole-brain-scale imaging of neuronal populations at single-cell resolution. The resulting massive data volumes, together with the lag of data processing tools, pose new challenges for the reconstruction of neuron morphology. Neurons are distributed throughout the whole brain, but their distribution differs across brain areas, so the signal is sparse at the whole-brain scale, which makes reduction of the whole-brain nerve fiber data possible. Faced with massive whole-brain nerve fiber data, removing data redundancy can greatly reduce the data volume of subsequent neuron morphology reconstruction and improve reconstruction efficiency. Therefore, there is a need to develop a method and system for reducing whole-brain nerve fiber data to solve the massive-data problem in neuron morphology reconstruction.
Disclosure of Invention
In view of the above defects or improvement requirements of the prior art, the present invention aims to provide a TB-level brain nerve fiber data reduction method and system based on deep learning, in which the overall processing flow of the data reduction method and the component arrangement of the corresponding system are improved: deep learning is used to perform preliminary processing on the nerve fiber images, and morphological operations are further used to obtain the whole-brain nerve fiber image, so as to reduce the data volume for subsequent whole-brain nerve fiber morphology reconstruction and improve the efficiency of neuron morphology reconstruction.
To achieve the above object, according to one aspect of the present invention, there is provided a TB-level sparse whole brain nerve fiber data reduction method based on deep learning, including the steps of:
s1: based on a series of brain slice images obtained from the same brain tissue, establishing a Cartesian space rectangular coordinate system with the length direction of any brain slice image as the X-axis direction, the width direction as the Y-axis direction and the normal direction of the plane where the brain slice image is located as the Z-axis direction, wherein the origin of the rectangular coordinate system is the top left corner vertex of the uppermost or lowermost brain slice image; then dividing the brain slice images into units according to a preset number of layers per unit; carrying out maximum-intensity projection on the divided brain slice images belonging to the same unit to obtain a two-dimensional plane projection image, then cutting each two-dimensional plane projection image according to a preset unit length and a preset unit width to obtain a plurality of images to be tested, and taking the top left corner vertex of each image to be tested as the image origin of the image to be tested, thereby establishing a test data set, wherein the length and the width of each image to be tested both meet preset requirements, and taking the three-dimensional space coordinate of the image origin of each image to be tested under the rectangular coordinate system as the space position information of each image to be tested under the rectangular coordinate system;
s2: establishing a segmentation model, and training the segmentation model to obtain a trained deep learning segmentation model; the segmentation model can be used for classifying and segmenting whether each pixel point on each image to be tested in the test set belongs to nerve fibers;
s3: processing each image to be tested in the test data set obtained in the step S1 by using the deep learning segmentation model obtained in the step S2 to obtain nerve fiber distribution information corresponding to each image to be tested;
s4: optimizing the nerve fiber distribution information obtained in the step S3 by using morphological operation, so that the maximum connected domain corresponding to the region where the nerve fiber is located in the nerve fiber contour map in the space three-dimensional dimension obtained based on the nerve fiber distribution information is optimized; based on the optimized maximum connected domain, selecting a plurality of images to be detected corresponding to the maximum connected domain from all the images to be detected obtained in the step S1 to further construct a three-dimensional nerve fiber image, wherein the three-dimensional nerve fiber image corresponds to a TB-level sparse whole brain nerve fiber data reduction result.
More preferably, in step S1, the number of layers per unit set in advance is 200; the preset unit length is 256 pixels, the preset unit width is 256 pixels, and the length and the width of each image to be measured are 256 pixels and 256 pixels respectively.
As a further preferred aspect of the present invention, in the step S2, the segmentation model is specifically a VoxResNet segmentation model comprising 25 convolutional layers, 6 residual modules and 4 deconvolution layers; and the segmentation model is trained specifically with a mixed loss function of Dice loss and cross-entropy.
As a further preferred aspect of the present invention, the step S3 specifically includes the following sub-steps:
s31: classifying and segmenting each image to be tested in the test data set obtained in the step S1 by using the deep learning segmentation model obtained in the step S2, so that each image to be tested obtains a corresponding nerve fiber binary image; for any binary image of nerve fibers, if a certain pixel point corresponds to the nerve fibers, recording the value of the pixel point as a preset value A; if a certain pixel point does not correspond to the nerve fiber, recording the value of the pixel point as a preset B value;
s32: performing nerve fiber length extraction processing on each nerve fiber binary image obtained in the step S31, specifically, for any nerve fiber binary image, recording the total number of pixel points whose value is A as the nerve fiber length information of that nerve fiber binary image;
s33: based on the spatial position information of each image to be detected in the spatial rectangular coordinate system and the nerve fiber length information of the corresponding nerve fiber binary image, the nerve fiber distribution information corresponding to each image to be detected can be obtained;
preferably, in step S33, the nerve fiber distribution information is stored in the form of a nerve fiber distribution table, and the nerve fiber distribution table is configured by using the spatial position information of each image to be measured as an index and the nerve fiber length information of the corresponding binary image of nerve fibers as a value.
As a further preferred aspect of the present invention, the step S4 specifically includes the following sub-steps:
s41: dividing the nerve fiber distribution table according to a preset nerve fiber length threshold value; specifically, for any one index, if the value of the index is greater than or equal to the threshold, the index is divided into p1; if its value is less than the threshold and greater than 0, then divide the index into p2;
s42: drawing a nerve fiber contour map in a three-dimensional dimension based on p1 to obtain a maximum connected domain p3 of the nerve fiber contour map;
s43: performing morphological expansion on the p3, taking intersection between the newly added expanded data set and the p2 data set according to the condition whether the indexes are the same or not, adding the data in the intersection into the p3, and performing optimization updating on the p3; repeating the optimization updating for 3-4 times, wherein the p3 obtained by optimization is the maximum connected domain corresponding to the region where the nerve fiber is located in the nerve fiber contour map under the corresponding three-dimensional dimension; and finally, further constructing a nerve fiber contour map under a space three-dimensional dimension based on the p3 obtained through optimization, and selecting a plurality of images to be detected corresponding to the p3 from all images to be detected obtained in the step S1 based on the p3 obtained through optimization to further construct a three-dimensional nerve fiber image, wherein the three-dimensional nerve fiber image corresponds to a TB-level sparse whole brain nerve fiber data reduction result.
In a further preferred embodiment of the present invention, in step S41, the preset nerve fiber length threshold is greater than 400 and less than 600;
step S42 establishes a three-dimensional matrix whose elements are all initialized to 0, the length of the matrix being [l/l0], the width being [w/w0] and the depth being [s/s0], wherein l and w are the length and width of any one brain slice image in step S1, s is the total number of layers of the series of brain slice images in step S1, and l0, w0 and s0 are the unit length, unit width and unit layer number preset in step S1; the operator [ ] denotes rounding; then, based on p1, the position in the three-dimensional matrix corresponding to each index in p1 is calculated (the spatial position information in each index corresponds to a position in the three-dimensional matrix) and the value at that position is recorded as 255, giving a binarized three-dimensional matrix; a binarized three-dimensional nerve fiber contour map is correspondingly obtained based on the binarized three-dimensional matrix, and finally the maximum connected domain p3 of the nerve fiber contour map is obtained based on the nerve fiber contour map;
in step S43, the morphological dilation uses a dilation radius of 1, a dilation kernel of an ellipse, and a dilation frequency of 1.
According to another aspect of the present invention, the present invention provides a TB-level sparse whole brain nerve fiber data reduction system based on deep learning, comprising:
an image preprocessing module: used for establishing, based on a series of brain slice images obtained from the same brain tissue, a Cartesian space rectangular coordinate system by taking the length direction of any brain slice image as the X-axis direction, the width direction as the Y-axis direction and the normal direction of the plane where the brain slice image is located as the Z-axis direction, wherein the origin of the rectangular coordinate system is the top left corner vertex of the uppermost layer or the lowermost layer of the brain slice image; then dividing the brain slice images into units according to a preset number of layers per unit; carrying out maximum-intensity projection on the divided brain slice images belonging to the same unit to obtain a two-dimensional plane projection image, then cutting each two-dimensional plane projection image according to a preset unit length and a preset unit width to obtain a plurality of images to be tested, and taking the top left corner vertex of each image to be tested as the image origin of the image to be tested, thereby establishing a test data set, wherein the length and the width of each image to be tested both meet preset requirements, and taking the three-dimensional space coordinate of the image origin of each image to be tested under the rectangular coordinate system as the space position information of each image to be tested under the rectangular coordinate system;
a segmentation model module: the device is used for classifying and segmenting whether each pixel point on each image to be tested in the test set belongs to nerve fiber or not to obtain the nerve fiber distribution information corresponding to each image to be tested;
a morphology processing module: the neural fiber distribution information obtained by the segmentation model module is optimized by morphological operation, so that the maximum connected domain corresponding to the region where the neural fiber is located in the neural fiber contour map in the space three-dimensional dimension obtained based on the neural fiber distribution information is optimized; and based on the maximum connected domain obtained by optimization, selecting a plurality of images to be detected corresponding to the maximum connected domain from all images to be detected obtained by the image preprocessing module so as to further construct a three-dimensional nerve fiber image, wherein the three-dimensional nerve fiber image corresponds to a TB-level sparse whole brain nerve fiber data reduction result.
Compared with the prior art, the technical scheme of the invention has the advantages that the neural fiber image is subjected to preliminary processing by using deep learning, and the whole brain neural fiber image is further obtained by morphological operation, so that the data volume is reduced for the subsequent whole brain neuron morphological reconstruction, and the neuron morphological reconstruction efficiency is improved.
Neural images at the whole-brain scale differ greatly: the gray-scale range of the images varies across different brain areas, and the density of the nerve fiber distribution varies as well. A whole-brain-scale neural image data set contains high signal-to-noise-ratio images, high-noise images, low signal-to-noise-ratio high-noise images and so on, so the variety of images is great. Faced with such complex and diverse neural image data sets, it is difficult for a binary classification network to accurately judge whether a neural image contains nerve fiber signal, and thus stable data reduction cannot be achieved. Conventional methods usually reduce neural image data by data compression, using common compression schemes such as H.264 [see: Economo M N, et al. A platform for brain-wide imaging and reconstruction of individual neurons [J]. eLife, 2016: 1-22. doi:10.7554/eLife.10566], Huffman coding [see: Huffman D A. A method for the construction of minimum-redundancy codes [J]. Proceedings of the IRE, 1952, 40(9): 1098-1101] and the like [see: Pitas I. Digital image processing algorithms and applications [M]. John Wiley & Sons, 2000]. However, compression-based methods work by removing the correlation and redundancy among image pixels, which reduces the information contained in the images and degrades image quality, and is therefore unfavorable for subsequent neuroscience research; moreover, compression targets each image in the data set individually: after compression the images are smaller, but the number of images in the data set is unchanged, so the images at the edge of the whole brain that contain no nerve fiber signal cannot be excluded, and the workload of the subsequent neuron morphology reconstruction is not reduced.
Specifically, compared with the prior art, the above technical solution of the present inventive concept has the following beneficial effects:
in the invention, the neural images are preliminarily processed by deep learning by utilizing the sparsity of the whole brain distribution of the neural fibers, the diversity characteristics of the neural images can be fully considered in the construction process of a training data set, and the neural images covering a plurality of brains and different brain areas are used as the training set, so that the richness of data is ensured; in the loss function, the problem of imbalance between positive and negative samples can be alleviated in a weighting manner (such as formula (1) in the following); the two points ensure the robustness of the network segmentation result, and a large number of images without nerve fiber signals are eliminated from the segmentation result, so that the first reduction of the whole brain nerve image is realized.
Secondly, the neural image is segmented by using a deep learning-based method, and the rapid whole brain neural image segmentation can be realized once the network is trained, so that the algorithm is simple and the application range is wide.
Thirdly, by exploiting the continuity of the whole-brain distribution of nerve fibers, morphological operations can compensate for the network's misidentifications at the fiber terminals and in weak-signal areas, thereby ensuring the integrity of the nerve fiber signal. The morphological operations also eliminate a large number of high-noise images without nerve fiber signal, so that a second reduction of the whole-brain neural images is achieved.
The deep learning module and the morphological operation module are organically combined: on the one hand, this solves the problem that whole-brain data cannot be stably reduced with a binary classification network alone, improving the stability of the data reduction; on the other hand, the result of the data reduction corresponds to the original images in the data set, which avoids the degradation of neural image quality caused by compression and benefits the subsequent reconstruction of the nerve fibers.
In conclusion, the invention preserves the signal-bearing images while reducing the whole-brain data volume, and can process different brains in batches and efficiently, thereby achieving the purpose of data reduction. The method and the corresponding system can quickly, accurately and effectively reduce cranial nerve data sets of TB magnitude and above, greatly reduce the data volume of subsequent neuron reconstruction and improve reconstruction efficiency.
Drawings
Fig. 1 is a flowchart of a sparse mouse whole brain data reduction method provided by an embodiment of the present invention.
Fig. 2 is a raw image of nerve fibers provided by an embodiment of the present invention.
Fig. 3 is a binarized image of nerve fiber segmentation provided by an embodiment of the present invention.
Fig. 4 is a diagram of an initialized whole brain nerve fiber profile (without morphological operations) according to an embodiment of the present invention.
Fig. 5 is the final whole brain nerve fiber contour map (processed by morphological operations) provided by an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
The invention relates to a TB-level sparse whole brain nerve fiber data reduction method based on deep learning, which comprises the steps of obtaining an image to be tested and constructing a whole brain test data set; loading a pre-trained deep learning segmentation model; processing the image to be detected by using a deep learning segmentation model to obtain a whole brain nerve fiber distribution table; and processing the whole brain nerve fiber distribution table by using morphological operation to obtain a final whole brain nerve fiber image.
As shown in Fig. 1, the method comprises the following steps: training and building a 2D nerve fiber segmentation network model using large-scale data (e.g., 8000 images) from multiple brains and brain regions; for each brain data set, sequentially dividing the data into equally sized data blocks, generating the corresponding maximum-intensity projection images, and constructing the test set; predicting and acquiring the nerve fiber length information of each projected data block with the segmentation model; preliminarily classifying the data blocks using a threshold on the number of fiber (i.e., foreground) pixels, then optimizing the classification result with morphological operations to further enhance the connectivity of the foreground region and eliminate the interference of isolated noise points; and, according to the classification result, retaining only the image blocks that contain fibers (i.e., deleting a large number of background data blocks) to achieve the reduction, thereby obtaining the reduction result for the TB-level sparse whole-brain neural big data.
The following are specific examples:
example 1
The method for reducing the TB-level sparse mouse whole brain data in the embodiment comprises the following steps of:
s1: and acquiring an image to be tested, and constructing a whole brain test data set.
For example, the original images are mouse brain slice images obtained by a fluorescence microtome imaging system or a functional two-photon confocal imaging microscope. The test image data set is obtained by performing maximum-intensity projection on every 200 layers of coronal slices from the whole-brain coronal slice data (the choice of the number of layers directly determines the data volume of the data set to be tested; of course, a layer count other than 200 can also be used and adjusted to the actual situation; when adjusting the layer count, the quality of the projection data must be considered, so as to avoid (i) too few layers, in which case the two-dimensional maximum projection covers little nerve fiber information, or (ii) too many layers, in which case there is much redundant information). The images are then cut sequentially into 256 × 256 tiles, taking the upper left corner of the maximum projection as the image origin. Of course, besides the size of 256 pixels × 256 pixels × 200 layers, equally sized image blocks of other length and width dimensions that a computer can process (e.g., 300 pixels × 300 pixels) may also be used.
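As an illustration of this tiling scheme, the following Python sketch builds such a test set from a stack of coronal slices; it is a minimal sketch under stated assumptions (the slices are available as equally sized 2D NumPy arrays, imageio is used for 16-bit PNG output, and the function name build_test_set and the "z_x_y.png" naming are illustrative), not the patented implementation itself.

```python
import os
import numpy as np
import imageio.v2 as imageio

def build_test_set(slices, out_dir, unit_layers=200, tile=256):
    """Max-project every `unit_layers` coronal slices and cut them into tiles.

    `slices` is an iterable of equally sized 2D arrays ordered along Z.
    Each tile is saved as "<z>_<x>_<y>.png", where (x, y, z) is the tile's
    top-left corner in the whole-brain coordinate system of step S1.
    """
    os.makedirs(out_dir, exist_ok=True)
    stack = []
    for z_index, img in enumerate(slices):
        stack.append(np.asarray(img))
        if len(stack) < unit_layers:
            continue
        mip = np.max(np.stack(stack, axis=0), axis=0)   # maximum-intensity projection of one unit
        z_origin = z_index + 1 - unit_layers             # first layer of this unit
        h, w = mip.shape
        for y in range(0, h - h % tile, tile):           # edge remainders are dropped (rounding)
            for x in range(0, w - w % tile, tile):
                block = mip[y:y + tile, x:x + tile]
                imageio.imwrite(os.path.join(out_dir, f"{z_origin}_{x}_{y}.png"),
                                block.astype(np.uint16))
        stack = []                                       # an incomplete trailing unit is discarded
```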
S2: and loading a pre-trained deep learning segmentation model.
The deep learning segmentation model selected is the VoxResNet segmentation model; the model comprises 25 convolutional layers, 6 residual modules and 4 deconvolution layers (for the connection relations of the layer structures of the model, etc., refer to Chen H, Dou Q, Yu L, et al. VoxResNet: Deep voxelwise residual networks for brain segmentation from 3D MR images [J]. NeuroImage, 2018, 170: 446-455); the training set may select different neural images from two whole-brain data sets, covering neural images of various styles such as high-noise images, high signal-to-noise-ratio images and low signal-to-noise-ratio images, and the capacity of the training set is 8000 in this embodiment; the model is trained using a mixed loss function of Dice loss and cross-entropy, as follows:
loss_total = λ1 · Dice(y', y) + λ2 · CE(y', y)    (1)
where CE represents the cross-entropy loss function (weighted per class by w_i), Dice represents the Dice loss, loss_total represents the mixed loss function of the two, λ1 and λ2 are respectively the Dice-loss and cross-entropy weights (both taken as 0.5 in this document), y' denotes the predicted value, y denotes the label, and w_i denotes the weight of each category, given by the following formula:
(Equation (2), rendered only as an image in the original publication, defines w_i in terms of N and N_i.)
where N represents the total number of pixels and N_i represents the number of pixels of class i in the ground truth (GT).
Regarding the mixed loss function, other parts not described in detail here can refer to the related art; for example: Khened M, Kollerathu V A, Krishnamurthi G. Fully convolutional multi-scale residual DenseNets for cardiac segmentation and automated cardiac diagnosis using ensemble of classifiers [J]. Medical Image Analysis, 2019, 51: 21-45. The Dice loss can be referred to: Milletari F, Navab N, Ahmadi S-A. V-Net: Fully convolutional neural networks for volumetric medical image segmentation [C]. 2016 Fourth International Conference on 3D Vision (3DV), IEEE, 2016. The cross-entropy loss function can be referred to: Hu K, Zhang Z, Niu X, et al. Retinal vessel segmentation of color fundus images using multiscale convolutional neural network with an improved cross-entropy loss function [J]. Neurocomputing, 2018, 309: 179-191.
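For concreteness, the following PyTorch sketch shows one way to combine a soft Dice loss with a class-weighted cross-entropy as in formula (1); since the class-weight formula (2) is only available as an image in the original publication, the weighting w_i = (N - N_i)/N used below is an assumption, as are the function name mixed_loss and the two-channel (background/fiber) output layout.

```python
import torch
import torch.nn.functional as F

def mixed_loss(logits, target, lambda_dice=0.5, lambda_ce=0.5, eps=1e-6):
    """logits: (B, 2, H, W) raw network outputs; target: (B, H, W) int64 labels in {0, 1}."""
    probs = torch.softmax(logits, dim=1)
    fg = probs[:, 1]                                   # predicted nerve-fibre probability
    gt = (target == 1).float()

    # Soft Dice loss on the foreground (nerve-fibre) channel.
    inter = (fg * gt).sum()
    dice = 1.0 - (2.0 * inter + eps) / (fg.sum() + gt.sum() + eps)

    # Class-weighted cross-entropy; the weighting below (rarer class -> larger
    # weight) is an assumed stand-in for equation (2) of the original text.
    n_total = target.numel()
    n_per_class = torch.stack([(target == c).sum() for c in (0, 1)]).float()
    weights = (n_total - n_per_class) / n_total        # assumed: w_i = (N - N_i) / N
    ce = F.cross_entropy(logits, target, weight=weights.to(logits.device))

    return lambda_dice * dice + lambda_ce * ce
```

With lambda_dice = lambda_ce = 0.5 this matches the equal weighting of the two terms stated above.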
S3: and processing the image by using the deep learning segmentation model to obtain a whole brain nerve fiber distribution table.
Step S3 specifically includes the following steps:
s31: and carrying out segmentation processing on the image to be detected to obtain a nerve fiber binary image.
S32: and extracting the length of the nerve fiber from the binary image of the nerve fiber to obtain the length information of the nerve fiber.
Counting the number of elements equal to 255 in the binarized nerve fiber image matrix gives the nerve fiber length information of the nerve fibers in the image;
s33: and constructing a whole brain nerve fiber distribution table according to the space position information of the image to be detected in the whole brain and the length information of the divided nerve fibers.
The whole-brain spatial position information (x, y, z) of an image to be tested consists of the position information z of the coronal slice and the position information (x, y) of the image to be tested relative to that coronal slice (the position is taken at the upper left corner of each cropped image to be tested and corresponds to its coordinate values in the coordinate system established in step S1). The whole-brain nerve fiber distribution table is formed by taking the spatial position information of each image to be tested as the index and the nerve fiber length of the corresponding binary image as the value.
Each image to be tested is named according to its whole-brain spatial position information (x, y, z); for example, the naming scheme can be "z_x_y.png", where z, x and y are replaced by the spatial position information of the image to be tested.
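A minimal sketch of how the distribution table of step S3 could be assembled from such tiles is given below; the segment callback standing in for the trained VoxResNet, the imageio dependency and the dictionary layout are illustrative assumptions, while the (x, y, z) index, the 255 foreground value and the "z_x_y.png" naming follow the embodiment.

```python
import os
import numpy as np
import imageio.v2 as imageio

def build_distribution_table(tile_dir, segment):
    """Return {(x, y, z): fibre_length} for every tile in `tile_dir`.

    `segment` is assumed to wrap the trained segmentation model and to return
    a binary mask in which nerve-fibre pixels have the value 255.
    """
    table = {}
    for name in os.listdir(tile_dir):
        if not name.endswith(".png"):
            continue
        z, x, y = (int(v) for v in os.path.splitext(name)[0].split("_"))
        mask = segment(imageio.imread(os.path.join(tile_dir, name)))
        table[(x, y, z)] = int(np.count_nonzero(mask == 255))   # fibre length = foreground pixel count
    return table
```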
S4: and further processing the deep learning result by using morphological operation to obtain a final whole brain nerve fiber image.
Step S4 specifically includes the following steps:
s41: selecting 500 as an empirical threshold (of course, other preset thresholds, such as other values within the interval of 400-600, may also be adopted), dividing the nerve fiber distribution table in S33, and only retaining corresponding indexes (the part larger than the threshold is denoted as p1, and the part smaller than the threshold is denoted as p 2).
S42: and drawing a 3D whole brain nerve fiber contour map of p1 to obtain the maximum connected domain of the nerve fiber contour map, as shown in figure 4.
The number of layers s of the whole-brain coronal slices and the length l and width w of a coronal slice are acquired, and a 3D all-zero matrix with length [l/256], width [w/256] and depth [s/200] is constructed (the operator [ ] denotes rounding, e.g., rounding down; the edge image regions lost to the rounding are discarded); the position of each point of p1 in the 3D all-zero matrix is calculated and the value at that position is changed to 1, giving a binarized 3D whole-brain nerve fiber contour map. A morphological operation is then performed on the 3D whole-brain nerve fiber contour map to obtain the maximum connected domain p3.
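The construction of the binarized 3D volume and the extraction of its maximum connected domain can be sketched as follows; the use of scipy.ndimage for connected-component labelling and the helper name largest_component are implementation assumptions, while the [l/256] × [w/256] × [s/200] grid follows the embodiment.

```python
import numpy as np
from scipy import ndimage

def largest_component(p1, slice_len, slice_wid, n_layers, tile=256, unit_layers=200):
    """Build the binarized [s/200] x [w/256] x [l/256] volume from the p1 indexes
    and return a boolean mask of its largest 3D connected component."""
    vol = np.zeros((n_layers // unit_layers, slice_wid // tile, slice_len // tile),
                   dtype=np.uint8)
    for x, y, z in p1:                                  # p1: (x, y, z) indexes above the threshold
        vol[z // unit_layers, y // tile, x // tile] = 1

    labels, n = ndimage.label(vol)                      # label 3D connected components
    if n == 0:
        return np.zeros(vol.shape, dtype=bool)
    sizes = ndimage.sum(vol, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)        # mask of the maximum connected domain p3
```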
S43: morphological dilation of p3 (Morphological dilation is described in: Morphological Image Analysis; Principles and Applications by Pierre solile, ISBN 3-540-; this step S43 is repeated 3 times (of course, other preset number of iterations is also possible), and a subtracted whole brain nerve fiber image is acquired from the whole brain test data set according to the finally acquired index data set p 3.
In step S43, the parameters of the morphological dilation may be set as follows: the dilation radius is 1, the dilation kernel is an ellipse, and the number of dilations per iteration is 1.
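The iterative dilation-and-intersection refinement of step S43 can be sketched as follows; the 6-connected structuring element generated by scipy.ndimage is an approximation of the radius-1 elliptical kernel described above and, like the helper name refine_p3, is an assumption, while the three iterations follow the embodiment.

```python
import numpy as np
from scipy import ndimage

def refine_p3(p3_mask, p2_mask, iterations=3):
    """p3_mask, p2_mask: boolean volumes on the same grid as the contour matrix;
    p2_mask marks cells whose fibre length was below the threshold but above 0."""
    struct = ndimage.generate_binary_structure(3, 1)    # radius-1, 6-connected structuring element
    for _ in range(iterations):
        dilated = ndimage.binary_dilation(p3_mask, structure=struct)
        newly_added = dilated & ~p3_mask                # cells gained by this dilation
        p3_mask = p3_mask | (newly_added & p2_mask)     # keep only those also present in p2
    return p3_mask
```

The returned mask plays the role of the final index set p3, from which the corresponding images to be tested are collected out of the whole-brain test data set.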
The final whole-brain nerve fiber contour map is shown in Fig. 5; compared with Fig. 4, the contour information is richer and the whole-brain signal images are better preserved. The method can effectively reduce sparse whole-brain nerve fiber data, can be applied efficiently to different sparse whole-brain nerve fiber data sets, and greatly reduces the data volume of subsequent neuron morphology reconstruction.
In the invention, the building and training process of the model based on deep learning, and other parts which are not described in detail, can directly refer to the related prior art.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A TB-level sparse whole brain nerve fiber data reduction method based on deep learning is characterized by comprising the following steps:
s1: based on a series of brain slice images obtained from the same brain tissue, establishing a Cartesian space rectangular coordinate system with the length direction of any brain slice image as the X-axis direction, the width direction as the Y-axis direction and the normal direction of the plane where the brain slice image is located as the Z-axis direction, wherein the origin of the rectangular coordinate system is the top left corner vertex of the uppermost or lowermost brain slice image; then dividing the brain slice images into units according to a preset number of layers per unit; carrying out maximum-intensity projection on the divided brain slice images belonging to the same unit to obtain a two-dimensional plane projection image, then cutting each two-dimensional plane projection image according to a preset unit length and a preset unit width to obtain a plurality of images to be tested, and taking the top left corner vertex of each image to be tested as the image origin of the image to be tested, thereby establishing a test data set, wherein the length and the width of each image to be tested both meet preset requirements, and taking the three-dimensional space coordinate of the image origin of each image to be tested under the rectangular coordinate system as the space position information of each image to be tested under the rectangular coordinate system;
s2: establishing a segmentation model, and training the segmentation model to obtain a trained deep learning segmentation model; the segmentation model can be used for classifying and segmenting whether each pixel point on each image to be tested in the test set belongs to nerve fibers;
s3: processing each image to be tested in the test data set obtained in the step S1 by using the deep learning segmentation model obtained in the step S2 to obtain nerve fiber distribution information corresponding to each image to be tested;
s4: optimizing the nerve fiber distribution information obtained in the step S3 by using morphological operation, so that the maximum connected domain corresponding to the region where the nerve fiber is located in the nerve fiber contour map in the space three-dimensional dimension obtained based on the nerve fiber distribution information is optimized; and based on the maximum connected domain obtained by optimization, selecting a plurality of images to be detected corresponding to the maximum connected domain from all the images to be detected obtained in the step S1 to further construct a three-dimensional nerve fiber image, wherein the three-dimensional nerve fiber image corresponds to a TB-level sparse whole brain nerve fiber data reduction result.
2. The method according to claim 1, wherein in the step S1, the predetermined number of unit layers is 200; the preset unit length is 256 pixels, the preset unit width is 256 pixels, and the length and the width of each image to be measured are 256 pixels and 256 pixels respectively.
3. The subtraction method according to claim 1, wherein in step S2, the segmentation model is a VoxResNet segmentation model that includes 25 convolutional layers, 6 residual modules, and 4 deconvolution layers; and the segmentation model is trained specifically with a mixed loss function of Dice loss and cross-entropy.
4. The abatement method of claim 1, wherein the step S3 specifically comprises the following sub-steps:
s31: classifying and segmenting each image to be tested in the test data set obtained in the step S1 by using the deep learning segmentation model obtained in the step S2, so that each image to be tested obtains a corresponding nerve fiber binary image; for any binary image of nerve fibers, if a certain pixel point corresponds to the nerve fibers, recording the value of the pixel point as a preset value A; if a certain pixel point does not correspond to the nerve fiber, recording the value of the pixel point as a preset B value;
s32: performing nerve fiber length extraction processing on each nerve fiber binary image obtained in the step S31, specifically, for any nerve fiber binary image, recording the total number of pixel points whose value is A as the nerve fiber length information of that nerve fiber binary image;
s33: obtaining the nerve fiber distribution information corresponding to each image to be measured based on the spatial position information of each image to be measured in the spatial rectangular coordinate system and the nerve fiber length information of the corresponding nerve fiber binary image;
in step S33, the nerve fiber distribution information is stored in the form of a nerve fiber distribution table, and the nerve fiber distribution table is formed by using the spatial position information of each image to be measured as an index and using the nerve fiber length information of the corresponding binary image of nerve fibers as a value.
5. The reduction method according to claim 4, wherein the step S4 further comprises the following sub-steps:
s41: dividing the nerve fiber distribution table according to a preset nerve fiber length threshold value; specifically, for any one index, if the value of the index is greater than or equal to the threshold, the index is divided into p1; if its value is less than the threshold and greater than 0, then divide the index into p2;
s42: drawing a nerve fiber contour map in a three-dimensional dimension based on p1 to obtain the maximum connected domain p3 of the nerve fiber contour map;
s43: performing morphological expansion on the p3, taking intersection between the newly added expanded data set and the p2 data set according to the condition whether the indexes are the same or not, adding the data in the intersection into the p3, and performing optimization updating on the p3; repeating the optimization updating for 3-4 times, wherein the p3 obtained by optimization is the maximum connected domain corresponding to the region where the nerve fiber is located in the nerve fiber contour map under the corresponding three-dimensional dimension; and finally, further constructing a nerve fiber contour map under a space three-dimensional dimension based on the p3 obtained through optimization, and selecting a plurality of images to be detected corresponding to the p3 from all images to be detected obtained in the step S1 based on the p3 obtained through optimization to further construct a three-dimensional nerve fiber image, wherein the three-dimensional nerve fiber image corresponds to a TB-level sparse whole brain nerve fiber data reduction result.
6. The method of claim 5, wherein in step S41, the preset nerve fiber length threshold is greater than 400 and less than 600;
step S42 establishes a three-dimensional matrix whose elements are all initialized to 0, the length of the matrix being [l/l0], the width being [w/w0] and the depth being [s/s0], wherein l and w are the length and width of any one brain slice image in step S1, s is the total number of layers of the series of brain slice images in step S1, and l0, w0 and s0 are the unit length, unit width and unit layer number preset in step S1; the operator [ ] denotes rounding; then, based on p1, the position in the three-dimensional matrix corresponding to each index in p1 is calculated (the spatial position information in each index corresponds to a position in the three-dimensional matrix) and the value at that position is recorded as 255, giving a binarized three-dimensional matrix; a binarized three-dimensional nerve fiber contour map is correspondingly obtained based on the binarized three-dimensional matrix, and finally the maximum connected domain p3 of the nerve fiber contour map is obtained based on the nerve fiber contour map;
in step S43, the morphological dilation uses a dilation radius of 1, a dilation kernel of an ellipse, and a dilation frequency of 1.
7. A TB-level sparse whole brain nerve fiber data reduction system based on deep learning, comprising:
an image preprocessing module: used for establishing, based on a series of brain slice images obtained from the same brain tissue, a Cartesian space rectangular coordinate system by taking the length direction of any brain slice image as the X-axis direction, the width direction as the Y-axis direction and the normal direction of the plane where the brain slice image is located as the Z-axis direction, wherein the origin of the rectangular coordinate system is the top left corner vertex of the uppermost layer or the lowermost layer of the brain slice image; then dividing the brain slice images into units according to a preset number of layers per unit; carrying out maximum-intensity projection on the divided brain slice images belonging to the same unit to obtain a two-dimensional plane projection image, then cutting each two-dimensional plane projection image according to a preset unit length and a preset unit width to obtain a plurality of images to be tested, and taking the top left corner vertex of each image to be tested as the image origin of the image to be tested, thereby establishing a test data set, wherein the length and the width of each image to be tested both meet preset requirements, and taking the three-dimensional space coordinate of the image origin of each image to be tested under the rectangular coordinate system as the space position information of each image to be tested under the rectangular coordinate system;
a segmentation model module: the device is used for classifying and segmenting whether each pixel point on each image to be tested in the test set belongs to nerve fiber or not to obtain the nerve fiber distribution information corresponding to each image to be tested;
a morphology processing module: the neural fiber distribution information obtained by the segmentation model module is optimized by morphological operation, so that the maximum connected domain corresponding to the region where the neural fiber is located in the neural fiber contour map in the space three-dimensional dimension obtained based on the neural fiber distribution information is optimized; and based on the maximum connected domain obtained by optimization, selecting a plurality of images to be detected corresponding to the maximum connected domain from all images to be detected obtained by the image preprocessing module so as to further construct a three-dimensional nerve fiber image, wherein the three-dimensional nerve fiber image corresponds to a TB-level sparse whole brain nerve fiber data reduction result.
CN202110501547.XA 2021-05-08 2021-05-08 TB-level cranial nerve fiber data reduction method and system based on deep learning Active CN113313673B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110501547.XA CN113313673B (en) 2021-05-08 2021-05-08 TB-level cranial nerve fiber data reduction method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110501547.XA CN113313673B (en) 2021-05-08 2021-05-08 TB-level cranial nerve fiber data reduction method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN113313673A CN113313673A (en) 2021-08-27
CN113313673B true CN113313673B (en) 2022-05-20

Family

ID=77371757

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110501547.XA Active CN113313673B (en) 2021-05-08 2021-05-08 TB-level cranial nerve fiber data reduction method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN113313673B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109222972A (en) * 2018-09-11 2019-01-18 华南理工大学 A kind of full brain data classification method of fMRI based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10080508B2 (en) * 2015-09-10 2018-09-25 Toshiba Medical Systems Corporation Magnetic resonance imaging apparatus and image processing apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109222972A (en) * 2018-09-11 2019-01-18 华南理工大学 A kind of full brain data classification method of fMRI based on deep learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CS2-Net: Deep Learning Segmentation of Curvilinear Structures in Medical Imaging; Lei Mou et al.; arXiv; 2020-10-19; full text *
Segmentation of Optic Nerve Fibers Based on Symmetric Region Growing and Edge Gradient (基于对称区域生长和边缘梯度的视神经纤维的分割); 赵希梅 et al.; Chinese Journal of Biomedical Engineering (中国生物医学工程学报); 2009-12-03; full text *

Also Published As

Publication number Publication date
CN113313673A (en) 2021-08-27

Similar Documents

Publication Publication Date Title
CN104751178B (en) Lung neoplasm detection means and method based on shape template matching combining classification device
CN110163813B (en) Image rain removing method and device, readable storage medium and terminal equipment
CN111476266B (en) Non-equilibrium type leukocyte classification method based on transfer learning
CN109102498B (en) Method for segmenting cluster type cell nucleus in cervical smear image
CN111062296B (en) Automatic white blood cell identification and classification method based on computer
CN110188763B (en) Image significance detection method based on improved graph model
CN112329871B (en) Pulmonary nodule detection method based on self-correction convolution and channel attention mechanism
CN108629772A (en) Image processing method and device, computer equipment and computer storage media
Min et al. MRI images enhancement and tumor segmentation for brain
CN111680755A (en) Medical image recognition model construction method, medical image recognition device, medical image recognition medium and medical image recognition terminal
CN114677525B (en) Edge detection method based on binary image processing
He et al. An automated three-dimensional detection and segmentation method for touching cells by integrating concave points clustering and random walker algorithm
CN113177592A (en) Image segmentation method and device, computer equipment and storage medium
CN109741358B (en) Superpixel segmentation method based on adaptive hypergraph learning
Zeng et al. Progressive feature fusion attention dense network for speckle noise removal in OCT images
CN113313673B (en) TB-level cranial nerve fiber data reduction method and system based on deep learning
CN112651951A (en) DCE-MRI-based breast cancer classification method
CN111539966A (en) Colorimetric sensor array image segmentation method based on fuzzy c-means clustering
CN115131384B (en) Bionic robot 3D printing method, device and medium based on edge preservation
Yan et al. Two and multiple categorization of breast pathological images by transfer learning
CN114841985A (en) High-precision processing and neural network hardware acceleration method based on target detection
CN114565511A (en) Lightweight image registration method, system and device based on global homography estimation
Chen et al. Enhancing nucleus segmentation with haru-net: a hybrid attention based residual u-blocks network
CN114022521A (en) Non-rigid multi-mode medical image registration method and system
CN117893450B (en) Digital pathological image enhancement method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant