CN114155234A - Method and device for identifying position of lung segment of focus, storage medium and electronic equipment - Google Patents


Info

Publication number
CN114155234A
CN114155234A
Authority
CN
China
Prior art keywords
lung
image
model
target
lung segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111494117.6A
Other languages
Chinese (zh)
Inventor
孙小婉
蔡巍
张霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Original Assignee
Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Neusoft Intelligent Medical Technology Research Institute Co Ltd
Priority to CN202111494117.6A
Publication of CN114155234A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30061 - Lung


Abstract

The present disclosure relates to a method, an apparatus, a storage medium, and an electronic device for identifying the lung segment position of a lesion, which improve both the efficiency of identifying a lesion's lung segment position and the reliability of the identification result. The method comprises the following steps: acquiring a target lung image marked with a lesion position; and inputting the target lung image into a lesion lung segment position identification model to obtain a lung segment position identification result corresponding to the lesion in the target lung image, the result representing the position of the lesion within the lung segments. The model comprises a first classification sub-model and second classification sub-models: the first classification sub-model classifies the target lung image according to its lung segment category and inputs the classified image into the second classification sub-model corresponding to that category, and that second classification sub-model determines the lung segment position identification result for the lesion in the target lung image.

Description

Method and device for identifying position of lung segment of focus, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for identifying a lesion lung segment position, a storage medium, and an electronic device.
Background
CT (computed tomography) scans a body layer of a certain thickness with X-ray beams; a detector receives the X-rays transmitted through the layer and converts them into visible light, photoelectric conversion turns the light into electrical signals, and an analog-to-digital converter converts these into digital signals that are input into a computer for processing. As the most effective noninvasive detection technique for lung diseases, lung CT imaging offers thin slices, high definition, low noise, and other advantages, and is widely applied in lung disease screening and auxiliary diagnosis.
Currently, lung CT image analysis technology mainly studies whether a lesion exists in the lung and the disease type of the lesion; identifying the lung segment position of a lesion in a lung CT image is mainly achieved through manual reading. However, the result of manual interpretation depends on factors such as the reader's knowledge reserve and clinical experience, so the accuracy of the identified lesion lung segment position cannot be guaranteed. Furthermore, when a large number of lung CT images accumulate, manual reading identifies lesion lung segment positions inefficiently.
Disclosure of Invention
The present disclosure is directed to a method, an apparatus, a storage medium, and an electronic device for identifying a lesion lung segment position, so as to solve the technical problems of low reliability and low efficiency in identifying a lesion lung segment position through manual interpretation.
In order to achieve the above object, a first aspect of the present disclosure provides a method for identifying a focal lung segment position, the method comprising:
acquiring a target lung image marked with a focus position;
inputting the target lung image into a focus lung segment position identification model to obtain a lung segment position identification result corresponding to the focus in the target lung image, wherein the lung segment position identification result is used for representing the lung segment position of the focus in the lung;
the lesion lung segment position recognition model comprises a first classification sub-model and a second classification sub-model, the number of lung segment types which can be recognized by the first classification sub-model is the same as that of the second classification sub-model, each second classification sub-model corresponds to one lung segment type, the first classification sub-model is used for classifying the target lung image according to the lung segment type corresponding to the target lung image and inputting the classified target lung image into the second classification sub-model corresponding to the lung segment type, and the second classification sub-model is used for determining a lung segment position recognition result corresponding to a lesion in the target lung image.
Optionally, the lesion lung segment position recognition model includes an image segmentation module, and the step of inputting the target lung image into the lesion lung segment position recognition model to obtain a lung segment position recognition result corresponding to a lesion in the target lung image includes:
inputting the target lung image into the first classification sub-model, extracting first image characteristics from the target lung image through the first classification sub-model, and determining a target lung segment category corresponding to the target lung image according to the first image characteristics;
inputting the target lung image into the image segmentation module to obtain a plurality of small images;
and inputting the small image with the focus into a second classification sub-model corresponding to the target lung segment category, extracting second image characteristics from the small image with the focus through the second classification sub-model, and determining a lung segment position recognition result corresponding to the small image with the focus according to the second image characteristics.
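The two-stage routing described in the steps above can be sketched as follows. The classifiers here are hypothetical placeholders (simple intensity rules stand in for the trained first and second classification sub-models, and the function names are invented for illustration); only the routing structure, one first-stage model selecting among per-category second-stage models, mirrors the method.

```python
import numpy as np

def first_classifier(image):
    """Stage 1 placeholder: map a lung image to one of four lung segment
    categories (0-3). A real model would extract first image features with
    a trained network; here mean intensity stands in."""
    return int(min(3, image.mean() // 64))

def make_second_classifier(category):
    """Stage 2 placeholder: one sub-model per lung segment category,
    returning a lung segment position label for a lesion patch."""
    def classify(patch):
        return f"category{category}-segment{int(patch.mean()) % 2}"
    return classify

# One second-stage sub-model per category recognizable by the first stage.
second_classifiers = {c: make_second_classifier(c) for c in range(4)}

def identify_lesion_segment(image, lesion_patches):
    category = first_classifier(image)         # classify the whole image
    model = second_classifiers[category]       # route to the matching sub-model
    return [model(p) for p in lesion_patches]  # position result per lesion patch
```

In the patent's pipeline the two stages would be trained networks and the patches would come from the image segmentation module; the threshold rules above are only assumptions made so the sketch runs.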
Optionally, the inputting the small image with the lesion into a second classification sub-model corresponding to the target lung segment category, and extracting a second image feature from the small image with the lesion through the second classification sub-model includes:
inputting the small images with the focus and the preset number of target small images around the small images into a second classification sub-model corresponding to the target lung segment type;
respectively extracting the image features of the small image with the focus and the image features of the target small image for feature fusion to obtain a target fusion feature map;
and performing feature extraction on the target fusion feature map to obtain the second image feature.
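One simple way to realize this fusion step is sketched below. The three-number feature vector and the mean fusion operator are illustrative assumptions; the patent leaves the extractor and the fusion to the trained second classification sub-model.

```python
import numpy as np

def extract_features(patch):
    """Placeholder feature extractor; a trained network would be used in practice."""
    return np.array([patch.mean(), patch.std(), patch.max()])

def fuse_with_neighbors(lesion_patch, neighbor_patches):
    """Fuse the lesion patch's features with those of the preset number of
    surrounding target patches; the fused result corresponds to the target
    fusion feature map, which further extraction would turn into the
    second image feature."""
    feats = [extract_features(lesion_patch)]
    feats += [extract_features(p) for p in neighbor_patches]
    return np.stack(feats).mean(axis=0)  # average as one simple fusion choice
```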
Optionally, the training process of the lesion lung segment position identification model includes:
acquiring a first sample lung image marked with a lung segment category, and training the first classification sub-model based on the first sample lung image, wherein the lung segment category is used for representing the actual lung segment category of the first sample lung image;
classifying a second sample lung image through a trained first classification sub-model to obtain a lung segment class corresponding to the second sample lung image, and segmenting the second sample lung image into a plurality of sample small images, wherein each sample small image is marked with an actual lung segment position identification result;
training the second classification submodel based on each of the sample thumbnail images.
Optionally, the training the first classification submodel based on the first sample lung image comprises:
inputting the first sample lung image into the first classification sub-model to obtain a predicted lung segment class of the first sample lung image;
calculating a first loss function according to the actual lung segment class and the predicted lung segment class of the first sample lung image, and adjusting the parameter of the first classification sub-model according to the calculation result of the first loss function;
the training the second classification submodel based on each of the sample small images includes:
inputting each sample small image into a second classification sub-model corresponding to the lung segment type respectively to obtain a predicted lung segment position identification result corresponding to each sample small image;
and calculating a second loss function according to the actual lung segment position identification result corresponding to each sample small image and the predicted lung segment position identification result, and adjusting the parameters of the second classification submodel according to the calculation result of the second loss function.
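The loss-driven parameter adjustment described above for both sub-models can be illustrated with a minimal cross-entropy training loop. A linear softmax model stands in for the classification sub-model here, which is an assumption made for brevity, not the patent's architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_classifier(X, y, num_classes, lr=0.1, epochs=200):
    """Compute a loss between predicted and actual classes, then adjust the
    model parameters from the loss gradient, as in the scheme above."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], num_classes))
    onehot = np.eye(num_classes)[y]
    for _ in range(epochs):
        probs = softmax(X @ W)                  # predicted classes
        grad = X.T @ (probs - onehot) / len(X)  # cross-entropy gradient
        W -= lr * grad                          # parameter adjustment step
    return W
```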
Optionally, the acquiring a target lung image marked with a lesion position includes:
collecting a lung scanning image;
and inputting the lung scanning image into a pre-trained focus recognition model to obtain the target lung image marked with the focus position.
Optionally, the first classification submodel and the second classification submodel are residual error networks, and the number of network layers of the first classification submodel is greater than the number of network layers of the second classification submodel.
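The residual (skip-connection) idea behind both sub-models can be sketched as follows. The two-matrix block with ReLU is a simplification of real residual networks, and under this sketch the depth difference between the two sub-models simply amounts to stacking more blocks in the first.

```python
import numpy as np

def residual_block(x, W1, W2):
    """One residual block: output = x + F(x), where F is a small two-layer
    transform. The identity skip is what makes the network 'residual'."""
    h = np.maximum(0, x @ W1)  # ReLU nonlinearity
    return x + h @ W2          # skip connection adds the input back

def residual_net(x, blocks):
    """Stack blocks; the first classification sub-model would use more
    blocks (a greater number of network layers) than the second."""
    for W1, W2 in blocks:
        x = residual_block(x, W1, W2)
    return x
```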
The second aspect of the present disclosure also provides an apparatus for identifying a focal lung segment location, the apparatus comprising:
the acquisition module is used for acquiring a target lung image marked with a focus position;
the identification module is used for inputting the target lung image into a focus lung segment position identification model to obtain a lung segment position identification result corresponding to the focus in the target lung image, and the lung segment position identification result is used for representing the lung segment position of the focus in the lung;
the lesion lung segment position recognition model comprises a first classification sub-model and a second classification sub-model, the number of lung segment types which can be recognized by the first classification sub-model is the same as that of the second classification sub-model, each second classification sub-model corresponds to one lung segment type, the first classification sub-model is used for classifying the target lung image according to the lung segment type corresponding to the target lung image and inputting the classified target lung image into the second classification sub-model corresponding to the lung segment type, and the second classification sub-model is used for determining a lung segment position recognition result corresponding to a lesion in the target lung image.
The third aspect of the present disclosure also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of the above first aspects.
A fourth aspect of the present disclosure also provides an electronic device, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of any of the first aspects above.
Through the technical scheme, the following technical effects can be at least achieved:
First, a target lung image marked with a lesion position is obtained; the target lung image is then input into a lesion lung segment position identification model to obtain a lung segment position identification result corresponding to the lesion in the target lung image, the result representing the position of the lesion within the lung segments. The model comprises a first classification sub-model and second classification sub-models: the number of lung segment categories recognizable by the first classification sub-model equals the number of second classification sub-models, each of which corresponds to one lung segment category. The first classification sub-model classifies a target lung image according to its lung segment category and inputs the classified image into the second classification sub-model corresponding to that category, which determines the lung segment position recognition result for the lesion in the image. In this way, the lung segment position of a lesion in a lung image is identified automatically, improving identification efficiency, and the results produced by the lesion lung segment position identification model are stable in accuracy and highly reliable.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a schematic illustration of a lung CT raw image;
fig. 2 is a schematic flowchart of a method for identifying a focal lung segment position according to an embodiment of the present disclosure;
FIG. 3 is a schematic illustration of a segmentation of a lung image provided by an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another method for identifying a focal lung segment position according to an embodiment of the present disclosure;
fig. 5 is a block diagram of a device for identifying a lesion lung segment position according to an embodiment of the present disclosure;
fig. 6 is a block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect. The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that terms such as "first" and "second" in this disclosure are used only to distinguish different devices, modules, or units, not to limit the order of, or interdependence between, the functions they perform. In addition, the articles "a", "an", and "the" are intended to be illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
Currently, lung CT image analysis technology mainly studies whether a lesion exists in the lung and the disease type of the lesion; identifying the lung segment position of a lesion in a lung CT image is mainly achieved through manual reading. However, the result of manual interpretation depends on factors such as the reader's knowledge reserve and clinical experience, so the accuracy of the identified lesion lung segment position cannot be guaranteed. Furthermore, when a large number of lung CT images accumulate, manual reading identifies lesion lung segment positions inefficiently.
In view of the above, the present disclosure provides a method, an apparatus, a storage medium, and an electronic device for identifying a lesion lung segment position to solve the above problem.
Before describing detailed embodiments of the technical solution of the present disclosure, an application scenario of the technical solution of the present disclosure is described below.
As shown in fig. 1, an original lung CT image is three-dimensional: it comprises a series of axial slices, i.e., a certain number of two-dimensional images stacked into a volume. At present, the related art can use a trained model to identify whether a two-dimensional slice contains shadows of different densities, such as flaky, patchy, or nodular opacities, and mark them to obtain a two-dimensional image annotated with the lesion position. That is, the related art can determine the position of a lesion within the two-dimensional image but cannot determine the lung tissue to which the lesion belongs, i.e., the lung segment where the lesion is located. The method for identifying the lung segment position of a lesion provided by the embodiments of this disclosure can further identify the lung segment position of the lesion in the two-dimensional image.
The following provides a detailed description of embodiments of the present disclosure.
Referring to fig. 2, an embodiment of the present disclosure provides a method for identifying a focal lung segment position, where the method includes:
s201, obtaining a target lung image marked with a focus position.
S202, inputting the target lung image into the lesion lung segment position recognition model to obtain a lung segment position recognition result corresponding to the lesion in the target lung image, wherein the lung segment position recognition result is used for representing the lung segment position of the lesion in the lung.
The lesion lung segment position recognition model comprises a first classification sub-model and second classification sub-models. The number of lung segment categories recognizable by the first classification sub-model equals the number of second classification sub-models, and each second classification sub-model corresponds to one lung segment category. The first classification sub-model classifies the target lung image according to its corresponding lung segment category and inputs the classified image into the second classification sub-model of that category; that second classification sub-model then determines the lung segment position recognition result corresponding to the lesion in the target lung image.
By adopting this method, the lung segment position of a lesion in a lung image is identified automatically, which improves identification efficiency, and the results output by the lesion lung segment position identification model are stable in accuracy and highly reliable.
In order to make the identification method of the lesion lung segment position provided by the present disclosure more understandable to those skilled in the art, the above steps are exemplified in detail below.
First, a training process of the lesion lung segment position recognition model in the present disclosure is explained.
In an embodiment of the present disclosure, the lesion lung segment position identification model includes a first classification sub-model and a second classification sub-model, where the first classification sub-model is configured to classify a target lung image according to a lung segment category corresponding to the target lung image, and input the classified target lung image into the second classification sub-model corresponding to the lung segment category, and the second classification sub-model is configured to determine a lung segment position identification result corresponding to a lesion in the target lung image. In the training process, the first classification submodel may be trained, and then the parameters of the first classification submodel are fixed, and the second classification submodel is trained.
Therefore, in a possible manner, the training process of the lesion lung segment position identification model includes: the method comprises the steps of obtaining a first sample lung image marked with lung segment types, training a first classification sub-model based on the first sample lung image, wherein the lung segment types are used for representing the actual lung segment types of the first sample lung image, classifying a second sample lung image through the trained first classification sub-model to obtain the lung segment types corresponding to the second sample lung image, dividing the second sample lung image into a plurality of sample small images, marking the actual lung segment position recognition result on each sample small image, and finally training a second classification sub-model based on each sample small image.
Optionally, the first classification submodel may be trained based on the first sample lung image by: firstly, inputting a first sample lung image into a first classification sub-model to obtain a predicted lung segment class of the first sample lung image, then calculating a first loss function according to the actual lung segment class and the predicted lung segment class of the first sample lung image, and adjusting parameters of the first classification sub-model according to a calculation result of the first loss function. And, a second classification submodel may be trained based on each sample thumbnail by: firstly, inputting each sample small image into a second classification sub-model corresponding to the lung segment type respectively to obtain a predicted lung segment position identification result corresponding to each sample small image, then calculating a second loss function according to an actual lung segment position identification result and the predicted lung segment position identification result corresponding to each sample small image, and adjusting parameters of the second classification sub-model according to a calculation result of the second loss function.
It should be noted that lung segments may currently be divided into four, five, or six categories depending on the classification scheme adopted; this disclosure does not specifically limit the scheme. The number of recognition results produced by the first and second classification sub-models varies with the chosen scheme and should be set with reference to it.
Illustratively, the lung segment categories of the first sample lung images are labeled first; this disclosure takes a four-category division of lung images as an example. A lung image displaying the trachea is labeled a first-category lung image; one displaying the bronchial branches, a second-category lung image; one displaying the angle between the middle lobe bronchus and the lower lobe bronchus, a third-category lung image; and one displaying the basal trunk, a fourth-category lung image. For the specific division, reference may be made to the related art, which is not described in detail here.
After the first sample lung image labeled with the actual lung segment category is obtained, the first sample lung image may be input into the first classification submodel, that is, the predicted lung segment category of the first sample lung image is obtained through the first classification submodel. A first loss function may then be calculated from the predicted lung segment class and the actual lung segment class of the sample lung image. Finally, the parameters of the first classification submodel are adjusted according to the calculation result of the first loss function. It should be understood that the first loss function may be used to characterize the difference between the predicted lung segment class and the actual lung segment class, and therefore, adjusting the parameters of the first classification submodel according to the calculation result of the first loss function may make the predicted lung segment class output by the first classification submodel closer and closer to the actual lung segment class of the sample lung image, thereby completing the training of the first classification submodel.
It should be understood that, since multiple first sample lung images may be acquired to train the first classification sub-model, these images may also be preprocessed, for example scaled to a uniform size such as 224 × 224 pixels; this disclosure does not limit the preprocessing.
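The uniform-size preprocessing mentioned above can be done with a simple nearest-neighbour resize; real pipelines typically use bilinear interpolation, so this is only an illustrative sketch.

```python
import numpy as np

def resize_nearest(img, out_h=224, out_w=224):
    """Resize a 2-D image to a uniform out_h x out_w by nearest-neighbour
    index mapping (no interpolation)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows][:, cols]
```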
Further, after the first classification sub-model is obtained through training, classifying the second sample lung image through the trained first classification sub-model to obtain the lung segment class corresponding to the second sample lung image, where the first sample lung image and the second sample lung image may be different or the same, and the disclosure does not specifically limit this. Since the lung segment classification result of the first classification submodel is four lung segment classes, a second classification submodel corresponding to the four lung segment classes correspondingly exists. Referring to fig. 3, the second sample lung image is segmented into a plurality of sample small images, and the actual lung segment position identification result of each sample small image is labeled. Preferably, the second sample lung image is segmented into a plurality of sample small images of the same size.
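The segmentation of a lung image into equal-size sample small images (as in fig. 3) can be sketched as a grid split; the 4 × 4 grid used in the test is an assumed example, since the patent does not fix the patch count here.

```python
import numpy as np

def segment_into_patches(image, rows, cols):
    """Split a 2-D slice into rows*cols equal-size small images
    (assumes the dimensions divide evenly)."""
    h, w = image.shape
    ph, pw = h // rows, w // cols
    return [image[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            for r in range(rows) for c in range(cols)]
```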
It should be noted that, because lung images of different lung segment categories display different lung segment positions, the lung segment position recognition results of the second classification sub-models corresponding to different categories differ. The first category of lung image displays the apical, posterior, and anterior segments of the right lung and the corresponding segments of the left lung; the second category displays the posterior, anterior, and dorsal segments of the right lung and the posterior, anterior, and dorsal segments of the left lung; the third category displays the anterior, lateral, medial, and dorsal segments of the right lung and the anterior segment, superior lingular segment, inferior lingular segment, and dorsal segment of the left lung; and the fourth category displays the lateral, medial basal, anterior basal, lateral basal, and posterior basal segments of the right lung and the inferior lingular, medial basal, anterior basal, lateral basal, and posterior basal segments of the left lung. For the specific division, reference may be made to the related art, which is not described in detail here.
The lung images corresponding to the different lung segment categories are segmented into a plurality of sample small images according to the lung segment positions that each category displays, and each sample small image is labeled with its actual lung segment position identification result. Each sample small image is then input into the second classification sub-model corresponding to its lung segment category to obtain a predicted lung segment position identification result for that sample small image. The second loss function is calculated from the actual and predicted lung segment position identification results of each sample small image, and the parameters of the second classification sub-model are adjusted according to the calculation result of the second loss function. It should be understood that the second loss function characterizes the difference between the predicted and actual lung segment position identification results; adjusting the parameters of the second classification sub-model according to its calculation result therefore brings the predicted results output by the sub-model closer to the actual results of the sample small images, thereby completing the training of the second classification sub-model.
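As an aside for implementers, the predict-compare-adjust loop described above can be sketched with a minimal stand-in classifier — a single softmax layer trained by gradient descent in NumPy, not the residual network of the disclosure; the patch size, sample count, and class count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for labeled sample patches: 64 flattened 14x14 patches, 5 position classes
n, d, k = 64, 14 * 14, 5
X = rng.normal(size=(n, d))
y = rng.integers(0, k, size=n)
Y = np.eye(k)[y]                      # one-hot actual position labels

W = np.zeros((d, k))                  # parameters of the stand-in sub-model

def forward(X, W):
    z = X @ W
    z -= z.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def loss(P, Y):
    # second loss function: cross entropy between predicted and actual results
    return -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

losses = []
for _ in range(50):
    P = forward(X, W)
    losses.append(loss(P, Y))
    grad = X.T @ (P - Y) / n          # gradient of the cross entropy w.r.t. W
    W -= 0.1 * grad                   # adjust parameters from the loss result

assert losses[-1] < losses[0]         # training narrows the predicted/actual gap
```

The real sub-model replaces the single linear layer with the convolutional stack described below, but the outer loop — forward pass, loss, parameter update — is unchanged.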
It should be understood that there are two options for training a second classification sub-model. A lung image of unknown lung segment category can first be classified by the trained first classification sub-model and then input into the second classification sub-model of that category for training. Alternatively, a lung image of known lung segment category can be input directly into the corresponding second classification sub-model, so that the first and second classification sub-models are trained simultaneously and the training period is shortened.
The first and second classification submodels described above may be any neural network model. In a possible manner, the first classification submodel and the second classification submodel may be a residual network, and the number of network layers of the first classification submodel is greater than that of the second classification submodel.
Illustratively, the first layer of the first classification sub-model includes a convolutional layer, a nonlinear mapping layer, a normalization layer, and a pooling layer. The convolutional layer has 96 convolution kernels of size 11 × 11, a stride of 4, and a padding of 0. A 224 × 224-pixel lung image is input into this convolutional layer to obtain a convolved feature map, a new feature map is obtained through the nonlinear mapping layer and the normalization layer, and finally a max pooling operation is performed on the new feature map by the pooling layer to obtain the output feature map of the first layer. The second layer of the first classification sub-model likewise includes a convolutional layer, a nonlinear mapping layer, a normalization layer, and a pooling layer; its convolutional layer has 256 convolution kernels of size 5 × 5, a stride of 2, and a padding of 2, and the output feature map of the first layer is input into the second layer to obtain the output feature map of the second layer. In the same way, the output feature map of each layer is fed into the next layer in turn until the final classification result for the lung segment category is obtained.
Specifically, the third layer of the first classification sub-model consists of a convolutional layer with 384 kernels of size 3 × 3, stride 1, and padding 1, followed by a nonlinear mapping layer; the fourth layer is identical (384 kernels, 3 × 3, stride 1, padding 1, plus a nonlinear mapping layer); and the fifth layer consists of a convolutional layer with 256 kernels of size 3 × 3, stride 1, and padding 1, followed by a nonlinear mapping layer. The sixth and seventh layers are fully connected layers with 4096 neurons each and a dropout rate of 0.5. The eighth layer is a fully connected layer with a dropout rate of 0.5 and a number of neurons equal to the number of lung segment categories; with the four lung segment categories above, the number of neurons is 4. Finally, a softmax classifier classifies the output features of the eighth layer to obtain the predicted lung segment category of the lung image.
The nonlinear mapping layers in each layer use the ReLU activation function, which improves the expressive capability of the model and helps mitigate overfitting. The normalization layers use min-max normalization, which speeds up model training. The pooling layers use max pooling with a pooling size of 3 × 3 and a stride of 2, which reduces the model size and increases computation speed.
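The spatial dimensions implied by the layer descriptions above can be checked with the standard convolution-arithmetic formula, floor((in + 2·padding − kernel) / stride) + 1, which applies to both convolution and pooling stages; the walk-through below follows the five convolutional stages exactly as described:

```python
def conv_out(size, kernel, stride, padding=0):
    # output spatial size of a convolution or pooling stage
    return (size + 2 * padding - kernel) // stride + 1

s = 224                         # input lung image: 224 x 224 pixels
s = conv_out(s, 11, 4, 0)       # layer 1 conv: 96 kernels, 11x11, stride 4, pad 0 -> 54
s = conv_out(s, 3, 2)           # layer 1 max pooling: 3x3, stride 2              -> 26
s = conv_out(s, 5, 2, 2)        # layer 2 conv: 256 kernels, 5x5, stride 2, pad 2 -> 13
s = conv_out(s, 3, 2)           # layer 2 max pooling: 3x3, stride 2              -> 6
for _ in range(3):              # layers 3-5: 3x3 conv, stride 1, pad 1 (size-preserving)
    s = conv_out(s, 3, 1, 1)

flat = 256 * s * s              # 256 feature maps flattened into the 4096-neuron FC layers
print(s, flat)                  # 6 9216
```

The final 6 × 6 × 256 = 9216 flattened features feed the two 4096-neuron fully connected layers, consistent with the description.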
It should be understood that, to prevent significant loss of features, some layers are not provided with normalization or pooling layers, such as the third, fourth, and fifth layers. In addition, the first loss function used to train the first classification sub-model may be a cross-entropy loss function, with the following formula:
$$L = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{M} y_{ic}\log\left(p_{ic}\right)$$
where L represents the cross entropy, N represents the number of sample lung images, M represents the number of lung segment classes, i indexes a sample lung image, and c indexes a lung segment class. y_ic is an indicator (0 or 1): it is 1 if the true lung segment class of sample lung image i is c, and 0 otherwise. p_ic is the predicted probability that sample lung image i belongs to lung segment class c. The parameters of the first classification sub-model are adjusted according to the calculation result of the cross-entropy loss function, and the model is continuously optimized until the difference between the predicted and actual lung segment categories of the sample lung images meets the preset requirement, thereby completing the training of the first classification sub-model.
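As a quick numeric check of the cross-entropy formula (the class count and probability values are illustrative, not taken from the disclosure):

```python
import numpy as np

def cross_entropy(Y, P, eps=1e-12):
    """L = -(1/N) * sum_i sum_c y_ic * log(p_ic)."""
    return -np.mean(np.sum(Y * np.log(P + eps), axis=1))

# N = 2 sample lung images, M = 4 lung segment classes
Y = np.array([[1, 0, 0, 0],               # image 0 truly belongs to class 0
              [0, 0, 1, 0]])              # image 1 truly belongs to class 2
P = np.array([[0.25, 0.25, 0.25, 0.25],   # uniform guess -> contributes log(4)
              [0.70, 0.10, 0.10, 0.10]])  # wrong prediction -> contributes log(10)

print(cross_entropy(Y, P))  # (log 4 + log 10) / 2, about 1.844
```

Only the true-class term of each row contributes, because y_ic zeroes out every other term — exactly the indicator behavior described above.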
Since the input of the second classification sub-model is a small image obtained by dividing a lung image — for example, a 224 × 224-pixel lung image divided into 256 small images of 14 × 14 pixels — its input carries fewer image features than the input of the first classification sub-model, so the second classification sub-model may have fewer network layers than the first. Moreover, because the divided small images are relatively small and may lose the dependency between adjacent pixels, the target small image and the small images around it may be input into the second classification sub-model together. For example, the four adjacent small images above, below, left of, and right of the target small image may be selected, or the eight adjacent small images forming a full circle around it, and so on, which is not specifically limited by the present disclosure.
Illustratively, the first layer of the second classification sub-model includes a convolutional layer, a nonlinear mapping layer, a normalization layer, and a pooling layer. The convolutional layer has 64 convolution kernels of size 8 × 8, a stride of 1, and a padding of 0. Taking as an example an input target small image together with its four adjacent small images (above, below, left, and right), the convolution kernels perform convolution on the 5 input small images to obtain a feature map for each, and the position-wise elements of the 5 feature maps are added to obtain a fused feature map. The fused feature map passes through the nonlinear mapping layer and the normalization layer to obtain a new feature map, and finally a max pooling operation is performed on the new feature map by the pooling layer to obtain the output feature map of the first layer. The second layer of the second classification sub-model likewise includes a convolutional layer, a nonlinear mapping layer, a normalization layer, and a pooling layer.
Its convolutional layer has 128 convolution kernels of size 3 × 3, a stride of 1, and a padding of 2, and the output feature map of the first layer is input into the second layer to obtain the output feature map of the second layer; in the same way, the output feature map of each layer is fed into the next until the final classification result for the lung segment position is obtained. The third layer of the second classification sub-model consists of a convolutional layer with 128 kernels of size 3 × 3, stride 1, and padding 2, followed by a nonlinear mapping layer; the fourth layer consists of a convolutional layer with 256 kernels of size 3 × 3, stride 1, and padding 2, followed by a nonlinear mapping layer. The fifth layer is a fully connected layer with 1024 neurons and a dropout rate of 0.5, and the sixth layer is a fully connected layer with a dropout rate of 0.5 and a number of neurons equal to the number of lung segment positions displayed by the corresponding lung segment category. Finally, a softmax classifier classifies the output features of the sixth layer to obtain the predicted lung segment position of the target small image.
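The patch splitting and four-neighbor feature fusion described above can be sketched as follows (NumPy; raw pixel patches stand in for the per-patch convolution feature maps, and element-wise addition implements the position-wise fusion):

```python
import numpy as np

def split_patches(image, p=14):
    """Split an HxW image into a (H/p, W/p) grid of pxp patches."""
    h, w = image.shape
    return image.reshape(h // p, p, w // p, p).swapaxes(1, 2)

def fuse_with_neighbors(grid, r, c):
    """Element-wise sum of a target patch and its up/down/left/right neighbors."""
    rows, cols = grid.shape[:2]
    fused = grid[r, c].astype(float).copy()
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols:  # border patches have fewer neighbors
            fused += grid[nr, nc]
    return fused

image = np.arange(224 * 224).reshape(224, 224)
grid = split_patches(image)
print(grid.shape[0] * grid.shape[1])  # 256 patches of 14 x 14 pixels
fused = fuse_with_neighbors(grid, 8, 8)
```

In the actual sub-model each patch is first convolved into a feature map before the addition; summing raw patches here simply makes the fusion step concrete.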
The nonlinear mapping, normalization, and pooling layers in each layer of the second classification sub-model may be configured with reference to the first classification sub-model, and the second loss function may likewise follow the first loss function of the first classification sub-model, which is not repeated here. In addition, since the first classification sub-model is set to identify 4 lung segment categories, there are four corresponding second classification sub-models, one per lung segment category; the four second classification sub-models may adopt the same or different model structures, which is not specifically limited by the present disclosure.
It should be noted that, the first classification submodel and the second classification submodel are only used as an exemplary illustration, and in a specific implementation, the model structure may be adjusted according to actual needs and training results, which is not limited in this disclosure.
After the lesion lung segment position recognition model is trained, in the model application stage, a target lung image marked with a lesion position can be acquired and input into the lesion lung segment position recognition model, which outputs the lung segment position recognition result corresponding to the lesion in the target lung image, namely the lung segment position of the lesion in the lung, thereby realizing automatic recognition of the lesion lung segment position.
In a possible manner, the target lung image marked with the lesion position may be acquired as follows: first, a lung scanning image is collected, and then the lung scanning image is input into a pre-trained lesion recognition model to obtain the target lung image marked with the lesion position.
The training process of the lesion recognition model includes: inputting a lung image labeled with an actual lesion position into the lesion recognition model to obtain a predicted lesion position, calculating a loss function from the actual and predicted lesion positions, and adjusting the parameters of the lesion recognition model according to the calculation result of the loss function. For the specific model structure and training process, reference may be made to the related art, which is not repeated here.
After the target lung image marked with the lesion position is obtained, it is input into the first classification sub-model of the lesion lung segment position recognition model to obtain the lung segment category corresponding to the target lung image, and the classified target lung image is input into the second classification sub-model corresponding to that lung segment category to obtain the lung segment position recognition result for the lesion in the target lung image. Since the input of the second classification sub-model is a small image obtained by segmenting the target lung image, the target lung image needs to be segmented into small images before being input into the second classification sub-model corresponding to the lung segment category.
Therefore, in a possible manner, the lesion lung segment position recognition model includes an image segmentation module, and inputting the target lung image into the model to obtain the lung segment position recognition result proceeds as follows: the target lung image is input into the first classification sub-model, which extracts first image features from it and determines the target lung segment category according to those features; the target lung image is input into the image segmentation module to obtain a plurality of small images; and finally the small images with the lesion are input into the second classification sub-model corresponding to the target lung segment category, which extracts second image features from them and determines the lung segment position recognition result corresponding to the small images with the lesion according to the second image features.
Illustratively, a 224 × 224-pixel target lung image is input into the first classification sub-model, which performs feature extraction to obtain the first image features and determines the target lung segment category from them; the target lung image is also input into the image segmentation module to obtain 256 small images of 14 × 14 pixels. The size of the images produced by the image segmentation module can be set as required, and is not specifically limited by the present disclosure. Because the lesion position is known, the small images with the lesion can be selected and input into the second classification sub-model corresponding to the target lung segment category; feature extraction on these small images yields the second image features, from which the lung segment position recognition result for the small images with the lesion is determined.
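Since the lesion position is marked on the target lung image, the small images containing the lesion can be picked out by mapping the marked region to grid indices; the sketch below assumes the lesion is given as a pixel bounding box and uses the 14-pixel, 16 × 16 patch grid of the example — both are illustrative assumptions, not a format specified by the disclosure:

```python
def lesion_patches(box, patch=14, grid=16):
    """Map a lesion bounding box (x0, y0, x1, y1) in pixel coordinates
    to the (row, col) indices of every patch it overlaps."""
    x0, y0, x1, y1 = box
    rows = range(y0 // patch, min(y1 // patch, grid - 1) + 1)
    cols = range(x0 // patch, min(x1 // patch, grid - 1) + 1)
    return [(r, c) for r in rows for c in cols]

# a lesion spanning pixels x in [20, 40], y in [30, 45]
print(lesion_patches((20, 30, 40, 45)))
# -> [(2, 1), (2, 2), (3, 1), (3, 2)]: four patches go to the second sub-model
```

Each returned index identifies a small image with the lesion, which is then fed (together with its neighbors, as described below) into the second classification sub-model.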
In addition, if the segmented small images are relatively small, the dependency between adjacent pixels may be lost. Therefore, in a possible manner, inputting a small image with a lesion into the second classification sub-model corresponding to the target lung segment category and extracting the second image features proceeds as follows: first, the small image with the lesion and a preset number of target small images around it are input into the second classification sub-model corresponding to the target lung segment category; the image features of the small image with the lesion and of each target small image are extracted separately and fused to obtain a target fusion feature map; and finally, feature extraction is performed on the target fusion feature map to obtain the second image features.
For example, with reference to the model training process, the target small image and its surrounding small images may be input into the second classification sub-model together: the four adjacent small images above, below, left of, and right of the target small image may be selected, or the eight adjacent small images forming a full circle around it, and so on, which is not specifically limited by the present disclosure. Taking the input target small image and its four adjacent small images (above, below, left, and right) as an example, the first layer of the second classification sub-model performs convolution on the 5 input small images to obtain a feature map for each, adds the position-wise elements of the 5 feature maps to obtain the target fusion feature map, and the subsequent network layers then perform feature extraction on the target fusion feature map to obtain the second image features.
After the second image features are obtained, the lung segment position recognition result corresponding to the small image with the lesion is determined from them. Moreover, because one target lung image may contain multiple small images with lesions, which may come from the same or from different lung segment positions, the lung segment position recognition results can be integrated: multiple results indicating the same lung segment position are merged into a single output, avoiding the data redundancy of repeatedly outputting results for the same lung segment position.
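The result-integration step can be as simple as de-duplicating the per-patch predictions while preserving first-seen order (the position labels here are illustrative):

```python
def integrate_results(per_patch_results):
    """Collapse repeated lung segment position results into one output each."""
    seen, merged = set(), []
    for segment in per_patch_results:
        if segment not in seen:      # same position reported by several patches
            seen.add(segment)
            merged.append(segment)
    return merged

predictions = ["right apical", "right apical", "right posterior", "right apical"]
print(integrate_results(predictions))  # ['right apical', 'right posterior']
```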
With this method, the trained first classification sub-model determines the target lung segment category of the target lung image, the image segmentation module segments the target lung image into a plurality of small images, and the trained second classification sub-model determines the lung segment position recognition result for the small images with the lesion. The lung segment position of a lesion in a lung image is thus recognized automatically, the recognition efficiency is improved, and the recognition results of the lesion lung segment position recognition model are stable in accuracy and highly reliable.
The steps of the method for identifying the focal lung segment position provided by the embodiments of the present disclosure are described below by another exemplary embodiment. As shown in fig. 4, the method includes:
S401, collecting a lung scanning image.
S402, inputting the lung scanning image into a pre-trained lesion recognition model to obtain a target lung image marked with a lesion position.
S403, inputting the target lung image into the first classification sub-model, extracting first image features from the target lung image through the first classification sub-model, and determining the target lung segment category corresponding to the target lung image according to the first image features.
S404, inputting the target lung image into an image segmentation module to obtain a plurality of small images.
S405, inputting the small image with the focus and the small images around the small image into a second classification sub-model corresponding to the target lung segment category, extracting second image features from the small image with the focus and the small images around the small image with the focus through the second classification sub-model, and determining a lung segment position recognition result corresponding to the small image with the focus according to the second image features.
In this method, a target lung image marked with a lesion position is first acquired and input into the first classification sub-model of the lesion lung segment position recognition model, which classifies it according to its corresponding lung segment category; the classified target lung image is input into the image segmentation module and segmented into a plurality of small images; the small images with the lesion, together with the small images around them, are then input into the second classification sub-model corresponding to the lung segment category; and finally the second classification sub-model outputs the lung segment position recognition result for the small images with the lesion, determining the lung segment position of the lesion in the lung. In this way, the lung segment position of a lesion in a lung image is recognized automatically, the recognition efficiency is improved, and the recognition results of the lesion lung segment position recognition model are stable in accuracy and highly reliable.
The embodiment of the present disclosure provides an apparatus for identifying a lesion lung segment position, where the apparatus 500 includes:
an obtaining module 501, configured to obtain a target lung image marked with a lesion position.
The identifying module 502 is configured to input the target lung image into a lesion lung segment position identifying model, so as to obtain a lung segment position identifying result corresponding to a lesion in the target lung image, where the lung segment position identifying result is used to represent a lung segment position where the lesion is located in a lung.
The lesion lung segment position recognition model comprises a first classification sub-model and a second classification sub-model, the number of lung segment types which can be recognized by the first classification sub-model is the same as that of the second classification sub-model, each second classification sub-model corresponds to one lung segment type, the first classification sub-model is used for classifying the target lung image according to the lung segment type corresponding to the target lung image and inputting the classified target lung image into the second classification sub-model corresponding to the lung segment type, and the second classification sub-model is used for determining a lung segment position recognition result corresponding to a lesion in the target lung image.
By adopting the device, the automatic identification of the lung segment position of the focus in the lung image is realized, the identification efficiency of the focus lung segment position is improved, and the accuracy of the identification result of the lung image identified by the focus lung segment position identification model is stable and the reliability is high.
Optionally, the lesion lung segment location identification model comprises an image segmentation module, and the identification module 502 is configured to:
inputting the target lung image into the first classification sub-model, extracting first image characteristics from the target lung image through the first classification sub-model, and determining a target lung segment category corresponding to the target lung image according to the first image characteristics;
inputting the target lung image into the image segmentation module to obtain a plurality of small images;
and inputting the small image with the focus into a second classification sub-model corresponding to the target lung segment category, extracting second image characteristics from the small image with the focus through the second classification sub-model, and determining a lung segment position recognition result corresponding to the small image with the focus according to the second image characteristics.
Optionally, the identifying module 502 is configured to:
inputting the small images with the focus and the preset number of target small images around the small images into a second classification sub-model corresponding to the target lung segment type;
respectively extracting the image features of the small image with the focus and the image features of the target small image for feature fusion to obtain a target fusion feature map;
and performing feature extraction on the target fusion feature map to obtain the second image feature.
Optionally, the apparatus 500 further comprises a training module configured to:
acquiring a first sample lung image marked with a lung segment category, and training the first classification sub-model based on the first sample lung image, wherein the lung segment category is used for representing the actual lung segment category of the first sample lung image;
classifying a second sample lung image through a trained first classification sub-model to obtain a lung segment class corresponding to the second sample lung image, and segmenting the second sample lung image into a plurality of sample small images, wherein each sample small image is marked with an actual lung segment position identification result;
training the second classification submodel based on each of the sample thumbnail images.
Optionally, the training module is configured to:
inputting the first sample lung image into the first classification sub-model to obtain a predicted lung segment class of the first sample lung image;
calculating a first loss function according to the actual lung segment class and the predicted lung segment class of the first sample lung image, and adjusting the parameter of the first classification sub-model according to the calculation result of the first loss function;
the training module is configured to:
inputting each sample small image into a second classification sub-model corresponding to the lung segment type respectively to obtain a predicted lung segment position identification result corresponding to each sample small image;
and calculating a second loss function according to the actual lung segment position identification result corresponding to each sample small image and the predicted lung segment position identification result, and adjusting the parameters of the second classification submodel according to the calculation result of the second loss function.
Optionally, the obtaining module 501 is configured to:
collecting a lung scanning image;
and inputting the lung scanning image into a pre-trained lesion recognition model to obtain the target lung image marked with the lesion position.
Optionally, the first classification submodel and the second classification submodel are residual networks, and the number of network layers of the first classification submodel is greater than the number of network layers of the second classification submodel.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
The disclosed embodiments also provide a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method for identifying a lesion lung segment position provided in the above embodiments.
An embodiment of the present disclosure further provides an electronic device, including:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method for identifying a lesion lung segment position provided in the above embodiments.
Fig. 6 is a block diagram illustrating an electronic device 600 according to an example embodiment. For example, the electronic device 600 may be provided as a server. Referring to fig. 6, the electronic device 600 includes a processor 622, which may be one or more in number, and a memory 632 for storing computer programs executable by the processor 622. The computer program stored in memory 632 may include one or more modules that each correspond to a set of instructions. Further, the processor 622 may be configured to execute the computer program to perform the above-mentioned identification method of the lesion lung segment position.
Additionally, the electronic device 600 may also include a power component 626 that may be configured to perform power management of the electronic device 600, and a communication component 650 that may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 600. The electronic device 600 may also include an input/output (I/O) interface 658. The electronic device 600 may operate based on an operating system stored in the memory 632, such as Windows Server™, Mac OS X™, Unix™, Linux™, and so on.
In another exemplary embodiment, a computer readable storage medium is also provided, which comprises program instructions, which when executed by a processor, implement the steps of the above-described method for identifying a focal lung segment location. For example, the non-transitory computer readable storage medium may be the memory 632 described above that includes program instructions executable by the processor 622 of the electronic device 600 to perform the method for identifying a focal lung segment location described above.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-mentioned method of identification of a lesion lung segment location when executed by the programmable apparatus.
The preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings. However, the present disclosure is not limited to the specific details of the above embodiments; various simple modifications may be made to the technical solution of the present disclosure within its technical idea, and these simple modifications all fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; to avoid unnecessary repetition, the possible combinations are not described again.
In addition, the various embodiments of the present disclosure may be combined arbitrarily, and such combinations should likewise be regarded as disclosed herein, as long as they do not depart from the spirit of the present disclosure.

Claims (10)

1. A method for identifying a lesion lung segment position, the method comprising:
acquiring a target lung image marked with a lesion position;
inputting the target lung image into a lesion lung segment position recognition model to obtain a lung segment position recognition result corresponding to the lesion in the target lung image, wherein the lung segment position recognition result is used for representing the lung segment position of the lesion in the lung;
the lesion lung segment position recognition model comprises a first classification sub-model and a second classification sub-model, the number of lung segment types which can be recognized by the first classification sub-model is the same as that of the second classification sub-model, each second classification sub-model corresponds to one lung segment type, the first classification sub-model is used for classifying the target lung image according to the lung segment type corresponding to the target lung image and inputting the classified target lung image into the second classification sub-model corresponding to the lung segment type, and the second classification sub-model is used for determining a lung segment position recognition result corresponding to a lesion in the target lung image.
2. The method of claim 1, wherein the lesion lung segment position recognition model further comprises an image segmentation module, and the inputting of the target lung image into the lesion lung segment position recognition model to obtain a lung segment position recognition result corresponding to a lesion in the target lung image comprises:
inputting the target lung image into the first classification sub-model, extracting first image features from the target lung image through the first classification sub-model, and determining a target lung segment category corresponding to the target lung image according to the first image features;
inputting the target lung image into the image segmentation module to obtain a plurality of small images;
and inputting the small image with the lesion into the second classification sub-model corresponding to the target lung segment category, extracting second image features from the small image with the lesion through the second classification sub-model, and determining a lung segment position recognition result corresponding to the small image with the lesion according to the second image features.
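The tiling step performed by the image segmentation module in claim 2 can be sketched with non-overlapping tiles; the function name, the tiling scheme, and the patch size are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

def split_into_patches(image: np.ndarray, patch_size: int):
    """Split a 2-D image into non-overlapping patch_size x patch_size small images."""
    h, w = image.shape
    patches = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return patches

# an 8x8 toy "lung image" split into four 4x4 small images
patches = split_into_patches(np.arange(64).reshape(8, 8), 4)
```

Only the small images that contain the lesion would then be forwarded to the second classification sub-model.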
3. The method of claim 2, wherein the inputting of the small image with the lesion into the second classification sub-model corresponding to the target lung segment category and the extracting of second image features from the small image with the lesion through the second classification sub-model comprise:
inputting the small image with the lesion, together with a preset number of target small images around it, into the second classification sub-model corresponding to the target lung segment category;
respectively extracting the image features of the small image with the lesion and the image features of each target small image, and fusing them to obtain a target fusion feature map;
and performing feature extraction on the target fusion feature map to obtain the second image features.
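The neighbor-fusion idea of claim 3 (features of the lesion patch combined with features of surrounding patches before a second extraction pass) can be illustrated as below. The per-patch extractor (mean and standard deviation) and the final reduction are crude stand-ins chosen only to make the data flow concrete; they are not the claimed feature extractors.

```python
import numpy as np

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Stand-in per-patch feature extractor: a 2-D (mean, std) descriptor."""
    return np.array([patch.mean(), patch.std()])

def fuse_with_neighbors(lesion_patch: np.ndarray, neighbor_patches):
    """Concatenate lesion-patch features with each neighbor's features
    (the 'target fusion feature map'), then apply a simple reduction
    standing in for the further feature-extraction step."""
    feats = [extract_features(lesion_patch)]
    feats += [extract_features(p) for p in neighbor_patches]
    fused = np.concatenate(feats)       # fused feature vector
    second_feature = fused.mean()       # stand-in second-stage extraction
    return fused, second_feature

fused, second_feature = fuse_with_neighbors(np.ones((4, 4)), [np.zeros((4, 4))] * 2)
```

Including surrounding patches gives the second sub-model spatial context that a single isolated patch would lack.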
4. The method according to any one of claims 1 to 3, wherein the training process of the lesion lung segment position recognition model comprises:
acquiring a first sample lung image marked with a lung segment category, and training the first classification sub-model based on the first sample lung image, wherein the marked lung segment category represents the actual lung segment category of the first sample lung image;
classifying a second sample lung image through the trained first classification sub-model to obtain a lung segment category corresponding to the second sample lung image, and segmenting the second sample lung image into a plurality of sample small images, wherein each sample small image is marked with an actual lung segment position identification result;
and training the second classification sub-model based on each of the sample small images.
5. The method of claim 4, wherein the training of the first classification sub-model based on the first sample lung image comprises:
inputting the first sample lung image into the first classification sub-model to obtain a predicted lung segment category of the first sample lung image;
calculating a first loss function according to the actual lung segment category and the predicted lung segment category of the first sample lung image, and adjusting the parameters of the first classification sub-model according to the calculation result of the first loss function;
and the training of the second classification sub-model based on each of the sample small images comprises:
inputting each sample small image into the second classification sub-model corresponding to its lung segment category, to obtain a predicted lung segment position identification result corresponding to each sample small image;
and calculating a second loss function according to the actual lung segment position identification result and the predicted lung segment position identification result corresponding to each sample small image, and adjusting the parameters of the second classification sub-model according to the calculation result of the second loss function.
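The loss-then-adjust loop in claim 5 can be made concrete with a softmax cross-entropy loss and a single gradient step on the class scores. The patent does not name its loss functions or optimizer; cross-entropy and the learning rate below are assumptions chosen because they are the standard choice for classification sub-models.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def cross_entropy(logits: np.ndarray, true_class: int) -> float:
    """Loss comparing predicted class scores with the actual label."""
    return float(-np.log(softmax(logits)[true_class]))

# toy "parameter adjustment": one gradient step on the logits
logits = np.array([0.5, 1.5, 0.2])
true_class = 0
loss_before = cross_entropy(logits, true_class)

grad = softmax(logits).copy()
grad[true_class] -= 1.0                  # d(loss)/d(logits) for softmax cross-entropy
logits_after = logits - 0.5 * grad       # gradient descent step, assumed lr = 0.5
loss_after = cross_entropy(logits_after, true_class)
```

After the step, the score of the true class rises and the loss decreases, which is the adjustment direction both loss functions in claim 5 induce.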
6. The method of any one of claims 1-3, wherein obtaining the target lung image with the lesion location marked comprises:
collecting a lung scanning image;
and inputting the lung scanning image into a pre-trained focus recognition model to obtain the target lung image marked with the focus position.
7. The method according to any one of claims 1-3, wherein the first classification sub-model and the second classification sub-model are residual networks, and the number of network layers of the first classification sub-model is greater than that of the second classification sub-model.
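Claim 7's design, where both sub-models are residual networks with the first deeper than the second, can be illustrated with a toy NumPy residual stack. The block structure, dimensions, weights, and depths below are assumptions for illustration, not the claimed architecture (a real implementation would typically use ResNet variants of different depths).

```python
import numpy as np

def residual_block(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """y = x + f(x): the identity skip connection that defines a residual network."""
    return x + np.maximum(0.0, x @ weight)  # ReLU(xW) plus identity shortcut

def residual_net(x: np.ndarray, weights) -> np.ndarray:
    for w in weights:                       # depth = number of stacked residual blocks
        x = residual_block(x, w)
    return x

rng = np.random.default_rng(0)
dim = 4
# first sub-model assumed deeper (8 blocks) than the second (3 blocks)
deep_weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(8)]
shallow_weights = deep_weights[:3]

out = residual_net(np.ones(dim), shallow_weights)
```

A deeper first stage fits its harder job (whole-image category recognition), while the shallower second stage classifies already-localized small images.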
8. An apparatus for identifying a lesion lung segment position, the apparatus comprising:
an acquisition module, configured to acquire a target lung image marked with a lesion position;
an identification module, configured to input the target lung image into a lesion lung segment position recognition model to obtain a lung segment position recognition result corresponding to the lesion in the target lung image, wherein the lung segment position recognition result is used for representing the lung segment position of the lesion in the lung;
wherein the lesion lung segment position recognition model comprises a first classification sub-model and a plurality of second classification sub-models, the number of second classification sub-models being equal to the number of lung segment categories recognizable by the first classification sub-model, and each second classification sub-model corresponding to one lung segment category; the first classification sub-model is used for classifying the target lung image according to its lung segment category and inputting the classified target lung image into the second classification sub-model corresponding to that category, and the second classification sub-model is used for determining the lung segment position recognition result corresponding to the lesion in the target lung image.
9. A non-transitory computer readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 7.
CN202111494117.6A 2021-12-08 2021-12-08 Method and device for identifying position of lung segment of focus, storage medium and electronic equipment Pending CN114155234A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111494117.6A CN114155234A (en) 2021-12-08 2021-12-08 Method and device for identifying position of lung segment of focus, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111494117.6A CN114155234A (en) 2021-12-08 2021-12-08 Method and device for identifying position of lung segment of focus, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN114155234A true CN114155234A (en) 2022-03-08

Family

ID=80454025

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111494117.6A Pending CN114155234A (en) 2021-12-08 2021-12-08 Method and device for identifying position of lung segment of focus, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN114155234A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830302A (en) * 2024-03-04 2024-04-05 瀚依科技(杭州)有限公司 Optimization method and device for lung segment segmentation, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN108446730B (en) CT pulmonary nodule detection device based on deep learning
CN109670532B (en) Method, device and system for identifying abnormality of biological organ tissue image
CN107895367B (en) Bone age identification method and system and electronic equipment
CN111325739B (en) Method and device for detecting lung focus and training method of image detection model
CN109584209B (en) Vascular wall plaque recognition apparatus, system, method, and storage medium
CN111862044B (en) Ultrasonic image processing method, ultrasonic image processing device, computer equipment and storage medium
Mansour et al. Internet of things and synergic deep learning based biomedical tongue color image analysis for disease diagnosis and classification
CN111091536B (en) Medical image processing method, apparatus, device, medium, and endoscope
CN110197474B (en) Image processing method and device and training method of neural network model
CN113034462B (en) Method and system for processing gastric cancer pathological section image based on graph convolution
CN112581438B (en) Slice image recognition method and device, storage medium and electronic equipment
CN111754453A (en) Pulmonary tuberculosis detection method and system based on chest radiography image and storage medium
CN110766670A (en) Mammary gland molybdenum target image tumor localization algorithm based on deep convolutional neural network
CN112085714A (en) Pulmonary nodule detection method, model training method, device, equipment and medium
US20230177698A1 (en) Method for image segmentation, and electronic device
CN117274270B (en) Digestive endoscope real-time auxiliary system and method based on artificial intelligence
CN115205520A (en) Gastroscope image intelligent target detection method and system, electronic equipment and storage medium
Manikandan et al. Segmentation and Detection of Pneumothorax using Deep Learning
CN117593293B (en) Intelligent processing system and method for nasal bone fracture image
CN117710760B (en) Method for detecting chest X-ray focus by using residual noted neural network
CN112990339B (en) Gastric pathological section image classification method, device and storage medium
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
CN111462203B (en) DR focus evolution analysis device and method
CN112927215A (en) Automatic analysis method for digestive tract biopsy pathological section

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination