CN114140378A - Scanned image processing method, electronic device, and readable medium - Google Patents

Scanned image processing method, electronic device, and readable medium

Info

Publication number
CN114140378A
Authority
CN
China
Prior art keywords
lesion
area
focus
image
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111131767.4A
Other languages
Chinese (zh)
Inventor
陈磊
刘爱娥
王祥
樊荣荣
孙瑶
孙红标
李清楚
萧毅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Changzheng Hospital
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai Changzheng Hospital
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Changzheng Hospital and Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202111131767.4A
Publication of CN114140378A
Legal status: Pending

Classifications

    • G06T7/0012 Biomedical image inspection (G06T7/00 Image analysis; G06T7/0002 Inspection of images, e.g. flaw detection)
    • G06F18/24 Classification techniques (G06F18/00 Pattern recognition; G06F18/20 Analysing)
    • G06N3/045 Combinations of networks (G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08 Learning methods (G06N3/02 Neural networks)
    • G06T7/11 Region-based segmentation (G06T7/10 Segmentation; Edge detection)
    • G16H30/20 ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G06T2207/10081 Computed x-ray tomography [CT] (G06T2207/10 Image acquisition modality; G06T2207/10072 Tomographic images)
    • G06T2207/20081 Training; Learning (G06T2207/20 Special algorithmic details)
    • G06T2207/30061 Lung (G06T2207/30 Subject of image; G06T2207/30004 Biomedical image processing)

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Epidemiology (AREA)
  • Evolutionary Biology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides a scanned image processing method, an electronic device, and a readable medium. The method includes: acquiring a scanned image; inputting the scanned image into a lesion segmentation model to obtain a segmented lesion region; performing region expansion outward from the lesion region, based on similarity to features of the lesion region, to determine a region around the lesion; and obtaining the lesion region and the region around the lesion. With the present disclosure, the lesion region and other regions can be accurately extracted from scanned images such as CT (computed tomography) images, the region around the lesion is effectively determined by adaptive radial expansion, and the resulting regions are used to predict the lesion invasiveness grade effectively, thereby providing doctors and other related personnel with more accurate analysis and prediction of tumor invasiveness, improving the efficiency and accuracy of evaluation and prediction, and in turn effectively improving doctors' working efficiency and differential-diagnosis ability.

Description

Scanned image processing method, electronic device, and readable medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a scanned image processing method, an electronic device, and a readable medium.
Background
Lung cancer ranks first worldwide in both morbidity and mortality, and early detection and early diagnosis are key to preventing and treating lung cancer and improving survival. The most common primary lung cancer is adenocarcinoma, which is classified by severity into pre-invasive lesions, minimally invasive adenocarcinoma, and invasive adenocarcinoma. Accurate assessment of the invasiveness of an adenocarcinoma directly determines the choice of treatment and the prognostic evaluation. The tumor microenvironment can secrete large amounts of growth factors and cytokines and induce hypoxia and angiogenesis, playing an important role in tumor occurrence, development, and metastasis. The World Health Organization defines spread through air spaces (STAS) of tumor cells as a special metastasis pattern of lung adenocarcinoma: tumor cells are found in air spaces of the lung parenchyma in the region around the tumor. STAS is an independent risk factor for postoperative recurrence and metastasis in lung cancer patients.
Most existing studies perform feature extraction or analysis only on the tumor region, while others determine the peritumoral range by radial expansion over a fixed distance and combine the image features of the tumor and its surrounding region to assist diagnosis. Fusing peritumoral image information allows tumor invasiveness to be characterized more comprehensively, enabling intelligent identification and invasiveness prediction of early lung adenocarcinoma and providing a theoretical basis for choosing a surgical treatment plan and for prognosis evaluation.
Currently, imaging-based identification of lung cancer invasiveness relies mainly on manual interpretation, in which a radiologist reviews the patient's plain-scan and contrast-enhanced CT (computed tomography) images. This requires the doctor to repeatedly inspect every slice of the patient's CT series, which is time-consuming and tedious; the diagnosis is also highly subjective, so the efficiency and accuracy of evaluation and prediction are low.
Disclosure of Invention
A primary object of the present disclosure is to provide a scanned image processing method, an electronic device, and a readable medium that overcome the above-mentioned drawbacks of the prior art.
This technical problem is solved by the following technical solutions:
As an aspect of the present disclosure, there is provided a scanned image processing method, including:
acquiring a scanned image;
inputting the scanned image into a lesion segmentation model to obtain a segmented lesion region;
performing region expansion outward from the lesion region, based on similarity to features of the lesion region, to determine a region around the lesion; and,
obtaining the lesion region and the region around the lesion.
As an alternative embodiment, after the step of obtaining the lesion region and the region around the lesion, the scanned image processing method further includes:
evaluating a lesion invasiveness grade based on an image including the lesion region and the region around the lesion, to output a lesion invasiveness grading result.
As an alternative embodiment, the step of evaluating a lesion invasiveness grade based on an image including the lesion region and the region around the lesion to output a lesion invasiveness grading result includes:
extracting radiomics features of the lesion region and the region around the lesion from the image including the lesion region and the region around the lesion;
inputting the extracted radiomics features of the lesion region and the region around the lesion into a lesion invasiveness grading model to predict a lesion invasiveness grade; and,
outputting a lesion invasiveness grading result.
As an alternative embodiment, the step of evaluating a lesion invasiveness grade based on an image including the lesion region and the region around the lesion to output a lesion invasiveness grading result includes:
inputting an image including the lesion region and the region around the lesion into a deep neural network classification model to output a lesion invasiveness grading result;
the training step of the deep neural network classification model specifically includes:
inputting a training image including a lesion region and a region around the lesion into a deep neural network classification model to be trained;
extracting an attention region from the training image and extracting the same attention region from a supervision image, wherein the attention region includes at least the lesion region and the region around the lesion;
calculating a lesion invasiveness grading loss between the training image and the supervision image based on the lesion invasiveness of the pixel points in the attention regions of the training image and the supervision image; and,
training the deep neural network classification model to be trained according to the grading loss to obtain a trained deep neural network classification model.
As an alternative embodiment, the features of the lesion region include any one or more of a gray value, entropy, gray-level statistical features, shape features, texture features, and features extracted from the lesion region by various filters.
As an alternative embodiment, the step of inputting the scanned image into a lesion segmentation model further includes:
inputting the scanned image into the lesion segmentation model to obtain a segmented tissue structure;
the step of performing region expansion outward from the lesion region based on similarity to features of the lesion region to determine the region around the lesion includes:
performing region expansion outward from the lesion region, along the direction of the tissue structure vector, based on similarity to features of the lesion region, to determine the region around the lesion.
As an alternative embodiment, the step of inputting the scanned image into a lesion segmentation model further includes:
inputting the scanned image into the lesion segmentation model to obtain a segmented tissue anatomical structure;
before the step of obtaining the lesion region and the region around the lesion, the scanned image processing method further includes:
limiting the range and boundary of the determined region around the lesion based on the range and boundary of the tissue anatomical structure, so as to confine the lesion region and the region around the lesion within the range and boundary of the tissue anatomical structure.
As an alternative embodiment, the step of performing region expansion outward from the lesion region based on similarity to features of the lesion region includes:
determining whether the similarity between a target pixel point outside the lesion region and a reference point is smaller than a preset threshold; if so, stopping expansion to the target pixel point, and if not, expanding to the target pixel point;
wherein the reference point is determined from pixel points of the lesion region.
As another aspect of the present disclosure, there is provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the scanned image processing method described above when executing the computer program.
As another aspect of the present disclosure, there is provided a computer readable medium having stored thereon computer instructions which, when executed by a processor, implement the scanned image processing method as described above.
Other aspects of the disclosure will be apparent to those skilled in the art in light of the disclosure.
The positive effects of the present disclosure are as follows:
The scanned image processing method, the electronic device, and the readable medium can accurately extract the lesion region and other regions from scanned images such as CT images, effectively determine the region around the lesion by adaptive radial expansion, and effectively predict the lesion invasiveness grade from the resulting regions, thereby providing doctors and other related personnel with more accurate analysis and prediction of tumor invasiveness, improving the efficiency and accuracy of evaluation and prediction, and in turn effectively improving doctors' working efficiency and differential-diagnosis ability.
Drawings
The features and advantages of the present disclosure may be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
Fig. 1 is a schematic flow chart diagram of a scanned image processing method according to an alternative embodiment of the present disclosure.
Fig. 2 is a flowchart of the scanned image processing method applied to lung tumor analysis.
Fig. 3 is a schematic diagram illustrating the model training and inference process of the lesion segmentation model.
Fig. 4 is a schematic view illustrating conventional equidistant expansion of a lesion region.
Fig. 5 is a schematic diagram illustrating adaptive radial expansion of a lesion region according to an alternative embodiment of the present disclosure.
Fig. 6 is a schematic view illustrating expansion of a lesion region using lung structures.
Fig. 7 is a schematic view showing expansion of the region around the lesion within a lung lobe or lung segment.
FIG. 8 is a schematic structural diagram of a deep neural network classification model based on an attention mechanism.
Fig. 9 is a schematic structural diagram of an electronic device implementing a scanned image processing method according to another alternative embodiment of the present disclosure.
Detailed Description
The present disclosure is further illustrated by the following examples, but is not limited to the scope of the described examples.
It should be noted that references in the specification to "one embodiment," "an alternative embodiment," "another embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the relevant art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
In the description of the present disclosure, it is to be understood that the terms "center," "lateral," "upper," "lower," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplicity of description and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present disclosure. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present disclosure, "a plurality" means two or more unless otherwise specified. Furthermore, the term "comprises" and any variations thereof are intended to cover non-exclusive inclusions.
In the description of the present disclosure, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as a fixed connection, a detachable connection, or an integral connection; as a mechanical connection or an electrical connection; and as a direct connection, an indirect connection through an intermediate medium, or internal communication between two elements. The specific meanings of the above terms in the present disclosure can be understood by those of ordinary skill in the art according to specific circumstances.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
In order to overcome the above drawbacks, the present embodiment provides a scanned image processing method, including: acquiring a scanned image; inputting the scanned image into a lesion segmentation model to obtain a segmented lesion region; performing region expansion outward from the lesion region, based on similarity to features of the lesion region, to determine a region around the lesion; and obtaining the lesion region and the region around the lesion.
In this embodiment, the lesion region can be accurately extracted from the scanned image, and the region around the lesion is effectively determined by adaptive radial expansion, so that more accurate disease analysis and prediction are provided for doctors and other related personnel, the efficiency and accuracy of evaluation and prediction are improved, and doctors' working efficiency and differential-diagnosis ability are effectively improved.
Specifically, as an alternative embodiment, as shown in fig. 1, the scanned image processing method provided in this embodiment mainly includes the following steps:
step 101, acquiring a scanned image.
In this embodiment, the scanned image processing method is mainly applied to lung tumor analysis scenarios, and the scanned image is preferably a CT image of a patient; however, the application scenario and the image type of the method are not particularly limited and can be adjusted and selected according to actual or potential needs.
Step 102, inputting the scanned image into a lesion segmentation model to obtain segmented lesion regions, tissue structures and anatomical structures.
In this step, the lesion segmentation model may be a three-dimensional convolutional neural network segmentation model. Referring to fig. 2, three-dimensional convolutional neural networks are used to segment, from the CT image, lung tissue anatomical structures such as the lung lobes and lung segments, lung tissue structures such as the trachea and blood vessels, and the lesion region; of course, these structures may also be segmented simultaneously by a single segmentation model, which can be configured according to actual requirements.
Specifically, referring to fig. 3, the neural network segmentation stage mainly comprises five segmentation models, for the lung lobes, lung segments, trachea, blood vessels, and lesions, and is divided into a model training process and an inference process. In the model training stage, image data labeled with the corresponding ROIs (regions of interest) are collected for each model, each model is trained independently, and five neural network model files are generated; these files contain a large number of parameters obtained through the deep learning algorithm.
The training processes of the five segmentation models are similar, so the lesion segmentation model is taken as an example below to describe the training process in detail.
In the training stage of the lesion segmentation model, the CT images and the labeled images are input into a three-dimensional convolutional neural network for training to obtain a segmentation model file. For example, the three-dimensional convolutional neural network may adopt a V-Net (a network architecture), and the training strategy may combine two segmentation models of different scales, which effectively improves the processing efficiency and accuracy of the segmentation model. Of course, the deep learning model used for training and inference may also be a DenseNet (a network architecture), and can be selected and adjusted according to actual or potential needs.
Specifically, the input original volume image is first resampled into two images of different resolutions (for example, using sampling parameters [3,3,3] and [1,1,1]), the whole image is normalized, and image blocks of a fixed size (e.g., 64×64×64) are randomly extracted from the complete image. The image blocks at the two resolutions are then input into the corresponding convolutional neural networks for training, iterating until the loss function (for example, the Dice loss) drops below a preset threshold; the trained models are saved, yielding segmentation model files for the two resolutions, namely a coarse segmentation network model file and a fine segmentation network model file.
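As an illustration of the training step just described, the following is a minimal sketch written against a PyTorch-style API; the segmentation network `net` (e.g., a V-Net) and its optimizer are assumed to already exist, the 64×64×64 patch size follows the example value above, and the resampling to the two resolutions is omitted for brevity.

```python
import numpy as np
import torch

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss over voxel-wise lesion probabilities in [0, 1].
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def random_patch(volume, label, size=(64, 64, 64)):
    # Randomly crop one image/label patch pair from the normalized CT volume
    # (the volume is assumed to be at least as large as the patch in every axis).
    z, y, x = [np.random.randint(0, volume.shape[i] - size[i] + 1) for i in range(3)]
    sl = (slice(z, z + size[0]), slice(y, y + size[1]), slice(x, x + size[2]))
    return volume[sl], label[sl]

def train_step(net, optimizer, volume, label):
    # One iteration for one of the two resolution-specific networks.
    img, lbl = random_patch(volume, label)
    img = torch.from_numpy(img).float()[None, None]   # shape (1, 1, 64, 64, 64)
    lbl = torch.from_numpy(lbl).float()[None, None]
    pred = torch.sigmoid(net(img))                    # voxel-wise lesion probabilities
    loss = dice_loss(pred, lbl)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training would simply repeat such a step until the Dice loss falls below the preset threshold, after which the two resulting models are saved as the coarse and fine segmentation files.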
The inference process using the lesion segmentation model is explained below. The coarse segmentation network model and the fine segmentation network model are cascaded: the coarse segmentation network model file is used to locate the lesion, and the fine segmentation network model file is used to finely segment the lesion edge. In use, the CT image is input into the deep learning segmentation algorithm, which first resamples the image to the specified resolution and normalizes it, and then inputs the preprocessed image into the segmentation model to automatically extract the lesion region.
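A minimal sketch of this coarse-to-fine cascade is shown below, again against a PyTorch-style API; it assumes the two trained networks are already loaded and the volume is already resampled and normalized, and the 16-voxel margin around the coarse localization is an illustrative choice.

```python
import numpy as np
import torch

def cascade_segment(coarse_net, fine_net, ct_volume, margin=16):
    """Locate the lesion with the coarse model, then refine its edges with the fine model."""
    with torch.no_grad():
        x = torch.from_numpy(ct_volume).float()[None, None]
        coarse_mask = (torch.sigmoid(coarse_net(x))[0, 0] > 0.5).numpy()
        if not coarse_mask.any():
            return np.zeros_like(ct_volume, dtype=bool)   # no lesion located
        # Crop a bounding box (plus margin) around the coarse localization.
        idx = np.argwhere(coarse_mask)
        lo = np.maximum(idx.min(0) - margin, 0)
        hi = np.minimum(idx.max(0) + margin + 1, coarse_mask.shape)
        roi = ct_volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        roi_t = torch.from_numpy(roi).float()[None, None]
        fine_mask = (torch.sigmoid(fine_net(roi_t))[0, 0] > 0.5).numpy()
        out = np.zeros_like(ct_volume, dtype=bool)
        out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = fine_mask
        return out
```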
Step 103, performing region expansion outward from the lesion region, and along the direction of the tissue structure vector, based on similarity to features of the lesion region, to determine the region around the lesion.
In this step, based on the lesion segmentation result obtained in step 102, an adaptive radial expansion algorithm is used to expand and extend along lung structures such as the trachea and blood vessels, so as to compute the region around the lesion.
In a conventional approach, the region around a lesion, such as the peritumoral region, is expanded in equidistant steps. As shown in fig. 4, for example, the 0-5 mm and 5-10 mm bands are expanded equidistantly, radiomics features are extracted from the different equidistant bands, and the relationship between these features and the benign/malignant classification of the tumor is analyzed. However, equidistant expansion does not consider whether the peritumoral region contains areas related or unrelated to the disease; it is simply an expansion by equal distances, which reduces accuracy.
In this embodiment, it is therefore proposed to use the gray-level information among the radiomics features to expand and identify the peritumoral region non-equidistantly.
Referring to fig. 5, the inner region ① is the lesion region, and the annular region ② is expanded according to the similarity of the gray-level information: the expansion distance is based on the entropy of the lesion region, so that directions whose entropy is close to that of the lesion expand farther than directions whose entropy is not. The annular region ③ is lighter in color, indicating lower similarity, so the expanded area becomes progressively smaller until the similarity reaches the set threshold and expansion stops. Thus, the greater the similarity of the gray-level information (entropy), the larger the expanded region, the more accurate the resulting boundary, and the more favorable it is for evaluating the tumor.
As another embodiment, in this step, it is determined whether the similarity between a target pixel point outside the lesion region and a reference point is smaller than a preset threshold; if so, expansion to the target pixel point is stopped, and if not, the expansion proceeds to the target pixel point, the reference point being determined from the pixel points of the lesion region. In this embodiment, the lesion region is fixed and the expanded region serves as the region around the lesion; that is, the region around the lesion changes dynamically while the lesion region does not. Of course, the lesion region may also change dynamically, in which case the expansion is iterated: each new expanded region may change the selection of the lesion-region pixel points, so the lesion region also changes dynamically during the iteration.
The basic principle of this non-equidistant expansion is that pixels with similar properties are gathered into one region: all pixel points of the lesion region are taken as reference points, and pixels whose similarity to the reference points falls within a set threshold range are expanded radially to form the expanded region. In this embodiment, the similarity is based not only on the gray value and entropy, but also on the gray-level statistical features, shape features, and texture features among the radiomics features, as well as features extracted by various filters.
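Purely as an illustration of this region-growing principle, the sketch below expands the peri-lesion region voxel by voxel using only gray-value similarity to the lesion statistics; the similarity measure and the 0.6 threshold are placeholders, and in practice the richer feature set listed above would drive the expansion.

```python
import numpy as np
from collections import deque

def expand_lesion(ct, lesion_mask, sim_threshold=0.6):
    """Grow a peri-lesion region outward from the lesion boundary, voxel by voxel.

    A candidate voxel is added while its similarity to the lesion reference
    (here: closeness of its gray value to the lesion mean, scaled by the lesion
    standard deviation) stays at or above the preset threshold.
    """
    ref_mean = ct[lesion_mask].mean()
    ref_std = ct[lesion_mask].std() + 1e-6
    peri = np.zeros_like(lesion_mask)
    queue = deque(map(tuple, np.argwhere(lesion_mask)))   # lesion voxels are the reference points
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            p = (z + dz, y + dy, x + dx)
            if any(c < 0 or c >= s for c, s in zip(p, ct.shape)):
                continue                                   # outside the volume
            if lesion_mask[p] or peri[p]:
                continue                                   # already part of lesion or expansion
            similarity = np.exp(-abs(ct[p] - ref_mean) / (3.0 * ref_std))
            if similarity >= sim_threshold:                # stop once similarity drops below threshold
                peri[p] = True
                queue.append(p)
    return peri
```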
From the imaging point of view, there are five main types of relationship between the bronchi and a lung nodule: a, the bronchus is truncated at the tumor; b, the bronchus is encased by the tumor; c, the bronchus is pushed aside by the tumor; d, the wall of the bronchus leading to the tumor is uniformly thickened, with a smooth, narrowed lumen; e, the wall of the bronchus leading to the tumor is irregularly thickened and distorted, narrowing the lumen. Histopathologically, adenocarcinoma in situ mostly originates from the bronchiolar mucosal epithelium or the alveolar epithelium and presents as bronchial stiffness, stretching, narrowing, and truncation. The pulmonary arteries accompany the bronchi and show many morphological changes similar to those of the bronchi.
Therefore, in the present embodiment, the expansion boundary of the peritumoral region is constrained using the obtained segmentation results of the trachea, the blood vessels, and the lesion region. Referring to fig. 6, taking the trachea as an example, the figure shows the mask of the lesion region, the mask of the trachea, the expanded lesion region, and an arrow marking the direction of the trachea (the clinical rationale for expanding along the tracheal vector direction). The expansion adds more pixel points along the tracheal direction and fewer pixel points perpendicular to the tracheal vector. Of course, the manner of obtaining the tracheal direction vector is not limited to PCA (principal component analysis).
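One way to obtain such a direction vector and to weight the expansion anisotropically is sketched below; the PCA-based direction follows the text, while the specific weighting of steps along versus across the airway is an assumption made for illustration.

```python
import numpy as np

def airway_direction(airway_mask):
    """Principal direction of the airway mask, from PCA over its voxel coordinates."""
    coords = np.argwhere(airway_mask).astype(float)
    coords -= coords.mean(axis=0)
    # The right singular vector with the largest singular value is the dominant orientation.
    _, _, vt = np.linalg.svd(coords, full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

def directional_weight(step, direction, along=1.0, across=0.4):
    """Weight for one expansion step: larger along the airway vector, smaller across it."""
    step = np.asarray(step, dtype=float)
    step /= np.linalg.norm(step) + 1e-6
    cos = abs(float(step @ direction))
    return across + (along - across) * cos
```

In the region-growing sketch above, such a weight could, for example, scale the similarity threshold so that the expansion reaches farther along the airway than perpendicular to it.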
Step 104, limiting the range and boundary of the determined region around the lesion based on the range and boundary of the tissue anatomical structure.
In this step, the range and boundary of the determined region around the lesion are limited based on the range and boundary of the tissue anatomical structure, so that the lesion region and the region around the lesion are confined within the range and boundary of the tissue anatomical structure.
Specifically, the extent and boundary of the expansion are limited by the segmentation results of the lung anatomy (lung lobes and lung segments). Referring to fig. 7, after the preliminary expansion around the lesion is obtained, the segmentation results of the lung anatomical structures, including the lung lobes and lung segments, are used to further limit the expansion and to ensure that the expanded region and the lesion region lie in the same lung lobe. Because membranes separate adjacent lung segments and a lesion generally does not invade across segments, the expanded region is also restricted to the same lung segment, except where the lesion itself crosses segments.
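A short sketch of this anatomical restriction is given below; it assumes a labeled lobe/segment volume in which each lobe or segment carries a distinct integer label and 0 denotes background.

```python
import numpy as np

def restrict_to_anatomy(peri_mask, lesion_mask, anatomy_labels):
    """Keep only expanded voxels lying in the lobes/segments that the lesion itself occupies."""
    lesion_parts = np.unique(anatomy_labels[lesion_mask])
    lesion_parts = lesion_parts[lesion_parts > 0]          # drop the background label
    allowed = np.isin(anatomy_labels, lesion_parts)
    return peri_mask & allowed
```

Because every label touched by the lesion is kept, a lesion that itself crosses segments is handled as described above.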
Step 105, evaluating the lesion invasiveness grade based on the image including the lesion region and the region around the lesion, to output a lesion invasiveness grading result.
In this step, based on the tumor and the peritumoral region, the invasiveness of the lesion is graded, evaluated, and predicted using radiomics analysis and a three-dimensional convolutional neural network classification model.
In this embodiment, the diagnostic information obtained by automatic classification of the lung tumor includes, but is not limited to, non-invasive, minimally invasive, and invasive grades, and can be selected and adjusted according to actual or potential needs.
As an optional implementation of this step, radiomics features of the lesion region and the region around the lesion are extracted from the image including the lesion region and the region around the lesion; the extracted radiomics features are input into a lesion invasiveness grading model to predict the lesion invasiveness grade; and a lesion invasiveness grading result is output.
Specifically, the radiomics analysis approach is explained below. First, radiomics features are extracted from the tumor and the peritumoral region; feature dimensionality reduction and feature selection are then performed using methods such as LASSO (a regression model); and a tumor invasiveness grading model, i.e., the lesion invasiveness grading model, is built using classification algorithms such as SVM (support vector machine), logistic regression, or random forest. This step has two main outputs: one is the selected features, which can be fed back to the peritumoral expansion algorithm to provide additional features for the adaptive radial expansion; the other is the classifier-based grading model.
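The following is a minimal sketch of such a pipeline using scikit-learn; it assumes the radiomics feature matrix has already been extracted for the lesion and peri-lesion regions (e.g., with a radiomics toolkit), and the LASSO alpha and SVM kernel are illustrative values only.

```python
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_invasiveness_classifier():
    """LASSO-based feature selection followed by an SVM grading classifier."""
    return Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectFromModel(Lasso(alpha=0.01, max_iter=10000))),
        ("clf", SVC(kernel="rbf", probability=True)),
    ])

# X: one row per case, lesion and peri-lesion radiomics features concatenated
# y: invasiveness grade labels
# model = build_invasiveness_classifier().fit(X, y)
# selected = model.named_steps["select"].get_support()   # features fed back to the expansion algorithm
```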
As another optional implementation of this step, the image including the lesion region and the region around the lesion is input into a deep neural network classification model to output the lesion invasiveness grading result.
The training step of the deep neural network classification model specifically includes: inputting a training image including the lesion region and the region around the lesion into the deep neural network classification model to be trained; extracting an attention region from the training image and the same attention region from a supervision image, the attention region including at least the lesion region and the region around the lesion; calculating the lesion invasiveness grading loss between the training image and the supervision image based on the lesion invasiveness of the pixel points in the attention regions of the training image and the supervision image; and training the deep neural network classification model to be trained according to the grading loss to obtain the trained deep neural network classification model.
Specifically, the deep neural network classification model is explained below. The obtained tumor and peritumoral region are input into a classification network that uses an attention mechanism, so that the deep features of the tumor and peritumoral region are learned in a focused manner: the invasiveness value (a value between 0 and 1) of each pixel point of the attention region (i.e., the region containing the tumor and the peritumoral region) in the supervision image is used to compute the grading loss against the corresponding pixel points in the training image. The deep neural network classification model is then trained with this grading loss, so that the classification model focuses its learning on the tumor and peritumoral region, as shown in fig. 8, which further improves the stability and accuracy of the classification.
In the classification model training stage, the region of the CT image including the tumor and the peritumoral area is input into a three-dimensional convolutional neural network for training; for example, the network may adopt a DenseNet (Dense Convolutional Network). The tumor and peritumoral grading result is also input into the network so that, using the attention mechanism, the model focuses its learning on this region, and the classification model file is obtained by computing the grading loss (MSE loss).
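As an illustration of the attention-supervised grading loss described in the two preceding paragraphs, the sketch below computes an MSE loss only over the attention region; it assumes the network also produces a per-voxel invasiveness map alongside its classification output, which is one possible reading of the scheme rather than a prescribed architecture.

```python
import torch

def attention_grading_loss(pred_map, target_map, attention_mask):
    """MSE grading loss restricted to the attention region (lesion + peri-lesion).

    pred_map:       per-voxel invasiveness predictions in [0, 1] from the network
    target_map:     per-voxel invasiveness values taken from the supervision image
    attention_mask: boolean mask of the lesion region and the region around the lesion
    """
    mask = attention_mask.float()
    sq_err = (pred_map - target_map) ** 2 * mask       # errors outside the attention region are ignored
    return sq_err.sum() / mask.sum().clamp(min=1.0)
```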
In the classification model inference stage, the segmentation results of the tumor and the peritumoral region are first read and input into the classification model together with the original image containing the lesion, yielding a probability value for each grade; the grade with the maximum probability is taken as the final predicted invasiveness grade, thereby automatically evaluating the invasiveness of the lung cancer.
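A brief sketch of this inference step follows; stacking the image with the two segmentation masks as input channels is an assumption made for illustration, since the text only states that the segmentation results are input together with the original image.

```python
import torch

def predict_invasiveness(classifier, ct_roi, lesion_mask, peri_mask):
    """Return the predicted invasiveness grade and the per-grade probabilities."""
    x = torch.stack([
        torch.from_numpy(ct_roi).float(),
        torch.from_numpy(lesion_mask.astype("float32")),
        torch.from_numpy(peri_mask.astype("float32")),
    ])[None]                                    # shape (1, 3, D, H, W)
    with torch.no_grad():
        probs = torch.softmax(classifier(x), dim=1)[0]
    return int(probs.argmax()), probs.tolist()  # grade with maximum probability wins
```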
In this embodiment, the parameters involved in the above steps may be arbitrarily set according to the characteristics of the actual medical image.
The scanned image processing method provided in this embodiment can accurately extract regions such as the lung lobes, lung segments, trachea, blood vessels, and lesion from a scanned image such as a CT image using deep neural network segmentation models. Starting from the lesion boundary, the adaptive radial expansion algorithm, combined with lung structural features (trachea, blood vessels, and the like), effectively computes the peritumoral region, and the segmentation results of the lung anatomy further constrain and precisely locate the peritumoral boundary. Radiomics analysis and deep neural network classification model training are then performed on the tumor and peritumoral region, so that the lesion invasiveness grade is predicted effectively. This provides doctors and other related personnel with more accurate analysis and prediction of tumor invasiveness, improves the efficiency and accuracy of evaluation and prediction, and in turn effectively improves doctors' working efficiency and differential-diagnosis ability.
Fig. 9 is a schematic structural diagram of an electronic device according to this embodiment. The electronic device comprises a memory, a processor, and a computer program stored on the memory and executable on the processor; the processor implements the scanned image processing method of the above embodiments when executing the program. The electronic device 30 shown in fig. 9 is only an example and should not impose any limitation on the functions or the scope of use of the embodiments of the present disclosure.
As shown in fig. 9, the electronic device 30 may be embodied in the form of a general purpose computing device, which may be, for example, a server device. The components of the electronic device 30 may include, but are not limited to: the at least one processor 31, the at least one memory 32, and a bus 33 connecting the various system components (including the memory 32 and the processor 31).
The bus 33 includes a data bus, an address bus, and a control bus.
The memory 32 may include volatile memory, such as Random Access Memory (RAM) 321 and/or cache memory 322, and may further include Read Only Memory (ROM) 323.
Memory 32 may also include a program/utility 325 having a set (at least one) of program modules 324, such program modules 324 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The processor 31 executes various functional applications and data processing, such as a scanned image processing method in the above embodiments of the present disclosure, by executing the computer program stored in the memory 32.
The electronic device 30 may also communicate with one or more external devices 34 (e.g., a keyboard, a pointing device, etc.). Such communication may occur through input/output (I/O) interfaces 35. The electronic device 30 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) via a network adapter 36. As shown in fig. 9, the network adapter 36 communicates with the other modules of the electronic device 30 via the bus 33. It should be understood that, although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 30, including but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID (disk array) systems, tape drives, data backup storage systems, and the like.
It should be noted that although several units/modules or sub-units/modules of the electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more of the units/modules described above may be embodied in a single unit/module; conversely, the features and functions of one unit/module described above may be further divided and embodied by a plurality of units/modules.
The present embodiment also provides a computer-readable storage medium on which a computer program is stored, which when executed by a processor implements the steps in the scanned image processing method as in the above embodiments.
More specific examples of the readable storage medium may include, but are not limited to: a portable disk, a hard disk, a random access memory, a read-only memory, an erasable programmable read-only memory, an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a possible implementation, the present disclosure may also be implemented in the form of a program product including program code for causing a terminal device to perform the steps of implementing the scanned image processing method as in the above embodiments when the program product is executed on the terminal device.
Program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, and may execute entirely on the user device, partly on the user device, as a stand-alone software package, partly on the user device and partly on a remote device, or entirely on the remote device.
While specific embodiments of the disclosure have been described above, it will be understood by those skilled in the art that this is by way of illustration only, and that the scope of the disclosure is defined by the appended claims. Various changes and modifications to these embodiments may be made by those skilled in the art without departing from the principles and spirit of this disclosure, and these changes and modifications are intended to be within the scope of this disclosure.

Claims (10)

1. A scanned image processing method, comprising:
acquiring a scanned image;
inputting the scanned image into a lesion segmentation model to obtain a segmented lesion region;
performing region expansion outward from the lesion region, based on similarity to features of the lesion region, to determine a region around the lesion; and,
obtaining the lesion region and the region around the lesion.
2. The scanned image processing method according to claim 1, wherein after the step of obtaining the lesion region and the region around the lesion, the method further comprises:
evaluating a lesion invasiveness grade based on an image comprising the lesion region and the region around the lesion, to output a lesion invasiveness grading result.
3. The scanned image processing method according to claim 2, wherein the step of evaluating a lesion invasiveness grade based on the image comprising the lesion region and the region around the lesion to output a lesion invasiveness grading result comprises:
extracting radiomics features of the lesion region and the region around the lesion from the image comprising the lesion region and the region around the lesion;
inputting the extracted radiomics features of the lesion region and the region around the lesion into a lesion invasiveness grading model to predict a lesion invasiveness grade; and,
outputting a lesion invasiveness grading result.
4. The scanned image processing method according to claim 2, wherein the step of evaluating a lesion invasiveness grade based on the image comprising the lesion region and the region around the lesion to output a lesion invasiveness grading result comprises:
inputting an image comprising the lesion region and the region around the lesion into a deep neural network classification model to output a lesion invasiveness grading result;
wherein the training step of the deep neural network classification model comprises:
inputting a training image comprising a lesion region and a region around the lesion into a deep neural network classification model to be trained;
extracting an attention region from the training image and extracting the same attention region from a supervision image, wherein the attention region comprises at least the lesion region and the region around the lesion;
calculating a lesion invasiveness grading loss between the training image and the supervision image based on the lesion invasiveness of the pixel points in the attention regions of the training image and the supervision image; and,
training the deep neural network classification model to be trained according to the grading loss to obtain a trained deep neural network classification model.
5. The scanned image processing method according to any one of claims 1 to 4, wherein the features of the lesion region comprise any one or more of a gray value, entropy, gray-level statistical features, shape features, texture features, and features extracted from the lesion region by various filters.
6. The scanned image processing method according to claim 5, wherein the step of inputting the scanned image into a lesion segmentation model further comprises:
inputting the scanned image into the lesion segmentation model to obtain a segmented tissue structure;
and wherein the step of performing region expansion outward from the lesion region based on similarity to features of the lesion region to determine the region around the lesion comprises:
performing region expansion outward from the lesion region, along the direction of the tissue structure vector, based on similarity to features of the lesion region, to determine the region around the lesion.
7. The scanned image processing method according to claim 6, wherein the step of inputting the scanned image into a lesion segmentation model further comprises:
inputting the scanned image into the lesion segmentation model to obtain a segmented tissue anatomical structure;
and wherein before the step of obtaining the lesion region and the region around the lesion, the scanned image processing method further comprises:
limiting the range and boundary of the determined region around the lesion based on the range and boundary of the tissue anatomical structure, so as to confine the lesion region and the region around the lesion within the range and boundary of the tissue anatomical structure.
8. The scanned image processing method according to claim 1, wherein the step of performing region expansion outward from the lesion region based on similarity to features of the lesion region comprises:
determining whether the similarity between a target pixel point outside the lesion region and a reference point is smaller than a preset threshold; if so, stopping expansion to the target pixel point, and if not, expanding to the target pixel point;
wherein the reference point is determined from pixel points of the lesion region.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the scanned image processing method according to any one of claims 1 to 8 when executing the computer program.
10. A computer readable medium having stored thereon computer instructions, which when executed by a processor implement a method of processing a scanned image as claimed in any one of claims 1 to 8.
CN202111131767.4A 2021-09-26 2021-09-26 Scanned image processing method, electronic device, and readable medium Pending CN114140378A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111131767.4A CN114140378A (en) 2021-09-26 2021-09-26 Scanned image processing method, electronic device, and readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111131767.4A CN114140378A (en) 2021-09-26 2021-09-26 Scanned image processing method, electronic device, and readable medium

Publications (1)

Publication Number Publication Date
CN114140378A true CN114140378A (en) 2022-03-04

Family

ID=80393946

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111131767.4A Pending CN114140378A (en) 2021-09-26 2021-09-26 Scanned image processing method, electronic device, and readable medium

Country Status (1)

Country Link
CN (1) CN114140378A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4343781A1 (en) * 2022-09-21 2024-03-27 FUJIFILM Corporation Information processing apparatus, information processing method, and information processing program
CN115409830A (en) * 2022-09-30 2022-11-29 广州医科大学附属第一医院(广州呼吸中心) Detection system, device and storage medium for ureter and renal pelvis tumors
CN116309551A (en) * 2023-05-11 2023-06-23 浙江太美医疗科技股份有限公司 Method, device, electronic equipment and readable medium for determining focus sampling area
CN116309551B (en) * 2023-05-11 2023-08-15 浙江太美医疗科技股份有限公司 Method, device, electronic equipment and readable medium for determining focus sampling area
CN117635613A (en) * 2024-01-25 2024-03-01 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method
CN117635613B (en) * 2024-01-25 2024-04-16 武汉大学人民医院(湖北省人民医院) Fundus focus monitoring device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination