CN111080642A - Tissue typing method and device based on medical image and electronic equipment - Google Patents


Info

Publication number
CN111080642A
Authority
CN
China
Prior art keywords
image
tissue
classified
segmentation
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911412146.6A
Other languages
Chinese (zh)
Inventor
王瑜
赵朝炜
吴福乐
周越
孙岩峰
邹彤
张金
张轶曦
宋晓媛
李新阳
王少康
陈宽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Infervision Technology Co Ltd
Infervision Co Ltd
Original Assignee
Infervision Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infervision Co Ltd filed Critical Infervision Co Ltd
Priority to CN201911412146.6A priority Critical patent/CN111080642A/en
Publication of CN111080642A publication Critical patent/CN111080642A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation


Abstract

Disclosed are a tissue typing method and device based on medical images, a computer-readable storage medium, and an electronic device, relating to the technical field of image processing. The tissue typing method based on medical images comprises the following steps: determining an image region corresponding to the tissue to be classified based on the medical image to be classified; determining grayscale feature information corresponding to the tissue to be classified based on the image region; and determining the type of the tissue to be classified based on the grayscale feature information. Because the grayscale feature information accurately characterizes the structural distribution of the tissue to be classified, the accuracy of the determined type can be greatly improved. In addition, compared with existing typing methods that directly apply deep learning, the disclosed embodiments not only improve typing precision but also reduce the amount of computation and increase computation speed.

Description

Tissue typing method and device based on medical image and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for tissue classification based on medical images, a computer-readable storage medium, and an electronic device.
Background
Medical images are an important aid to diagnosis and treatment. They are more difficult to process than ordinary images because they depict the comparatively complex tissue structures of the human or animal body (such as the heart or the breast). In particular, when a typing operation must be performed on a tissue in a medical image, existing typing methods are difficult to apply.
Therefore, how to perform a typing operation on tissues in medical images with high typing precision is an urgent problem to be solved.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. The embodiment of the disclosure provides a tissue classification method and device based on a medical image, a computer-readable storage medium and an electronic device.
In one aspect, the disclosed embodiments provide a tissue typing method based on medical images, which is applied to medical images to be typed including tissues to be typed. The tissue typing method based on the medical image comprises the following steps: determining an image area corresponding to a tissue to be classified based on the medical image to be classified; determining gray characteristic information corresponding to the tissue to be classified based on the image area; and determining the type of the tissue to be classified based on the gray characteristic information.
In an embodiment of the present disclosure, determining the grayscale feature information corresponding to the tissue to be classified based on the image region includes: determining gray histogram information corresponding to the tissue to be classified based on the image region. Determining the type of the tissue to be classified based on the grayscale feature information then includes: determining the type of the tissue to be classified based on the gray histogram information.
In an embodiment of the present disclosure, determining the type of tissue to be classified based on the gray feature information includes: determining preset typing information corresponding to tissues to be typed; and determining the type of the tissue to be typed based on the gray characteristic information and the preset typing information.
In an embodiment of the present disclosure, determining an image region corresponding to a tissue to be classified based on a medical image to be classified includes: performing image segmentation operation on the medical image to be classified to determine image segmentation information; and determining an image area corresponding to the tissue to be classified based on the image segmentation information.
In an embodiment of the present disclosure, performing an image segmentation operation on a medical image to be classified to determine image segmentation information includes: inputting a medical image to be classified into a segmentation network model to determine a first segmentation area, wherein the first segmentation area corresponds to a tissue to be classified; and performing a fine processing operation on the first segmentation area to determine image segmentation information.
In an embodiment of the present disclosure, performing a fine processing operation on the first segmentation region to determine image segmentation information includes: determining a seed region corresponding to the tissue to be typed based on the first segmentation region; and performing energy segmentation operation on the seed region by using an energy optimization algorithm to determine image segmentation information.
In an embodiment of the present disclosure, the energy optimization algorithm includes a graph cutting algorithm, and performing an energy segmentation operation on the seed region by using the energy optimization algorithm to determine image segmentation information, including: determining a second segmentation area corresponding to the seed area by utilizing a graph cutting algorithm based on the seed area; and performing optimization operation on the segmentation boundary of the second segmentation region by using a morphological operator to determine image segmentation information.
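The patent's refinement step uses an energy-optimization (graph cut) segmentation followed by morphological smoothing of the boundary. As a much simpler stand-in that conveys the idea of expanding a segmentation outward from a seed region, the sketch below performs threshold-based region growing with a breadth-first search; the function name, toy image, and tolerance value are all illustrative and not from the patent.

```python
import numpy as np
from collections import deque

def grow_from_seed(image, seed, tol=10):
    """Breadth-first region growing: accept 4-neighbors whose gray value
    lies within `tol` of the seed region's mean gray value."""
    mask = seed.astype(bool)
    target = image[mask].mean()
    queue = deque(zip(*np.nonzero(mask)))
    h, w = image.shape
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - target) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy image: a dark tissue-like patch (values near 100) on a bright
# background (200); the seed marks one pixel inside the patch.
image = np.array([[100, 101, 200],
                  [ 99, 100, 200],
                  [200, 200, 200]], dtype=np.uint8)
seed = np.zeros_like(image)
seed[0, 0] = 1
mask = grow_from_seed(image, seed)  # grows over the four dark pixels
```

A real implementation along the patent's lines would replace the growth criterion with a graph-cut energy minimization and then apply morphological opening/closing to smooth the resulting boundary.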
In an embodiment of the present disclosure, before the image segmentation operation is performed on the medical image to be classified to determine the image segmentation information, the method further includes at least one of the following: adjusting window width information and/or window level information corresponding to the medical image to be classified; cropping the medical image to be classified to remove a first image region, where the first image region has a first association with the tissue to be classified; denoising the medical image to be classified; and removing a second image region based on HU (Hounsfield unit) information of the medical image to be classified, where the second image region has a second association with the tissue to be classified.
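The patent does not spell out the window adjustment, but the standard window-level/window-width mapping used for CT-style intensities can serve as a sketch of that preprocessing step. The level and width values below are common soft-tissue settings chosen purely for illustration.

```python
import numpy as np

def apply_window(hu, level=40.0, width=400.0):
    """Map HU values into display gray values [0, 255]:
    values below level - width/2 clip to 0, values above
    level + width/2 clip to 255, values in between scale linearly."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = (np.clip(hu, lo, hi) - lo) / (hi - lo) * 255.0
    return out.astype(np.uint8)

hu = np.array([-1000.0, -160.0, 40.0, 240.0, 1000.0])
win = apply_window(hu)  # air clips to 0, bone clips to 255
```

The same HU values could drive the region-removal step mentioned above, e.g. by masking out pixels whose HU falls outside the range expected for the tissue of interest.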
In an embodiment of the present disclosure, after the image segmentation operation is performed on the medical image to be classified to determine the image segmentation information, the method further includes: determining a third segmentation region based on the image segmentation information; and determining key point information corresponding to the tissue to be classified based on the third segmentation region and the tissue to be classified. In this case, determining the image region corresponding to the tissue to be classified based on the image segmentation information includes: performing a cutting operation on the third segmentation region based on the key point information to determine the image region corresponding to the tissue to be classified.
In an embodiment of the present disclosure, determining, based on the third segmentation region and the tissue to be classified, the key point information corresponding to the tissue to be classified includes: and inputting the third segmentation area into the key point network model to determine key point information corresponding to the to-be-classified tissue.
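Assuming, purely for illustration, that the key point network model returns (row, column) coordinates, the cutting operation described above could amount to cropping the segmented image to the bounding box of those key points. The helper below is a hypothetical sketch, not code from the patent.

```python
import numpy as np

def crop_to_keypoints(image, keypoints, margin=0):
    """Crop `image` to the axis-aligned bounding box of `keypoints`,
    expanded by `margin` pixels on each side (clamped at the origin)."""
    pts = np.asarray(keypoints)
    r0, c0 = pts.min(axis=0) - margin
    r1, c1 = pts.max(axis=0) + margin + 1
    r0, c0 = max(r0, 0), max(c0, 0)
    return image[r0:r1, c0:c1]

image = np.arange(36).reshape(6, 6)
patch = crop_to_keypoints(image, [(1, 2), (3, 4)])  # 3x3 sub-image
```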
In an embodiment of the present disclosure, the medical image to be classified is a breast molybdenum target image, and the tissue to be classified is a breast.
In another aspect, an embodiment of the present disclosure provides a tissue typing device based on medical images, which is applied to medical images to be typed including tissues to be typed. The medical image-based tissue typing apparatus includes: the image region determining module is used for determining an image region corresponding to the tissue to be classified based on the medical image to be classified; the gray characteristic information determining module is used for determining gray characteristic information corresponding to the tissues to be classified based on the image area; and the type determining module is used for determining the type of the tissue to be classified based on the gray characteristic information.
In another aspect, an embodiment of the present disclosure provides a computer-readable storage medium storing a computer program for executing the medical image-based tissue classification method mentioned in the above embodiment.
In another aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory for storing processor executable instructions, wherein the processor is configured to perform the medical image based tissue typing method mentioned in the above embodiments.
According to the tissue typing method based on medical images provided by the embodiments of the present disclosure, the type of the tissue to be classified in the medical image to be classified is determined by first determining the image region corresponding to the tissue to be classified based on the medical image, then determining the grayscale feature information corresponding to the tissue based on that image region, and finally determining the type of the tissue based on the grayscale feature information. Because the grayscale feature information accurately characterizes the structural distribution of the tissue to be classified, the accuracy of the determined type can be greatly improved. In addition, compared with existing typing methods that directly apply deep learning, the disclosed embodiments not only improve typing precision but also reduce the amount of computation and increase computation speed.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a schematic view of a scene to which the embodiment of the present disclosure is applied.
Fig. 2 is a schematic diagram of another scenario in which the embodiment of the present disclosure is applied.
Fig. 3 is a flowchart illustrating a medical image-based tissue classification method according to an exemplary embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a medical image-based tissue classification method according to another exemplary embodiment of the present disclosure.
Fig. 5a to 5d are schematic views illustrating types of breasts provided by an exemplary embodiment of the present disclosure.
Fig. 6a to 6d are graphs showing gray level histograms corresponding to different types of mammary glands provided in an exemplary embodiment of the present disclosure.
Fig. 7 is a schematic flowchart illustrating a process of determining an image region corresponding to a tissue to be classified based on a medical image to be classified according to an exemplary embodiment of the present disclosure.
Fig. 8 is a flowchart illustrating an image segmentation operation performed on a medical image to be classified to determine image segmentation information according to an exemplary embodiment of the present disclosure.
Fig. 9 is a schematic flowchart illustrating a fine processing operation performed on a first segmentation region to determine image segmentation information according to an exemplary embodiment of the present disclosure.
Fig. 10 is a flowchart illustrating a process of determining an image region corresponding to a tissue to be classified based on a medical image to be classified according to another exemplary embodiment of the present disclosure.
Fig. 11a to 11d are schematic diagrams illustrating a plurality of preprocessing stages corresponding to a medical image to be classified according to an exemplary embodiment of the disclosure.
Fig. 12 is a flowchart illustrating a process of determining an image region corresponding to a tissue to be classified based on a medical image to be classified according to another exemplary embodiment of the present disclosure.
Fig. 13 is a schematic diagram illustrating key points corresponding to a tissue to be classified according to an exemplary embodiment of the disclosure.
Fig. 14a and 14b are graphs showing gray level histograms at different stages provided by an exemplary embodiment of the present disclosure.
Fig. 15 is a schematic structural diagram of a medical image-based tissue typing device according to an exemplary embodiment of the present disclosure.
Fig. 16 is a schematic structural diagram of a medical image-based tissue typing device according to another exemplary embodiment of the present disclosure.
Fig. 17 is a schematic structural diagram of an image area determining module according to an exemplary embodiment of the present disclosure.
Fig. 18 is a schematic structural diagram of an image segmentation information determination unit according to an exemplary embodiment of the present disclosure.
Fig. 19 is a schematic structural diagram of an image segmentation information determination subunit according to an exemplary embodiment of the present disclosure.
Fig. 20 is a schematic structural diagram of an image area determining module according to another exemplary embodiment of the present disclosure.
Fig. 21 is a schematic structural diagram of an image area determining module according to still another exemplary embodiment of the present disclosure.
Fig. 22 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
Medical images present information about the internal tissue structure, density, etc. of the human or animal body in image form, produced through the interaction of a medium such as X-rays, electromagnetic fields, or ultrasonic waves with the body. In modern medicine, medical images are an important aid to diagnosis and treatment.
The gray histogram is a statistic of the gray-level distribution of an image: it is obtained by counting, for each gray value, how often pixels with that value occur in the digital image. As a function of gray level, it gives the number of pixels in the image at each gray level and thus reflects how frequently each gray level occurs.
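As a concrete illustration (not part of the patent text), the gray histogram of an 8-bit image region can be computed with a single counting pass; the toy image region below is hypothetical.

```python
import numpy as np

def gray_histogram(region: np.ndarray, levels: int = 256) -> np.ndarray:
    """Count how many pixels in `region` fall at each gray level."""
    return np.bincount(region.ravel(), minlength=levels)

# Toy 2x3 "image region" with gray values in [0, 255].
region = np.array([[0, 0, 128],
                   [128, 128, 255]], dtype=np.uint8)
hist = gray_histogram(region)
print(hist[0], hist[128], hist[255])  # 2 3 1
```

The histogram always sums to the number of pixels, so dividing by `region.size` turns it into the frequency distribution the text describes.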
It is known that the tissue structure of the human or animal body is relatively complex; a breast molybdenum target (mammography) image, for example, clearly displays each layer of breast tissue. The processing difficulty of medical images is therefore higher than that of ordinary images. Existing typing methods are difficult to apply to medical images, particularly when a typing operation must be performed on a tissue in the medical image to assist subsequent image analysis and/or diagnostic work. Moreover, even where an existing typing method can be applied to medical images, its typing accuracy and robustness are poor.
Based on the above technical problems, the basic concept of the present disclosure is to provide a method and an apparatus for tissue typing based on medical images, a computer-readable storage medium, and an electronic device. The tissue typing method determines the type of the tissue to be typed in a medical image by first determining the image region corresponding to the tissue based on the medical image, then determining the grayscale feature information corresponding to the tissue based on that image region, and finally determining the type of the tissue based on the grayscale feature information. Because the grayscale feature information accurately characterizes the structural distribution of the tissue to be typed, the accuracy of the determined type can be greatly improved. In addition, compared with existing typing methods that directly apply deep learning, the disclosed embodiments not only improve typing precision but also reduce the amount of computation and increase computation speed.
Having described the general principles of the present disclosure, various non-limiting embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic view of a scene to which the embodiment of the present disclosure is applied. As shown in fig. 1, a scenario to which the embodiment of the present disclosure is applied includes a server 1 and an image capturing device 2, where there is a communication connection relationship between the server 1 and the image capturing device 2.
Specifically, the image acquisition device 2 is configured to acquire a medical image to be classified that includes the tissue to be classified. The server 1 is configured to determine an image region corresponding to the tissue to be classified based on the medical image acquired by the image acquisition device 2, then determine grayscale feature information corresponding to the tissue to be classified based on the image region, and finally determine the type of the tissue to be classified based on the grayscale feature information. That is, this scenario implements the medical-image-based tissue typing method. Because the scenario shown in fig. 1 implements the method on the server 1, it not only improves adaptability but also effectively reduces the computational load on the image acquisition device 2.
It should be noted that the present disclosure is also applicable to another scenario. Fig. 2 is a schematic diagram of another scenario in which the embodiment of the present disclosure is applied. Specifically, the scene includes an image processing device 3, wherein the image processing device 3 includes an image acquisition module 301 and a calculation module 302, and a communication connection relationship exists between the image acquisition module 301 and the calculation module 302.
Specifically, the image acquisition module 301 in the image processing device 3 is configured to acquire a medical image to be classified including a tissue to be classified, and the calculation module 302 in the image processing device 3 is configured to determine an image region corresponding to the tissue to be classified based on the medical image to be classified acquired by the image acquisition module 301, then determine grayscale feature information corresponding to the tissue to be classified based on the image region, and then determine the type of the tissue to be classified based on the grayscale feature information. That is, this scenario implements a medical image-based tissue typing method. Since the above-described scenario shown in fig. 2 implements the tissue classification method based on the medical image by using the image processing apparatus 3 without performing a data transmission operation with a related device such as a server, the above-described scenario can ensure the real-time performance of the tissue classification method based on the medical image.
It should be noted that the image capturing device 2 and the image acquisition module 301 mentioned in the above scenarios include, but are not limited to, an X-ray machine, a CT (computed tomography) scanner, and an MRI (magnetic resonance imaging) device. Correspondingly, the medical images to be classified that the image capturing device 2 and the image acquisition module 301 acquire include, but are not limited to, X-ray images, CT images, MRI images, and other medical images capable of presenting information such as the internal tissue structure and density of the human or animal body.
Fig. 3 is a flowchart illustrating a medical image-based tissue classification method according to an exemplary embodiment of the present disclosure. In particular, the tissue typing method based on the medical image provided by the embodiment of the disclosure is applied to a medical image to be typed which comprises a tissue to be typed.
As shown in fig. 3, the tissue classification method based on medical images provided by the embodiment of the present disclosure includes the following steps.
And step 10, determining an image area corresponding to the tissue to be classified based on the medical image to be classified.
It should be noted that the tissue to be classified in step 10 refers to the tissue whose type needs to be determined. For example, the medical image to be classified is a breast molybdenum target (mammography) image, and the tissue to be classified is the breast.
And step 20, determining gray characteristic information corresponding to the tissues to be classified based on the image area.
Illustratively, the grayscale feature information mentioned in step 20 characterizes the image features corresponding to the tissue to be classified. It will be appreciated that a tissue to be classified typically includes a plurality of different tissue regions.
It should be noted that, because different types of tissue regions have different densities in medical images, for example, in a breast molybdenum target image, the density of a breast gland is greater than the density of fat. Thus, different types of tissue regions differ in their gray scale information in the medical image.
And step 30, determining the type of the tissue to be classified based on the gray characteristic information.
Illustratively, the medical image to be classified is a breast molybdenum target image, and the tissue to be classified is the breast. The types of the tissue to be classified then include the fat type, minor glandular type, major glandular type, and dense type.
In the practical application process, firstly, an image area corresponding to the tissue to be classified is determined based on the medical image to be classified, then, gray characteristic information corresponding to the tissue to be classified is determined based on the image area, and the type of the tissue to be classified is determined based on the gray characteristic information.
According to the tissue typing method based on medical images provided by the embodiments of the present disclosure, the type of the tissue to be classified in the medical image to be classified is determined by first determining the image region corresponding to the tissue to be classified based on the medical image, then determining the grayscale feature information corresponding to the tissue based on that image region, and finally determining the type of the tissue based on the grayscale feature information. Because the grayscale feature information accurately characterizes the structural distribution of the tissue to be classified, the accuracy of the determined type can be greatly improved. In addition, compared with existing typing methods that directly apply deep learning, the disclosed embodiments not only improve typing precision but also reduce the amount of computation and increase computation speed.
In an embodiment of the present disclosure, the step of determining the type of the tissue to be classified based on the grayscale feature information includes: determining preset typing information corresponding to the tissue to be classified; and determining the type of the tissue to be classified based on the grayscale feature information and the preset typing information.
It should be noted that the preset typing information may be determined manually or by a related deep learning network model; the embodiments of the present disclosure do not limit this.
Fig. 4 is a flowchart illustrating a medical image-based tissue classification method according to another exemplary embodiment of the present disclosure. The embodiment shown in fig. 4 of the present disclosure is extended on the basis of the embodiment shown in fig. 3 of the present disclosure, and the differences between the embodiment shown in fig. 4 and the embodiment shown in fig. 3 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 4, in the tissue typing method based on medical images provided by the embodiment of the present disclosure, the step of determining the gray feature information corresponding to the tissue to be typed based on the image region includes the following steps.
And step 21, determining gray histogram information corresponding to the tissues to be classified based on the image area.
As described above, the gray histogram can count the frequency of occurrence of all pixels in the image area according to the size of the gray value. Moreover, because the tissue regions with the same density correspond to the same gray scale and the tissue regions with different densities correspond to different gray scales, the density condition of the tissue region of the tissue to be classified can be determined based on the gray scale histogram information, and then the type of the tissue to be classified can be determined based on the density condition.
Furthermore, in the tissue typing method based on the medical image provided by the embodiment of the disclosure, the step of determining the type of the tissue to be typed based on the gray characteristic information comprises the following steps.
Step 31, the type of tissue to be classified is determined based on the gray histogram information.
In the practical application process, firstly, an image area corresponding to the tissue to be classified is determined based on the medical image to be classified, then, gray histogram information corresponding to the tissue to be classified is determined based on the image area, and the type of the tissue to be classified is determined based on the gray histogram information.
According to the tissue classification method based on the medical image, the purpose of determining the type of the tissue to be classified in the medical image to be classified is achieved by determining the image area corresponding to the tissue to be classified based on the medical image to be classified, then determining the gray histogram information corresponding to the tissue to be classified based on the image area, and determining the type of the tissue to be classified based on the gray histogram information. Because the type of the tissue to be classified can be determined quickly and accurately based on the gray level histogram information, the accuracy and the real-time performance of the type determining operation can be further improved.
In an embodiment of the present disclosure, the determination of the type of tissue to be classified from the gray histogram information mentioned in step 31 is performed with a ResNet network model. Illustratively, the sample image data for the ResNet network model is determined as follows: determine a plurality of sample images of the same type as the medical image to be classified, determine the gray histogram information corresponding to those sample images, and then build the sample image data from the sample images and their gray histogram information. Illustratively, the ResNet network model is trained as follows: determine an initial network model corresponding to the ResNet network model, and train the initial network model on the sample image data to generate the ResNet network model.
The types of the mammary glands in the mammary gland molybdenum target image mentioned in the above embodiment and the gray histogram situations corresponding to the different types are described below with reference to fig. 5a to 5d and fig. 6a to 6d, so as to further prove that the types of the mammary glands can be determined based on the gray histogram information.
Fig. 5a to 5d are schematic views illustrating types of breasts provided by an exemplary embodiment of the present disclosure. Specifically, fig. 5a shows a fat-type mammary gland, where the fat type refers to breasts in which glands account for 25% or less of the breast. Fig. 5b shows a minor glandular type of mammary gland, which refers to a gland-to-breast ratio in the closed interval from 25% to 50%. Fig. 5c shows a glandular type of mammary gland, which refers to a gland-to-breast ratio in the open interval from 50% to 75%. Fig. 5d shows a dense-type mammary gland, which refers to glands accounting for 75% or more of the breast.
Correspondingly, fig. 6a to 6d illustrate the gray level histograms corresponding to different types of breasts provided by an exemplary embodiment of the present disclosure. Specifically, fig. 6a shows a gray level histogram 1 corresponding to a fat-type mammary gland. Fig. 6b shows a gray level histogram 2 corresponding to a minor glandular type of mammary gland. Fig. 6c shows a gray level histogram 3 corresponding to a glandular type of mammary gland. Fig. 6d shows a gray level histogram 4 corresponding to a dense-type mammary gland.
As can be clearly seen from fig. 5a to 5d and fig. 6a to 6d, the gray level histograms corresponding to different types of glands differ markedly. Since glands are denser than fat, the overall density of the breast is higher when there are more glands. It can also be seen from the above gray level histograms 1 to 4 that the higher the gland density, the further the overall gray level histogram shifts to the right. Thus, the type of mammary gland can be determined based on the gray histogram information.
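The right-shift described above can be quantified, for example, by the mean gray level of the histogram. The sketch below only illustrates that observation; it is not the embodiment's ResNet classifier, and the two toy histograms are invented.

```python
def mean_gray(hist):
    """Mean gray level of a normalized 256-bin histogram."""
    return sum(g * p for g, p in enumerate(hist))

# Toy histograms: a fat-type breast concentrates histogram mass at low
# gray levels, a dense-type breast at high gray levels.
fatty = [0.0] * 256
fatty[40] = 1.0
dense = [0.0] * 256
dense[200] = 1.0

shift = mean_gray(dense) - mean_gray(fatty)  # positive: dense sits to the right
```

A learned classifier can exploit exactly this kind of separation without hand-picked thresholds.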
Fig. 7 is a schematic flowchart illustrating a process of determining an image region corresponding to a tissue to be classified based on a medical image to be classified according to an exemplary embodiment of the present disclosure. The embodiment shown in fig. 7 of the present disclosure is extended on the basis of the embodiment shown in fig. 3 of the present disclosure, and the differences between the embodiment shown in fig. 7 and the embodiment shown in fig. 3 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 7, in the tissue classifying method based on medical images provided by the embodiment of the present disclosure, the step of determining the image region corresponding to the tissue to be classified based on the medical image to be classified includes the following steps.
And 11, performing image segmentation operation on the medical image to be classified to determine image segmentation information.
Illustratively, the image segmentation operation mentioned in step 11 is performed on the tissue to be classified included in the medical image to be classified. In other words, the image segmentation is to separate the tissue to be classified, so as to reduce the interference of the image region unrelated to the tissue to be classified to the gray level histogram, and further determine the gray level histogram with higher accuracy.
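Restricting the histogram to the segmented tissue can be sketched as follows. The `masked_histogram` name is hypothetical, and gray values are assumed to already lie in `[0, bins)`.

```python
def masked_histogram(image, mask, bins=256):
    """Histogram over only those pixels whose mask entry is truthy, so that
    image regions unrelated to the tissue do not pollute the statistics."""
    counts = [0] * bins
    total = 0
    for img_row, mask_row in zip(image, mask):
        for v, m in zip(img_row, mask_row):
            if m:
                counts[v] += 1
                total += 1
    return [c / total for c in counts] if total else counts

# Toy image: the left column is tissue, the right column is background.
image = [[10, 200], [10, 200]]
mask = [[1, 0], [1, 0]]
hist = masked_histogram(image, mask)
```

All histogram mass lands on the tissue gray level; the background level 200 contributes nothing.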
And step 12, determining an image area corresponding to the tissue to be classified based on the image segmentation information.
In the practical application process, firstly, image segmentation operation is carried out on a medical image to be classified to determine image segmentation information, an image area corresponding to a tissue to be classified is determined based on the image segmentation information, then, gray characteristic information corresponding to the tissue to be classified is determined based on the image area, and the type of the tissue to be classified is determined based on the gray characteristic information.
According to the tissue classification method based on the medical image, the purpose of determining the image area corresponding to the tissue to be classified based on the medical image to be classified is achieved by performing image segmentation operation on the medical image to be classified to determine the image segmentation information and determining the image area corresponding to the tissue to be classified based on the image segmentation information. Because the image segmentation operation can effectively remove the image region irrelevant to the tissue to be classified, the embodiment of the disclosure can not only reduce the calculation amount, but also further improve the accuracy of the determined gray level histogram, and further improve the accuracy of the determined type of the tissue to be classified.
In an embodiment of the present disclosure, the energy optimization algorithm includes a graph cut algorithm, and performing an energy segmentation operation on the seed region by using the energy optimization algorithm to determine the image segmentation information includes: determining a second segmentation region corresponding to the seed region by using the graph cut algorithm based on the seed region; and performing an optimization operation on the segmentation boundary of the second segmentation region by using a morphological operator to determine the image segmentation information.
It should be understood that the Graph Cut (GC) algorithm is an image segmentation method based on the max-flow/min-cut theorem, which partitions an image into a foreground region and a background region. Illustratively, in the embodiment of the present disclosure, the seed region may be marked as a partial image region corresponding to the foreground, and a region unrelated to the tissue to be classified may be marked as a partial image region corresponding to the background, so that the foreground and background regions of the image can be segmented based on the graph cut algorithm.
It should be understood that the basic idea of morphological operators is to use structuring elements of certain shapes to measure and extract corresponding shapes in images for the purpose of image analysis and recognition, where the morphological operators include erosion, dilation and the like. Since the graph cut algorithm performs segmentation based on the pixel gray level information and pixel coordinate information of the image, the smoothness of the segmentation boundary of the second segmentation region determined by the graph cut algorithm is often not ideal. Based on this, the embodiment of the present disclosure performs an optimization operation on the segmentation boundary of the second segmentation region by using a morphological operator, so as to improve the smoothness of the segmentation boundary and thereby determine image segmentation information with better quality.
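A morphological opening (erosion followed by dilation) of the kind described can be sketched in pure Python with a 4-neighborhood structuring element. A real system would use an image-processing library; the border convention here (out-of-bounds neighbors are simply ignored) is an assumption of this sketch.

```python
def _neighborhood(mask, i, j):
    """Values of pixel (i, j) and its in-bounds 4-neighbors."""
    h, w = len(mask), len(mask[0])
    vals = [mask[i][j]]
    if i > 0: vals.append(mask[i - 1][j])
    if i < h - 1: vals.append(mask[i + 1][j])
    if j > 0: vals.append(mask[i][j - 1])
    if j < w - 1: vals.append(mask[i][j + 1])
    return vals

def erode(mask):
    """4-neighborhood erosion of a binary mask."""
    return [[1 if all(_neighborhood(mask, i, j)) else 0
             for j in range(len(mask[0]))] for i in range(len(mask))]

def dilate(mask):
    """4-neighborhood dilation of a binary mask."""
    return [[1 if any(_neighborhood(mask, i, j)) else 0
             for j in range(len(mask[0]))] for i in range(len(mask))]

def smooth_boundary(mask):
    """Morphological opening: removes thin spurs a graph-cut boundary may leave."""
    return dilate(erode(mask))

# A 3x3 block with a one-pixel spur sticking out at (1, 4).
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 1],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
opened = smooth_boundary(mask)
```

The spur disappears while the interior of the block survives, which is exactly the boundary-smoothing effect the optimization operation aims at.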
Fig. 8 is a flowchart illustrating an image segmentation operation performed on a medical image to be classified to determine image segmentation information according to an exemplary embodiment of the present disclosure. The embodiment shown in fig. 8 of the present disclosure is extended on the basis of the embodiment shown in fig. 7 of the present disclosure, and the differences between the embodiment shown in fig. 8 and the embodiment shown in fig. 7 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 8, in the tissue classification method based on medical images provided by the embodiment of the present disclosure, the step of performing an image segmentation operation on the medical image to be classified to determine image segmentation information includes the following steps.
Step 111, inputting the medical image to be classified into a segmentation network model to determine a first segmentation region, wherein the first segmentation region corresponds to the tissue to be classified.
Illustratively, the segmentation network model mentioned in step 111 is generated based on sample image data training corresponding to the medical image to be classified. The generation process of the sample image data comprises the following steps: firstly, a plurality of images of the same type as the medical image to be classified are determined, then, segmentation line marking operation is carried out on tissues to be classified in the plurality of images, and further, sample image data comprising segmentation line information and image information are determined. The training process of the segmentation network model comprises the following steps: an initial network model is determined, then the initial network model is trained based on sample image data, and a final segmentation network model is generated.
Optionally, the initial network model and the segmentation network model share the same model structure and differ only in their network parameters. The network parameters of the initial network model are initial network parameters; the initial network model is then trained with the sample image data, and the initial network parameters are adjusted during training to finally yield the network parameters of the segmentation network model. For example, the network parameters of the initial network model are continuously adjusted based on a gradient descent method to finally generate the network parameters of the segmentation network model.
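The parameter-adjustment pattern described above can be illustrated with a deliberately tiny model. The following is not the segmentation network's training code; it only mirrors the pattern — start from initial parameters and adjust them by gradient descent on sample data — using one-feature logistic regression with an invented dataset.

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=200):
    """Start from initial parameters (w, b) and adjust them at each step
    along the negative gradient of the logistic loss."""
    w, b = 0.0, 0.0  # initial network parameters
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # model prediction
            grad = p - y                              # dLoss/d(w*x + b)
            w -= lr * grad * x
            b -= lr * grad
    return w, b

# Invented 1-D samples: low feature values labeled 0, high values labeled 1.
w, b = train_logistic([0.2, 0.3, 0.7, 0.8], [0, 0, 1, 1])
```

After training, the adjusted parameters score high-feature samples above low-feature ones, just as the trained segmentation network's final parameters differ from its initial ones only through such updates.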
Illustratively, the initial network model is a Convolutional Neural Networks (CNN) model.
And step 112, performing a fine processing operation on the first segmentation area to determine image segmentation information.
Illustratively, the fine processing operation mentioned in step 112 refers to performing a further segmentation processing operation on the first segmented region to determine image segmentation information. That is, after the fine processing, the accuracy of the image segmentation information corresponding to the tissue to be typed is improved.
It should be noted that, the specific type of the fine processing operation is not limited in the embodiments of the present disclosure in a unified manner, as long as the fine processing operation can generate image segmentation information with better accuracy. For example, the fine processing operation is a segmentation operation based on an energy optimization algorithm.
In the practical application process, firstly, a medical image to be classified is input into a segmentation network model to determine a first segmentation area, then, the first segmentation area is subjected to fine processing operation to determine image segmentation information, an image area corresponding to a tissue to be classified is determined based on the image segmentation information, then, gray characteristic information corresponding to the tissue to be classified is determined based on the image area, and the type of the tissue to be classified is determined based on the gray characteristic information.
According to the tissue classification method based on the medical image, the medical image to be classified is input into the segmentation network model to determine the first segmentation area, the first segmentation area is subjected to fine processing operation, and the image segmentation information is determined, so that the purpose of performing image segmentation operation on the medical image to be classified to determine the image segmentation information is achieved. Since the way of determining the first segmentation region based on the segmentation network model can fully utilize the advantages of the deep learning in terms of adaptability and robustness, the embodiment of the disclosure can further improve the adaptability and robustness of the image segmentation operation. In addition, since the fine segmentation operation can further improve the segmentation accuracy of the first segmentation region, the embodiments of the present disclosure can further improve the segmentation accuracy of the image segmentation operation.
In an embodiment of the present disclosure, the segmentation network model mentioned in the above embodiment is a U-Net network model. Because the U-Net network model can be trained with only a small amount of sample image data, the calculation amount of the training process can be greatly reduced, which is a particularly clear advantage for medical image data, where sample images are scarce. In addition, the U-Net network model can realize image segmentation at the pixel level, so the segmentation accuracy can be fully improved.
It should be noted that the embodiments of the present disclosure are not limited to the above-mentioned U-Net network model, and other network models based on deep learning can be applied to the tissue classification method based on medical images according to the embodiments of the present disclosure.
Fig. 9 is a schematic flowchart illustrating a fine processing operation performed on a first segmentation region to determine image segmentation information according to an exemplary embodiment of the present disclosure. The embodiment shown in fig. 9 of the present disclosure is extended on the basis of the embodiment shown in fig. 8 of the present disclosure, and the differences between the embodiment shown in fig. 9 and the embodiment shown in fig. 8 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 9, in the tissue classification method based on medical images provided by the embodiment of the present disclosure, the step of performing a fine processing operation on the first segmentation region to determine the image segmentation information includes the following steps.
Step 1121, determining a seed region corresponding to the tissue to be typed based on the first segmentation region.
Illustratively, the seed region mentioned in step 1121 refers to a seed image block region corresponding to a tissue to be classified in a medical image to be classified. Namely, the seed image block region belongs to the image region corresponding to the tissue to be classified. Wherein the size of the seed image block area can be determined based on the tissue to be classified and the actual condition of the medical image to be classified.
It should be understood that, since the segmentation accuracy of the segmentation network model is limited, the first segmentation region determined by the segmentation network model may include image regions unrelated to the tissue to be classified. For example, if the medical image to be classified is a breast molybdenum target image and the tissue to be classified is a breast, the first segmentation region ideally includes only the image region corresponding to the breast. In one practical case, however, the first segmentation region includes not only the image region corresponding to the breast but also the image region corresponding to the pectoral muscle in close proximity to the breast. In another practical case, the first segmentation region may fail to completely include the image region corresponding to the tissue to be classified, for example omitting the image region corresponding to the nipple structure of the breast.
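One simple way to pick a seed image block from the coarse first segmentation region is to center a small block on the mask's centroid, where the coarse mask is most likely to be genuine tissue. The `seed_block` name, the (row, col) convention, and the centroid heuristic are assumptions of this sketch, not the patent's prescribed method.

```python
def seed_block(mask, size=3):
    """Inclusive bounds (top, left, bottom, right) of a size x size block
    centered on the centroid of the coarse segmentation mask. Pixels inside
    this block can serve as foreground seeds for the later graph cut."""
    pts = [(i, j) for i, row in enumerate(mask)
                  for j, v in enumerate(row) if v]
    ci = round(sum(i for i, _ in pts) / len(pts))
    cj = round(sum(j for _, j in pts) / len(pts))
    half = size // 2
    return (ci - half, cj - half, ci + half, cj + half)

# Coarse 5x5 all-foreground mask: the seed block sits at the center.
bounds = seed_block([[1] * 5 for _ in range(5)])
```

The block size would in practice be chosen from the tissue to be classified and the actual size of the medical image, as the embodiment notes.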
Step 1122, perform energy segmentation on the seed region using an energy optimization algorithm to determine image segmentation information.
Illustratively, the energy segmentation operation mentioned in step 1122 refers to performing a segmentation operation on the seed region by using an energy optimization algorithm based on an energy minimization principle to determine image segmentation information.
In the practical application process, firstly, a medical image to be classified is input into a segmentation network model to determine a first segmentation region, then, a seed region corresponding to a tissue to be classified is determined based on the first segmentation region, energy segmentation operation is carried out on the seed region by using an energy optimization algorithm to determine image segmentation information, then, an image region corresponding to the tissue to be classified is determined based on the image segmentation information, then, gray characteristic information corresponding to the tissue to be classified is determined based on the image region, and the type of the tissue to be classified is determined based on the gray characteristic information.
According to the tissue typing method based on the medical image, the seed region corresponding to the tissue to be typed is determined based on the first segmentation region, and then the energy segmentation operation is performed on the seed region by using the energy optimization algorithm so as to determine the image segmentation information, so that the purpose of performing the fine processing operation on the first segmentation region to determine the image segmentation information is achieved. Since the seed region corresponds to the tissue to be classified, the accuracy of the determined image segmentation information can be further improved.
Fig. 10 is a flowchart illustrating a process of determining an image region corresponding to a tissue to be classified based on a medical image to be classified according to another exemplary embodiment of the present disclosure. The embodiment shown in fig. 10 of the present disclosure is extended on the basis of the embodiment shown in fig. 7 of the present disclosure, and the differences between the embodiment shown in fig. 10 and the embodiment shown in fig. 7 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 10, in the tissue classification method based on medical images provided by the embodiment of the present disclosure, before the step of performing image segmentation operation on the medical image to be classified to determine image segmentation information, the following steps are further included.
And step 13, adjusting window width information and/or window level information corresponding to the medical image to be classified.
The window width information and the window level information correspond to a CT image. The window width refers to the range of CT values displayed in the CT image. Illustratively, tissues and lesions within this CT value range are displayed in different simulated gray scales, while tissues and lesions with CT values above this range are displayed as white, with no gray scale differences between them. Increasing the window width enlarges the range of CT values displayed, so more tissue structures of different densities are shown, but the gray scale differences between those structures decrease. Decreasing the window width reduces the structures displayed, whereas the gray scale differences between them increase. The window level refers to the center position of the window. For example, with a window width of 100 HU and a window level of 0 HU, the displayed CT value range is -50 HU to +50 HU.
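The window mapping just described can be sketched directly; the `apply_window` name and the 8-bit output range are assumptions, but the width-100/level-0 example matches the text.

```python
def apply_window(hu_values, width=100.0, level=0.0):
    """Map HU values to 8-bit display grays through a window.

    With width 100 and level 0 the displayed range is -50 HU to +50 HU:
    values at or below -50 HU render black, at or above +50 HU white,
    and values in between are mapped linearly.
    """
    lo = level - width / 2.0
    hi = level + width / 2.0
    out = []
    for v in hu_values:
        if v <= lo:
            out.append(0)
        elif v >= hi:
            out.append(255)
        else:
            out.append(round(255 * (v - lo) / (hi - lo)))
    return out

grays = apply_window([-200.0, -50.0, 0.0, 50.0, 200.0])
```

Everything outside the window is clipped to black or white, which is why narrowing the window increases contrast among the structures that remain visible.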
The determined medical images to be classified may differ across imaging apparatuses, and even the same apparatus may produce different images when its imaging parameters vary. Based on this, the window width information and/or window level information corresponding to the medical image to be classified is adjusted in step 13 to normalize the medical image to be classified and remove interference pixels as far as possible, thereby increasing the proportion of effective pixels and reducing or even avoiding the adverse effect of interference pixels on the subsequent segmentation network model. Herein, effective pixels refer to pixels that assist the image segmentation operation, such as pixels of the breast region. Interference pixels refer to pixels that negatively affect the image segmentation operation, such as pixels of the pectoral muscle region.
And step 14, performing cutting operation on the medical image to be classified to remove the first image area, wherein the first image area and the tissue to be classified have a first association relation.
Illustratively, the first image region is a background region of the medical image to be classified, which is not related to the tissue to be classified.
And step 15, performing noise removal operation on the medical image to be classified.
Illustratively, the practical implementation procedure of performing the denoising operation on the medical image to be classified is as follows: and removing white noise in the medical image to be classified by using a Gaussian filter.
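The Gaussian filtering step can be illustrated in one dimension with a small binomial kernel that approximates a Gaussian; a 2-D blur applies the same pass along rows and then columns (separability). This is an illustrative stand-in, not the embodiment's actual filter implementation, and the replicate-edge convention is an assumption.

```python
def gaussian_blur_1d(signal, kernel=(0.25, 0.5, 0.25)):
    """Smooth a 1-D signal with a 3-tap binomial approximation of a
    Gaussian kernel; edge samples are replicated."""
    n = len(signal)
    out = []
    for i in range(n):
        left = signal[max(i - 1, 0)]
        right = signal[min(i + 1, n - 1)]
        out.append(kernel[0] * left + kernel[1] * signal[i] + kernel[2] * right)
    return out

# A unit impulse (an isolated noise spike) spreads into the kernel shape.
blurred = gaussian_blur_1d([0.0, 0.0, 1.0, 0.0, 0.0])
```

The spike's amplitude is halved while total intensity is preserved, which is how white noise is attenuated without shifting the underlying tissue signal.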
And step 16, removing a second image area based on HU information of the medical image to be classified, wherein a second association relation exists between the second image area and the tissue to be classified.
It should be noted that hu (hounsfield unit) information can reflect the degree of absorption of X-rays by tissue structures, and is image characteristic information belonging to medical images (such as CT images).
Illustratively, the medical image to be classified is a breast molybdenum target image, and the second image region is an image region corresponding to remark information in the breast molybdenum target image. The remark information refers to annotated text in the medical image to be classified, such as patient information. Because the HU information corresponding to remark text in the breast molybdenum target image is close to the HU information corresponding to the breast region, while the remark information is usually far away from the tissue structure in the breast molybdenum target image, the embodiment of the present disclosure can remove interference factors of the same type as the remark information by means of the above step 16, thereby providing a precondition for further improving the segmentation accuracy and robustness of the image segmentation operation.
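A minimal sketch of this idea follows: zero out pixels whose value falls in a remark-like HU band but that lie outside a margin around the tissue. The function name, the bounding-box convention `(top, left, bottom, right)`, and the concrete HU band are all invented for illustration; the embodiment only specifies that HU information and distance from the tissue are exploited.

```python
def remove_remarks(image, tissue_box, hu_lo, hu_hi, margin=2):
    """Zero pixels in the HU band [hu_lo, hu_hi] that lie outside the
    tissue bounding box expanded by `margin`, exploiting the fact that
    remark text sits far from the tissue structure."""
    top, left, bottom, right = tissue_box
    top, left = top - margin, left - margin
    bottom, right = bottom + margin, right + margin
    out = [row[:] for row in image]
    for i, row in enumerate(out):
        for j, v in enumerate(row):
            inside = top <= i <= bottom and left <= j <= right
            if not inside and hu_lo <= v <= hu_hi:
                out[i][j] = 0
    return out

# Toy image: tissue occupies columns 0-1; a remark-like pixel sits at (0, 5).
img = [[80, 80, 0, 0, 0, 90],
       [80, 80, 0, 0, 0, 0]]
cleaned = remove_remarks(img, tissue_box=(0, 0, 1, 1),
                         hu_lo=70, hu_hi=100, margin=1)
```

The remark pixel vanishes even though its HU value overlaps the tissue band, because distance — not intensity alone — distinguishes it.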
In the practical application process, window width information and/or window level information corresponding to a medical image to be classified are adjusted, the medical image to be classified is cut to remove a first image area, then noise removing operation is carried out on the medical image to be classified, a second image area is removed based on HU information of the medical image to be classified, then image segmentation operation is carried out on the medical image to be classified to determine image segmentation information, an image area corresponding to a tissue to be classified is determined based on the image segmentation information, then gray characteristic information corresponding to the tissue to be classified is determined based on the image area, and the type of the tissue to be classified is determined based on the gray characteristic information.
According to the tissue typing method based on the medical images, the medical images to be typed are further optimized in a mode of performing the preprocessing operation on the medical images to be typed before the image segmentation operation is performed on the medical images to be typed, and then a precondition is provided for further improving the accuracy of the determined type of the tissue to be typed.
It should be noted that the processing operations mentioned in the above steps 13 to 16 are not all necessary, and there is no strict order relationship between the processing operations. In the practical application process, the specific steps and the sequence relation among the steps which are required to be included can be adjusted according to the practical situation.
Fig. 11a to 11d are schematic diagrams illustrating a plurality of preprocessing stages corresponding to a medical image to be classified according to an exemplary embodiment of the disclosure. Specifically, fig. 11a shows an original breast molybdenum target image without preprocessing, fig. 11b shows a breast molybdenum target image after distinguishing a background region and a tissue structure based on HU information, fig. 11c shows a breast molybdenum target image after performing a cropping operation to remove a first image region, and fig. 11d shows a breast molybdenum target image after performing an operation to adjust window width information and window level information.
Fig. 12 is a flowchart illustrating a process of determining an image region corresponding to a tissue to be classified based on a medical image to be classified according to another exemplary embodiment of the present disclosure. The embodiment shown in fig. 12 of the present disclosure is extended on the basis of the embodiment shown in fig. 7 of the present disclosure, and the differences between the embodiment shown in fig. 12 and the embodiment shown in fig. 7 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 12, in the tissue classifying method based on medical images provided by the embodiment of the present disclosure, after the step of performing image segmentation operation on the medical image to be classified to determine the image segmentation information, the following steps are further included.
Step 17, determining a third segmentation area based on the image segmentation information.
And step 18, determining key point information corresponding to the tissue to be classified based on the third segmentation area and the tissue to be classified.
In addition, in the tissue typing method based on the medical image provided by the embodiment of the disclosure, the step of determining the image region corresponding to the tissue to be typed based on the image segmentation information includes the following steps.
And step 121, performing cutting operation on the third segmentation area based on the key point information to determine an image area corresponding to the tissue to be classified.
In the practical application process, firstly, image segmentation operation is carried out on a medical image to be classified to determine image segmentation information, a third segmentation area is determined based on the image segmentation information, then, key point information corresponding to the tissue to be classified is determined based on the third segmentation area and the tissue to be classified, cutting operation is carried out on the third segmentation area based on the key point information to determine an image area corresponding to the tissue to be classified, then, gray feature information corresponding to the tissue to be classified is determined based on the image area, and the type of the tissue to be classified is determined based on the gray feature information.
According to the tissue typing method based on the medical image, the image is cut based on the key point information corresponding to the tissue to be typed, so that the quantity of interference pixels contained in the image area corresponding to the determined tissue to be typed is further reduced, and therefore a precondition is provided for further improving the accuracy of the determined gray level histogram.
The beneficial effects of the medical image based tissue typing method mentioned in the embodiment shown in fig. 12 are described in detail below in conjunction with fig. 13, 14a and 14 b.
Fig. 13 is a schematic diagram illustrating key points corresponding to a tissue to be classified according to an exemplary embodiment of the disclosure. As shown in fig. 13, the tissue to be classified is a breast, and the corresponding key point information of the breast is the coordinate information of the key points A, B and C. Specifically, keypoint a is located in the nipple area, keypoint B is located in the first pectoral muscle area, and keypoint C is located in the second pectoral muscle area.
Illustratively, the key points A, B and C may be determined via corresponding key point network models. For example, the key point network model is the CenterNet network model. For the specific training process of the key point network model, reference may be made to a conventional model training process, which is not described in detail in the embodiments of the present disclosure.
It should be noted that, after the image shown in fig. 13 is cut based on the key points A, B and C, the pectoral muscle region adjacent to the breast region can be cut, so that the accuracy of the image region corresponding to the tissue to be classified can be further refined.
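One simple form of such a cut can be sketched as dropping all columns up to the rightmost pectoral key point. The `(row, col)` key point convention and the assumption that the pectoral muscle lies along the left image edge are inventions of this sketch; the embodiment does not fix a particular cutting geometry.

```python
def crop_pectoral(image, kp_b, kp_c):
    """Keep only columns strictly to the right of the rightmost pectoral
    key point, removing the pectoral-muscle strip at the left image edge.
    Key points are (row, col) pairs; kp_b and kp_c mark the pectoral muscle."""
    cut = max(kp_b[1], kp_c[1]) + 1
    return [row[cut:] for row in image]

# One-row toy image; pectoral key points B and C sit in columns 1 and 2.
cropped = crop_pectoral([[0, 1, 2, 3, 4, 5]], kp_b=(0, 1), kp_c=(0, 2))
```

After such a crop, pectoral-muscle pixels no longer contribute to the gray level histogram, which is the improvement fig. 14a and 14b visualize.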
Fig. 14a and 14b are graphs showing gray level histograms at different stages provided by an exemplary embodiment of the present disclosure. Specifically, fig. 14a shows a gray level histogram 5 determined directly based on the image shown in fig. 13. Fig. 14b shows a gray histogram 6 determined after cropping the pectoral muscle region in the image of fig. 13 based on the keypoint information.
As can be seen from fig. 14a and 14b, the accuracy of the determined grayscale histogram can be further improved after the image is clipped based on the key point information.
Fig. 15 is a schematic structural diagram of a medical image-based tissue typing device according to an exemplary embodiment of the present disclosure. As shown in fig. 15, a tissue typing device based on medical images provided by an embodiment of the present disclosure includes:
an image region determining module 100, configured to determine an image region corresponding to a tissue to be classified based on a medical image to be classified;
a gray characteristic information determining module 200, configured to determine gray characteristic information corresponding to a tissue to be classified based on an image region;
a type determining module 300 for determining the type of the tissue to be typed based on the gray characteristic information.
Fig. 16 is a schematic structural diagram of a medical image-based tissue typing device according to another exemplary embodiment of the present disclosure. The embodiment shown in fig. 16 of the present disclosure is extended on the basis of the embodiment shown in fig. 15 of the present disclosure, and the differences between the embodiment shown in fig. 16 and the embodiment shown in fig. 15 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 16, in the medical image-based tissue typing device provided in the embodiment of the present disclosure, the grayscale feature information determining module 200 includes:
a gray histogram information determining unit 210, configured to determine gray histogram information corresponding to the tissue to be classified based on the image region.
Also, in the medical image-based tissue typing apparatus provided in the embodiment of the present disclosure, the type determining module 300 includes:
a type determining unit 310 for determining the type of the tissue to be classified based on the gray histogram information.
Fig. 17 is a schematic structural diagram of an image area determining module according to an exemplary embodiment of the present disclosure. The embodiment shown in fig. 17 of the present disclosure is extended on the basis of the embodiment shown in fig. 15 of the present disclosure, and the differences between the embodiment shown in fig. 17 and the embodiment shown in fig. 15 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 17, in the medical image-based tissue typing device provided in the embodiment of the present disclosure, the image region determining module 100 includes:
an image segmentation information determination unit 110, configured to perform an image segmentation operation on a medical image to be classified to determine image segmentation information;
an image region determining unit 120, configured to determine an image region corresponding to the tissue to be classified based on the image segmentation information.
Fig. 18 is a schematic structural diagram of an image segmentation information determination unit according to an exemplary embodiment of the present disclosure. The embodiment shown in fig. 18 of the present disclosure is extended on the basis of the embodiment shown in fig. 17 of the present disclosure, and the differences between the embodiment shown in fig. 18 and the embodiment shown in fig. 17 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 18, in the medical image-based tissue typing device provided in the embodiment of the present disclosure, the image segmentation information determination unit 110 includes:
a first segmentation region determining subunit 1110, configured to input the medical image to be classified to the segmentation network model to determine a first segmentation region, where the first segmentation region corresponds to the tissue to be classified;
an image segmentation information determination subunit 1120 is configured to perform a fine processing operation on the first segmented region to determine image segmentation information.
Fig. 19 is a schematic structural diagram of an image segmentation information determination subunit according to an exemplary embodiment of the present disclosure. The embodiment shown in fig. 19 of the present disclosure is extended on the basis of the embodiment shown in fig. 18 of the present disclosure, and the differences between the embodiment shown in fig. 19 and the embodiment shown in fig. 18 will be emphasized below, and the descriptions of the same parts will not be repeated.
As shown in fig. 19, in the medical image-based tissue typing device provided in the embodiment of the present disclosure, the image segmentation information determination subunit 1120 includes:
a seed region determining subunit 11210 for determining a seed region corresponding to the tissue to be typed based on the first segmentation region;
a determining subunit 11220, configured to perform an energy segmentation operation on the seed region by using an energy optimization algorithm to determine image segmentation information.
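The seed-region step performed by subunits 11210 and 11220 can be sketched as follows: erode the coarse mask from the segmentation network to obtain reliable foreground seeds, and erode its complement to obtain background seeds; the uncertain band in between is what the energy optimization (e.g. a graph cut whose data term comes from the seeds) would resolve. A full max-flow solver is beyond a short sketch, so the refinement below is a minimal stand-in using a morphological opening, in the spirit of the morphological-operator optimization recited in claim 7; it is not the disclosure's actual algorithm.

```python
def erode(mask, r=1):
    """Binary erosion: a pixel stays 1 only if its whole (2r+1)^2 neighborhood is 1."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
    return out

def dilate(mask, r=1):
    """Binary dilation: a pixel becomes 1 if any neighbor within radius r is 1."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
    return out

def seeds_from_coarse_mask(mask, r=1):
    """Foreground seeds: eroded mask. Background seeds: eroded complement."""
    fg = erode(mask, r)
    bg = erode([[1 - v for v in row] for row in mask], r)
    return fg, bg

def refine(mask, r=1):
    """Morphological opening (erode then dilate) as a simple boundary smoother."""
    return dilate(erode(mask, r), r)
```

A production system would feed the seeds into a graph-cut solver (e.g. a max-flow library) and only then smooth the resulting boundary.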
Fig. 20 is a schematic structural diagram of an image area determining module according to another exemplary embodiment of the present disclosure. The embodiment shown in fig. 20 of the present disclosure is extended on the basis of the embodiment shown in fig. 17 of the present disclosure, and the differences between the embodiment shown in fig. 20 and the embodiment shown in fig. 17 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 20, in the medical image-based tissue typing device provided in the embodiment of the present disclosure, the image region determining module 100 further includes:
an adjusting unit 130, configured to adjust window width information and/or window level information corresponding to a medical image to be classified;
the first image region removing unit 140 is configured to perform a cropping operation on the medical image to be classified to remove a first image region, where a first association relationship exists between the first image region and the tissue to be classified;
the denoising unit 150 is used for denoising the medical image to be classified;
a second image region removing unit 160, configured to remove a second image region based on the HU information of the medical image to be classified, where the second image region and the tissue to be classified have a second association relationship.
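The preprocessing performed by units 130, 150, and 160 can be sketched for a CT-style image whose pixels are HU values: map HU to display gray using a window width/level, drop pixels whose HU falls outside the range of the tissue of interest, and denoise with a small median filter. The window values and HU cut-offs below are illustrative assumptions, not values prescribed by this disclosure.

```python
def apply_window(hu, width=400, level=50):
    """Map HU values to [0, 255] display gray using window width/level."""
    lo, hi = level - width / 2, level + width / 2
    out = []
    for row in hu:
        out.append([round(255 * (min(max(v, lo), hi) - lo) / (hi - lo)) for v in row])
    return out

def remove_by_hu(hu, keep_min=-100, keep_max=300):
    """Zero out pixels whose HU lies outside the tissue range of interest
    (e.g. air and dense bone), keeping only candidate tissue pixels."""
    return [[v if keep_min <= v <= keep_max else 0 for v in row] for row in hu]

def median3(img):
    """3x3 median filter as a simple denoising step (borders kept as-is)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = sorted(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = nb[4]
    return out
```

The cropping of unit 140 is a plain array slice once the first image region is located, so it is omitted here.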
Fig. 21 is a schematic structural diagram of an image area determining module according to still another exemplary embodiment of the present disclosure. The embodiment shown in fig. 21 of the present disclosure is extended on the basis of the embodiment shown in fig. 17 of the present disclosure, and the differences between the embodiment shown in fig. 21 and the embodiment shown in fig. 17 are emphasized below, and the descriptions of the same parts are omitted.
As shown in fig. 21, in the medical image-based tissue typing device provided in the embodiment of the present disclosure, the image region determining module 100 further includes:
a third divided region determining unit 170 for determining a third divided region based on the image division information;
and a key point information determining unit 180, configured to determine, based on the third segmentation region and the tissue to be classified, key point information corresponding to the tissue to be classified.
Also, in the tissue classification device based on a medical image provided by the embodiment of the present disclosure, the image region determination unit 120 includes:
an image area determining subunit 1210, configured to perform a cropping operation on the third segmentation area based on the key point information to determine an image area corresponding to the tissue to be classified.
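The keypoint-guided cropping performed by subunit 1210 can be sketched as cropping the third segmentation region to the bounding box of the detected key points, expanded by a margin. Which anatomical points serve as key points (for the breast example of claim 11, one might imagine the nipple and pectoral-line endpoints) and the margin size are assumptions for illustration only.

```python
def crop_by_keypoints(image, keypoints, margin=2):
    """Crop the segmented region to the bounding box of the (y, x) key points,
    expanded by a margin and clamped to the image bounds."""
    h, w = len(image), len(image[0])
    ys = [y for y, _ in keypoints]
    xs = [x for _, x in keypoints]
    y0, y1 = max(min(ys) - margin, 0), min(max(ys) + margin, h - 1)
    x0, x1 = max(min(xs) - margin, 0), min(max(xs) + margin, w - 1)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]
```

The resulting sub-image is then the image region handed to the grayscale feature information determining module 200.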
It should be understood that the operations and functions of the components of the medical image-based tissue typing device shown in fig. 15 to 21 may refer to the medical image-based tissue typing method described above with reference to fig. 3 to 12, and, to avoid repetition, are not described here again. These components include: the image region determining module 100, the grayscale feature information determining module 200, and the type determining module 300; the image segmentation information determining unit 110, the image region determining unit 120, the adjusting unit 130, the first image region removing unit 140, the denoising unit 150, the second image region removing unit 160, the third segmentation region determining unit 170, and the keypoint information determining unit 180 included in the image region determining module 100; the first segmentation region determining subunit 1110 and the image segmentation information determining subunit 1120 included in the image segmentation information determining unit 110; the seed region determining subunit 11210 and the determining subunit 11220 included in the image segmentation information determining subunit 1120; the image region determining subunit 1210 included in the image region determining unit 120; the gray histogram information determining unit 210 included in the grayscale feature information determining module 200; and the type determining unit 310 included in the type determining module 300.
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 22. Fig. 22 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present disclosure.
As shown in fig. 22, the electronic device 40 includes one or more processors 401 and a memory 402.
The processor 401 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 40 to perform desired functions.
Memory 402 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 401 to implement the medical image-based tissue typing methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as a medical image to be classified including a tissue to be classified may also be stored in the computer-readable storage medium.
In one example, the electronic device 40 may further include: an input device 403 and an output device 404, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input device 403 may include, for example, a keyboard, a mouse, and the like. The output device 404 may output various information to the outside, including the determined type information of the tissue to be classified. The output device 404 may include, for example, a display, speakers, a printer, a communication network and remote output devices connected thereto, and the like.
Of course, for simplicity, only some of the components of the electronic device 40 relevant to the present disclosure are shown in fig. 22, omitting components such as buses, input/output interfaces, and the like. In addition, electronic device 40 may include any other suitable components, depending on the particular application.
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the medical image-based tissue typing methods according to various embodiments of the present disclosure described above in this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java or C++ and conventional procedural programming languages such as the C programming language or similar languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions, which, when executed by a processor, cause the processor to perform the steps in the medical image-based tissue typing methods according to various embodiments of the present disclosure described above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments. However, it is noted that the advantages, effects, and the like mentioned in the present disclosure are merely examples and are not limiting; they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the specific details disclosed above are provided for the purpose of illustration and ease of understanding only, and are not intended to be limiting; the disclosure is not limited to those specific details.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, and configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The phrase "such as" as used herein means, and is used interchangeably with, the phrase "such as, but not limited to."
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (14)

1. A tissue typing method based on medical images, characterized in that the method is applied to a medical image to be typed comprising a tissue to be typed, and comprises the following steps:
determining an image area corresponding to the tissue to be classified based on the medical image to be classified;
determining gray characteristic information corresponding to the tissue to be classified based on the image area;
determining the type of the tissue to be classified based on the gray characteristic information.
2. The method of claim 1, wherein the determining gray scale feature information corresponding to the tissue to be classified based on the image region comprises:
determining gray level histogram information corresponding to the tissue to be classified based on the image area;
wherein the determining the type of the tissue to be classified based on the gray feature information comprises:
determining the type of the tissue to be classified based on the gray histogram information.
3. The method of claim 1 or 2, wherein the determining the type of the tissue to be typed based on the grayscale feature information comprises:
determining preset typing information corresponding to the tissue to be typed;
and determining the type of the tissue to be typed based on the gray characteristic information and the preset typing information.
4. The method of claim 1, wherein the determining an image region corresponding to the tissue to be typed based on the medical image to be typed comprises:
performing image segmentation operation on the medical image to be classified to determine image segmentation information;
and determining an image area corresponding to the tissue to be classified based on the image segmentation information.
5. The method according to claim 4, wherein the performing an image segmentation operation on the medical image to be classified to determine image segmentation information comprises:
inputting the medical image to be classified into a segmentation network model to determine a first segmentation region, wherein the first segmentation region corresponds to the tissue to be classified;
and performing a fine processing operation on the first segmentation region to determine the image segmentation information.
6. The method of claim 5, wherein the refining the first segmented region to determine image segmentation information comprises:
determining a seed region corresponding to the tissue to be typed based on the first segmentation region;
and performing energy segmentation operation on the seed region by using an energy optimization algorithm to determine the image segmentation information.
7. The method of claim 6, wherein the energy optimization algorithm comprises a graph cut algorithm, the performing an energy segmentation operation on the seed region with the energy optimization algorithm to determine the image segmentation information, comprising:
determining a second segmentation area corresponding to the seed area by utilizing the graph cutting algorithm based on the seed area;
and performing optimization operation on the segmentation boundary of the second segmentation region by using a morphological operator to determine the image segmentation information.
8. The method according to any one of claims 4 to 7, wherein before performing an image segmentation operation on the medical image to be classified to determine image segmentation information, at least one of the following is further included:
adjusting window width information and/or window level information corresponding to the medical image to be classified;
cutting the medical image to be classified to remove a first image area, wherein the first image area and the tissue to be classified have a first association relation;
denoising the medical image to be classified;
and removing a second image area based on the HU information of the medical image to be classified, wherein a second incidence relation exists between the second image area and the tissue to be classified.
9. The method according to any one of claims 4 to 7, further comprising, after performing an image segmentation operation on the medical image to be classified to determine image segmentation information:
determining a third segmentation region based on the image segmentation information;
determining key point information corresponding to the tissue to be classified based on the third segmentation area and the tissue to be classified;
wherein the determining the image region corresponding to the tissue to be classified based on the image segmentation information comprises:
and performing cutting operation on the third segmentation area based on the key point information to determine an image area corresponding to the tissue to be classified.
10. The method of claim 9, wherein the determining the keypoint information corresponding to the tissue to be classified based on the third segmentation region and the tissue to be classified comprises:
and inputting the third segmentation area into a key point network model to determine key point information corresponding to the to-be-classified tissue.
11. The method according to claim 1 or 2, characterized in that the medical image to be classified is a breast molybdenum target image and the tissue to be classified is a breast.
12. A tissue typing device based on medical images, which is applied to medical images to be typed including tissues to be typed, and comprises:
the image area determining module is used for determining an image area corresponding to the tissue to be classified based on the medical image to be classified;
the gray characteristic information determining module is used for determining the gray characteristic information corresponding to the tissue to be classified based on the image area;
and the type determining module is used for determining the type of the tissue to be classified based on the gray characteristic information.
13. A computer-readable storage medium storing a computer program for executing the medical image-based tissue typing method according to any one of claims 1 to 11.
14. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor for performing the medical image based tissue typing method of any one of the preceding claims 1 to 11.
CN201911412146.6A 2019-12-31 2019-12-31 Tissue typing method and device based on medical image and electronic equipment Pending CN111080642A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911412146.6A CN111080642A (en) 2019-12-31 2019-12-31 Tissue typing method and device based on medical image and electronic equipment

Publications (1)

Publication Number Publication Date
CN111080642A true CN111080642A (en) 2020-04-28

Family

ID=70321160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911412146.6A Pending CN111080642A (en) 2019-12-31 2019-12-31 Tissue typing method and device based on medical image and electronic equipment

Country Status (1)

Country Link
CN (1) CN111080642A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104715259A (en) * 2015-01-22 2015-06-17 苏州工业职业技术学院 Nuclear self-adaptive optimizing and classifying method of X-ray mammary gland images
CN107481252A (en) * 2017-08-24 2017-12-15 上海术理智能科技有限公司 Dividing method, device, medium and the electronic equipment of medical image
CN109146848A (en) * 2018-07-23 2019-01-04 东北大学 A kind of area of computer aided frame of reference and method merging multi-modal galactophore image
CN109447088A (en) * 2018-10-16 2019-03-08 杭州依图医疗技术有限公司 A kind of method and device of breast image identification
CN109872307A (en) * 2019-01-30 2019-06-11 腾讯科技(深圳)有限公司 Method, relevant device and the medium of lump in a kind of detection biological tissue images

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHUAN ZHOU ET AL: "Computerized image analysis: Estimation of breast density on mammograms", 《MED PHYS》 *
ZHE LIU ET AL: "Liver CT sequence segmentation based with improved U-Net and graph cut", 《EXPERT SYSTEMS WITH APPLICATIONS》 *
刘庆庆 et al.: "Breast density estimation based on sub-region classification", Computer Engineering and Applications (《计算机工程与应用》) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085

Applicant after: Tuxiang Medical Technology Co., Ltd

Address before: Room B401, floor 4, building 1, No. 12, Shangdi Information Road, Haidian District, Beijing 100085

Applicant before: Beijing Tuoxiang Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200428
