WO2021159767A1 - Medical image processing method, image processing method and apparatus - Google Patents

Medical image processing method, image processing method and apparatus

Info

Publication number
WO2021159767A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
processed
pixel
medical
Prior art date
Application number
PCT/CN2020/126063
Other languages
English (en)
French (fr)
Inventor
王亮
陈韩波
孙嘉睿
朱艳春
姚建华
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to KR1020227009792A (publication KR20220050977A)
Priority to JP2022524010A (publication JP2022553979A)
Priority to EP20919308.5A (publication EP4002268A4)
Publication of WO2021159767A1
Priority to US17/685,847 (publication US20220189017A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30024Cell structures in vitro; Tissue sections in vitro
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A10/00TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
    • Y02A10/40Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping

Definitions

  • This application relates to the field of artificial intelligence, specifically to image processing technology.
  • WSI: whole-field-of-view digital slide (whole slide image)
  • The main way to extract the pathological tissue area on a WSI image is to first reduce the WSI image to a certain scale, convert it into a grayscale image, and then perform further image processing on the grayscale image, such as image binarization and hole-removal processing; finally, the pathological tissue area is extracted from the processed image.
  • This application provides a medical image processing method, an image processing method, and a device, which generate a difference image using the color information of different channels before binarizing the image, thereby effectively using the color information in the image; the pathological tissue area extracted based on the difference image is therefore more accurate, which has a positive impact on subsequent image analysis.
  • the first aspect of the present application provides a method for medical image processing, which is executed by a server, and includes:
  • acquiring a medical image to be processed, where the medical image to be processed is a color image and includes first image data, second image data, and third image data that respectively correspond to color information under different attributes;
  • generating a difference image according to the first image data, the second image data, and the third image data;
  • performing binarization processing on the difference image to obtain a binarized image, where the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • the second aspect of the present application provides an image processing method, which is executed by a server, and includes:
  • acquiring a first image to be processed and a second image to be processed, where the first image to be processed is a color image and includes first image data, second image data, and third image data that respectively correspond to color information in different channels;
  • generating a difference image according to the first image data, the second image data, and the third image data;
  • performing binarization processing on the difference image to obtain a binarized image, where the foreground area of the binarized image corresponds to the pathological tissue area of the first image to be processed;
  • extracting the pathological tissue area from the first image to be processed according to the foreground area of the binarized image;
  • generating a composite image, where the pathological tissue area is located in the first layer, the second image to be processed is located in the second layer, and the first layer is overlaid on the second layer.
  • a third aspect of the present application provides a medical image processing device, including:
  • the acquisition module is used to acquire a medical image to be processed, where the medical image to be processed is a color image and includes first image data, second image data, and third image data that respectively correspond to color information under different channels;
  • a generating module for generating a difference image according to the first image data, the second image data, and the third image data
  • the processing module is configured to perform binarization processing on the difference image to obtain a binarized image, and the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • the generating module is specifically configured to generate a maximum value image and a minimum value image according to the first image data, the second image data, and the third image data included in the medical image to be processed;
  • the generating module is specifically used to determine the maximum pixel value and the minimum pixel value according to the first pixel value at the first pixel position in the first image data, the second pixel value at the second pixel position in the second image data, and the third pixel value at the third pixel position in the third image data;
  • to obtain the maximum value image according to the maximum pixel value and the minimum value image according to the minimum pixel value, where the pixel value at the fourth pixel position in the maximum value image is the maximum pixel value and the pixel value at the fifth pixel position in the minimum value image is the minimum pixel value,
  • the first pixel position, the second pixel position, the third pixel position, the fourth pixel position, and the fifth pixel position all correspond to the position of the same pixel in the medical image to be processed;
  • the generating module is specifically configured to determine the pixel difference value according to the pixel value of the fourth pixel position in the maximum value image and the pixel value of the fifth pixel position in the minimum value image;
  • the difference image is obtained according to the pixel difference value, where the pixel value at the sixth pixel position in the difference image is the pixel difference value, and the fourth pixel position, the fifth pixel position, and the sixth pixel position all correspond to the position of the same pixel in the medical image to be processed.
  • the generating module is specifically configured to generate the difference image to be processed according to the first image data, the second image data, and the third image data;
  • Gaussian blur processing is performed on the difference image to be processed to obtain the difference image.
  • the medical image processing apparatus further includes a determining module
  • the determination module is used to determine the binarization threshold according to the difference image
  • the determining module is further configured to perform binarization processing on the difference image according to the binarization threshold to obtain a binarized image.
  • the determining module is specifically configured to obtain N pixel values corresponding to N pixels according to the difference image, where the pixel values and the pixels have a one-to-one correspondence, and N is an integer greater than 1;
  • a reference pixel value is determined from the N pixel values, and the binarization threshold is calculated according to the reference pixel value and a preset ratio.
  • the generating module is specifically used to detect the background area in the binarized image using a flooding algorithm, where the background area includes a plurality of background pixels, and to change background pixels located in the foreground area of the binarized image into foreground pixels to obtain a hole-filled image;
  • median filtering processing is performed on the hole-filled image to obtain a result image, and the foreground area of the result image corresponds to the pathological tissue area of the medical image to be processed.
  • the processing module is specifically used to perform median filter processing on the hole-filled image to obtain a filtered image;
  • to determine the boundary line of the foreground area in the filtered image, where the boundary line includes M pixels and M is an integer greater than 1;
  • the boundary line is extended outward by K pixels to obtain the result image, where K is an integer greater than or equal to 1.
  • the acquisition module is specifically used to acquire an original medical image and obtain medical sub-images from it;
  • if a medical sub-image includes a pathological tissue area, it is determined to be a medical image to be processed;
  • if a medical sub-image does not include a pathological tissue area, it is determined as a background image, and the background image is removed.
  • the image processing device further includes a training module
  • the generating module is also used to generate the target positive sample image according to the image to be processed and the foreground area of the image to be processed, wherein the target positive sample image belongs to a positive sample image in the positive sample set, and each positive sample image contains a pathological tissue area ;
  • the obtaining module is also used to obtain a negative sample set, wherein the negative sample set includes at least one negative sample image, and each negative sample image does not include a pathological tissue area;
  • the training module is used to train the image processing model based on the positive sample set and the negative sample set.
  • a fourth aspect of the present application provides an image processing device, including:
  • an acquisition module for acquiring a first image to be processed and a second image to be processed, where the first image to be processed is a color image and includes first image data, second image data, and third image data that respectively correspond to color information under different channels;
  • a generating module for generating a difference image according to the first image data, the second image data, and the third image data
  • the processing module is used to perform binarization processing on the difference image to obtain a binarized image.
  • the foreground area of the binarized image corresponds to the target object of the first image to be processed;
  • the extraction module is used to extract the target object from the first image to be processed according to the foreground area of the binarized image
  • the generating module is also used to generate a composite image according to the target object and the second image to be processed, where the target object is located in the first layer, the second image to be processed is located in the second layer, and the first layer is overlaid on top of the second layer.
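  • As a minimal sketch of this layer compositing (assuming Python with numpy; the function name, array shapes, and mask convention below are illustrative assumptions, not part of this application), the extracted target object can be overlaid on the second image as follows:

```python
import numpy as np

def composite(target_layer: np.ndarray, mask: np.ndarray,
              second_image: np.ndarray) -> np.ndarray:
    # target_layer and second_image: H x W x 3 uint8 color images.
    # mask: H x W boolean array, True where the target object was extracted.
    out = second_image.copy()
    out[mask] = target_layer[mask]  # the first layer covers the second layer
    return out
```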
  • the fifth aspect of the present application provides a computer-readable storage medium, in which instructions are stored, which when run on a computer, cause the computer to execute the methods of the above-mentioned aspects.
  • a method for medical image processing is provided.
  • a color medical image to be processed can be obtained, and the medical image to be processed includes first image data, second image data, and third image data.
  • the first image data, the second image data, and the third image data respectively correspond to the color information under different attributes.
  • the difference image is generated according to the first image data, the second image data, and the third image data.
  • the difference image is binarized to obtain a binarized image, and the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • Before the image is binarized, the color information of different channels is used to generate a difference image, which effectively utilizes the color information in the image.
  • the pathological tissue area extracted based on the difference image is more accurate and has a positive impact on subsequent image analysis.
  • FIG. 1 is a schematic diagram of an architecture of a medical image processing system in an embodiment of the application
  • FIG. 2 is a schematic diagram of an embodiment of a method for medical image processing in an embodiment of the application
  • Fig. 3 is a schematic diagram of an embodiment of a medical image to be processed in an embodiment of the application
  • FIG. 4 is a schematic diagram of an embodiment of a difference image in an embodiment of the application.
  • FIG. 5 is a schematic diagram of an embodiment of a binarized image in an embodiment of the application.
  • Fig. 6 is a schematic diagram of an embodiment of a result image in an embodiment of the application.
  • FIG. 7 is a schematic diagram of another embodiment of the result image in the embodiment of the application.
  • FIG. 8 is a schematic diagram of an embodiment of acquiring medical images to be processed in an embodiment of the application.
  • FIG. 9 is a schematic flowchart of a method for medical image processing in an embodiment of the application.
  • FIG. 10 is a schematic diagram of an embodiment of a result image in an embodiment of the application.
  • FIG. 11 is a schematic diagram of an embodiment of an image processing method in an embodiment of the application.
  • FIG. 12 is a schematic diagram of an embodiment of a medical image processing device in an embodiment of the application.
  • FIG. 13 is a schematic diagram of an embodiment of an image processing device in an embodiment of the application.
  • FIG. 14 is a schematic diagram of a server structure provided by an embodiment of the present application.
  • The embodiments of the present application provide a medical image processing method, an image processing method, and a device, which generate a difference image using the color information of different channels before the image is binarized, thereby effectively using the color information in the image; the pathological tissue area extracted based on the difference image is therefore more accurate, which has a positive impact on subsequent image analysis.
  • Image processing is a technology that can analyze images to achieve the desired results.
  • Image processing generally refers to the processing of digital images, while digital images refer to a large two-dimensional array obtained by shooting with industrial cameras, video cameras, and scanners. The elements of this array are called pixels, and their values are called gray values.
  • Image processing technology can help people understand the world more objectively and accurately.
  • the human visual system can help humans obtain a large amount of information from the outside world. Images and graphics are the carriers of all visual information.
  • image processing technologies may include, but are not limited to, image transformation, image coding and compression, image enhancement and restoration, image segmentation, image description, matting technology, and image classification.
  • the image processing method provided in the present application can be applied to scenes in the medical field.
  • medical images that can be processed include, but are not limited to, brain images, heart images, chest images, and cell images, and medical images may be affected by noise, the bias field effect, the partial volume effect, and tissue movement. Because there are also differences between individuals and the shapes of tissue structures are complex, medical images are generally more blurred and less uniform than ordinary images.
  • the medical image involved in this application is a color image, which can be a color ultrasound image, a whole-field-of-view digital slide (whole slide image, WSI), or a color digital image obtained from a microscope. Taking the WSI image as an example, the side length of a WSI image is usually 10,000 to 100,000 pixels.
  • WSI images therefore often need to be scaled down or cut into small-size images for further processing.
  • During image processing, it is necessary to obtain the area containing the pathological tissue slice and then perform pathological analysis on that area, such as quantitative nucleus analysis, quantitative cell membrane analysis, quantitative cytoplasm analysis, and tissue microvessel analysis.
  • the medical image processing method of this application can obtain the medical image to be processed and generate a difference image according to the first image data, the second image data, and the third image data included in the medical image to be processed, where the first image data, the second image data, and the third image data respectively correspond to color information under different attributes; the difference image is further binarized to obtain a binarized image,
  • the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • the difference image effectively utilizes the color information in the image, and the pathological tissue area extracted based on the difference image is more accurate, and has a positive impact on subsequent image analysis.
  • image processing can also be applied to scenes in the field of remote sensing.
  • high-resolution remote sensing images can be used in marine monitoring, land cover monitoring, marine pollution monitoring, and maritime rescue. Such images are characterized by rich image detail, prominent geometric structure of ground objects, and complex target structures, for example complex shadows of coastline objects, large vegetation coverage, or artificial facilities with poorly separated light and dark regions; high-resolution remote sensing images are therefore more detailed and more complex than ordinary images.
  • For example, vegetation can be extracted from the high-resolution remote sensing image to determine the corresponding area. Therefore, based on the characteristics of high-resolution remote sensing images, the image processing method of this application can generate a difference image based on the first image data, the second image data, and the third image data included in the first image to be processed.
  • the first image to be processed is a color image
  • the first image data, the second image data, and the third image data included in the first image to be processed respectively correspond to the color information in different channels
  • binarization processing is performed on the generated difference image to obtain a binarized image, where the foreground area of the binarized image corresponds to the target object in the first image to be processed; then, according to the resulting image, the target object (such as the vegetation area) is extracted from the first image to be processed.
  • the target object extracted based on the difference image is more accurate, and the details in the high-resolution remote sensing image can be obtained more accurately, thereby improving the accuracy of high-resolution remote sensing image processing.
  • FIG. 1 is a schematic diagram of an architecture of the medical image processing system in an embodiment of the application.
  • the image processing system includes a server and terminal equipment.
  • the medical image processing device can be deployed on a server or on a terminal device with higher computing power.
  • the server obtains the medical image to be processed, and then the server generates a difference image according to the first image data, the second image data, and the third image data included in the medical image to be processed, Further binarization processing is performed on the difference image to obtain a binarized image, and the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • the server can perform medical image analysis based on the pathological tissue area.
  • the terminal device acquires the medical image to be processed, and then the terminal device generates a difference image according to the first image data, the second image data, and the third image data included in the medical image to be processed. Further, the difference image is binarized to obtain a binarized image, and the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • the terminal device can perform medical image analysis based on the pathological tissue area.
  • the server in FIG. 1 may be one server or a server cluster or cloud computing center composed of multiple servers, and the details are not limited here.
  • the terminal device can be a tablet computer, a notebook computer, a palmtop computer, a mobile phone, a personal computer (PC), or a voice interaction device shown in Figure 1, or it can be a monitoring device, a face recognition device, or the like, which is not limited here.
  • FIG. 1 Although only five terminal devices and one server are shown in FIG. 1, it should be understood that the example in FIG. 1 is only used to understand this solution, and the number of specific terminal devices and servers should be flexibly determined in combination with actual conditions.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Machine Learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other subjects. It specializes in the study of how computers simulate or realize human learning behaviors in order to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance.
  • Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent. Its applications are in all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and style teaching learning.
  • CV Computer Vision
  • Computer vision technology usually includes image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also includes common biometric recognition technologies such as facial recognition and fingerprint recognition.
  • OCR optical character recognition
  • Referring to FIG. 2, which is a schematic diagram of an embodiment of the medical image processing method in the embodiment of this application, as shown in the figure, an embodiment of the method for medical image processing in the embodiment of the present application includes:
  • acquiring a medical image to be processed, where the medical image to be processed is a color image and includes first image data, second image data, and third image data, and the first image data, the second image data, and the third image data respectively correspond to color information under different channels;
  • the medical image processing device may obtain a medical image to be processed, which is a color image;
  • the medical image to be processed may include the first image data, the second image data, and the third image data, which respectively correspond to color information in different channels.
  • the medical image to be processed may be a medical image received by the medical image processing device through a wired network, or may be a medical image stored by the medical image processing device itself.
  • the medical image to be processed may be an area captured from a WSI image, and the WSI image can be obtained by scanning a slide with a microscope, where the slide refers to a glass slide prepared with hematoxylin or another staining method.
  • the WSI image obtained after scanning the slide through the microscope is a color image.
  • the image color mode of a color image includes, but is not limited to, the red-green-blue (RGB) color mode, the luminance-bandwidth-chrominance (YUV) color mode, and the hue-saturation-value (HSV) color mode, and color information can be expressed as pixel values under different channels, such as the pixel value of the R channel, the pixel value of the G channel, and the pixel value of the B channel.
  • RGB red green blue
  • YUV luminance-bandwidth chrominance
  • HSV hue-saturation-luminance
  • WSI image formats include but are not limited to file formats such as SVS and NDPI.
  • the length and width of WSI images are usually in the range of tens of thousands of pixels, so the image size is relatively large and directly processing the WSI image requires a large amount of memory; therefore, it is necessary to cut the WSI image.
  • the image with the largest resolution in the WSI image file is read as the image to be processed.
  • this embodiment can capture the medical image to be processed on the reduced WSI image, and the WSI image can be reduced by any factor, such as 20 times or 10 times, so that the length and width of the reduced WSI image are within the range of several thousand pixels. It should be understood that since the reduction factor is defined manually, the specific reduction factor should be flexibly determined in light of the actual situation.
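  • A minimal sketch of this reduction step, assuming Python with Pillow and that the full-resolution slide has already been decoded into a PIL image (in practice a WSI library such as OpenSlide is typically used to read SVS/NDPI files); the reduction factor below is a free choice:

```python
from PIL import Image

REDUCTION = 20  # example factor; the text also mentions 10x as an option

def reduce_wsi(full_res: Image.Image) -> Image.Image:
    # Downscale so the side lengths fall into the range of a few thousand pixels.
    w, h = full_res.size
    return full_res.resize((max(1, w // REDUCTION), max(1, h // REDUCTION)))
```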
  • FIG. 3 is a schematic diagram of an embodiment of the medical image to be processed in the embodiment of the application.
  • the medical image to be processed includes the pathological tissue area, and no other gray-scale background or pure white background interferes with the medical image to be processed.
  • Take the case where the image color mode of the medical image to be processed is RGB as an example for description: the first image data, the second image data, and the third image data included in the medical image to be processed respectively correspond to different channels.
  • For example, the first image data can be the pixel value 200 corresponding to the R channel, the second image data can be the pixel value 100 corresponding to the G channel, and the third image data can be the pixel value 60 corresponding to the B channel.
  • As another example, the first image data can be the pixel value 100 corresponding to the R channel, the second image data can be the pixel value 80 corresponding to the G channel, and the third image data can be the pixel value 40 corresponding to the B channel.
  • HSV images or YUV images can be converted into RGB images before subsequent processing.
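  • A minimal sketch of that conversion, assuming OpenCV is available and the input is a numpy array in the stated mode (the function name and mode flag are illustrative assumptions):

```python
import cv2

def to_rgb(image, mode: str):
    # Convert HSV or YUV input to RGB before the difference-image step;
    # RGB input is passed through unchanged.
    if mode == "HSV":
        return cv2.cvtColor(image, cv2.COLOR_HSV2RGB)
    if mode == "YUV":
        return cv2.cvtColor(image, cv2.COLOR_YUV2RGB)
    return image
```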
  • the medical image processing device can be deployed on a server, or it can be deployed on a terminal device with higher computing power. In this embodiment, the medical image processing device is deployed on a server as an example for introduction.
  • the medical image processing device can generate a difference image based on the first image data, the second image data, and the third image data. Specifically, the difference image appears as a grayscale image.
  • FIG. 4 is a schematic diagram of an embodiment of the difference image in the embodiment of the application.
  • the difference image can be an image including the pathological tissue area as shown in the figure, since the color information corresponding to different channels is used to distinguish the pixel values. Taking the case where the image color mode of the medical image to be processed is RGB as an example: if a region of the medical image to be processed is gray, its R, G, and B values are similar; if a region is colored, the differences between its R, G, and B values are large, and the pathological tissue shows a large color difference.
  • the medical image processing apparatus may perform binarization processing on the difference image generated in step 102 to obtain a binarized image.
  • the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed.
  • FIG. 5 is a schematic diagram of an embodiment of the binarized image in the embodiment of the application.
  • the difference image shown in (A) of FIG. 5 is a grayscale image, so grayscale-image-based processing can be used.
  • adaptive binarization is used for foreground processing, that is, the difference image is binarized to obtain the binarized image shown in (B) of FIG. 5.
  • In the binarized image, white is the foreground area including the pathological tissue area, and black is the background area not including the pathological tissue area.
  • FIG. 6 is a schematic diagram of an embodiment of the foreground area in the embodiment of the application. As shown in the figure, in the binarized image shown in (A) of FIG. 6, white is the foreground area that includes the pathological tissue area and black is the background area that does not include the pathological tissue area. Therefore, according to the binarized image, the corresponding area of the medical image to be processed as shown in (B) of FIG. 6 can be generated.
  • a method for medical image processing is provided.
  • Because the color information of gray pixels differs little between channels while the color information of colored pixels differs considerably, the color information of different channels is used to generate the difference image before the image is binarized, thereby effectively using the color information in the image; the pathological tissue area extracted based on the difference image is therefore more accurate and has a positive impact on subsequent image analysis.
  • Optionally, generating the difference image according to the first image data, the second image data, and the third image data may include:
  • the medical image processing device may generate a maximum value image and a minimum value image according to the first image data, the second image data, and the third image data included in the medical image to be processed, and finally generate the difference image based on the maximum value image and the minimum value image.
  • Take the case where the image color mode of the medical image to be processed is RGB as an example for description. Since the first image data, the second image data, and the third image data included in the medical image to be processed respectively correspond to color information in different channels, the color information is expressed as the pixel values of the R channel, the G channel, and the B channel. The maximum value among the R, G, and B channels is determined, and the maximum value image is built from these maximum values; similarly, the minimum value among the R, G, and B channels can be determined, and the minimum value image is built from these minimum values. Each pixel in the maximum value image is then reduced by the pixel at the corresponding position in the minimum value image to obtain the difference image.
  • a method for generating a difference image is provided.
  • a maximum value image and a minimum value image are generated according to the first image data, the second image data, and the third image data.
  • Because the maximum value image and the minimum value image reflect the color information of different channels, the color information of the medical image to be processed is captured more accurately, thereby improving the accuracy of difference image generation.
  • Optionally, generating the maximum value image and the minimum value image according to the first image data, the second image data, and the third image data may include:
  • the maximum value image is obtained according to the maximum pixel value
  • the minimum value image is obtained according to the minimum pixel value.
  • the pixel value of the fourth pixel position in the maximum value image is the maximum pixel value
  • the pixel value of the fifth pixel position in the minimum value image is the minimum pixel value
  • the fourth pixel position and the fifth pixel position both correspond to the position of the same pixel in the medical image to be processed;
  • generating the difference image according to the maximum value image and the minimum value image, which can include:
  • the difference image is obtained according to the pixel difference value, where the pixel value at the sixth pixel position in the difference image is the pixel difference value, and the fourth pixel position, the fifth pixel position, and the sixth pixel position all correspond to the position of the same pixel in the medical image to be processed.
  • the medical image processing device can determine the maximum pixel value and the minimum pixel value corresponding to the target pixel according to the first image data, the second image data, and the third image data included in the medical image to be processed, and then generate the maximum value image and the minimum value image according to the determined maximum pixel value and minimum pixel value. Finally, the minimum pixel value corresponding to the target pixel in the minimum value image is subtracted from the maximum pixel value corresponding to the target pixel in the maximum value image to obtain the difference pixel value corresponding to the target pixel in the difference image.
  • the image color mode of the medical image to be processed is RGB as an example for description.
  • Since the medical image to be processed includes the first image data, the second image data, and the third image data, each pixel of the image has corresponding image data in the R channel, the G channel, and the B channel.
  • the image data of the pixel in the R channel is the first pixel value
  • the image data in the G channel is the second pixel value.
  • the image data on the B channel is the third pixel value. According to the first pixel value, the second pixel value and the third pixel value, the maximum pixel value and the minimum pixel value in the R channel, G channel and B channel can be determined.
  • the maximum pixel value and minimum pixel value of the pixel position (x, y) can be calculated by the following formula:
  • Imax(x, y) = Max[Ir(x, y), Ig(x, y), Ib(x, y)];
  • Imin(x, y) = Min[Ir(x, y), Ig(x, y), Ib(x, y)];
  • Imax(x,y) represents the maximum pixel value
  • Imin(x,y) represents the minimum pixel value
  • Ir(x,y) represents the first pixel value
  • Ig(x,y) represents the second pixel value
  • Ib( x, y) represents the third pixel value.
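  • The two formulas above amount to a channel-wise maximum and minimum; a minimal sketch, assuming Python with numpy (function and array names are illustrative):

```python
import numpy as np

def max_min_images(rgb: np.ndarray):
    # rgb: H x W x 3 array whose last axis holds Ir, Ig, Ib.
    i_max = rgb.max(axis=-1)  # Imax(x, y) = Max[Ir, Ig, Ib]
    i_min = rgb.min(axis=-1)  # Imin(x, y) = Min[Ir, Ig, Ib]
    return i_max, i_min
```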
  • the color information corresponding to different channels is used to distinguish the pixel values.
  • Take the case where the image color mode of the medical image to be processed is RGB as an example: if the medical image to be processed is gray, its R, G, and B values are relatively similar, while if the medical image to be processed is colored, the differences between its R, G, and B values are large; the pathological tissue is colored, and its color values are the pixel values required in this embodiment.
  • the foregoing formulas only take pixels of two-dimensional images as an example; in practical applications, they are also applicable to calculating the maximum pixel value and minimum pixel value of multi-dimensional images, such as three-dimensional (3D) images and four-dimensional (4D) images.
  • Take the case where the target pixel position in the image is (x1, y1) and the image color mode of the medical image to be processed is RGB as an example for description.
  • Suppose the first pixel value Ir(x1, y1) at the target pixel position (x1, y1) is 100, the second pixel value Ig(x1, y1) at the target pixel position (x1, y1) is 200, and the third pixel value Ib(x1, y1) at the target pixel position (x1, y1) is 150.
  • Then the maximum pixel value Imax(x1, y1) at the target pixel position (x1, y1) is the pixel value 200 corresponding to the second pixel value Ig(x1, y1), and the minimum pixel value Imin(x1, y1) at the target pixel position (x1, y1) is the pixel value 100 corresponding to the first pixel value Ir(x1, y1).
  • Take the case where the target pixel position is (x2, y2) and the image color mode of the medical image to be processed is RGB as another example.
  • Suppose the first pixel value Ir(x2, y2) at the target pixel position (x2, y2) is 30, the second pixel value Ig(x2, y2) is 80, and the third pixel value Ib(x2, y2) is 120.
  • Then the maximum pixel value Imax(x2, y2) at the target pixel position (x2, y2) is the pixel value 120 corresponding to the third pixel value Ib(x2, y2), and the minimum pixel value Imin(x2, y2) at the target pixel position (x2, y2) is the pixel value 30 corresponding to the first pixel value Ir(x2, y2).
  • Take the case where the image color mode of the medical image to be processed is RGB, the medical image to be processed is a 3D image, and the target pixel position is (x3, y3, z3) as another example for description.
  • Suppose the first pixel value Ir(x3, y3, z3) at the target pixel position (x3, y3, z3) is 200, the second pixel value Ig(x3, y3, z3) is 10, and the third pixel value Ib(x3, y3, z3) is 60.
  • Then the maximum pixel value Imax(x3, y3, z3) is the pixel value 200 corresponding to the first pixel value Ir(x3, y3, z3), and the minimum pixel value Imin(x3, y3, z3) at the target pixel position (x3, y3, z3) is the pixel value 10 corresponding to the second pixel value Ig(x3, y3, z3).
  • After the maximum pixel value and the minimum pixel value corresponding to the target pixel position are obtained, the minimum pixel value can be subtracted from the maximum pixel value to obtain the difference pixel value corresponding to the target pixel position in the difference image.
  • the difference pixel value can be calculated according to the maximum pixel value Imax (x, y) and the minimum pixel value Imin (x, y) through the following formula, and it is assumed that the medical image to be processed includes 10,000 pixels:
  • Idiff(x, y) = Imax(x, y) - Imin(x, y);
  • Imax (x, y) represents the maximum pixel value
  • Imin (x, y) represents the minimum pixel value
  • Idiff (x, y) represents the difference pixel value at the (x, y) position.
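  • Combining this formula with the channel-wise maximum and minimum above, the difference image can be sketched as follows (a minimal numpy sketch under the same assumptions; taking the last axis as the channel axis also covers the 3D case discussed below):

```python
import numpy as np

def difference_image(rgb: np.ndarray) -> np.ndarray:
    # rgb: color image with the channel axis last (H x W x 3, or a volume).
    i_max = rgb.max(axis=-1)  # Imax
    i_min = rgb.min(axis=-1)  # Imin
    return i_max - i_min      # Idiff = Imax - Imin (no underflow, since Imax >= Imin)
```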
  • Take the case where the target pixel position is (x1, y1) and the image color mode of the medical image to be processed is RGB as an example.
  • The maximum pixel value Imax(x1, y1) at the target pixel position (x1, y1) is 200 and the minimum pixel value Imin(x1, y1) is 100; subtracting the minimum pixel value Imin(x1, y1) from the maximum pixel value Imax(x1, y1) gives a difference pixel value of 100 for the target pixel position (x1, y1).
  • Similarly, if the maximum pixel value Imax(x2, y2) at the target pixel position (x2, y2) is 120 and the minimum pixel value Imin(x2, y2) is 30, subtracting the minimum pixel value Imin(x2, y2) from the maximum pixel value Imax(x2, y2) gives a difference pixel value of 90 for the target pixel position (x2, y2).
  • Take the case where the image color mode of the medical image to be processed is RGB, the medical image to be processed is a 3D image, and the target pixel position is (x3, y3, z3) as another example; the corresponding formulas are:
  • Imax(x, y, z) = Max[Ir(x, y, z), Ig(x, y, z), Ib(x, y, z)];
  • Imin(x, y, z) = Min[Ir(x, y, z), Ig(x, y, z), Ib(x, y, z)];
  • Idiff(x, y, z) = Imax(x, y, z) - Imin(x, y, z);
  • The maximum pixel value Imax(x3, y3, z3) at the target pixel position (x3, y3, z3) is 200 and the minimum pixel value Imin(x3, y3, z3) is 10; subtracting the minimum pixel value Imin(x3, y3, z3) from the maximum pixel value Imax(x3, y3, z3) gives a difference pixel value of 190.
  • When the difference pixel values of the medical image to be processed are small, the first pixel value, the second pixel value, and the third pixel value are relatively similar, which indicates that the medical image to be processed resembles a gray image; when the difference pixel values are large, the first pixel value, the second pixel value, and the third pixel value differ considerably, which indicates that the medical image to be processed resembles a color image. Since an image containing a pathological tissue area is usually a colored image, it can be preliminarily determined from the difference pixel values whether the medical image to be processed includes a pathological tissue area.
  • a method for generating a maximum value image and a minimum value image is provided.
  • The maximum pixel value and the minimum pixel value are determined from the pixel values of the target pixel in the first image data, the second image data, and the third image data; both values reflect the color information of the medical image to be processed to different degrees, and the difference pixel value is obtained by subtracting the minimum pixel value from the maximum pixel value, so the difference pixel value accurately reflects the color information of the medical image to be processed, thereby improving the accuracy of difference image generation.
  • Optionally, generating the difference image according to the first image data, the second image data, and the third image data may further include:
  • Gaussian blur processing is performed on the difference image to be processed to obtain the difference image.
  • the medical image processing device can generate the difference image to be processed according to the first image data, the second image data, and the third image data included in the medical image to be processed, and then perform Gaussian processing on the difference image to be processed. Blur processing to obtain a difference image.
  • blurring can be understood as replacing each pixel of the difference image to be processed with the average value of its surrounding pixels.
  • In this way the pixel values become smoother, which on the difference image to be processed is equivalent to a blur effect, and the pixels lose some of their detail.
  • the algorithm used for blur in this embodiment is Gaussian Blur.
  • Gaussian Blur processes the difference image to be processed using the normal distribution (Gaussian distribution), so that the weighted average between pixels is more reasonable: the closer a pixel is, the greater its weight, and the farther a pixel is, the smaller its weight.
  • Since the pixel point (x, y) is a two-dimensional pixel point, the two-dimensional Gaussian function can be calculated by the following formula:
  • G(x, y) = exp(-(x^2 + y^2) / (2 * σ^2)) / (2 * π * σ^2);
  • where (x, y) represents the pixel, G(x, y) represents the value of the two-dimensional Gaussian function at the pixel, and σ represents the standard deviation of the normal distribution.
  • the weight corresponding to (0,1) is 0.0566
  • the weight corresponding to pixel (1,1) is 0.0453
  • the weight corresponding to pixel (-1,0) is 0.0566
  • the weight corresponding to pixel (1,0) is 0.0566.
  • the pixel point (-1, -1) corresponds to a weight of 0.0453
  • the pixel point (0, -1) corresponds to a weight of 0.0566
  • the pixel point (1, -1) corresponds to a weight of 0.0453
  • the sum of the weights of the 9 points, namely the pixel (0, 0) and its 8 surrounding pixels, is approximately equal to 0.479.
  • The sum of the weights must equal 1, so the weight matrix is normalized: the 9 values of the weight matrix are divided by the total weight 0.479 to obtain the normalized weight matrix. After normalization, the weight of the pixel (0, 0) is 0.147, the weight of the pixel (-1, 1) is 0.0947, the weight of the pixel (0, 1) is 0.118, and the weight of the pixel (1, 1) is 0.0947.
  • The weight of the pixel (-1, 0) after normalization is 0.118, the weight of the pixel (1, 0) after normalization is 0.118, the weight of the pixel (-1, -1) after normalization is 0.0947, the weight of the pixel (0, -1) after normalization is 0.118, and the weight of the pixel (1, -1) after normalization is 0.0947. Since using a weight matrix whose weights sum to more than 1 would make the difference image brighter and using one whose weights sum to less than 1 would make it darker, the normalized weight matrix makes the pathological tissue area appear more accurately in the difference image.
  • Using the normalized weight matrix and the gray values of a pixel and its 8 neighbours, the Gaussian blur value of that pixel can be calculated.
  • For example, suppose the gray value corresponding to the pixel (0, 0) in the weight matrix is 25, the gray value corresponding to the pixel (-1, 1) is 14, the gray value corresponding to the pixel (0, 1) is 15, the gray value corresponding to the pixel (1, 1) is 16, the gray value corresponding to the pixel (-1, 0) is 24, the gray value corresponding to the pixel (1, 0) is 26, the gray value corresponding to the pixel (-1, -1) is 34, the gray value corresponding to the pixel (0, -1) is 35, and the gray value corresponding to the pixel (1, -1) is 36.
  • Multiplying the gray value corresponding to each pixel by the weight corresponding to that pixel gives 9 values: the pixel (0, 0) gives 3.69, the pixel (-1, 1) gives 1.32, the pixel (0, 1) gives 1.77, the pixel (1, 1) gives 1.51, the pixel (-1, 0) gives 2.83, the pixel (1, 0) gives 3.07, the pixel (-1, -1) gives 3.22, the pixel (0, -1) gives 4.14, and the pixel (1, -1) gives 3.41. Adding up these 9 values gives the Gaussian blur value of the pixel (0, 0).
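  • The worked example can be reproduced with the following sketch (a minimal numpy sketch; a standard deviation of 1.5 appears to yield the quoted weights 0.0566 and 0.0453, and the function names are illustrative):

```python
import numpy as np

def gaussian_kernel_3x3(sigma: float = 1.5) -> np.ndarray:
    # Evaluate G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
    # on the 3x3 neighbourhood, then normalize so the weights sum to 1.
    xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
    g = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    return g / g.sum()

def gaussian_blur_at(gray: np.ndarray, x: int, y: int, kernel: np.ndarray) -> float:
    # Weighted sum of an interior pixel and its 8 neighbours.
    patch = gray[y - 1:y + 2, x - 1:x + 2].astype(float)
    return float((patch * kernel).sum())
```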
  • another method for generating a difference image is provided.
  • Gaussian blur processing is performed on the generated difference image to be processed; since Gaussian blur processing improves robustness, the resulting difference image is more robust to further processing, which improves the stability of the difference image.
  • Optionally, performing binarization processing on the difference image to obtain the binarized image can include:
  • the medical image processing device can determine the binarization threshold according to the difference image; when the pixel value corresponding to a pixel in the difference image is greater than or equal to the binarization threshold, the pixel is determined to be a foreground pixel of the binarized image, and when the pixel value corresponding to a pixel in the difference image is less than the binarization threshold, the pixel is determined to be a background pixel of the binarized image.
  • Binarizing the difference image turns the grayscale image into a binarized image whose values are 0 or 1. That is, by setting a binarization threshold, the difference image is transformed into a binarized image in which the foreground and the background are represented by only two values (0 or 1), where the foreground value is 1 and the background value is 0; in practical applications, 0 corresponds to R, G, and B values of all 0 and 1 corresponds to R, G, and B values of all 255. The binarized image obtained after the binarization process is then processed further; because the geometric properties of the binarized image are related only to the positions of the 0 and 1 values and no longer involve the gray values of the pixels, processing the binarized image becomes simple and image processing efficiency can be improved.
  • the method of determining the binarization threshold can be divided into global threshold and local threshold.
  • A global threshold uses one threshold to divide the entire difference image. However, the gray depth differs between difference images, and within the same difference image the distribution of light and dark can also differ between parts; therefore, this embodiment uses a dynamic-threshold binarization method to determine the binarization threshold.
  • After the binarization threshold is determined based on the difference image, the pixel value corresponding to each pixel in the difference image is compared with the binarization threshold. When the pixel value corresponding to a pixel in the difference image is greater than or equal to the binarization threshold, the pixel is determined as a foreground pixel of the binarized image.
  • When the pixel value corresponding to a pixel in the difference image is less than the binarization threshold, the pixel is determined as a background pixel of the binarized image.
  • If the pixel value corresponding to pixel A is greater than or equal to the binarization threshold, pixel A is determined as a foreground pixel of the binarized image, that is, its pixel value is 1 and pixel A is in the foreground area.
  • If the image is in RGB mode, pixel A is displayed in white.
  • If the pixel value corresponding to pixel B is less than the binarization threshold, pixel B is determined as a background pixel of the binarized image, that is, its pixel value is 0 and pixel B is in the background area.
  • If the image is in RGB mode, pixel B is displayed in black.
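  • A minimal sketch of this thresholding step, assuming the binarization threshold has already been determined as described below and representing the foreground as 255 and the background as 0 (an assumption of this sketch):

```python
import numpy as np

def binarize(diff: np.ndarray, threshold: float) -> np.ndarray:
    # Pixels reaching the threshold become foreground (255, shown as white);
    # the remaining pixels become background (0, shown as black).
    return np.where(diff >= threshold, 255, 0).astype(np.uint8)
```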
  • a method for obtaining a binarized image is provided.
  • the binarized image is generated by the binarization process; because the geometric properties of the binarized image do not involve the gray values of the pixels, subsequent processing of the binarized image can be simplified and the efficiency of generating the result image can be improved.
  • determining the binarization threshold according to the difference image may include:
  • N pixel values corresponding to N pixels according to the difference image, where the pixel values and the pixel points have a one-to-one correspondence, and N is an integer greater than 1;
  • a reference pixel value is determined from the N pixel values, and the binarization threshold is calculated according to the reference pixel value and a preset ratio.
  • the medical image processing device can obtain N pixel values corresponding to N pixel points according to the difference image, and the pixel values have a one-to-one correspondence with the pixel points, and then determine the reference from the N pixel values Pixel value, the reference pixel value is the maximum value of N pixel values, and finally the binarization threshold can be calculated according to the reference pixel value and the preset ratio, where N is an integer greater than 1.
  • The binarization threshold in this embodiment is determined based on the difference image. Because the difference image is generated by subtracting the minimum value image from the maximum value image of the medical image to be processed, and the pixel values in the difference image correspond one-to-one to the pixels, the pixel values corresponding to multiple pixels in the difference image can be obtained; the maximum of these pixel values is then determined as the reference pixel value, and the binarization threshold is calculated according to the reference pixel value and the preset ratio.
  • this embodiment uses a preset ratio of 10% as an example for description.
  • Since the length and width of the reduced WSI image are within a range of several thousand pixels, suppose the reduced image includes 100*100 pixels; the maximum value among the pixel values corresponding to these 10,000 pixels must then be found. If the maximum value is, for example, 150, then 150 is determined as the reference pixel value, and multiplying the reference pixel value 150 by the preset ratio of 10% gives a binarization threshold of 15.
  • the preset ratio may also be a value corresponding to other percentages, and the specific preset ratio should be flexibly determined in combination with actual conditions.
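  • A sketch of this dynamic threshold under the assumptions above (a 10% preset ratio; with a maximum difference value of 150 this yields 15, as in the example):

```python
import numpy as np

def dynamic_threshold(diff: np.ndarray, ratio: float = 0.10) -> float:
    # Reference pixel value = maximum of the pixel values in the difference
    # image; binarization threshold = reference pixel value x preset ratio.
    return float(diff.max()) * ratio
```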
  • The binarization threshold can be flexibly determined by adjusting the preset ratio, which improves the accuracy and flexibility of the threshold and thereby improves the accuracy of binarized image generation.
  • In an optional embodiment, a flooding algorithm is used to detect the background area in the binarized image, where the background area includes multiple background pixels; the background pixels inside the foreground area of the binarized image are obtained according to the binarized image and its background area, where the foreground area includes multiple foreground pixels; those background pixels are changed to foreground pixels to obtain a hole-filled image; and median filtering is performed on the hole-filled image to obtain the result image.
  • In this embodiment, the medical image processing device may use a flooding algorithm to detect the background area in the binarized image (which may include multiple background pixels), then obtain the background pixels inside the foreground area of the binarized image according to the binarized image and its background area (the foreground area may include multiple foreground pixels), change those background pixels in the foreground area to foreground pixels to obtain a hole-filled image, and finally perform median filtering on the hole-filled image to obtain the result image.
  • the foreground area of the result image corresponds to the pathological tissue area of the medical image to be processed.
  • FIG. 7 is a schematic diagram of another embodiment of the result image in the embodiment of the application.
  • In the binarized image shown in FIG. 7(A), the white foreground area includes multiple background pixels.
  • The black dots framed in areas A1 to A5 are all composed of background pixels.
  • The black dots framed in areas A1 to A5 are changed from background pixels to foreground pixels.
  • The white dots framed in areas A6 and A7 are composed of foreground pixels and are changed from foreground pixels to background pixels, so that the hole-filled image shown in FIG. 7(B) can be obtained.
  • Then, median filtering is performed on the hole-filled image shown in FIG. 7(B), and morphological processing may be further performed, so that the result image shown in FIG. 7(C) can be obtained.
  • The purpose of the filtering is to suppress the noise of the medical image to be processed while preserving the detail features of the hole-filled image as much as possible.
  • The filtering can improve the effectiveness and reliability of subsequent processing and analysis of the result image; eliminating the noise components in the hole-filled image is the filtering operation.
  • The energy of the hole-filled image is mostly concentrated in the low and middle frequency bands of the amplitude spectrum, while in the higher frequency bands the image information is often affected by noise, so filtering removes the noise introduced when the image is digitized.
  • Median filtering is a typical nonlinear filter; it is a nonlinear signal-processing technique, based on order statistics, that can effectively suppress noise.
  • Median filtering can replace the gray value of a pixel with the median of the gray values in its neighborhood, so that the surrounding pixel values are close to the true value and isolated noise points are eliminated; a minimal sketch of the hole filling and median filtering follows.
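  • As a rough sketch of the hole filling and median filtering described above, background regions that do not touch the image border are treated as holes inside the foreground and flipped to foreground before the median filter is applied; the helper name and the use of SciPy are assumptions of this illustration, not the application's implementation:

```python
import numpy as np
from scipy import ndimage

def fill_holes_and_filter(binary, median_size=5):
    """Fill holes in the foreground of a binary mask, then median-filter it.

    Background pixels that cannot be reached from the image border (i.e. holes
    enclosed by foreground) are changed to foreground, and the median filter
    then removes isolated noise points while preserving edges.
    """
    background = binary == 0
    # Label connected background regions and keep only those touching the border
    # (this plays the role of the flood fill that detects the true background).
    labels, _ = ndimage.label(background)
    border = np.concatenate([labels[0, :], labels[-1, :], labels[:, 0], labels[:, -1]])
    border_labels = np.unique(border[border != 0])
    true_background = np.isin(labels, border_labels)

    hole_filled = binary.copy()
    hole_filled[background & ~true_background] = 1   # enclosed holes become foreground

    return ndimage.median_filter(hole_filled, size=median_size)
```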
  • a method for generating a result image is provided.
  • the background pixels in the foreground area are changed to foreground pixels, and the obtained hole-filled image has better reliability.
  • Moreover, the median filtering makes the result image corresponding to the medical image to be processed clear and visually good without damaging feature information such as the contours and edges of the image.
  • performing median filtering processing on the hole-filled image may include:
  • performing median filtering on the hole-filled image to obtain a filtered image, and obtaining the boundary line of the foreground area in the filtered image, where the boundary line includes M pixels and M is an integer greater than 1;
  • for each of the M pixels on the boundary line, extending K pixels outward to obtain the result image, where K is an integer greater than or equal to 1.
  • In this embodiment, the medical image processing device performs median filtering on the hole-filled image to obtain a filtered image, which may include the foreground area to be processed; the boundary line of the foreground area in the filtered image is obtained, the boundary line including M pixels, and then, for each of the M pixels on the boundary line, K pixels are extended outward to obtain the result image, where M is an integer greater than 1 and K is an integer greater than or equal to 1.
  • Specifically, median filtering can replace the gray value of a pixel with the median of its neighborhood gray values, so that the surrounding pixel values are close to the true value and isolated noise points are eliminated; by removing impulse noise and salt-and-pepper noise, a filtered image that preserves the edge details of the image is obtained.
  • the flooding algorithm (Flood Fill) is used to fill the connected and similar color areas with different colors.
  • the basic principle of the flooding algorithm is to start from a pixel point to extend the coloring to the surrounding pixels until the graphics boundary.
  • the flooding algorithm requires three parameters: start node, target color, and replacement color.
  • The flooding algorithm follows paths of the target color connected to the start node and changes all such nodes to the replacement color.
  • It should be understood that, in practical applications, the flooding algorithm can be implemented in many ways, most of which explicitly or implicitly use a queue or stack data structure, for example the four-neighbor flooding algorithm, the eight-neighbor flooding algorithm, the scanline fill algorithm, and large-scale behaviour approaches.
  • The traditional four-neighbor flooding algorithm colors a pixel (x, y) and then colors the four points above, below, to the left, and to the right of it.
  • The recursive implementation consumes more memory; if the area to be colored is very large, it will cause an overflow, so a non-recursive four-neighbor flooding algorithm can be used.
  • The eight-neighbor flooding algorithm colors the pixels above, below, to the left, to the right, and to the upper-left, lower-left, upper-right, and lower-right of a pixel.
  • The scanline fill algorithm can use fill lines to speed up the algorithm: the pixels on one line are colored first, and the fill then expands upward and downward in turn until coloring is complete. Large-scale behaviour approaches are data-centric or process-centric.
  • Because the boundary line of the hole-filled image is irregular, the scanline approach is used in this embodiment. Taking a boundary line of the foreground area to be processed that contains 1000 pixels as an example, morphological processing extends each of the 1000 pixels outward by K pixels; assuming K is 2, 2000 pixels are added to the original 1000 pixels as foreground area, and the result image is obtained. It should be understood that, in practical applications, the specific values of M and K should be determined flexibly in light of actual conditions; a stack-based sketch of the four-neighbor fill and the boundary extension follows.
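  • The following sketch shows a non-recursive four-neighbor flood fill driven by an explicit stack, together with a K-pixel outward extension of the foreground via morphological dilation; the function names are illustrative, and this is a simplified stand-in for the scanline variant the embodiment actually prefers:

```python
import numpy as np
from scipy import ndimage

def flood_fill_4(image, start, target_color, replacement_color):
    """Non-recursive four-neighbor flood fill using an explicit stack."""
    if target_color == replacement_color:
        return image
    h, w = image.shape[:2]
    stack = [start]
    while stack:
        x, y = stack.pop()
        if 0 <= x < h and 0 <= y < w and image[x, y] == target_color:
            image[x, y] = replacement_color
            # push the four neighbors: up, down, left, right
            stack.extend([(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)])
    return image

def extend_boundary(binary, k=2):
    """Extend the foreground outward by k pixels (morphological dilation)."""
    return ndimage.binary_dilation(binary, iterations=k).astype(binary.dtype)
```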
  • acquiring the medical image to be processed may include:
  • acquiring an original medical image and extracting a medical sub-image from the original medical image using a sliding window; if it is detected that the medical sub-image includes a pathological tissue area, determining it as a medical image to be processed;
  • if it is detected that the medical sub-image does not include a pathological tissue area, determining the medical sub-image as a background image and removing the background image.
  • In this embodiment, the medical image processing device may first obtain the original medical image and then use a sliding window to extract medical sub-images from the original medical image.
  • When it is detected that a medical sub-image includes a pathological tissue area, it is determined as a medical image to be processed;
  • when it is detected that a medical sub-image does not include a pathological tissue area, it is determined as a background image, and the background image is removed.
  • the original medical image may be an image received by the medical image processing device through a wired network, or may also be an image stored by the medical image processing device itself.
  • FIG. 8 is a schematic diagram of an embodiment of obtaining medical images to be processed in an embodiment of the application.
  • FIG. 8(A) shows the original medical image, and a sliding window is used to extract medical sub-images from it, where the areas framed by B1 to B3 are the medical sub-images extracted from the original medical image, so that B1 corresponds to the medical sub-image shown in FIG. 8(B), B2 corresponds to the medical sub-image shown in FIG. 8(C), and B3 corresponds to the medical sub-image shown in FIG. 8(D).
  • The medical sub-images shown in FIG. 8(B) and (C) include pathological tissue areas, so they can be determined as medical images to be processed, whereas the medical sub-image shown in FIG. 8(D) does not include a pathological tissue area, so it can be determined as a background image, and the background image is removed.
  • a method for obtaining medical images to be processed is provided.
  • By detecting whether a medical sub-image includes a pathological tissue area, the medical images to be processed are determined, so that each medical image to be processed includes a pathological tissue area.
  • Through the foregoing steps, the result image corresponding to the medical image to be processed can be obtained, and the result image includes the pathological tissue area, which is convenient for subsequent processing and analysis of the pathological tissue area in the result image.
  • In addition, a medical sub-image that does not include a pathological tissue area is determined as a background image, and the background image is removed to reduce resource occupancy; a sliding-window sketch is given below.
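  • A minimal sliding-window sketch is given below; the window size, stride, and tissue-ratio test are illustrative assumptions (the application itself only states that sub-images containing a pathological tissue area are kept and the rest are removed as background):

```python
import numpy as np

def extract_sub_images(original, window=512, stride=512, tissue_threshold=0.05):
    """Slide a window over the original image and keep patches that contain tissue.

    A patch is kept as a "medical image to be processed" when the fraction of
    pixels with a large max-min channel difference exceeds `tissue_threshold`;
    otherwise it is treated as background and discarded.
    """
    kept = []
    h, w = original.shape[:2]
    for top in range(0, h - window + 1, stride):
        for left in range(0, w - window + 1, stride):
            patch = original[top:top + window, left:left + window]
            diff = patch.max(axis=2).astype(np.int16) - patch.min(axis=2)
            tissue_ratio = (diff > 15).mean()   # 15: example binarization threshold
            if tissue_ratio > tissue_threshold:
                kept.append(((top, left), patch))
    return kept
```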
  • the medical image processing method can also include:
  • generating a target positive sample image according to the result image, where the target positive sample image belongs to a positive sample image in the positive sample set, and each positive sample image contains a pathological tissue area;
  • obtaining a negative sample set, where the negative sample set includes at least one negative sample image, and each negative sample image does not include a pathological tissue area;
  • training the image processing model based on the positive sample set and the negative sample set.
  • In this embodiment, after the result image is generated from the binarized image, the medical image processing device may also generate a target positive sample image based on the result image.
  • The target positive sample image belongs to a positive sample image in the positive sample set, and each positive sample image contains a pathological tissue area.
  • At the same time, a negative sample set can also be obtained.
  • The negative sample set includes at least one negative sample image, and each negative sample image does not contain a pathological tissue area.
  • Finally, the image processing model can be trained based on the acquired positive sample set and negative sample set; the trained image processing model can then produce the corresponding pathological tissue area from a color medical image.
  • In this embodiment of the application, a method for training an image processing model is provided.
  • The image processing model is trained with a positive sample set whose images include pathological tissue areas and a negative sample set whose images do not, which improves the accuracy and reliability of the image processing model and thereby the efficiency and accuracy of image processing; a sketch of assembling such sample sets is given below.
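  • The sketch below illustrates one possible way to assemble such sample sets from the binary result image; the patch size, the 50% foreground test, and the function name are assumptions of this illustration rather than details given in the application:

```python
import numpy as np

def build_sample_sets(result_mask, patch_size=256, samples_per_class=100,
                      max_tries=10000, rng=None):
    """Pick patch positions for positive (tissue) and negative (no tissue) samples.

    Positive positions are patches that are mostly foreground in the result
    mask; negative positions contain no foreground at all. The positions would
    then be used to crop the corresponding color patches for model training.
    """
    rng = rng or np.random.default_rng(0)
    h, w = result_mask.shape
    positives, negatives = [], []
    for _ in range(max_tries):
        if len(positives) >= samples_per_class and len(negatives) >= samples_per_class:
            break
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        patch = result_mask[top:top + patch_size, left:left + patch_size]
        if patch.mean() > 0.5 and len(positives) < samples_per_class:
            positives.append((top, left))      # mostly tissue -> positive sample
        elif patch.max() == 0 and len(negatives) < samples_per_class:
            negatives.append((top, left))      # no tissue -> negative sample
    return positives, negatives
```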
  • Specifically, the embodiments of this application can improve the accuracy of the extracted pathological tissue area and have a positive influence on subsequent image analysis; for ease of understanding, refer to FIG. 9, which is a schematic flowchart of the medical image processing method in the embodiments of this application, specifically:
  • In step S1, an original medical image is acquired;
  • In step S2, a medical image to be processed is acquired based on the original medical image;
  • In step S3, a difference image is generated according to the medical image to be processed;
  • In step S4, binarization is performed on the difference image to obtain a binarized image;
  • In step S5, a hole-filled image is obtained based on the binarized image;
  • In step S6, median filtering is performed on the hole-filled image to obtain the result image.
  • In step S1, the original medical image shown in FIG. 9(A) can be obtained; then, in step S2, a sliding window is used to extract medical sub-images from the original medical image shown in FIG. 9(A).
  • When it is detected that a medical sub-image includes a pathological tissue area, it is determined as the medical image to be processed, so that the medical image to be processed shown in FIG. 9(B) is obtained.
  • Further, in step S3, according to the first image data, the second image data, and the third image data included in the medical image to be processed, the maximum pixel value and the minimum pixel value corresponding to each target pixel are determined from the first pixel value, the second pixel value, and the third pixel value, thereby generating the maximum value image and the minimum value image; the difference image shown in FIG. 9(C) is then obtained from the maximum value image and the minimum value image. Furthermore, in step S4, N pixel values corresponding to N pixels can be obtained from the difference image shown in FIG. 9(C), the pixel values having a one-to-one correspondence with the pixels; the maximum of the N pixel values is determined as the reference pixel value, the binarization threshold is calculated from the reference pixel value and the preset ratio, and pixels whose values are greater than or equal to the threshold become foreground pixels while the rest become background pixels, so that the binarized image shown in FIG. 9(D) can be obtained.
  • In step S5, the flooding algorithm is used to detect the background area (comprising multiple background pixels) in the binarized image; then, according to the binarized image and its background area, the background pixels inside the foreground area of the binarized image are obtained and changed to foreground pixels, so that the hole-filled image shown in FIG. 9(E) can be obtained.
  • In step S6, median filtering is performed on the hole-filled image to obtain a filtered image including the foreground area to be processed.
  • The boundary line of the foreground area to be processed, containing M pixels, is obtained, and for each of the M pixels on the boundary line, K pixels are extended outward, yielding the result image shown in FIG. 9(F), where M is an integer greater than 1 and K is an integer greater than or equal to 1; a sketch of the channel-difference computation in step S3 is given below.
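  • A minimal sketch of the channel-difference computation in step S3, with the optional Gaussian blur mentioned in the embodiments, is given below; the function name and the use of SciPy are assumptions of this illustration:

```python
import numpy as np
from scipy import ndimage

def difference_image(rgb, sigma=1.5):
    """Compute the max-min channel difference image from an RGB array.

    For each pixel, the maximum and minimum of its R, G and B values form the
    maximum value image Imax(x, y) and minimum value image Imin(x, y); their
    difference Idiff(x, y) is the grayscale difference image, optionally
    smoothed with a Gaussian blur. Gray pixels give small differences, colored
    (tissue) pixels give large ones.
    """
    rgb = rgb.astype(np.float32)
    i_max = rgb.max(axis=2)      # maximum value image
    i_min = rgb.min(axis=2)      # minimum value image
    diff = i_max - i_min         # difference image
    return ndimage.gaussian_filter(diff, sigma=sigma)
```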
  • For ease of understanding, refer to FIG. 10, which is a schematic diagram of an embodiment of the result image in the embodiments of this application. As shown in FIG. 10(A), there is a medical image to be processed containing pure white and gray regions; with the medical image processing method provided in the embodiments of this application, the result image shown in FIG. 10(B) can be obtained.
  • FIG. 10(C) shows a medical image to be processed with regular vertical streaks.
  • The regular vertical streaks are produced when the scanner scans the glass slide, and their appearance depends on the scanning equipment.
  • With the medical image processing method provided in the embodiments of this application, the result image shown in FIG. 10(D) can be obtained.
  • FIG. 10(E) shows a medical image to be processed with black and white stripes; the black and white stripes may be generated by format conversion, or may be an unclear area produced when the scanner scans the glass slide, in which black and white stripes appear.
  • With the medical image processing method provided in the embodiments of this application, the result image shown in FIG. 10(F) can be obtained.
  • It can be seen that, before the image is binarized, the color information of the different channels is used to generate the difference image; because the color information of gray pixels differs little across channels while that of color pixels differs greatly, the color information in the various medical images to be processed in FIG. 10 can be used effectively.
  • The pathological tissue area extracted based on the difference image is therefore more accurate and has a positive influence on subsequent image analysis.
  • FIG. 11 is a schematic diagram of an embodiment of the image processing method in the embodiment of this application.
  • An embodiment of the processing method includes:
  • Acquire a first image to be processed and a second image to be processed, where the first image to be processed is a color image and includes first image data, second image data, and third image data, and the first image data, the second image data, and the third image data respectively correspond to color information in different channels;
  • the image processing apparatus may obtain a first image to be processed and a second image to be processed.
  • The first image to be processed may include first image data, second image data, and third image data, and the first image data, the second image data, and the third image data respectively correspond to color information in different channels.
  • the first to-be-processed image and the second to-be-processed image may be images received by the image processing apparatus via a wired network, or may also be images stored by the image processing apparatus itself.
  • the first to-be-processed image is similar to the to-be-processed medical image described in the foregoing step 101, and will not be repeated here.
  • the color information specifically corresponding to the first image data, the second image data, and the third image data should be flexibly determined in combination with actual conditions.
  • the image processing device can be deployed on a server, or it can be deployed on a terminal device with higher computing power. In this embodiment, the deployment of the image processing device on a server is taken as an example for introduction.
  • For example, suppose the first image to be processed is a photo taken on a cloudy day: the background of the photo is the cloudy sky, and the photo also includes a red car.
  • the second image to be processed is a picture of the blue sky and the sea.
  • the image processing apparatus may generate a difference image according to the first image data, the second image data, and the third image data included in the first image to be processed obtained in step 201.
  • the difference image is a grayscale image.
  • the method for generating a difference image in this embodiment is similar to the corresponding embodiment in FIG. 2 described above, and will not be repeated here.
  • The outline of the car can be seen in the difference image generated at this point.
  • the image processing device may perform binarization processing on the difference image generated in step 202 to obtain a binarized image.
  • an adaptive binarization method is used to perform foreground processing, that is, to perform binarization processing on the difference image, so as to obtain a binarized image.
  • the method for generating the binarized image in this embodiment is similar to the corresponding embodiment in FIG. 2, and will not be repeated here.
  • the binarized image generated at this time can accurately show the outline of the car.
  • the image processing apparatus may extract the target object from the first image to be processed according to the foreground region generated in step 203.
  • the target object may be a pathological tissue area.
  • the target object may be a vegetation area.
  • the target object may be a bicycle or a car.
  • the image of the car can be cut out from the first image to be processed, that is, the image of the car is the target object.
  • the image processing device will set the target object as the first layer, the second image to be processed as the second layer, and overlay the first layer on the second layer, thereby generating a composite image .
  • For example, the image of the car is overlaid on the blue sky photo to form the composite image; a compositing sketch is given below.
  • In the composite image, the background of the car is no longer a cloudy sky, but blue sky and white clouds.
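  • A compositing sketch is shown below; it assumes the two images have the same size and that the binarized foreground mask marks the target object (the car), which are simplifications of this illustration rather than requirements stated in the application:

```python
import numpy as np

def composite(first_image, second_image, foreground_mask):
    """Overlay the target object cut from the first image onto the second image.

    Wherever the mask is set, the first layer (the target object) covers the
    second layer (the second image to be processed); elsewhere the second image
    shows through, so the car ends up in front of the blue-sky background.
    """
    mask = foreground_mask.astype(bool)[..., None]   # broadcast over the color channels
    return np.where(mask, first_image, second_image)
```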
  • an image processing method is provided.
  • the color information difference of grayscale pixels under different channels is small, while the color information of color pixels under different channels is relatively different. Therefore, before the image is binarized, the color information of different channels is used to generate the difference image, which effectively uses the color information in the image, and the target object extracted based on the difference image is more accurate.
  • The layer containing the target object covers the layer containing the second image to be processed, and the generated composite image captures the target object accurately, which improves the accuracy of the composite image and can have a positive impact on subsequent image analysis.
  • FIG. 12 is a schematic diagram of an embodiment of the medical image processing device in an embodiment of the application.
  • the medical image processing device 300 includes:
  • the acquiring module 301 is used to acquire a medical image to be processed, where the medical image to be processed is a color image, and the medical image to be processed includes first image data, second image data, and third image data, and the first image data and the second image data The second image data and the third image data respectively correspond to color information in different channels;
  • the generating module 302 is configured to generate a difference image according to the first image data, the second image data, and the third image data;
  • the processing module 303 is configured to perform binarization processing on the difference image to obtain a binarized image, where the foreground area of the binarized image corresponds to the pathological tissue area of the medical image to be processed;
  • a method for medical image processing is provided.
  • Because the color information of gray pixels differs little across channels while the color information of color pixels differs greatly across channels, the color information of the different channels is used to generate the difference image before the image is binarized, thereby effectively using the color information in the image; the pathological tissue area extracted based on the difference image is more accurate and has a positive impact on subsequent image analysis.
  • the generating module 302 is specifically configured to generate a maximum value image and a minimum value image according to the first image data, the second image data, and the third image data included in the medical image to be processed;
  • a method for generating a difference image is provided.
  • a maximum value image and a minimum value image are generated according to the first image data, the second image data, and the third image data.
  • Because different image data correspond to different color information, the maximum value image and the minimum value image determined from the different image data represent the color information of the medical image to be processed more accurately, thereby improving the accuracy of difference image generation.
  • the generating module 302 is specifically configured to determine the maximum pixel value and the minimum pixel value according to the first pixel value at the first pixel position in the first image data, the second pixel value at the second pixel position in the second image data, and the third pixel value at the third pixel position in the third image data;
  • obtain the maximum value image according to the maximum pixel value and the minimum value image according to the minimum pixel value, where the pixel value at the fourth pixel position in the maximum value image is the maximum pixel value and the pixel value at the fifth pixel position in the minimum value image is the minimum pixel value,
  • the first pixel position, the second pixel position, the third pixel position, the fourth pixel position, and the fifth pixel position all correspond to the position of the same pixel in the medical image to be processed;
  • the generating module 302 is specifically configured to determine the pixel difference value according to the pixel value of the fourth pixel position in the maximum value image and the pixel value of the fifth pixel position in the minimum value image;
  • the difference image is obtained according to the pixel difference value.
  • The pixel value at the sixth pixel position in the difference image is the pixel difference value.
  • The fourth pixel position, the fifth pixel position, and the sixth pixel position all correspond to the position of the same pixel in the medical image to be processed.
  • a method for generating a maximum value image is provided.
  • the maximum pixel value and the minimum pixel value are determined by the pixel values of the first image data, the second image data and the third image data corresponding to the target pixel.
  • The maximum pixel value and the minimum pixel value reflect the color information of the medical image to be processed to varying degrees, and the difference pixel value is obtained by subtracting the minimum pixel value from the maximum pixel value, so the difference pixel value can accurately reflect the color information of the medical image to be processed, improving the accuracy of difference image generation.
  • the generating module 302 is specifically configured to generate the difference image to be processed according to the first image data, the second image data, and the third image data;
  • Gaussian blur processing is performed on the difference image to be processed to obtain the difference image.
  • another method of generating a difference image is provided.
  • Gaussian blur is applied to the generated difference image to be processed; because Gaussian blur improves robustness, the resulting difference image is more robust, which improves the stability of the difference image.
  • the medical image processing apparatus 300 further includes a determining module 304;
  • the determining module 304 is configured to determine the binarization threshold according to the difference image
  • the determining module 304 is further configured to perform binarization processing on the difference image according to the binarization threshold to obtain a binarized image.
  • a method for obtaining a binarized image is provided.
  • The binarized image is generated by the binarization process; because the geometric properties of the binarized image do not involve the gray values of the pixels, subsequent processing of the binarized image is simplified, so the efficiency of generating the foreground area can be improved.
  • the determining module 304 is specifically configured to obtain N pixel values corresponding to N pixels according to the difference image, where the pixel values and the pixel points have a one-to-one correspondence, and N is an integer greater than 1;
  • determine a reference pixel value from the N pixel values, where the reference pixel value is the maximum of the N pixel values, and determine the binarization threshold according to the reference pixel value and a preset ratio.
  • Because the gray depth of the difference image varies and the light-dark distribution can differ between regions, the binarization threshold can be determined flexibly by adjusting the preset ratio, improving the accuracy and flexibility of the threshold and thereby the accuracy of binarized image generation.
  • the generating module 302 is specifically configured to use a flooding algorithm to detect a background area in a binarized image, where the background area includes a plurality of background pixels;
  • obtain the background pixels inside the foreground area of the binarized image according to the binarized image and its background area, where the foreground area includes multiple foreground pixels; change those background pixels to foreground pixels to obtain a hole-filled image; and perform median filtering on the hole-filled image to obtain a result image, where the foreground area of the result image corresponds to the pathological tissue area of the medical image to be processed.
  • a method for generating a result image is provided.
  • the background pixels in the foreground area are changed to foreground pixels, and the obtained hole-filled image has better reliability.
  • Moreover, the median filtering makes the result image corresponding to the medical image to be processed clear and visually good without damaging feature information such as the contours and edges of the image.
  • the processing module 303 is specifically configured to perform median filtering processing on the hole-filled image to obtain a filtered image
  • obtain the boundary line of the foreground area in the filtered image, where the boundary line includes M pixels and M is an integer greater than 1;
  • and, for each of the M pixels on the boundary line, extend K pixels outward to obtain the result image, where K is an integer greater than or equal to 1.
  • the obtaining module 301 is specifically configured to obtain an original medical image and extract a medical sub-image from the original medical image using a sliding window;
  • if it is detected that the medical sub-image includes a pathological tissue area, determine it as a medical image to be processed;
  • if it is detected that the medical sub-image does not include a pathological tissue area, determine the medical sub-image as a background image and remove the background image.
  • a method for obtaining medical images to be processed is provided.
  • By detecting whether a medical sub-image includes a pathological tissue area, the medical images to be processed are determined, so that each medical image to be processed includes a pathological tissue area.
  • Through the foregoing steps, the result image corresponding to the medical image to be processed can be obtained, and the result image includes the pathological tissue area, which is convenient for subsequent processing and analysis of the pathological tissue area in the result image.
  • In addition, a medical sub-image that does not include a pathological tissue area is determined as a background image, and the background image is removed to reduce resource occupancy.
  • the image processing apparatus further includes a training module 305;
  • the generating module 302 is further configured to generate a target positive sample image according to the image to be processed and the foreground area of the image to be processed, wherein the target positive sample image belongs to a positive sample image in the positive sample set, and each positive sample image contains pathological tissue area;
  • the obtaining module 301 is further configured to obtain a negative sample set, where the negative sample set includes at least one negative sample image, and each negative sample image does not include a pathological tissue area;
  • the training module 305 is used to train the image processing model based on the positive sample set and the negative sample set.
  • a method for training an image processing model is provided.
  • The image processing model is trained with a positive sample set whose images include pathological tissue areas and a negative sample set whose images do not, which improves the accuracy and reliability of the image processing model and thereby the efficiency and accuracy of image processing.
  • FIG. 13 is a schematic diagram of an embodiment of the image processing device in an embodiment of the application.
  • the image processing device 400 includes:
  • the acquiring module 401 is used to acquire a first image to be processed and a second image to be processed, wherein the first image to be processed is a color image, and the first image to be processed includes first image data, second image data, and third image Data, and the first image data, the second image data, and the third image data respectively correspond to color information in different channels;
  • the generating module 402 is configured to generate a difference image according to the first image data, the second image data, and the third image data;
  • the processing module 403 is configured to perform binarization processing on the difference image to obtain a binarized image, where the foreground area of the binarized image corresponds to the target object of the first image to be processed;
  • the extraction module 404 is configured to extract the target object from the first image to be processed according to the foreground area of the binarized image
  • the generating module 402 is further configured to generate a composite image according to the target object and the second image to be processed, where the target object is located in the first layer, the second image to be processed is located in the second layer, and the first layer covers the second layer. Above the layer.
  • an image processing method is provided.
  • the color information difference of grayscale pixels under different channels is small, while the color information of color pixels under different channels is relatively different. Therefore, before the image is binarized, the color information of different channels is used to generate the difference image, which effectively uses the color information in the image, and the target object extracted based on the difference image is more accurate.
  • The layer containing the target object covers the layer containing the second image to be processed, and the generated composite image captures the target object accurately, which improves the accuracy of the composite image and can have a positive impact on subsequent image analysis.
  • The server 500 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 522 (for example, one or more processors), memory 532, and one or more storage media 530 (for example, one or more mass storage devices) storing application programs 542 or data 544.
  • the memory 532 and the storage medium 530 may be short-term storage or persistent storage.
  • the program stored in the storage medium 530 may include one or more modules (not shown in the figure), and each module may include a series of command operations on the server.
  • the central processing unit 522 may be configured to communicate with the storage medium 530 and execute a series of instruction operations in the storage medium 530 on the server 500.
  • The server 500 may also include one or more power supplies 525, one or more wired or wireless network interfaces 550, one or more input/output interfaces 558, and/or one or more operating systems 541, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
  • the steps performed by the server in the above embodiment may be based on the server structure shown in FIG. 14.
  • In this embodiment, the CPU 522 is configured to execute the steps performed by the medical image processing apparatus in the embodiment corresponding to FIG. 2, and the CPU 522 is also configured to execute the steps performed by the image processing apparatus in the embodiment corresponding to FIG. 11.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative, for example, the division of the units is only a logical function division, and there may be other divisions in actual implementation, for example, multiple units or components may be combined or It can be integrated into another system, or some features can be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer readable storage medium.
  • the technical solution of the present application essentially or the part that contributes to the existing technology or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium , Including several instructions to make a computer device (which may be a personal computer, a server, or a network device, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.


Abstract

This application discloses a medical image processing method, an image processing method, and apparatuses, which can be used in the field of artificial intelligence. The method of this application includes: acquiring a medical image to be processed; generating a difference image according to the first image data, the second image data, and the third image data included in the medical image to be processed; and performing binarization on the difference image to obtain a binarized image, the foreground area of which corresponds to the pathological tissue area of the medical image to be processed. Because the color information of the different channels is used to generate the difference image before the image is binarized, the color information in the image is used effectively, the pathological tissue area extracted based on the difference image is more accurate, and subsequent image analysis is positively influenced.

Description

一种医学图像处理的方法、图像处理的方法及装置
本申请要求于2020年2月10日提交中国专利局、申请号为202010084678.8、申请名称为“一种医学图像处理的方法、图像处理的方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能领域,具体涉及图像处理技术。
背景技术
随着医疗技术的发展,基于全视野数字切片(Whole Slide Image,WSI)图像的识别和分析在医疗方面起到了重要的作用。由于WSI图像的边长通常有几万像素,因此,需要将这类图像缩放或者切割为小尺寸图像进行分析,在这个过程中,WSI图像的大部分背景区域需要去除,而得出具有病理组织切片的区域进行后续图像分析。
目前,在WSI图像上提取病理组织区域的方式主要为,先将WSI图像缩小到一定尺度后再转化为灰度图像,然后在灰度图像上进行图像的进一步处理,例如图像二值化处理,空洞去除处理等,最后在处理后的图像上提取病理组织区域。
然而,上述方式在尺度变换之后,直接将彩色图像转换为灰度图像会丢失色彩信息,而色彩信息也是一个重要的图像特征,因此,会导致提取到的病理组织区域不够准确,从而容易对后续的图像分析产生偏差。
发明内容
本申请提供了一种医学图像处理的方法、图像处理的方法及装置,用于对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,从而有效地利用了图像中的色彩信息,基于差值图像提取到的病理组织区域更为准确,且对后续的图像分析产生积极影响。
有鉴于此,本申请第一方面提供一种医学图像处理的方法,由服务器执行,包括:
获取待处理医学图像,其中,待处理医学图像为彩色图像,且待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,且第一图像数据、第二图像数据以及第三图像数据分别对应于不同属性下的色彩信息;
根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。本申请第二方面提供一种图像处理的方法,由服务器执行,包括:
获取第一待处理图像以及第二待处理图像,其中,第一待处理图像为彩色图像,且第一待处理图像包括第一图像数据、第二图像数据以及第三图像 数据,且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息;
根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与待处理医学图像的病理组织区域对应;根据二值化图像的前景区域,从第一待处理图像中提取病理组织区域;
根据病理组织区域以及第二待处理图像,生成合成图像,其中,病理组织区域位于第一图层,第二待处理图像位于第二图层,第一图层覆盖于第二图层之上。
本申请第三方面提供一种医学图像处理装置,包括:
获取模块,用于获取待处理医学图像,待处理医学图像为彩色图像,且待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息;
生成模块,用于根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
处理模块,用于对差值图像进行二值化处理,得到二值化图像,所述二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。在一种可能的设计中,在本申请实施例的第三方面的一种实现方式中,
生成模块,具体用于根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成最大值图像和最小值图像;
根据最大值图像以及最小值图像,生成差值图像。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,
生成模块,具体用于根据第一图像数据中第一像素位置的第一像素值、第二图像数据中第二像素位置的第二像素值以及第三图像数据中第三像素位置的第三像素值,确定最大像素值和最小像素值;
根据最大像素值获得最大值图像,根据最小像素值获得最小值图像,最大值图像中第四像素位置的像素值为最大像素值,最小值图像中第五像素位置的像素值为最小像素值,第一像素位置、第二像素位置、第三像素位置以及第四像素位置、第五像素位置均对应于待处理医学图像中同一个像素点的位置;
生成模块,具体用于根据最大值图像中第四像素位置的像素值和最小值图像中第五像素位置的像素值确定像素差值;
根据像素差值,获得差值图像,差值图像中第六像素位置的像素值为像素差值,第四像素位置、第五像素位置和第六像素位置均对应于待处理医学图像中同一个像素点的位置。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,
生成模块,具体用于根据第一图像数据、第二图像数据以及第三图像数据,生成待处理差值图像;
对待处理差值图像进行高斯模糊处理,得到差值图像。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,医学图像处理装置还包括确定模块;
确定模块,用于根据差值图像确定二值化阈值;
确定模块,还用于根据所述二值化阈值对所述差值图像进行二值化处理,得到二值化图像。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,
确定模块,具体用于根据差值图像获取N个像素点所对应的N个像素值,其中,像素值与像素点具有一一对应的关系,N为大于1的整数;
从N个像素值中确定参考像素值,其中,参考像素值为N个像素值中的最大值;
根据参考像素值以及预设比例,确定二值化阈值。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,
生成模块,具体用于采用泛洪算法检测二值化图像中的背景区域,其中,背景区域包括多个背景像素点;
根据二值化图像以及二值化图像中的背景区域,获取二值化图像中的前景区域内的背景像素点,其中,前景区域包括多个前景像素点;
将二值化图像中的前景区域内的背景像素点变更为前景像素点,得到空洞填补图像;
对空洞填补图像进行中值滤波处理,得到结果图像,所述结果图像的前景区域与所述待处理医学图像的病理组织区域对应。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,
处理模块,具体用于对空洞填补图像进行中值滤波处理,得到滤波图像,;
获取滤波图像中前景区域的边界线,其中,边界线包括M个像素点,M为大于1的整数;
针对边界线上M个像素点中的每个像素点,向外延伸K个像素点,得到结果图像,其中,K为大于或等于1的整数。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,
获取模块,具体用于获取原始医学图像;
采用滑动窗口从原始医学图像中提取医学子图像;
若检测到医学子图像中包括病理组织区域,则确定为待处理医学图像;
若检测到医学子图像中未包括病理组织区域,则将医学子图像确定为背景图像,且去除背景图像。
在一种可能的设计中,在本申请实施例的第三方面的另一实现方式中,图像处理装置还包括训练模块;
生成模块,还用于根据待处理图像以及待处理图像的前景区域生成目标正样本图像,其中,目标正样本图像属于正样本集合中的一个正样本图像,且每个正样本图像包含病理组织区域;
获取模块,还用于获取负样本集合,其中,负样本集合包括至少一个负样本图像,且每个负样本图像不包含病理组织区域;
训练模块,用于基于正样本集合以及负样本集合,对图像处理模型进行训练。
本申请第四方面提供一种图像处理装置,包括:
获取模块,用于获取第一待处理图像以及第二待处理图像,其中,第一待处理图像为彩色图像,且第一待处理图像包括第一图像数据、第二图像数据以及第三图像数据,且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息;
生成模块,用于根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
处理模块,用于对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与待处理医学图像的目标对象对应;第一待处理图像所对应的前景
提取模块,用于根据二值化图像的前景区域,从第一待处理图像中提取目标对象;
生成模块,还用于根据目标对象以及第二待处理图像,生成合成图像,其中,目标对象位于第一图层,第二待处理图像位于第二图层,第一图层覆盖于第二图层之上。
本申请的第五方面提供了一种计算机可读存储介质,计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行上述各方面的方法。
从以上技术方案可以看出,本申请实施例具有以下优点:
本申请实施例中,提供了一种医学图像处理的方法,首先可以获取到为彩色的待处理医学图像,并且该待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,其中第一图像数据、第二图像数据以及第三图像数据分别对应于不同属性下的色彩信息,然后根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像,进一步地对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。通过上述方式,由于灰度像素点在不同通道下的色彩信息差异较小,而彩色像素点在不同通道下的色彩信息差异较大,因此,在对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,从而有效地利用了图像中的色彩信息,基于差值图像提取到的病理组织区域更为准确,且对后续的图像分析产生积极影响。
附图说明
图1为本申请实施例中医学图像处理***的一个架构示意图;
图2为本申请实施例中医学图像处理的方法一个实施例示意图;
图3为本申请实施例中待处理医学图像一个实施例示意图;
图4为本申请实施例中差值图像一个实施例示意图;
图5为本申请实施例中二值化图像一个实施例示意图;
图6为本申请实施例中结果图像一个实施例示意图;
图7为本申请实施例中结果图像另一实施例示意图;
图8为本申请实施例中获取待处理医学图像一个实施例示意图;
图9为本申请实施例中医学图像处理的方法一个流程示意图;
图10为本申请实施例中结果图像一个实施例示意图;
图11为本申请实施例中图像处理的方法一个实施例示意图;
图12为本申请实施例中医学图像处理装置一个实施例示意图;
图13为本申请实施例中图像处理装置一个实施例示意图;
图14是本申请实施例提供的一种服务器结构示意图。
具体实施方式
本申请实施例提供了一种医学图像处理的方法、图像处理的方法及装置,用于对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,从而有效地利用了图像中的色彩信息,基于差值图像提取到的病理组织区域更为准确,且对后续的图像分析产生积极影响。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”、“第三”、“第四”等(如果存在)是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本申请的实施例例如能够以除了在这里图示或描述的那些以外的顺序实施。此外,术语“包括”和“对应于”以及他们的任何变形,意图在于覆盖不排他的包含,例如,包含了一系列步骤或单元的过程、方法、***、产品或设备不必限于清楚地列出的那些步骤或单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它步骤或单元。
应理解,本申请实施例可以应用于对图像进行处理的场景中,图像作为人类感知世界的视觉基础,是人类获取信息、表达信息和传递信息的重要手段。图像处理可以对图像进行分析以达到所需结果的技术,图像处理一般指对数字图像进行处理,而数字图像是指用工业相机、摄像机以及扫描仪设备经过拍摄得到的一个大的二维数组,该数组的元素称为像素,其值称为灰度值。图像处理技术可以帮助人们更客观并且准确地认识世界,人的视觉***可以帮助人类从外界获取大量的信息,而图像、图形又是所有视觉信息的载体,尽管人眼的鉴别力很高,可以识别上千种颜色,但很多情况下,图像对于人眼来说是模糊的甚至是不可见的,因此通过图像处理技术,可以使模糊 甚至不可见的图像变得清晰。具体地,图像处理技术可以包括但不限于图像变换、图像编码压缩、图像增强和复原、图像分割、图像描述、抠图技术以及图像分类。
具体地,本申请提供的图像处理方法可以应用于医学领域的场景中,其中,可以进行处理的医学图像包括但不限于脑图像、心脏图像、胸部图像以及细胞图像,而医学图像可能会受到噪音、场偏移效应、局部体效应以及组织运动的影响。由于生物的个体与个体之间也具有差别,并且组织结构形状复杂,因此,医学图像与普通图像相比通常模糊度较高,且具有不均匀性。本申请涉及的医学图像为彩色图像,可以是彩超图像或者全视野数字病理切片(whole slide image,WSI)图像,也可以包括从显微镜获得的彩色数字图像,以WSI图像为例,WSI图像的边长通常在1万像素至10万像素,对于WSI图像而言往往需要缩放或者切割成小尺寸图像来进一步处理,在对图像进行处理的过程中,需要获得有病理组织切片的区域,进而根据该区域来进行病理分析,例如细胞核定量分析、细胞膜定量分析、细胞质定量分析、组织微脉管分析以及组织微脉管分析等。因此,基于医学图像的特点,通过本申请医学图像处理的方法,可以获取到待处理医学图像,并且根据该待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像,其中第一图像数据、第二图像数据以及是第三图像数据分别对应于不同属性下的色彩信息,进一步地对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。由于灰度像素点在不同通道下的色彩信息差异较小,而彩色像素点在不同通道下的色彩信息差异较大,因此在对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,从而有效地利用了图像中的色彩信息,基于差值图像提取到的病理组织区域更为准确,且对后续的图像分析产生积极影响。
在又一示例中,例如图像处理还可以应用于遥感领域的场景中。由于信息技术、空间技术的飞速发展和卫星空间分辨率的不断提高,高分辨率遥感图像可以应用于海洋监测、土地覆盖监测、海洋污染以及海事救援中,而高分辨率遥感图像有着图像细节信息丰富、地物几何结构显著、以及目标结构复杂的特点,例如在高分辨率遥感图像中海岸线的物体阴影复杂、植被覆盖面积大或者明暗的人工设施处理不够明确,由于高分辨率遥感图像与普通图像相比通常细节更多并且更为复杂,当需要对高分辨率遥感图像中植被覆盖面积进行确定时,可以将植被从高分辨率遥感图像中扣除,从而确定所对应的面积。因此基于高分辨率遥感图像的特点,通过本申请图像处理的方法,可以根据第一待处理图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像,其中第一待处理图像为彩色图像,第一待处理图像所包括的第一图像数据、第二图像数据以及第三图像数据分别对应于不同 通道下的色彩信息,然后对生成的差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应,进而根据结果图像,从第一待处理图像中提取出目标对象(如植被区域)。在对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,由于灰度像素点在不同通道下的色彩信息差异较小,而彩色像素点在不同通道下的色彩信息差异较大,因此可以有效地利用图像中的色彩信息,基于差值图像提取到的目标对象更为准确,可以更精确的得到高分辨率遥感图像中的细节,从而提升高分辨率遥感图像处理的准确率。
本申请实施例以应用于医学领域的场景为示例进行说明,为了在医学领域的场景中,提升提取的病理组织区域的准确性,且对后续的图像分析产生积极影响。本申请提出了一种医学图像处理的方法,该方法应用于图1所示的医学图像处理***,请参阅图1,图1为本申请实施例中医学图像处理***的一个架构示意图,如图所示,图像处理***中包括服务器和终端设备。而医学图像处理装置可以部署于服务器,也可以部署于具有较高计算力的终端设备。
以医学图像处理装置部署于服务器为例,服务器获取待处理医学图像,然后服务器根据该待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像,进一步地对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。服务器可以基于病理组织区域进行医学图像分析。
以医学图像处理装置部署于终端设备为例,终端设备获取待处理医学图像,然后终端设备根据该待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像,进一步地对差值图像进行二值化处理,得到二值化图像,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。终端设备可以基于病理组织区域进行医学图像分析。
其中,图1中的服务器可以是一台服务器或多台服务器组成的服务器集群或云计算中心等,具体此处均不限定。终端设备可以为图1中示出的平板电脑、笔记本电脑、掌上电脑、手机、个人电脑(personal computer,PC)及语音交互设备,也可以为监控设备、人脸识别设备等,此处不做限定。
虽然图1中仅示出了五个终端设备和一个服务器,但应当理解,图1中的示例仅用于理解本方案,具体终端设备和服务器的数量均应当结合实际情况灵活确定。
由于本申请实施例是应用于人工智能领域的,在对本申请实施例提供的模型训练的方法开始介绍之前,先对人工智能领域的一些基础概念进行介绍。人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用***。换句话说,人工智能是计算机科学 的一个综合技术,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式做出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。人工智能技术是一门综合学科,涉及领域广泛,既有硬件层面的技术也有软件层面的技术。人工智能基础技术一般包括如传感器、专用人工智能芯片、云计算、分布式存储、大数据处理技术、操作/交互***、机电一体化等技术。人工智能软件技术主要包括计算机视觉技术、语音处理技术、自然语言处理技术以及机器学习/深度学习等几大方向。而机器学习(Machine Learning,ML)是一门多领域交叉学科,涉及概率论、统计学、逼近论、凸分析、算法复杂度理论等多门学科。专门研究计算机怎样模拟或实现人类的学习行为,以获取新的知识或技能,重新组织已有的知识结构使之不断改善自身的性能。机器学习是人工智能的核心,是使计算机具有智能的根本途径,其应用遍及人工智能的各个领域。机器学习和深度学习通常包括人工神经网络、置信网络、强化学习、迁移学习、归纳学习、式教学习等技术。
随着人工智能技术研究和进步,人工智能技术在多种方向展开研究,计算机视觉技术(Computer Vision,CV)就是人工智能技术的多种研究方向中研究如何使机器“看”的科学,更进一步的说,就是指用摄影机和电脑代替人眼对目标进行识别、跟踪和测量等机器视觉,并进一步做图形处理,使电脑处理成为更适合人眼观察或传送给仪器检测的图像。作为一个科学学科,计算机视觉研究相关的理论和技术,试图建立能够从图像或者多维数据中获取信息的人工智能***。计算机视觉技术通常包括图像处理、图像识别、图像语义理解、图像检索、光学字符识别(Optical Character Recognition,OCR)、视频处理、视频语义理解、视频内容/行为识别、三维物体重建、3D技术、虚拟现实、增强现实、同步定位与地图构建等技术,还包括常见的人脸识别、指纹识别等生物特征识别技术。
本申请实施例提供的方案涉及人工智能的图像处理技术,结合上述介绍,下面将对本申请中医学图像处理的方法进行介绍,请参阅图2,图2为本申请实施例中医学图像处理的方法一个实施例示意图,如图所示本申请实施例中对医学图像处理的方法一个实施例包括:
101、获取待处理医学图像,其中,待处理医学图像为彩色图像,且待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,且第一图像数据、第二图像数据以及是第三图像数据分别对应于不同通道下的色彩信息;
本实施例中,医学图像处理装置可以获取到为彩色图像的待处理医学图像,该待处理医学图像可以包括第一图像数据、第二图像数据以及第三图像数据,并且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息。其中,待处理医学图像可以为医学图像处理装置通过有 线网络接收到的医学图像,还可以为医学图像处理装置本身存储的医学图像。
具体地,待处理医学图像可以为WSI图像中截取下来的一个区域,该WSI图像可以通过显微镜对成片进行扫描,由于成片指的就是苏木精或者其他染色方法之后做好的玻片,通过显微镜对成片进行扫描后所得到的WSI图像即为彩色图像。其中,彩色图像的图像色彩模式包含但不仅限于红绿蓝(red green blue,RGB)色彩模式,亮度-色调-饱和度(luminance bandwidth chrominance,YUV)色彩模式,色调-饱和度-明度(Hue-Saturation-Value,HSV)色彩模式,而色彩信息可以表示为不同通道下的像素值,例如R通道的像素值,G通道的像素值,B通道的像素值。
WSI图像的格式包括但不限于SVS以及NDPI等文件格式,而WSI图像的长宽通常在几万像素范围内,图像尺寸较大,直接对该WSI图像进行处理需要较大内存,因此,需要对WSI图像进行切割。通常可以采用python的openslide工具读取WSI图像,openslide工具可以实现文件格式的转换,还可以将WSI图像中截取下来的一个区域存储为分辨率12*12的图像,在实际情况下包括但不限于存储为分辨率15*15以及50*50等分辨率,多个图像均存在同一个WSI图像文件中,在实际应用中读取WSI图像文件中分辨率最大的图像为待处理图像。并且本实施例可以在缩小的WSI图像上截取待处理医学图像,且WSI图像可以缩小任意倍数,例如20倍或10倍,缩小后的WSI图像的长宽在几千像素范围内,应理解,由于缩小倍数为人为定义的,因此具体缩小倍数应当结合实际情况灵活确定。
为了便于理解,请参阅图3,图3为本申请实施例中待处理医学图像一个实施例示意图,如图所示,待处理医学图像包括病例组织区域,并且没有其他灰度背景或纯白背景对该待处理医学图像进行干扰。为了进一步理解本实施例,以待处理医学图像的图像色彩模式为RGB为示例进行说明,由于待处理医学图像包括的第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息,若彩色图像对应的RGB为(200,100,60),则第一图像数据可以为R通道对应像素值200,第二图像数据可以为G通道对应像素值100,第三图像数据可以为B通道对应像素值60。若彩色图像对应的RGB为(100,80,40),则第一图像数据可以为R通道对应像素值100,第二图像数据可以为G通道对应像素值800,第三图像数据可以为B通道对应像素值40。
需要说明的是,对于HSV图像或者YUV图像而言,可以先将HSV图像或者YUV图像转换为RGB图像,再进行后续处理。
应理解,在实际应用中,第一图像数据、第二图像数据以及第三图像数据具体对应的色彩信息均应当结合实际情况灵活确定。并且医学图像处理装置可以部署于服务器,也可以部署于具有较高计算力的终端设备,本实施例以医学图像处理装置部署于服务器为例进行介绍。
102、根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
本实施例中,医学图像处理装置可以根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像。具体地,该差值图像表现为灰度图像。为了便于理解,请参阅图4,图4为本申请实施例中差值图像一个实施例示意图,如图所示,根据图4中(A)所示出包括的第一图像数据、第二图像数据以及第三图像数据的待处理医学图像,可以生成图4中(B)所示出的差值图像。该差值图像可以为图中所示包括病理组织区域的图像,由于用到了对应于不同通道下的色彩信息,区别出像素值,以待处理医学图像的图像色彩模式为RGB为示例进行说明,若待处理医学图像为灰色的话RGB比较相近,待处理医学图像为彩色的话RGB之间差值很大,存在病理组织的地方有较大的色差。
103、对差值图像进行二值化处理,得到二值化图像。
本实施例中,医学图像处理装置可以步骤102所生成的差值图像进行二值化处理,得到二值化图像。其中,二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。为了便于理解,请参阅图5,图5为本申请实施例中二值化图像一个实施例示意图,如图所示,根据图5中(A)所示出的差值图像,由于差值图像为灰度图像,可以使用基于灰度图像的处理,具体地,本实施例中采用自适应二值化方式来进行前景处理,即对差值图像进行二值化处理,从而的得到图5中(B)所示出的二值化图像。并且该二值化图像中,白色为包括病理组织区域的前景区域,而黑色为不包括病理组织区域的背景区域。
为了便于理解,请参阅图6,图6为本申请实施例中前景区域一个实施例示意图,如图所示,根据图6中(A)所示出的二值化图像,由于白色为包括病理组织区域的前景区域,而黑色为不包括病理组织区域的背景区域,由此可以根据该二值化图像,生成如图6中(B)所示出待处理医学图像所对应的区域。
本申请实施例中,提供了一种医学图像处理的方法,通过上述方式,由于灰度像素点在不同通道下的色彩信息差异较小,而彩色像素点在不同通道下的色彩信息差异较大,因此,在对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,从而有效地利用了图像中的色彩信息,基于差值图像提取到的病理组织区域更为准确,且对后续的图像分析产生积极影响。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法一个可选实施例中,根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像,可以包括:
根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三 图像数据,生成最大值图像和最小值图像;
根据最大值图像以及最小值图像,生成差值图像。
本实施例中,医学图像处理装置在获取到待处理医学图像之后,可以根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成最大值图像和最小值图像,最后根据最大值图像以及最小值图像,即可生成差值图像。
具体地,以待处理医学图像的图像色彩模式为RGB为示例进行说明,由于待处理医学图像包括的第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息,色彩信息表示为通过R通道,G通道以及B通道所对应的像素值,确定在R通道,G通道以及B通道中的最大值,通过该最大值以确定最大值图像,同理,可确定在R通道,G通道以及B通道中的最小值,通过该最小值可以确定最小值图像,然后将最大值图像中每个像素点与最小值图像中对应位置上的每个像素点进行相减,得到差值图像。
本申请实施例中,提供了一种生成差值图像的方法,通过上述方式,根据第一图像数据、第二图像数据以及第三图像数据生成最大值图像以及最小值图像,由于不同图像数据对应的色彩信息不同,根据不同图像数据所确定的最大值图像以及最小值图像,所包括的待处理医学图像的色彩信息准确度较高,从而提升差值图像生成的准确度。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成最大值图像和最小值图像,可以包括:
根据第一图像数据中第一像素位置的第一像素值、第二图像数据中第二像素位置的第二像素值以及第三图像数据中第三像素位置的第三像素值,确定最大像素值和最小像素值;
根据最大像素值获得最大值图像,根据最小像素值获得最小值图像。
其中,最大值图像中第四像素位置的像素值为最大像素值,最小值图像中第五像素位置的像素值为最小像素值,第一像素位置、第二像素位置、第三像素位置以及第四像素位置、第五像素位置均对应于待处理医学图像中同一个像素点的位置;
根据最大值图像以及最小值图像,生成差值图像,可以包括:
根据最大值图像中第四像素位置的像素值和最小值图像中第五像素位置的像素值确定像素差值;
根据像素差值,获得差值图像,差值图像中第六像素位置的像素值为像素差值,第四像素位置、第五像素位置和第六像素位置均对应于待处理医学图像中同一个像素点的位置。
将最大值图像中目标像素点所对应的最大像素值,与最小值图像中目标像素点所对应的最小像素值相减,得到像素差值,将该像素差值作为差值图像中目标像素点所对应的差值像素值。
本实施例中,医学图像处理装置可以根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,确定目标像素点所对应的最大像素值和最小像素值,然后根据所确定的最大像素值以及最小像素值。最后将最大值图像中目标像素点所对应的最大像素值,与最小值图像中目标像素点所对应的最小像素值相减,得到差值图像中目标像素点所对应的差值像素值。
为了便于理解,以待处理医学图像的图像色彩模式为RGB为示例进行说明,对于包括的第一图像数据、第二图像数据以及第三图像数据的待处理医学图像而言,对于该待处理医学图像的每一个像素点,均在R通道,G通道以及B通道有对应的图像数据,例如像素点在R通道的图像数据为第一像素值,在G通道的图像数据为第二像素值,在B通道的图像数据为第三像素值,根据第一像素值,第二像素值以及第三像素值可以确定在R通道,G通道以及B通道中的最大像素值和最小像素值。
进一步地,可以通过下式对像素点位置(x,y)的最大像素值以及最小像素值进行计算:
Imax(x,y)=Max[Ir(x,y),Ig(x,y),Ib(x,y)];
Imin(x,y)=Min[Ir(x,y),Ig(x,y),Ib(x,y)];
其中,Imax(x,y)表示最大像素值,Imin(x,y)表示最小像素值,Ir(x,y)表示第一像素值,Ig(x,y)表示第二像素值,Ib(x,y)表示第三像素值。
本实施例中用到了对应于不同通道下的色彩信息,区别出像素值,再次以待处理医学图像的图像色彩模式为RGB为示例进行说明,若待处理医学图像为灰色的话RGB比较相近,待处理医学图像为彩色的话RGB之间差值很大,存在病理组织的地方有颜色,有颜色的值就是本实施例所需的像素值。应理解,前述公式仅以二维图像对应的像素点作为示例进行说明,在实际应用中,前述公式也适用于多维图像计算最大像素值以及最小像素值,例如三维(3 dimensions,3D)图像以及四维(4 dimensions,4D)图像等。
为了进一步理解本实施例,以像目标像素点位置为(x1,y1),且待处理医学图像的图像色彩模式为RGB作为一种示例进行说明,目标像素点位置(x1,y1)的第一像素值Ir(x1,y1)为100,目标像素点位置(x1,y1)的第二像素值Ig(x1,y1)为200,目标像素点位置(x1,y1)的第三像素值Ib(x1,y1)为150,通过前述公式可知,目标像素点位置(x1,y1)的最大像素值Imax(x1,y1)为第二像素值Ig(x1,y1)所对应的像素值200,目标像素点位置(x1,y1)的最小像素值Imin(x1,y1)为第一像素值Ir(x1,y1)所对应的像素值100。
其次,以目标像素点位置为(x2,y2),且待处理医学图像的图像色彩模式为RGB作为另一种示例进行说明,目标像素点位置(x2,y2)的第一像素值Ir(x2,y2)为30,目标像素点位置(x2,y2)的第二像素值Ig(x2,y2)为80,目标像素点位置(x2,y2)的第三像素值Ib(x2,y2)为120,通过前述公式可知,目标像素点位置(x2,y2)的最大像素值Imax(x2,y2)为第三像素值Ib(x2,y2)所对应的像素值120,目标像素点位置(x2,y2)的最小像素值Imin(x2,y2)为第一像素值Ir(x2,y2)所对应的像素值30。
再次,以待处理医学图像的图像色彩模式为RGB,且该待处理医学图像为3D图像,目标像素点位置为(x3,y3,z3)作为另一种示例进行说明,目标像素点位置为(x3,y3,z3)的第一像素值Ir(x3,y3,z3)为200,目标像素点位置(x3,y3,z3)的第二像素值Ig(x3,y3,z3)为10,目标像素点位置(x3,y3,z3)的第三像素值Ib(x3,y3,z3)为60,通过前述公式可知,目标像素点位置(x3,y3,z3)的最大像素值Imax(x3,y3,z3)为第一像素值Ir(x3,y3,z3)所对应的像素值200,目标像素点位置(x3,y3,z3)的最小像素值Imin(x3,y3,z3)为第二像素值Ig(x3,y3,z3)所对应的像素值10。
再进一步地,当得到目标像素点位置对应的最大像素值以及最小像素值之后,可以将该最大像素值与该最小像素值相减,从而得到差值图像中目标像素点位置所对应的差值像素值。具体地,可以通过下式根据最大像素值Imax(x,y)以及最小像素值Imin(x,y)计算得到差值像素值,并假设待处理医学图像中包括有1万个像素点:
Idiff(x,y)=Imax(x,y)-Imin(x,y);
其中,Imax(x,y)表示最大像素值,Imin(x,y)表示最小像素值,Idiff(x,y)表示在(x,y)位置的差值像素值。
为了便于理解,以目标像素点位置为(x1,y1),且待处理医学图像的图像色彩模式为RGB作为一种示例进行说明,目标像素点位置(x1,y1)的最大像素值Imax(x1,y1)为200,目标像素点位置(x1,y1)的最小像素值Imin(x1,y1)为100,将最大像素值Imax(x1,y1)与最小像素值Imin(x1,y1)相减,即可得到目标像素点位置(x1,y1)所对应的差值像素值为100。其次,以目标像素点位置为(x2,y2),且待处理医学图像的图像色彩模式为RGB作为另一种示例进行说明,目标像素点位置(x2,y2)的最大像素值Imax(x2,y2)为120,目标像素点位置(x2,y2)的最小像素值Imin(x2,y2)为30,将最大像素值Imax(x2,y2)与最小像素值Imin(x2,y2)相减,即可得到目标像素点位置(x2,y2)所对应的差值像素值为90。
可选地,以待处理医学图像的图像色彩模式为RGB,且该待处理医学图像为3D图像,目标像素点位置为(x3,y3,z3)作为另一种示例进行说明。基于上述公式,可以推导出以下公式:
Imax(x,y,z)=Max[Ir(x,y,z),Ig(x,y,z),Ib(x,y,z)];
Imin(x,y,z)=Min[Ir(x,y,z),Ig(x,y,z),Ib(x,y,z)];
Idiff(x,y,z)=Imax(x,y,z)-Imin(x,y,z);
假设目标像素点位置(x3,y3,z3)的最大像素值Imax(x3,y3,z3)为200,目标像素点位置(x3,y3,z3)的最小像素值Imin(x3,y3,z3)为10,将最大像素值Imax(x3,y3,z3)与最小像素值Imin(x3,y3,z3)相减,即可得到目标像素点位置(x3,y3,z3)所对应的差值像素值为190。
具体地,当待处理医学图像的差值像素值较小时,则说明该待处理医学图像的第一像素值,第二像素值以及第三像素值较为相近,可以说明该待处理医学图像类似为灰色图像,而当待处理医学图像的差值像素值较大时,则说明该待处理医学图像的第一像素值,第二像素值以及第三像素值相差较大,可以说明该待处理医学图像类似为彩色图像,而存在病理组织区域的图像常为有颜色的图像,因此可以根据该差值像素值初步判断该待处理医学图像是否包括病理组织区域。
本申请实施例中,提供了一种生成最大值图像和最小值图像的方法,通过上述方式,通过第一图像数据、第二图像数据以及第三图像数据对应目标像素点的像素值,确定最大像素值以及最小像素值,最大像素值以及最小像素值不同程度的反映待处理医学图像的色彩信息,并由最大像素值以及最小像素值相减得到差值像素值,使得该差值像素值能够准确的反映待处理医学图像的色彩信息,从而提升差值图像生成的准确度。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像,可以包括:
根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成待处理差值图像;
对待处理差值图像进行高斯模糊处理,得到差值图像。
本实施例中,医学图像处理装置可以根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成待处理差值图像,然后再对待处理差值图像进行高斯模糊处理,从而得到差值图像。
具体地,模糊可以理解成对待处理差值图像的每一个像素点都取其周边像素点的平均值,当像素点取其周边像素点的平均值时,在像素点的数值取值上,可以趋于平滑化,而在待处理差值图像上,就相当于产生模糊效果,该像素点会失去细节。对于待处理差值图像而言,其中像素点都是连续的,因此越靠近的像素点关系越密切,越远离的像素点关系越疏远。因此本实施例中对模糊采用的算法为高斯模糊(Gaussian Blur),高斯模糊可以将正态分布(高斯分布)用于待处理差值图像处理,使得像素点之间的加权平均更合理,距离越近的像素点权重越大,距离越远的像素点权重越小。
进一步地,以像素点为(x,y)为示例进行说明,对于像素点(x,y)而言,该像素点为二维像素点,由此可以通过以下公式计算得到二维高斯函数:
G(x,y)=(1/(2πσ^2))·exp(-(x^2+y^2)/(2σ^2))
其中,(x,y)表示像素点,G(x,y)表示像素点的二维高斯函数,σ表示正态分布的标准偏差。
为了便于理解,以该像素点具体为(0,0)为示例进行说明,那么像素点(0,0)其周边8个像素点可以为(-1,1),(0,1),(1,1),(-1,0),(1,0),(-1,-1),(0,-1)以及(1,-1),为了进一步地计算权重矩阵,需要设定σ的值。假定σ=1.5,则可以得到模糊半径为1的权重矩阵,例如在权重矩阵中像素点(0,0)对应的权重为0.0707,像素点(-1,1)对应的权重为0.0453,像素点(0,1)对应的权重为0.0566,像素点(1,1)对应的权重为0.0453,像素点(-1,0)对应的权重为0.0566,像素点(1,0)对应的权重为0.0566,像素点(-1,-1)对应的权重为0.0453,像素点(0,-1)对应的权重为0.0566以及像素点(1,-1)对应的权重为0.0453,像素点(0,0)其周边8个像素点这9个点的权重总和约等于0.479,若仅计算这9个点的加权平均,需要必须让它们的权重之和等于1,也就是对权重总和进行归一化,即可以将权重矩阵对应的9个值分别除以权重总和0.479,从而得到归一化之后的权重矩阵,即像素点(0,0)归一化后所对应的权重为0.147,像素点(-1,1)归一化后所对应的权重为0.0947,像素点(0,1)归一化后所对应的权重为0.0118,像素点(1,1)归一化后所对应的权重为0.0947,像素点(-1,0)归一化后所对应的权重为0.0118,像素点(1,0)归一化后所对应的权重为0.0118,像素点(-1,-1)归一化后所对应的权重为0.0947,像素点(0,-1)归一化后所对应的权重为0.0118,以及像素点(1,-1)归一化后所对应的权重为0.0947。由于使用权重总和大于1的权重矩阵会让差值图像偏亮,而使用权重总和小于1的权重矩阵会让差值图像偏暗,因此进行归一化后的权重矩阵能够使得差值图像所呈现的病理组织区域更为准确。
进一步地,当获取到归一化后的权重矩阵后,即可以对该像素点进行高斯模糊计算,例如灰度值为0至255的情况下,在权重矩阵中像素点(0,0)对应的灰度值为25,像素点(-1,1)对应的灰度值为14,像素点(0,1)对应的灰度值为15,像素点(1,1)对应的灰度值为16,像素点(-1,0)对应的灰度值为24,像素点(1,0)对应的灰度值为26,像素点(-1,-1)对应的灰度值为34,像素点(0,-1)对应的灰度值为35以及像素点(1,-1)对应的灰度值为36。每个像素点对应的灰度值点乘每个像素点对应的权重,可以得到9个值,即像素点(0,0)可以得到3.69,像素点(-1,1)对可以得到1.32,像素点(0,1)可以得到1.77,像素点(1,1)可以得到1.51, 像素点(-1,0)可以得到2.83,像素点(1,0)可以得到3.07,像素点(-1,-1)可以得到3.22,像素点(0,-1)可以得到4.14以及像素点(1,-1)可以得到3.41。然后将这9个值加起来,就是像素点(0,0)的高斯模糊的值。
再进一步地,对待处理差值图像中所包括的所有像素点重复前述像素点(0,0)类似的步骤,即可得到进行高斯模糊处理后的差值图像。
本申请实施例中,提供了另一种生成差值图像的方法,通过上述方式,对生成待处理差值图像行高斯模糊处理,由于高斯模糊处理可以提升处理鲁棒性,由此所得到的差值图像有较好的处理鲁棒性,从而提升成差值图像的稳定性。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,对差值图像进行二值化处理,得到二值化图像,可以包括:
根据差值图像确定二值化阈值;
根据所述二值化阈值对所述差值图像进行二值化处理,得到二值化图像。
本实施例中,医学图像处理装置可以根据差值图像确定二值化阈值,当差值图像中像素点所对应的像素值大于或等于二值化阈值时,则将像素点确定为二值化图像的前景像素点,当差值图像中像素点所对应的像素值小于二值化阈值时,则将像素点确定为二值化图像的背景像素点。
具体地,通过设定二值化阈值,对差值图像进行二值化处理可以把灰度图像变成0或者1取值的二值化图像,也就是说差值图像的二值化可以通过设定二值化阈值,把差值图像变换成仅用两个值(0或1)来分别表示的图像前景和图像背景的二值化图像,其中前景取值为1,背景值取值为0,而在实际应用中,0对应于RGB值均为0,1对应于RGB值均为255,差值图像经过二值化处理后所得的二值化图像,再对二值化图像作进一步处理时,由于二值化图像的几何性质只与0和1的位置有关,不再涉及到像素的灰度值,使得对二值化图像的处理变得简单,从而可以提升图像处理效率。而确定二值化阈值的方法可以分为全局阈值和局部阈值。其中,全局阈值是对整个差值图像采用一个阈值进行划分。但对于不同的差值图像,差值图像的灰度深度是存在差异的,并且对于同一差值图像,不同部位其明暗分布也可以是不同的,因此,我们本实施例中采用动态阈值二值化方法确定二值化阈值。
当根据差值图像确定二值化阈值后,对差值图像中像素点所对应的像素值与二值化阈值进行判断,当差值图像中像素点所对应的像素值大于或等于二值化阈值时,则将像素点确定为二值化图像的前景像素点。当差值图像中像素点所对应的像素值小于二值化阈值时,则将像素点确定为二值化图像的背景像素点。例如,当像素点A所对应的像素值大于二值化阈值,则将该像素点A确定为二值化图像的前景像素点,即像素值为1,也就是该像素点A处于前景区域,在图像为RGB模式时,显示为白色。而当像素点B所对应 的像素值小于二值化阈值,则将该像素点B确定为二值化图像的背景像素点,即像素值为0,也就是该像素点B处于背景区域,在图像为RGB模式时,显示为黑色。
本申请实施例中,提供了一种得到二值化图像的方法,通过上述方式,根据二值化处理生成二值化图像,由于二值化图像的几何性质不涉及到像素的灰度值,可以使得后续对二值化图像的处理变得简单,从而可以提升生成结果图像的效率。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,根据差值图像确定二值化阈值,可以包括:
根据差值图像获取N个像素点所对应的N个像素值,其中,像素值与像素点具有一一对应的关系,N为大于1的整数;
从N个像素值中确定参考像素值,其中,参考像素值为N个像素值中的最大值;
根据参考像素值以及预设比例,计算得到二值化阈值。
本实施例中,医学图像处理装置可以根据差值图像获取N个像素点所对应的N个像素值,并且该像素值与像素点具有一一对应的关系,然后从N个像素值中确定参考像素值,该参考像素值为N个像素值中的最大值,最后可以根据参考像素值以及预设比例,计算得到二值化阈值,其中N为大于1的整数。
具体地,本实施例中二值化阈值是根据差值图像所确定的,由于差值图像可以根据待处理医学图像中的最大值图像与最小值图像相减生成,并且差值图像中的像素值与像素点具有一一对应的关系,因此可以获取差值图像中多个像素点对应像素值,然后将多个像素值中的最大值确定为参考像素值,然后根据参考像素值以及预设比例计算得到二值化阈值。为了便于理解,本实施例以预设比例为10%为示例进行说明,例如WSI图像缩小后的图像的长宽在几千像素范围内,假设缩小后的图像包括100*100个像素点,即需要在10000个像素点对应像素值中找出最大的值,例如最大值为150,即可以确定该最大值150为参考像素值,然后根据参考像素值150与相乘预设比例10%,即可得到二值化阈值15。应理解,在实际应用中,预设比例还可以为其他百分比所对应的值,具体预设比例应当结合实际情况灵活确定。
本申请实施例中,提供了另一种得到二值化阈值的方法,通过上述方式,可以通过由最大像素值确定的参考像素值以及预设比例的二值化阈值,由于差值图像灰度深度是存在差异的,并且不同区域其明暗分布也可以是不同的,因此,可以通过调整预设比例灵活确定二值化阈值,提升阈值准确度以及灵活性,从而提升二值化图像生成的准确度。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学 图像处理的方法另一可选实施例中,还可以包括:
采用泛洪算法检测二值化图像中的背景区域,其中,背景区域包括多个背景像素点;
根据二值化图像以及二值化图像中的背景区域,获取二值化图像中的前景区域内的背景像素点,其中,前景区域包括多个前景像素点;
将二值化图像中的前景区域内的背景像素点变更为前景像素点,得到空洞填补图像;
对空洞填补图像进行中值滤波处理,得到结果图像,结果图像的前景区域与待处理医学图像的病理组织区域对应。
本实施例中,医学图像处理装置可以采用泛洪算法检测二值化图像中的背景区域,该背景区域可以包括多个背景像素点,然后根据二值化图像以及二值化图像中的背景区域,获取二值化图像中的前景区域内的背景像素点,该前景区域可以包括多个前景像素点,进而将二值化图像中的前景区域内的背景像素点变更为前景像素点,得到空洞填补图像,最后对空洞填补图像进行中值滤波处理,即可得到结果图像,结果图像的前景区域与待处理医学图像的病理组织区域对应。
具体地,对差值图像进行二值化处理后,所得到二值化图像中,可能出现二值化图像中前景区域是黑色空洞,作为前景区域,需要将该黑色空洞检测出来。为了便于理解,请参阅图7,图7为本申请实施例中结果图像另一实施例示意图,如图所示,图7中(A)所示出的二值化图像,在呈现白色的前景区域中包括有多个背景像素点,其中区域A1至区域A5所框出黑点均由背景像素点组成,将区域A1至区域A5所框出黑点由背景像素点变更为前景像素点,而其中区域A6与区域A7所框出白点由前景像素点组成,将区域A6与区域A7所框出白点由前景像素点变更为背景像素点,即可得到图7中(B)所示出的空洞填补图像。
进一步地,然后对图7中(B)所示出的空洞填补图像进行中值滤波处理,还可以进一步进行形态学处理,即可以得到图7中(C)所示出的待结果图像。其中滤波处理即在尽量保留空洞填补图像细节特征的条件下对待处理医学图像的噪声进行抑制,通过滤波处理可以提升后续结果图像处理和分析的有效性和可靠性。消除空洞填补图像中的噪声成分即为滤波操作,空洞填补图像的能量大部分集中在幅度谱的低频和中频段,而在较高频段,空洞填补图像的信息经常被噪声影响,因此可以对空洞填补图像进行滤波操作适应图像处理的要求,消除图像数字化时所混入的噪声。而中值滤波处理是一种典型的非线性滤波,是基于排序统计理论的一种能够有效抑制噪声的非线性信号处理技术,中值滤波处理可以用像素点邻域灰度值的中值来代替该像素点的灰度值,让周围的像素值接近真实的值从而消除孤立的噪声点。
本申请实施例中,提供了一种生成结果图像的方法,通过上述方式,将 前景区域内的背景像素点变更为前景像素点,所得到空洞填补图像具有较好的可靠性,其次,通过中值滤波处理,能够在不损坏图像的轮廓及边缘等特征信息的基础上,使得待处理医学图像所对应的结果图像清晰并且视觉效果好。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,对空洞填补图像进行中值滤波处理,可以包括:
对空洞填补图像进行中值滤波处理,得到滤波图像,得到结果图像,所述结果图像的前景区域与所述待处理医学图像的病理组织区域对应;
获取滤波图像中前景区域的边界线,其中,边界线包括M个像素点,M为大于1的整数;
针对边界线上M个像素点中的每个像素点,向外延伸K个像素点,得到结果图像,其中,K为大于或等于1的整数。
本实施例中,医学图像处理装置对空洞填补图像进行中值滤波处理,该滤波图像可以包括待处理前景区域,获取滤波图像中前景区域的边界线,并且该边界线包括M个像素点,进而针对边界线上M个像素点中的每个像素点,向外延伸K个像素点,得到结果图像,其中M为大于1的整数,K为大于或等于1的整数。具体地,中值滤波处理可以用像素点邻域灰度值的中值来代替该像素点的灰度值,让周围的像素值接近真实的值从而消除孤立的噪声点,通过中值滤波处理在取出脉冲噪声、椒盐噪声的同时,得到保留图像的边缘细节的滤波图像。
进一步地,通过泛洪算法(Flood Fill)填充具有不同颜色的连接的,颜色相似的区域,泛洪算法的基本原理就是从一个像素点出发,以此向周边的像素点扩充着色,直到图形的边界。泛洪算法需要采用三个参数:起始节点(start node),目标颜色(target color)以及替换颜色(replacement color)。泛洪算法通过目标颜色的路径连接到起始节点的所有节点,并将它们更改为替换颜色,应理解,在实际应用中,可以通过多种方式构建泛洪算法,但多种方式都明确地或隐式地使用队列或堆栈数据结构。例如,四邻域泛洪算法,八邻域泛洪算法,描绘线算法(Scanline Fill)以及大规模行为(Large-scale behaviour)。其中,传统的四邻域泛洪算法的思想是对于像素点(x,y),将其着色之后将其周围的上下左右四个点分别进行着色,而递归方式较为消耗内存,若所需着色的面积非常大,会导致溢出现象,因此,可以采用非递归方式的四邻域泛洪算法。而八邻域泛洪算法是将一个像素点的上下左右,左上,左下,右上,右下都进行着色。描绘线算法可以利用填充线来加速算法,可以先将一条线上的像素点进行着色,然后依次向上下扩张,直到着色完成。大规模行为以数据为中心,或者以流程为中心。
由于空洞填补图像的边界线不规则，因此本实施例中采用描绘线算法。以待处理前景区域包括1000个像素点的边界线为示例进行说明，利用形态学处理的方式，将1000个像素点分别向外延伸K个像素点，假设K为2，则在原来的1000个像素点之外增加了2000个像素点作为前景区域，从而得到结果图像。应理解，在实际应用中，具体M个像素点以及K个像素点均应当结合实际情况灵活确定。
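文中"将边界线上的每个像素点向外延伸K个像素"的形态学处理，在实践中常用膨胀操作近似实现；下面是一段示意代码，椭圆形核与K=2均为假设取值，并非本申请限定的实现：

```python
import cv2

def expand_foreground(filtered, k=2):
    # 对滤波图像做一次膨胀, 相当于把前景边界向外扩展约K个像素
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * k + 1, 2 * k + 1))
    return cv2.dilate(filtered, kernel, iterations=1)
```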
本申请实施例中,提供了另一种生成结果图像的方法,通过上述方式,通过中值滤波处理,能够在不损坏图像的轮廓及边缘等特征信息的基础上,使得滤波图像清晰并且视觉效果好。其次,通过泛洪算法对滤波图像进行形态学处理,提升结果图像的准确度以及一体性。
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,获取待处理医学图像,可以包括:
获取原始医学图像;
采用滑动窗口从原始医学图像中提取医学子图像;
若检测到医学子图像中包括病理组织区域,则确定为待处理医学图像;
若检测到医学子图像中未包括病理组织区域,则将医学子图像确定为背景图像,且去除背景图像。
本实施例中,医学图像处理装置可以先获取到原始医学图像,然后采用滑动窗口从原始医学图像中提取医学子图像,当检测到医学子图像中包括病理组织区域时,则确定为待处理医学图像,当检测到医学子图像中未包括病理组织区域时,则将医学子图像确定为背景图像,且去除该背景图像。具体地,其中原始医学图像可以为医学图像处理装置通过有线网络接收到的图像,还可以为医学图像处理装置本身存储的图像。
为了便于理解,请参阅图8,图8为本申请实施例中获取待处理医学图像一个实施例示意图,如图所示,图8中(A)所示出的为原始医学图像,采用滑动窗口从原始医学图像中提取医学子图像,其中B1至B3所框出的区域即为从原始医学图像中提取医学子图像,从而B1可以对应得到如图8中(B)所示出医学子图像,B2可以对应得到如图8中(C)所示出医学子图像,B3可以对应得到如图8中(D)所示出医学子图像,由此可见,图8中(B)以及(C)中所示出医学子图像中包括病理组织区域,因此可以将图8中(B)以及(C)所示出医学子图像确定为待处理医学图像,而图8中(D)所示出医学子图像中未包括病理组织区域,因此可以将图8中(D)所示出医学子图像确定为背景图像,且去除背景图像。
本申请实施例中，提供了一种获取待处理医学图像的方法，通过上述方式，通过检测医学子图像是否包括有病理组织区域，确定待处理医学图像，使得包括有病理组织区域的待处理医学图像通过前述步骤，能够获取待处理医学图像所对应的结果图像，并且结果图像包括有病理组织区域，便于后续对该结果图像中病理组织区域的处理以及分析。其次，将未包括病理组织区域的医学子图像确定为背景图像，且去除该背景图像，减少资源占用率。
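滑动窗口提取医学子图像并剔除背景图像的过程，可以参考下面的示意代码；窗口大小、步长以及用"较暗像素占比"粗略判断是否包含病理组织区域的做法均为假设，实际的检测方式文中并未限定：

```python
import cv2
import numpy as np

def extract_sub_images(original, win=512, stride=512, min_ratio=0.05):
    # original: BGR原始医学图像; 返回待处理医学图像列表, 不含病理组织的子图像被直接丢弃
    h, w = original.shape[:2]
    to_process = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = original[y:y + win, x:x + win]
            gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
            dark_ratio = float(np.mean(gray < 220))   # 非白(较暗)像素占比, 作为组织存在的粗略指标
            if dark_ratio >= min_ratio:
                to_process.append(patch)              # 视为待处理医学图像
            # 否则视为背景图像, 不予保存, 以减少资源占用
    return to_process
```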
可选地,在上述图2对应的实施例的基础上,本申请实施例提供的医学图像处理的方法另一可选实施例中,根据二值化图像生成结果图像之后,医学图像处理的方法还可以包括:
根据结果图像生成目标正样本图像,其中,目标正样本图像属于正样本集合中的一个正样本图像,且每个正样本图像包含病理组织区域;
获取负样本集合,其中,负样本集合包括至少一个负样本图像,且每个负样本图像不包含病理组织区域;
基于正样本集合以及负样本集合,对图像处理模型进行训练。
本实施例中，在根据二值化图像生成结果图像之后，医学图像处理装置还可以根据结果图像生成目标正样本图像，该目标正样本图像属于正样本集合中的一个正样本图像，并且每个正样本图像包含病理组织区域，同时，还可以获取负样本集合，该负样本集合包括至少一个负样本图像，并且每个负样本图像不包含病理组织区域，最后可以基于所获取的正样本集合以及负样本集合，对图像处理模型进行训练。训练后的图像处理模型能够基于一张彩色的医学图像提取出相应的病理组织区域。
本申请实施例中,提供了一种训练图像处理模型的方法,通过上述方式,通过包含病理组织区域的正样本图像合集,以及不包含病理组织区域的负样本集合对图像处理模型进行训练,提升图像处理模型的准确度以及可靠性,从而提升图像处理的效率以及准确度。
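图像处理模型的具体结构文中并未限定，下面仅以一个占位性的分类器(scikit-learn的LogisticRegression)示意"正样本集合+负样本集合→训练"的流程；特征提取方式、图像尺寸等均为假设取值，实际可替换为任意合适的图像处理模型：

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_image_model(positive_images, negative_images, size=(64, 64)):
    # positive_images: 包含病理组织区域的正样本; negative_images: 不包含病理组织区域的负样本
    def to_feature(img):
        return cv2.resize(img, size).astype(np.float32).ravel() / 255.0
    X = np.array([to_feature(im) for im in positive_images + negative_images])
    y = np.array([1] * len(positive_images) + [0] * len(negative_images))
    model = LogisticRegression(max_iter=1000)   # 占位模型, 仅用于说明训练流程
    model.fit(X, y)
    return model
```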
具体地,本申请实施例可以提升提取的病理组织区域的准确性,且对后续的图像分析产生积极影响,为了便于理解本申请实施例,请参阅图9,图9为本申请实施例中医学图像处理的方法一个流程示意图,具体地:
在步骤S1中,获取原始医学图像;
在步骤S2中,基于原始医学图像,获取待处理医学图像;
在步骤S3中,根据待处理医学图像生成差值图像;
在步骤S4中，对差值图像进行二值化处理，得到二值化图像；
在步骤S5中,基于二值化图像得到空洞填补图像;
在步骤S6中,对空洞填补图像进行中值滤波处理,得到结果图像。
其中，在步骤S1中可以获取到如图9中(A)所示出的原始医学图像，然后在步骤S2中采用滑动窗口从图9中(A)所示出的原始医学图像中提取医学子图像，当检测到医学子图像中包括病理组织区域，则确定为待处理医学图像，从而获取到如图9中(B)所示出的待处理医学图像。进一步地，在步骤S3中，可以根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据，从第一像素值、第二像素值以及第三像素值中确定目标像素点所对应的最大像素值以及最小像素值，从而生成最大值图像以及最小值图像，然后根据最大值图像以及最小值图像得到如图9中(C)所示出的差值图像。再进一步地，在步骤S4中，可以根据如图9中(C)所示出的差值图像获取N个像素点所对应的N个像素值，该像素值与像素点具有一一对应的关系，将N个像素值中的最大值确定为参考像素值，根据参考像素值以及预设比例计算得到二值化阈值；当差值图像中像素点所对应的像素值大于或等于二值化阈值时，则将像素点确定为二值化图像的前景像素点，当差值图像中像素点所对应的像素值小于二值化阈值时，则将像素点确定为二值化图像的背景像素点，从而可以得到如图9中(D)所示出的二值化图像。在步骤S5中，采用泛洪算法检测二值化图像中包括多个背景像素点的背景区域，然后根据二值化图像以及二值化图像中的背景区域，获取二值化图像中的前景区域内的背景像素点，将二值化图像中的前景区域内的背景像素点变更为前景像素点，从而可以得到如图9中(E)所示出的空洞填补图像。在步骤S6中，对空洞填补图像进行中值滤波处理，得到包括待处理前景区域的滤波图像，获取待处理前景区域中包括M个像素点的边界线，针对边界线上M个像素点中的每个像素点，向外延伸K个像素点，从而得到如图9中(F)所示出的结果图像，其中N和M均为大于1的整数，K为大于或等于1的整数。
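把步骤S3至S6串起来，大致可以写成下面这段示意性的处理流程(基于OpenCV/NumPy)；其中高斯核大小、预设比例、中值滤波核以及K等参数均为示例取值，左上角像素属于背景也是假设条件，并非本申请限定的实现：

```python
import cv2
import numpy as np

def process_medical_image(image_bgr, ratio=0.10, k=2):
    # S3: 三个通道逐像素取最大值/最小值并相减得到差值图像, 再做高斯模糊
    diff = image_bgr.max(axis=2) - image_bgr.min(axis=2)
    diff = cv2.GaussianBlur(diff.astype(np.uint8), (5, 5), 0)
    # S4: 动态阈值二值化
    thresh = diff.max() * ratio
    binary = np.where(diff >= thresh, 255, 0).astype(np.uint8)
    # S5: 泛洪填补前景区域内的空洞(假设左上角像素属于背景)
    flood = binary.copy()
    mask = np.zeros((binary.shape[0] + 2, binary.shape[1] + 2), np.uint8)
    cv2.floodFill(flood, mask, (0, 0), 255)
    hole_filled = cv2.bitwise_or(binary, cv2.bitwise_not(flood))
    # S6: 中值滤波并把前景边界向外延伸约K个像素, 得到结果图像
    filtered = cv2.medianBlur(hole_filled, 5)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * k + 1, 2 * k + 1))
    return cv2.dilate(filtered, kernel, iterations=1)
```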
进一步地，对不同的待处理医学图像均可以生成结果图像，请参阅图10，图10为本申请实施例中结果图像一个实施例示意图，如图所示，图10中(A)所示出的是存在纯白和灰色区域的待处理医学图像，通过本申请实施例所提供的医学图像处理方法，可以得到如图10中(B)所示出的结果图像。而图10中(C)所示出的是存在规律性竖条纹的待处理医学图像，该规律性竖条纹为扫描仪扫描玻片时所产生的条纹，其产生取决于扫描设备，通过本申请实施例所提供的医学图像处理方法，可以得到如图10中(D)所示出的结果图像。其次，图10中(E)所示出的是存在黑白条纹的待处理医学图像，该黑白条纹可以为格式转换所生成的，也可以为扫描仪扫描玻片时所产生的不清楚的区域，该部分区域会出现黑白条纹，通过本申请实施例所提供的医学图像处理方法，可以得到如图10中(F)所示出的结果图像。可以看到，在对图像进行二值化处理之前，先利用不同通道的色彩信息生成差值图像，由于灰度像素点在不同通道下的色彩信息差异较小，而彩色像素点在不同通道下的色彩信息差异较大，可以有效地利用图10中所出现的各种待处理医学图像中的色彩信息，基于差值图像提取到的病理组织区域更为准确，且对后续的图像分析产生积极影响。
结合上述介绍,下面将对本申请中图像处理的方法进行介绍,请参阅图11,图11为本申请实施例中图像处理的方法一个实施例示意图,如图所示,本申请实施例中对图像处理的方法一个实施例包括:
201、获取第一待处理图像以及第二待处理图像，其中，第一待处理图像为彩色图像，且第一待处理图像包括第一图像数据、第二图像数据以及第三图像数据，且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息；
本实施例中，图像处理装置可以获取到第一待处理图像以及第二待处理图像，该第一待处理图像可以包括第一图像数据、第二图像数据以及第三图像数据，并且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息。其中第一待处理图像以及第二待处理图像可以为图像处理装置通过有线网络接收到的图像，还可以为图像处理装置本身存储的图像。具体地，第一待处理图像与前述步骤101中所描述的待处理医学图像类似，在此不再赘述。
应理解,在实际应用中,第一图像数据、第二图像数据以及第三图像数据具体对应的色彩信息均应当结合实际情况灵活确定。并且图像处理装置可以部署于服务器,也可以部署于具有较高计算力的终端设备,本实施例以图像处理装置部署于服务器为例进行介绍。
具体地,假设第一待处理图像为阴天拍摄的一张照片,该照片的背景是阴天,还包括一辆红色的小汽车。第二待处理图像则是一张蓝天大海的照片。
202、根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
本实施例中,图像处理装置可以根据步骤201所获取的第一待处理图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成差值图像。具体地,该差值图像为灰度图像。本实施例中生成差值图像的方法与前述图2对应实施例类似,在此不再赘述。
具体地,此时生成的差值图像能够看出小汽车的轮廓。
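对应到这个例子，差值图像可以按下面的方式示意性地生成(文件名car_on_cloudy_day.jpg为假设)：灰色阴天背景在三个通道上的取值接近，差值很小；红色小汽车在三个通道上的取值差异大，差值也大，因此轮廓得以凸显：

```python
import cv2

photo = cv2.imread("car_on_cloudy_day.jpg")      # 第一待处理图像(BGR), 文件名为假设
if photo is not None:                            # 读取失败时imread返回None
    diff = photo.max(axis=2) - photo.min(axis=2)  # 逐像素取通道最大值减最小值, 得到差值图像
    diff = cv2.GaussianBlur(diff, (5, 5), 0)      # 可选的高斯模糊, 提升鲁棒性
```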
203、对差值图像进行二值化处理，得到二值化图像，二值化图像的前景区域与第一待处理图像中的目标对象对应；
本实施例中，图像处理装置可以对步骤202所生成的差值图像进行二值化处理，得到二值化图像。具体地，本实施例中采用自适应二值化方式来进行前景处理，即对差值图像进行二值化处理，从而得到二值化图像。本实施例中生成二值化图像的方法与前述图2对应实施例类似，在此不再赘述。
具体地,此时生成的二值化图像能够准确地展示出小汽车的轮廓。
204、根据二值化图像的前景区域,从第一待处理图像中提取目标对象;
本实施例中,图像处理装置可以根据步骤203所生成的前景区域,从第一待处理图像中提取目标对象。若第一待处理图像为医学图像,则目标对象可以为病理组织区域。若第一待处理图像为高分辨率遥感图像,则目标对象可以为植被区域。若第一待处理图像为实时路况监控图像,则目标对象可以为自行车或者汽车。
具体地,此时可以从第一待处理图像中抠除小汽车的图像,即小汽车的图像即为目标对象。
205、根据目标对象以及第二待处理图像，生成合成图像，其中，目标对象位于第一图层，第二待处理图像位于第二图层，第一图层覆盖于第二图层之上。
本实施例中,图像处理装置将把目标对象设置为第一图层,第二待处理图像设置为第二图层,并且将第一图层覆盖于第二图层之上,从而生成合成图像。
具体地，将小汽车的图像覆盖于蓝天大海照片之上，形成一张合成后的图像，在该图像上可以看到小汽车的背景不再是阴天，而是蓝天大海。
本申请实施例中，提供了一种图像处理的方法，通过上述方式，由于灰度像素点在不同通道下的色彩信息差异较小，而彩色像素点在不同通道下的色彩信息差异较大，因此，在对图像进行二值化处理之前，先利用不同通道的色彩信息生成差值图像，从而有效地利用了图像中的色彩信息，基于差值图像提取到的目标对象更为准确，进而将该目标对象所在图层覆盖于第二待处理图像所在图层，所生成的合成图像中目标对象准确，从而提升合成图像的准确度，并可以对后续的图像分析产生积极影响。
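目标对象的提取与图层合成可以参考下面的示意代码：用二值化图像的前景区域作为掩码，从第一待处理图像中抠出目标对象(第一图层)，再覆盖到第二待处理图像(第二图层)之上；函数名composite以及尺寸对齐方式等细节均为假设：

```python
import cv2
import numpy as np

def composite(first_image, second_image, binary_mask):
    # binary_mask: 二值化图像, 前景(255)对应第一待处理图像中的目标对象
    h, w = first_image.shape[:2]
    second = cv2.resize(second_image, (w, h))     # 把第二待处理图像缩放到与第一待处理图像一致
    mask = (binary_mask > 0)[..., None]           # 前景掩码扩展到三个通道
    return np.where(mask, first_image, second)    # 第一图层(目标对象)覆盖于第二图层之上
```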
下面对本申请中的医学图像处理装置进行详细描述,请参阅图12,图12为本申请实施例中医学图像处理装置一个实施例示意图,医学图像处理装置300包括:
获取模块301,用于获取待处理医学图像,其中,待处理医学图像为彩色图像,且待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息;
生成模块302,用于根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
处理模块303，用于对差值图像进行二值化处理，得到二值化图像，二值化图像的前景区域与待处理医学图像的病理组织区域对应。
本申请实施例中,提供了一种医学图像处理的方法,通过上述方式,由于灰度像素点在不同通道下的色彩信息差异较小,而彩色像素点在不同通道下的色彩信息差异较大,因此,在对图像进行二值化处理之前,先利用不同通道的色彩信息生成差值图像,从而有效地利用了图像中的色彩信息,基于差值图像提取到的病理组织区域更为准确,且对后续的图像分析产生积极影响。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
生成模块302,具体用于根据待处理医学图像中所包括的第一图像数据、第二图像数据以及第三图像数据,生成最大值图像和最小值图像;
根据最大值图像以及最小值图像,生成差值图像。
本申请实施例中,提供了一种生成差值图像的方法,通过上述方式,根据第一图像数据、第二图像数据以及第三图像数据生成最大值图像以及最小值图像,由于不同图像数据对应的色彩信息不同,根据不同图像数据所确定的最大值图像以及最小值图像,所包括的待处理医学图像的色彩信息准确度较高,从而提升差值图像生成的准确度。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
生成模块302,具体用于根据第一图像数据中第一像素位置的第一像素值、第二图像数据中第二像素位置的第二像素值以及第三图像数据中第三像素位置的第三像素值,确定最大像素值和最小像素值;
根据最大像素值获得最大值图像,根据最小像素值获得最小值图像,最大值图像中第四像素位置的像素值为最大像素值,最小值图像中第五像素位置的像素值为最小像素值,第一像素位置、第二像素位置、第三像素位置以及第四像素位置、第五像素位置均对应于待处理医学图像中同一个像素点的位置;
生成模块302,具体用于根据最大值图像中第四像素位置的像素值和最小值图像中第五像素位置的像素值确定像素差值;
根据像素差值,获得差值图像,差值图像中第六像素位置的像素值为像素差值,第四像素位置、第五像素位置和第六像素位置均对应于待处理医学图像中同一个像素点的位置。
本申请实施例中,提供了一种生成最大值图像的方法,通过上述方式,通过第一图像数据、第二图像数据以及第三图像数据对应目标像素点的像素值,确定最大像素值以及最小像素值,最大像素值以及最小像素值不同程度的反映待处理医学图像的色彩信息,并由最大像素值以及最小像素值相减得到差值像素值,使得该差值像素值能够准确的反映待处理医学图像的色彩信息,从而提升差值图像生成的准确度。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
生成模块302,具体用于根据第一图像数据、第二图像数据以及第三图像数据,生成待处理差值图像;
对待处理差值图像进行高斯模糊处理,得到差值图像。
本申请实施例中，提供了另一种生成差值图像的方法，通过上述方式，对生成的待处理差值图像进行高斯模糊处理，由于高斯模糊处理可以提升鲁棒性，由此所得到的差值图像有较好的鲁棒性，从而提升差值图像的稳定性。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,医学图像处理装置300还包括确定模块304;
确定模块304,用于根据差值图像确定二值化阈值;
确定模块304,还用于根据所述二值化阈值对所述差值图像进行二值化处理,得到二值化图像。
本申请实施例中,提供了一种得到二值化图像的方法,通过上述方式,根据二值化处理生成二值化图像,由于二值化图像的几何性质不涉及到像素的灰度值,可以使得后续对二值化图像的处理变得简单,从而可以提升生成前景区域的效率。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
确定模块304,具体用于根据差值图像获取N个像素点所对应的N个像素值,其中,像素值与像素点具有一一对应的关系,N为大于1的整数;
从N个像素值中确定参考像素值,其中,参考像素值为N个像素值中的最大值;
根据参考像素值以及预设比例,确定二值化阈值。
本申请实施例中,提供了另一种得到二值化阈值的方法,通过上述方式,可以通过由最大像素值确定的参考像素值以及预设比例的二值化阈值,由于差值图像灰度深度是存在差异的,并且不同区域其明暗分布也可以是不同的,因此,可以通过调整预设比例灵活确定二值化阈值,提升阈值准确度以及灵活性,从而提升二值化图像生成的准确度。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
生成模块302,具体用于采用泛洪算法检测二值化图像中的背景区域,其中,背景区域包括多个背景像素点;
根据二值化图像以及二值化图像中的背景区域,获取二值化图像中的前景区域内的背景像素点,其中,前景区域包括多个前景像素点;
将二值化图像中的前景区域内的背景像素点变更为前景像素点,得到空洞填补图像;
对空洞填补图像进行中值滤波处理,得到结果图像,所述结果图像的前景区域与所述待处理医学图像的病理组织区域对应。
本申请实施例中,提供了一种生成结果图像的方法,通过上述方式,将前景区域内的背景像素点变更为前景像素点,所得到空洞填补图像具有较好的可靠性,其次,通过中值滤波处理,能够在不损坏图像的轮廓及边缘等特征信息的基础上,使得待处理医学图像所对应的结果图像清晰并且视觉效果好。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
处理模块303，具体用于对空洞填补图像进行中值滤波处理，得到滤波图像；
获取滤波图像中前景区域的边界线,其中,边界线包括M个像素点,M为大于1的整数;
针对边界线上M个像素点中的每个像素点,向外延伸K个像素点,得到结果图像,其中,K为大于或等于1的整数。
本申请实施例中,提供了另一种生成结果图像的方法,通过上述方式,通过中值滤波处理,能够在不损坏图像的轮廓及边缘等特征信息的基础上,使得滤波图像清晰并且视觉效果好。其次,通过泛洪算法对滤波图像进行形态学处理,提升结果图像的准确度以及一体性。
可选地,在上述图12所对应的实施例的基础上,本申请实施例提供的医学图像处理装置300的另一实施例中,
获取模块301,具体用于获取原始医学图像;
采用滑动窗口从原始医学图像中提取医学子图像;
若检测到医学子图像中包括病理组织区域,则确定为待处理医学图像;
若检测到医学子图像中未包括病理组织区域,则将医学子图像确定为背景图像,且去除背景图像。
本申请实施例中，提供了一种获取待处理医学图像的方法，通过上述方式，通过检测医学子图像是否包括有病理组织区域，确定待处理医学图像，使得包括有病理组织区域的待处理医学图像通过前述步骤，能够获取待处理医学图像所对应的结果图像，并且结果图像包括有病理组织区域，便于后续对该结果图像中病理组织区域的处理以及分析。其次，将未包括病理组织区域的医学子图像确定为背景图像，且去除该背景图像，减少资源占用率。
可选地，在上述图12所对应的实施例的基础上，本申请实施例提供的医学图像处理装置300的另一实施例中，医学图像处理装置300还包括训练模块305；
生成模块302，还用于根据待处理医学图像以及待处理医学图像的前景区域生成目标正样本图像，其中，目标正样本图像属于正样本集合中的一个正样本图像，且每个正样本图像包含病理组织区域；
获取模块301,还用于获取负样本集合,其中,负样本集合包括至少一个负样本图像,且每个负样本图像不包含病理组织区域;
训练模块305,用于基于正样本集合以及负样本集合,对图像处理模型进行训练。
本申请实施例中,提供了一种训练图像处理模型的方法,通过上述方式,通过包含病理组织区域的正样本图像合集,以及不包含病理组织区域的负样本集合对图像处理模型进行训练,提升图像处理模型的准确度以及可靠性,从而提升图像处理的效率以及准确度。
下面对本申请中的图像处理装置进行详细描述,请参阅图13,图13为本申请实施例中图像处理装置一个实施例示意图,图像处理装置400包括:
获取模块401,用于获取第一待处理图像以及第二待处理图像,其中,第一待处理图像为彩色图像,且第一待处理图像包括第一图像数据、第二图像数据以及第三图像数据,且第一图像数据、第二图像数据以及第三图像数据分别对应于不同通道下的色彩信息;
生成模块402,用于根据第一图像数据、第二图像数据以及第三图像数据,生成差值图像;
处理模块403，用于对差值图像进行二值化处理，得到二值化图像，二值化图像的前景区域与第一待处理图像的目标对象对应；
提取模块404,用于根据二值化图像的前景区域,从第一待处理图像中提取目标对象;
生成模块402,还用于根据目标对象以及第二待处理图像,生成合成图像,其中,目标对象位于第一图层,第二待处理图像位于第二图层,第一图层覆盖于第二图层之上。
本申请实施例中，提供了一种图像处理的方法，通过上述方式，由于灰度像素点在不同通道下的色彩信息差异较小，而彩色像素点在不同通道下的色彩信息差异较大，因此，在对图像进行二值化处理之前，先利用不同通道的色彩信息生成差值图像，从而有效地利用了图像中的色彩信息，基于差值图像提取到的目标对象更为准确，进而将该目标对象所在图层覆盖于第二待处理图像所在图层，所生成的合成图像中目标对象准确，从而提升合成图像的准确度，并可以对后续的图像分析产生积极影响。
图14是本申请实施例提供的一种服务器结构示意图，该服务器500可因配置或性能不同而产生比较大的差异，可以包括一个或一个以上中央处理器(central processing units,CPU)522(例如，一个或一个以上处理器)和存储器532，一个或一个以上存储应用程序542或数据544的存储介质530(例如一个或一个以上海量存储设备)。其中，存储器532和存储介质530可以是短暂存储或持久存储。存储在存储介质530的程序可以包括一个或一个以上模块(图示没标出)，每个模块可以包括对服务器中的一系列指令操作。更进一步地，中央处理器522可以设置为与存储介质530通信，在服务器500上执行存储介质530中的一系列指令操作。
服务器500还可以包括一个或一个以上电源525，一个或一个以上有线或无线网络接口550，一个或一个以上输入输出接口558，和/或，一个或一个以上操作系统541，例如Windows Server TM,Mac OS X TM,Unix TM,Linux TM,FreeBSD TM等等。
上述实施例中由服务器所执行的步骤可以基于该图14所示的服务器结构。
本实施例中，CPU 522用于执行图2对应的实施例中医学图像处理装置执行的步骤，CPU 522还用于执行图11对应的实施例中图像处理装置执行的步骤。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的***,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的***,装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个***,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (15)

  1. 一种医学图像处理的方法，由服务器执行，包括：
    获取待处理医学图像,所述待处理医学图像为彩色图像,且所述待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,且所述第一图像数据、所述第二图像数据以及所述第三图像数据分别对应于不同通道下的色彩信息;
    根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成差值图像;
    对所述差值图像进行二值化处理,得到二值化图像,所述二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。
  2. 根据权利要求1所述的方法,所述根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成差值图像,包括:
    根据所述待处理医学图像中所包括的所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成最大值图像和最小值图像;
    根据所述最大值图像以及所述最小值图像,生成所述差值图像。
  3. 根据权利要求2所述的方法,所述根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成最大值图像和最小值图像,包括:
    根据所述第一图像数据中第一像素位置的第一像素值、所述第二图像数据中第二像素位置的第二像素值以及所述第三图像数据中第三像素位置的第三像素值,确定最大像素值和最小像素值;
    根据所述最大像素值获得最大值图像,根据所述最小像素值获得最小值图像,所述最大值图像中第四像素位置的像素值为所述最大像素值,所述最小值图像中第五像素位置的像素值为所述最小像素值,所述第一像素位置、所述第二像素位置、所述第三像素位置以及所述第四像素位置、所述第五像素位置均对应于所述待处理医学图像中同一个像素点的位置;
    所述根据所述最大值图像以及所述最小值图像,生成差值图像,包括:
    根据所述最大值图像中所述第四像素位置的像素值和所述最小值图像中所述第五像素位置的像素值确定像素差值;
    根据所述像素差值,获得差值图像,所述差值图像中第六像素位置的像素值为所述像素差值,所述第四像素位置、所述第五像素位置和所述第六像素位置均对应于所述待处理医学图像中同一个像素点的位置。
  4. 根据权利要求1所述的方法,所述根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成差值图像,包括:
    根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成待处理差值图像;
    对所述待处理差值图像进行高斯模糊处理,得到差值图像。
  5. 根据权利要求1所述的方法，其特征在于，所述对所述差值图像进行二值化处理，得到二值化图像，包括：
    根据所述差值图像确定二值化阈值;
    根据所述二值化阈值对所述差值图像进行二值化处理,得到二值化图像。
  6. 根据权利要求5所述的方法,所述根据所述差值图像确定二值化阈值,包括:
    根据所述差值图像获取N个像素点所对应的N个像素值,其中,所述像素值与所述像素点具有一一对应的关系,所述N为大于1的整数;
    从所述N个像素值中确定参考像素值,其中,所述参考像素值为所述N个像素值中的最大值;
    根据所述参考像素值以及预设比例,确定二值化阈值。
  7. 根据权利要求1所述的方法,所述方法还包括:
    采用泛洪算法检测所述二值化图像中的背景区域,其中,所述背景区域包括多个背景像素点;
    根据所述二值化图像以及所述二值化图像中的背景区域,获取所述二值化图像中所述前景区域内的背景像素点,其中,所述前景区域包括多个前景像素点;
    将所述二值化图像中的前景区域内的背景像素点变更为前景像素点,得到空洞填补图像;
    对所述空洞填补图像进行中值滤波处理,得到结果图像,所述结果图像的前景区域与所述待处理医学图像的病理组织区域对应。
  8. 根据权利要求7所述的方法,所述对所述空洞填补图像进行中值滤波处理,得到结果图像包括:
    对所述空洞填补图像进行中值滤波处理,得到滤波图像;
    获取所述滤波图像中前景区域的边界线,其中,所述边界线包括M个像素点,所述M为大于1的整数;
    针对所述边界线上所述M个像素点中的每个像素点,向外延伸K个像素点,得到结果图像,其中,所述K为大于或等于1的整数。
  9. 根据权利要求1所述的方法,所述获取待处理医学图像,包括:
    获取原始医学图像;
    采用滑动窗口从所述原始医学图像中提取医学子图像;
    若检测到所述医学子图像中包括病理组织区域,则确定为所述待处理医学图像;
    若检测到所述医学子图像中未包括病理组织区域,则将所述医学子图像确定为背景图像,且去除所述背景图像。
  10. 根据权利要求1至9中任一项所述的方法,所述方法还包括:
    根据所述待处理医学图像以及所述待处理医学图像的所述前景区域生成目标正样本图像，其中，所述目标正样本图像属于正样本集合中的一个正样本图像，且每个正样本图像包含病理组织区域；
    获取负样本集合,其中,所述负样本集合包括至少一个负样本图像,且每个负样本图像不包含病理组织区域;
    基于所述正样本集合以及所述负样本集合,对图像处理模型进行训练。
  11. 一种图像处理的方法,由服务器执行,包括:
    获取第一待处理图像以及第二待处理图像,其中,所述第一待处理图像为彩色图像,且所述第一待处理图像包括第一图像数据、第二图像数据以及第三图像数据,且所述第一图像数据、所述第二图像数据以及所述第三图像数据分别对应于不同通道下的色彩信息;
    根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成差值图像;
    对所述差值图像进行二值化处理，得到二值化图像，所述二值化图像的前景区域与所述第一待处理图像的目标对象对应；
    根据所述二值化图像的前景区域,从所述第一待处理图像中提取目标对象;
    根据所述目标对象以及所述第二待处理图像,生成合成图像,其中,所述目标对象位于第一图层,所述第二待处理图像位于第二图层,所述第一图层覆盖于所述第二图层之上。
  12. 一种医学图像处理装置,包括:
    获取模块,用于获取待处理医学图像,所述待处理医学图像为彩色图像,且所述待处理医学图像包括第一图像数据、第二图像数据以及第三图像数据,且所述第一图像数据、所述第二图像数据以及所述第三图像数据分别对应于不同通道下的色彩信息;
    生成模块,用于根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成差值图像;
    处理模块,用于对所述差值图像进行二值化处理,得到二值化图像,所述二值化图像的前景区域与所述待处理医学图像的病理组织区域对应。
  13. 一种图像处理装置,包括:
    获取模块,用于获取第一待处理图像以及第二待处理图像,所述第一待处理图像为彩色图像,且所述第一待处理图像包括第一图像数据、第二图像数据以及第三图像数据,且所述第一图像数据、所述第二图像数据以及所述第三图像数据分别对应于不同通道下的色彩信息;
    生成模块,用于根据所述第一图像数据、所述第二图像数据以及所述第三图像数据,生成差值图像;
    处理模块，用于对所述差值图像进行二值化处理，得到二值化图像，所述二值化图像的前景区域与所述第一待处理图像的目标对象对应；
    提取模块，用于根据所述二值化图像的前景区域，从所述第一待处理图像中提取所述目标对象；
    所述生成模块,还用于根据所述目标对象以及所述第二待处理图像,生成合成图像,其中,所述目标对象位于第一图层,所述第二待处理图像位于第二图层,所述第一图层覆盖于所述第二图层之上。
  14. 一种计算机设备,包括:存储器、收发器、处理器以及总线***;
    其中,所述存储器用于存储程序;
    所述处理器用于执行所述存储器中的程序,以实现如上述权利要求1至10中任一项所述的方法,或,实现如上述权利要求11所述的方法;
    所述总线***用于连接所述存储器以及所述处理器,以使所述存储器以及所述处理器进行通信。
  15. 一种计算机可读存储介质,包括指令,当其在计算机上运行时,使得计算机执行如权利要求1至10中任一项所述的方法,或,执行如权利要求11所述的方法。
PCT/CN2020/126063 2020-02-10 2020-11-03 一种医学图像处理的方法、图像处理的方法及装置 WO2021159767A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR1020227009792A KR20220050977A (ko) 2020-02-10 2020-11-03 의료 이미지 처리 방법, 이미지 처리 방법 및 장치
JP2022524010A JP2022553979A (ja) 2020-02-10 2020-11-03 医用画像処理方法、画像処理方法、医用画像処理装置、画像処理装置、コンピュータ装置およびプログラム
EP20919308.5A EP4002268A4 (en) 2020-02-10 2020-11-03 METHOD FOR PROCESSING MEDICAL IMAGES, IMAGE PROCESSING METHOD AND APPARATUS
US17/685,847 US20220189017A1 (en) 2020-02-10 2022-03-03 Medical image processing method and apparatus, image processing method and apparatus, terminal and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010084678.8 2020-02-10
CN202010084678.8A CN111275696B (zh) 2020-02-10 2020-02-10 一种医学图像处理的方法、图像处理的方法及装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/685,847 Continuation US20220189017A1 (en) 2020-02-10 2022-03-03 Medical image processing method and apparatus, image processing method and apparatus, terminal and storage medium

Publications (1)

Publication Number Publication Date
WO2021159767A1 true WO2021159767A1 (zh) 2021-08-19

Family

ID=71000325

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/126063 WO2021159767A1 (zh) 2020-02-10 2020-11-03 一种医学图像处理的方法、图像处理的方法及装置

Country Status (6)

Country Link
US (1) US20220189017A1 (zh)
EP (1) EP4002268A4 (zh)
JP (1) JP2022553979A (zh)
KR (1) KR20220050977A (zh)
CN (1) CN111275696B (zh)
WO (1) WO2021159767A1 (zh)

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN111275696B (zh) * 2020-02-10 2023-09-15 腾讯医疗健康(深圳)有限公司 一种医学图像处理的方法、图像处理的方法及装置
CN112070708B (zh) 2020-08-21 2024-03-08 杭州睿琪软件有限公司 图像处理方法、图像处理装置、电子设备、存储介质
CN112149509B (zh) * 2020-08-25 2023-05-09 浙江中控信息产业股份有限公司 深度学习与图像处理融合的交通信号灯故障检测方法
CN114979589B (zh) * 2021-02-26 2024-02-06 深圳怡化电脑股份有限公司 图像处理方法、装置、电子设备及介质
CN113160974B (zh) * 2021-04-16 2022-07-19 山西大学 一种基于超图聚类的精神疾病生物型发掘方法
CN113989304A (zh) * 2021-11-10 2022-01-28 心医国际数字医疗***(大连)有限公司 图像处理方法、装置、电子设备及存储介质
CN115205156B (zh) * 2022-07-27 2023-06-30 上海物骐微电子有限公司 无失真的中值滤波边界填充方法及装置、电子设备、存储介质
CN115934990B (zh) * 2022-10-24 2023-05-12 北京数慧时空信息技术有限公司 基于内容理解的遥感影像推荐方法
CN115830459B (zh) * 2023-02-14 2023-05-12 山东省国土空间生态修复中心(山东省地质灾害防治技术指导中心、山东省土地储备中心) 基于神经网络的山地林草生命共同体损毁程度检测方法
CN117252893B (zh) * 2023-11-17 2024-02-23 科普云医疗软件(深圳)有限公司 一种乳腺癌病理图像的分割处理方法

Family Cites Families (25)

Publication number Priority date Publication date Assignee Title
CN1153564A (zh) * 1994-06-03 1997-07-02 神经医药体系股份有限公司 基于密度纹理的分类***和方法
JPH08279046A (ja) * 1995-04-06 1996-10-22 Mitsubishi Rayon Co Ltd パターン検査装置
US6704456B1 (en) * 1999-09-02 2004-03-09 Xerox Corporation Automatic image segmentation in the presence of severe background bleeding
US20050136509A1 (en) * 2003-09-10 2005-06-23 Bioimagene, Inc. Method and system for quantitatively analyzing biological samples
US20060036372A1 (en) * 2004-03-18 2006-02-16 Bulent Yener Method and apparatus for tissue modeling
WO2010011356A2 (en) * 2008-07-25 2010-01-28 Aureon Laboratories, Inc. Systems and methods of treating, diagnosing and predicting the occurrence of a medical condition
JP5295044B2 (ja) * 2009-08-27 2013-09-18 Kddi株式会社 マスク画像を抽出する方法及びプログラム並びにボクセルデータを構築する方法及びプログラム
CN102360500B (zh) * 2011-07-08 2013-06-12 西安电子科技大学 基于Treelet曲波域去噪的遥感图像变化检测方法
JP5995215B2 (ja) * 2012-05-14 2016-09-21 学校法人東京理科大学 癌細胞領域抽出装置、方法、及びプログラム
CN102982519B (zh) * 2012-11-23 2015-04-01 南京邮电大学 一种视频图像的前景识别提取和拼接方法
CN103325117B (zh) * 2013-06-17 2016-08-10 中国石油天然气集团公司 一种基于matlab的岩心图像处理方法及***
CN104036490B (zh) * 2014-05-13 2017-03-29 重庆大学 适用于移动通信网络传输中的前景分割方法
CN106469267B (zh) * 2015-08-20 2019-12-17 深圳市腾讯计算机***有限公司 一种验证码样本收集方法及***
CN105740844A (zh) * 2016-03-02 2016-07-06 成都翼比特自动化设备有限公司 基于图像识别技术的绝缘子炸裂故障检测方法
CN106295645B (zh) * 2016-08-17 2019-11-29 东方网力科技股份有限公司 一种车牌字符识别方法和装置
WO2018180386A1 (ja) * 2017-03-30 2018-10-04 国立研究開発法人産業技術総合研究所 超音波画像診断支援方法、およびシステム
CN107563373B (zh) * 2017-07-28 2021-06-04 一飞智控(天津)科技有限公司 基于立体视觉的无人机降落区域主动安全检测方法及应用
CN107609468B (zh) * 2017-07-28 2021-11-16 一飞智控(天津)科技有限公司 用于无人机降落区域主动安全检测的类别优化聚合分析方法及应用
CN107644429B (zh) * 2017-09-30 2020-05-19 华中科技大学 一种基于强目标约束视频显著性的视频分割方法
JP2018152095A (ja) * 2018-04-19 2018-09-27 株式会社ニコン 画像処理装置、撮像装置及び画像処理プログラム
CN108924525B (zh) * 2018-06-06 2021-07-06 平安科技(深圳)有限公司 图像亮度调整方法、装置、计算机设备及存储介质
CN109708813B (zh) * 2018-06-28 2021-07-27 浙江森拉特暖通设备有限公司 供暖片漏水状态实时检测平台
CN109784344B (zh) * 2019-01-24 2020-09-29 中南大学 一种用于地平面标识识别的图像非目标滤除方法
CN110675420B (zh) * 2019-08-22 2023-03-24 华为技术有限公司 一种图像处理方法和电子设备
CN110472616B (zh) * 2019-08-22 2022-03-08 腾讯科技(深圳)有限公司 图像识别方法、装置、计算机设备及存储介质

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN109461143A (zh) * 2018-10-12 2019-03-12 上海联影医疗科技有限公司 图像显示方法、装置、计算机设备和存储介质
CN110575178A (zh) * 2019-09-10 2019-12-17 贾英 一种运动状态判断的诊断监控综合医疗***及其判断方法
CN110705425A (zh) * 2019-09-25 2020-01-17 广州西思数字科技有限公司 一种基于图卷积网络的舌象多标签分类学习方法
CN111275696A (zh) * 2020-02-10 2020-06-12 腾讯科技(深圳)有限公司 一种医学图像处理的方法、图像处理的方法及装置

Non-Patent Citations (1)

Title
See also references of EP4002268A4

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN118135340A (zh) * 2024-05-06 2024-06-04 天津市肿瘤医院(天津医科大学肿瘤医院) 基于肺区域分割的肺影像病灶预标记方法、***及介质

Also Published As

Publication number Publication date
JP2022553979A (ja) 2022-12-27
EP4002268A1 (en) 2022-05-25
US20220189017A1 (en) 2022-06-16
CN111275696B (zh) 2023-09-15
EP4002268A4 (en) 2022-12-21
CN111275696A (zh) 2020-06-12
KR20220050977A (ko) 2022-04-25

Similar Documents

Publication Publication Date Title
WO2021159767A1 (zh) 一种医学图像处理的方法、图像处理的方法及装置
CN110570353B (zh) 密集连接生成对抗网络单幅图像超分辨率重建方法
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN108446617B (zh) 抗侧脸干扰的人脸快速检测方法
US20230214976A1 (en) Image fusion method and apparatus and training method and apparatus for image fusion model
CN104050471B (zh) 一种自然场景文字检测方法及***
WO2018145470A1 (zh) 一种图像检测方法和装置
TW202014984A (zh) 一種圖像處理方法、電子設備及存儲介質
WO2017084204A1 (zh) 一种二维视频流中的人体骨骼点追踪方法及***
CN109685045B (zh) 一种运动目标视频跟踪方法及***
CN109918971B (zh) 监控视频中人数检测方法及装置
CN111160194B (zh) 一种基于多特征融合的静态手势图像识别方法
CN112561813B (zh) 人脸图像增强方法、装置、电子设备及存储介质
CN113762009B (zh) 一种基于多尺度特征融合及双注意力机制的人群计数方法
Cai et al. Perception preserving decolorization
CN113781421A (zh) 基于水下的目标识别方法、装置及***
Zhang et al. Salient target detection based on the combination of super-pixel and statistical saliency feature analysis for remote sensing images
CN115713469A (zh) 基于通道注意力和形变生成对抗网络的水下图像增强方法
CN108711160A (zh) 一种基于hsi增强性模型的目标分割方法
CN113129214A (zh) 一种基于生成对抗网络的超分辨率重建方法
CN112070041B (zh) 一种基于cnn深度学习模型的活体人脸检测方法和装置
Kour et al. A review on image processing
Fang et al. Detail maintained low-light video image enhancement algorithm
Honnutagi et al. Underwater video enhancement using manta ray foraging lion optimization-based fusion convolutional neural network
Xu et al. DANet-SMIW: An Improved Model for Island Waterline Segmentation Based on DANet

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20919308
    Country of ref document: EP
    Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2020919308
    Country of ref document: EP
    Effective date: 20220216
ENP Entry into the national phase
    Ref document number: 20227009792
    Country of ref document: KR
    Kind code of ref document: A
ENP Entry into the national phase
    Ref document number: 2022524010
    Country of ref document: JP
    Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE