WO2021159778A1 - Image processing method, apparatus, smart microscope, readable storage medium and device - Google Patents

Image processing method, apparatus, smart microscope, readable storage medium and device

Info

Publication number
WO2021159778A1
WO2021159778A1 PCT/CN2020/127037 CN2020127037W WO2021159778A1 WO 2021159778 A1 WO2021159778 A1 WO 2021159778A1 CN 2020127037 W CN2020127037 W CN 2020127037W WO 2021159778 A1 WO2021159778 A1 WO 2021159778A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
classified
size
pixel
physical size
Prior art date
Application number
PCT/CN2020/127037
Other languages
English (en)
French (fr)
Inventor
王亮
孙嘉睿
朱艳春
姚建华
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd. (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Publication of WO2021159778A1

Classifications

    • G06F18/24: Pattern recognition; Analysing; Classification techniques
    • G06F18/25: Pattern recognition; Analysing; Fusion techniques
    • G06T3/4038: Geometric image transformations in the plane of the image; Scaling of whole images or parts thereof; Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T7/11: Image analysis; Segmentation; Region-based segmentation
    • G06T2207/10056: Image acquisition modality; Microscopic image
    • G06T2207/10061: Microscopic image from scanning electron microscope
    • G06T2207/20081: Special algorithmic details; Training; Learning
    • G06T2207/20084: Special algorithmic details; Artificial neural networks [ANN]
    • G06T2207/30021: Biomedical image processing; Catheter; Guide wire
    • G06T2207/30068: Biomedical image processing; Mammography; Breast

Definitions

  • This application relates to the field of image processing technology, and in particular to an image processing method, device, smart microscope, computer-readable storage medium, and computer equipment.
  • Image classification technology can be applied to identify and classify objects to be classified, such as breast ducts, in digital slices.
  • A digital slice is a high-resolution digital image obtained by scanning a traditional glass slide with a smart microscope.
  • The digital slice can also undergo high-precision, multi-field stitching on a computer device.
  • Related image classification technology currently classifies the objects to be classified in a digital slice at a single specific scale; however, because digital slices generally contain objects to be classified of various sizes, the accuracy of such technology in classifying these objects is low.
  • Provided are an image processing method, apparatus, smart microscope, computer-readable storage medium, and computer device for processing digital slices.
  • An embodiment of the present invention provides an image processing method, executed by a computer device, which includes: obtaining a digital slice containing objects to be classified of at least two sizes; and obtaining, according to the digital slice, images of the objects to be classified in at least two scales, wherein, among the images of the objects to be classified in the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image.
  • Among the images of the object to be classified in the at least two scales, the image whose image size matches the size of the object to be classified is used as the to-be-classified image of the object; the objects to be classified in the to-be-classified image are classified based on the classification model corresponding to the image resolution of the to-be-classified image, so as to obtain classification results of the objects to be classified of each size; and the classification results of the objects to be classified of each size are fused to obtain the classification results of the objects to be classified of at least two sizes in the digital slice.
  • An embodiment of the present invention provides an image processing device, which includes: a slice acquisition module, configured to acquire a digital slice containing objects to be classified of at least two sizes; an image acquisition module, configured to obtain, according to the digital slice, images of the objects to be classified in at least two scales, wherein, among the images of the objects to be classified in the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image;
  • an image matching module, configured to use, among the images of the object to be classified in the at least two scales, the image whose image size matches the size of the object to be classified as the to-be-classified image of the object;
  • a result acquisition module, configured to classify the objects to be classified in the to-be-classified image based on the classification model corresponding to the image resolution of the to-be-classified image, so as to obtain classification results of the objects to be classified of each size;
  • a result fusion module, configured to fuse the classification results of the objects to be classified of each size to obtain the classification results of the objects to be classified of at least two sizes in the digital slice.
  • An embodiment of the present invention provides a smart microscope, including an image scanning device and an image analysis device, wherein the image scanning device is configured to scan the object to be classified to obtain a digital slice of the object to be classified and transmit it to the image analysis device, and the image analysis device is configured to perform the steps of the image processing method described above.
  • An embodiment of the present invention provides one or more non-volatile storage media storing computer-readable instructions. When the computer-readable instructions are executed by one or more processors, the processors perform the following steps: obtaining a digital slice containing objects to be classified of at least two sizes; obtaining, according to the digital slice, images of the objects to be classified in at least two scales, wherein, among the images in the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; among the images of the objects to be classified in the at least two scales, using the image whose image size matches the size of an object to be classified as the to-be-classified image of that object; classifying the objects to be classified in each to-be-classified image based on the classification model corresponding to the image resolution of that image, so as to obtain classification results of the objects to be classified of each size; and fusing the classification results of the objects to be classified of each size to obtain the classification results of the objects to be classified of at least two sizes in the digital slice.
  • An embodiment of the present invention provides a computer device, including a memory and a processor. The memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the processor performs the steps of the image processing method described above: acquiring a digital slice containing objects to be classified of at least two sizes; obtaining, according to the digital slice, images of the objects in at least two scales, wherein a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; using the image whose image size matches the size of an object to be classified as that object's to-be-classified image; classifying the objects in each to-be-classified image based on the classification model corresponding to its image resolution, so as to obtain classification results of the objects of each size; and fusing those classification results to obtain the classification results of the objects to be classified of at least two sizes in the digital slice.
  • Fig. 1 is an application environment diagram of an image processing method in an embodiment
  • FIG. 2 is a schematic flowchart of an image processing method in an embodiment
  • Figure 3(a) is a schematic diagram of an image at one scale in an embodiment
  • Figure 3(b) is a schematic diagram of an image at another scale in an embodiment
  • FIG. 4 is a schematic flowchart of the step of obtaining classification results of objects to be classified of various sizes in an embodiment
  • FIG. 5 is a schematic flowchart of a step of obtaining images of an object to be classified in at least two scales in an embodiment
  • Fig. 6 is a schematic structural diagram of a segmentation model in an embodiment
  • FIG. 7 is a schematic diagram of a segmentation result in an embodiment
  • FIG. 8 is a schematic diagram of an image range occupied by an object to be classified in an embodiment
  • FIG. 9 is a schematic flowchart of an image processing method in another embodiment.
  • FIG. 10 is a schematic flowchart of an image processing method in an application example
  • Figure 11 is a schematic diagram of a digital slice in an application example
  • Fig. 12 is a schematic diagram of a multi-category segmentation result in an application example
  • Figure 13 is a schematic diagram of the segmentation result of a large duct in an application example
  • Figure 14 is a schematic diagram of a small duct classification process in an application example
  • Figure 15 is a structural block diagram of an image processing device in an embodiment
  • Figure 16 is a block diagram of a smart microscope in an embodiment.
  • Fig. 17 is a structural block diagram of a computer device in an embodiment.
  • AI (Artificial Intelligence)
  • Artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine capable of reacting in a way similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision-making.
  • Artificial intelligence technology is a comprehensive discipline, covering a wide range of fields, including both hardware-level technology and software-level technology.
  • Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation and interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning and deep learning.
  • Digital slices, also known as virtual slides, contain the lesion information on glass slides.
  • the digital slice can be zoomed on computer equipment such as a personal computer.
  • On the computer equipment, any position on the glass slide can be observed through the digital slice, and the corresponding position can be further magnified to, for example, 5x, 10x, 20x, or 40x; observation at these magnifications is equivalent to magnifying and reducing the glass slide on a traditional microscope.
  • FIG. 1 is an application environment diagram of the image processing method in an embodiment.
  • The image processing device 100 may be a computer device with image processing capabilities such as image acquisition, analysis, and display.
  • The computer device may specifically be at least one of a mobile phone, a tablet computer, a desktop computer, and a notebook computer; in addition, the image processing device 100 may also be a smart microscope, where the smart microscope incorporates artificial intelligence vision, voice, and natural language processing technologies as well as augmented reality (AR) technology.
  • The user can input control commands, such as voice commands, to the smart microscope, and the smart microscope can, based on the commands, perform operations such as automatic identification, detection, quantitative calculation, and report generation. It can also display the detection results in real time within the field of view of the user's eyepiece, giving timely prompts without disturbing the user's review process, which can improve processing efficiency and accuracy.
  • The image processing device 100 may obtain, by scanning or the like, a digital slice containing objects to be classified of at least two sizes, and obtain images of the objects to be classified in at least two scales according to the digital slice.
  • Among the images in the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image. The image processing device 100 may then use the image whose image size matches the size of an object to be classified as the to-be-classified image of that object, and first obtain the classification models corresponding to the image resolutions of the to-be-classified images.
  • These classification models can be pre-configured in the image processing device 100. The image processing device 100 then classifies the objects to be classified in each to-be-classified image based on the classification model corresponding to that image's resolution, so that classification results of the objects to be classified of each size can be obtained; finally, the image processing device 100 fuses the classification results of the objects of each size to obtain the classification results of the objects to be classified of at least two sizes in the digital slice.
  • This method can be applied to image processing equipment, such as computer equipment and smart microscopes, to accurately classify the objects to be classified of at least two sizes contained in a digital slice and improve the accuracy of classifying the objects to be classified in digital slices.
  • the image processing device 100 can be used to scan the object to be classified locally, and perform classification processing on the objects to be classified in various sizes in the scanned digital slice.
  • the classification of objects to be classified can also be completed by means of remote communication.
  • For example, the classification processing of non-local objects to be classified can be realized based on fifth-generation mobile networks (5G).
  • For example, a user can obtain the digital slice containing the object to be classified through a terminal device such as a mobile phone, tablet, or computer, and then send the digital slice in real time to the remote image processing device 100 based on the 5G communication network, and the image processing device 100 can then classify the objects to be classified in the digital slice.
  • After the classification processing, the classification result is transmitted back to the user's terminal device through the 5G communication network, so that the user can view the classification result on the terminal device. Thanks to the strong real-time characteristics of 5G communication technology, even when the remote image processing device 100 classifies the objects to be classified in digital slices collected by users on site, the users can still obtain the corresponding classification results on site in real time, and the pressure of image data processing on the user side can be reduced while real-time performance is ensured.
  • FIG. 2 is a schematic flowchart of an image processing method in an embodiment. This embodiment is described mainly by taking the application of the method to the image processing device 100 in FIG. 1 above as an example.
  • The image processing device 100 may specifically be a mobile phone, tablet computer, desktop computer, notebook computer, or other computer equipment with image processing capabilities such as image acquisition, analysis, and display.
  • The image processing method specifically includes the following steps:
  • Step S201: Acquire a digital slice containing objects to be classified of at least two sizes.
  • the image processing device 100 may scan the object to be classified by the image scanning device to obtain a digital slice of the object to be classified.
  • the image scanning device can be used as a part of the image processing device 100, and can also be used as an external device of the image processing device 100.
  • For example, the image scanning device may be controlled by the image processing device 100 to scan a carrier, such as a glass slide, bearing the object to be classified, so as to obtain a digital slice of the object to be classified and transmit it to the image processing device 100 for analysis and processing.
  • The digital slice may be a WSI (Whole Slide Image, a full-field digital pathology slice) image, which can be arbitrarily zoomed in and out on the image processing device 100.
  • the digital slice obtained by the image processing device 100 generally contains at least two sizes of objects to be classified.
  • In some embodiments, after obtaining a digital slice containing objects to be classified of at least two sizes, the image processing device 100 may mark the objects to be classified of different sizes to distinguish them. Taking breast ducts as the objects to be classified as an example, breast ducts can be divided into two sizes according to a size threshold: breast ducts larger than the size threshold are regarded as large ducts, and those smaller than the threshold as small ducts.
  • For example, the image processing device 100 may mark a breast duct that occupies 2048 pixels or more under a 20x lens as a large duct, and otherwise mark it as a small duct.
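  • The size-threshold rule above can be sketched as follows. This is an illustrative sketch, not the patent's code; the 2048-pixel threshold follows the example in the text, and the function name is hypothetical.

```python
# Threshold from the example: a duct spanning >= 2048 pixels under a 20x
# lens is marked "large"; anything smaller is marked "small".
LARGE_DUCT_PIXEL_THRESHOLD_20X = 2048

def mark_duct_size(extent_in_pixels_at_20x: int) -> str:
    """Return a size label for a breast duct given its pixel extent at 20x."""
    if extent_in_pixels_at_20x >= LARGE_DUCT_PIXEL_THRESHOLD_20X:
        return "large"
    return "small"

print(mark_duct_size(3000))  # a duct spanning 3000 pixels is marked large
print(mark_duct_size(500))   # a duct spanning 500 pixels is marked small
```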
  • Step S202: Obtain images of the object to be classified in at least two scales according to the digital slice.
  • The image processing device 100 may obtain images of the object to be classified in at least two scales from the digital slice, wherein, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image. The actual physical size corresponding to each pixel in the images of different scales is the same, so that the object to be classified corresponds to the same physical size in the image at each scale.
  • the image processing device 100 may perform scaling processing on the digital slice to obtain images of the object to be classified at different scales.
  • The scale can correspond to the magnification of the microscope: the larger the magnification, the larger the scale, and vice versa.
  • a 20x lens has a larger scale than a 5x lens.
  • an image with a larger scale has a smaller image size and a higher image resolution.
  • For example, the image of a breast duct under a 5x lens obtained by the image processing device 100 is shown as the first example image 310 in Fig. 3(a), and the images of the breast duct under a 20x lens are shown as the third example images 331 to 334 in Fig. 3(b), where any one of the third example images 331 to 334 can be used as an image of the breast duct under a 20x lens.
  • The first example arrow 3110 and the third example arrow 3310 show the position of the breast duct in the first example image 310 and the third example images 331 to 334, respectively.
  • The third example images 331 to 334 can be stitched into an image with the same image size as the first example image 310, and the resolution of the third example images 331 to 334 is higher than that of the first example image 310; that is, the image resolution of the image corresponding to the larger field of view is lower than that of the image corresponding to the smaller field of view.
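  • The scale-versus-resolution relationship can be illustrated with a small sketch. The numbers are assumptions chosen to match the 5x and 20x example sizes used later in the text (1024 x 1024 pixels at 5x, 2048 x 2048 pixels at 20x, with a 2 x 2 mosaic of 20x tiles covering the 5x field of view); they are not specified together this way in the patent.

```python
# Under these assumed tile sizes, stitching a 2x2 mosaic of 20x tiles covers
# the same tissue region as one 5x image, but with 4x more pixels per axis,
# i.e. the larger-scale (20x) images have the higher resolution.
def pixels_per_region_side(tile_pixels: int, tiles_per_side: int) -> int:
    """Total pixels along one side of the stitched region."""
    return tile_pixels * tiles_per_side

side_5x = pixels_per_region_side(1024, 1)   # one 5x image covers the region
side_20x = pixels_per_region_side(2048, 2)  # 2x2 mosaic of 20x images

print(side_5x, side_20x)    # pixels over the same tissue at each scale
print(side_20x // side_5x)  # the 20x mosaic resolves 4x more pixels per axis
```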
  • The image processing device 100 can also label the objects to be classified in the images at each scale.
  • For example, binarized or multi-valued labeling can be used for the objects to be classified in the images at each scale. As shown in FIG. 3(a), the second example image 320 is the binarized annotated image corresponding to the first example image 310, and the second example arrow 3210 shows the binarization result of the breast duct under a 5x lens; as shown in FIG. 3(b), the fourth example images 341 to 344 are the binarized annotated images corresponding to the third example images 331 to 334, and the fourth example arrow 3410 shows the binarization result of the breast duct under a 20x lens.
  • The image processing device 100 can thus segment the object to be classified from the background in the image at the corresponding scale by means of binarized labeling, so as to subsequently classify the segmented object.
  • Multi-valued labeling can also be used to segment the object to be classified from the background.
  • The multi-valued labeling method may specifically label the objects to be classified with different colors according to their different sizes, so that the size range of an object to be classified can be determined from its color.
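  • The binarized labeling described above can be sketched as a simple thresholding operation. This is a pure-Python stand-in for a mask operation on one grayscale scanline; the pixel values and the threshold of 128 are hypothetical, not from the patent.

```python
# Binarized labeling: 1 where a pixel belongs to the object (here, darker
# than the threshold), 0 for background. Multi-valued labeling would instead
# assign a distinct label value per size class of object.
def binarize(gray_row, threshold=128):
    """Return a 0/1 mask for one row of grayscale pixel values."""
    return [1 if v < threshold else 0 for v in gray_row]

row = [250, 240, 90, 60, 80, 245]  # hypothetical grayscale scanline
print(binarize(row))               # object pixels appear in the middle
```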
  • Step S203: Among the images of the object to be classified in at least two scales, use the image whose image size matches the size of the object to be classified as the to-be-classified image of the object.
  • The image processing device 100 selects the image matching the size of the object to be classified as the image to be classified according to the size of the object, so that subsequent steps can classify the object based on this image.
  • For example, the objects to be classified include large ducts and small ducts.
  • The definition of large and small ducts can refer to the above; that is, the image processing device 100 can mark breast ducts occupying 2048 pixels or more under the 20x lens as large ducts, and otherwise mark them as small ducts.
  • The image processing device 100 takes an image of 1024 × 1024 pixels under a 5x lens as the image matching the size of a large duct in the breast ducts, and an image of 2048 × 2048 pixels under a 20x lens as the image matching the size of a small duct in the breast ducts.
  • It should be noted that the image size of the 1024 × 1024 pixel image under the 5x lens is also larger than the size of the small duct, so that image could also be used to classify the small duct. However, the 2048 × 2048 pixel image under the 20x lens also satisfies the size-matching condition and has a higher resolution than the 1024 × 1024 pixel image under the 5x lens, so the relevant features of the small duct can be obtained more clearly, which is more conducive to its classification. Therefore, the image processing device 100 takes the 2048 × 2048 pixel image under the 20x lens as the image matching the size of the small duct.
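  • The selection rule above, preferring the highest-resolution image whose field of view still contains the object, can be sketched as follows. This is an illustrative sketch under the assumptions of the 5x/20x example (candidate images of 1024 and 2048 pixels per side, object extents expressed in 20x-pixel units); the function and its tie-breaking rule are hypothetical.

```python
# Candidates are (magnification, image_side_px), sorted from highest
# magnification (highest resolution, smallest field of view) downward.
CANDIDATES = [(20, 2048), (5, 1024)]

def pick_image(object_extent_20x_px: int):
    """Return the highest-magnification candidate whose field of view still
    contains the object; fall back to the largest field of view otherwise."""
    for magnification, side in CANDIDATES:
        # Field of view expressed in 20x-pixel units: a 5x image of 1024 px
        # covers as much tissue as 1024 * (20 / 5) = 4096 px at 20x.
        field_in_20x_px = side * (20 // magnification)
        if field_in_20x_px >= object_extent_20x_px:
            return magnification, side
    return CANDIDATES[-1]

print(pick_image(500))   # small duct: the sharper 20x image still fits it
print(pick_image(3000))  # large duct: only the 5x field of view contains it
```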
  • Step S204: Based on the classification model corresponding to the image resolution of the image to be classified, classify the object to be classified in the image to be classified, and obtain classification results of the objects to be classified of each size.
  • The image processing device 100 may be pre-configured with multiple classification models for the objects to be classified, and the multiple classification models may correspond to different image resolutions. Therefore, after obtaining the image to be classified, the image processing device 100 can select the corresponding classification model according to the image resolution of the image to be classified, input the image to be classified into the classification model, and the classification model can classify the objects to be classified in the image and obtain and output the corresponding classification result.
  • the image processing device 100 selects the corresponding classification model based on the image resolution of the image to be classified to classify the object to be classified on it.
  • When the image resolution is relatively high, the local features of the object to be classified are easy to extract, which is more beneficial for accurate classification, so a local feature classification model can be used.
  • When the image resolution is relatively low, the objects to be classified can be classified based on their overall characteristics on the image, such as outline size; for example, an image semantic classification model can be used.
  • the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, wherein the first scale is smaller than the second scale.
  • FIG. 4 is a schematic flowchart of the step of obtaining the classification results of the objects to be classified of various sizes in an embodiment.
  • That is, the above step S204, classifying the objects to be classified based on the classification model corresponding to the image resolution of the image to be classified and obtaining the classification results of the objects to be classified of each size, may specifically include:
  • Step S401: When the image to be classified is the first image, use the image semantic classification model as the classification model corresponding to the image resolution of the first image to classify the object to be classified in the first image, so as to obtain the classification result of the object to be classified;
  • Step S402: When the image to be classified is the second image, use the local feature classification model as the classification model corresponding to the image resolution of the second image to classify the object to be classified in the second image, so as to obtain the classification result of the object to be classified.
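  • The model dispatch in steps S401 and S402 can be sketched as a lookup keyed by resolution. This is an illustrative sketch: the two model functions are hypothetical stand-ins for the image semantic classification model and the local feature classification model, and keying the lookup by lens magnification follows the 5x/20x example.

```python
# Stand-in models: in the patent's example, the semantic model classifies by
# overall contour features, the local-feature model by segmented cell features.
def semantic_model(image):
    return "semantic-result"

def local_feature_model(image):
    return "local-feature-result"

# Lower-resolution (5x) images go to the semantic model, higher-resolution
# (20x) images to the local-feature model.
MODEL_BY_MAGNIFICATION = {5: semantic_model, 20: local_feature_model}

def classify(image, magnification: int) -> str:
    """Dispatch the to-be-classified image to the model matching its resolution."""
    return MODEL_BY_MAGNIFICATION[magnification](image)

print(classify(None, 5))
print(classify(None, 20))
```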
  • When the size of the object to be classified matches the image size of the first image, the image processing device 100 recognizes the object to be classified on the first image; that is, the image to be classified is the first image.
  • When the size of the object to be classified matches the image size of the second image, the image processing device 100 recognizes the object to be classified on the second image; that is, the image to be classified is the second image.
  • The first scale of the first image is smaller than the second scale of the second image, so the image resolution of the first image is lower than that of the second image.
  • The image processing device 100 uses the image semantic classification model as the classification model to classify the object to be classified on the first image, where the image semantic classification model can classify the object based on its overall contour features on the first image.
  • For example, the image semantic classification model can be a semantic segmentation network model implemented based on a network such as FC-DenseNet.
  • The image processing device 100 may use the local feature classification model as the classification model to classify the object to be classified on the second image, where the local feature classification model can segment each part of the object to be classified and extract the local features of each part, thereby realizing the classification of the object.
  • For example, the image processing device 100 can take the 1024 × 1024 pixel image under the 5x lens as the first image, and the 2048 × 2048 pixel image under the 20x lens as the second image.
  • For example, the image processing device 100 may perform semantic classification on the large duct based on the FC-DenseNet network model and multi-category labels, and obtain the classification result according to the corresponding category label.
  • For the small duct, the image processing device 100 may segment the cells of the small duct in the second image and then extract the cell features to obtain the classification result of the small duct.
  • In this way, the image processing device 100 can accurately classify the objects to be classified in the corresponding images to be classified by using classification models adapted to the image resolutions, and obtain the classification results of the objects to be classified of various sizes at the same time, improving classification efficiency.
  • Step S205: Fuse the classification results of the objects to be classified of each size to obtain the classification results of the objects to be classified of at least two sizes in the digital slice.
  • The image processing device 100 fuses the classification results of each size, so as to obtain the classification results of the objects to be classified of various sizes in the digital slice.
  • the breast ducts may include large ducts and small ducts.
  • the image processing device 100 can classify the large ducts and the small ducts in different images to be classified, and finally merge the classification results of the large ducts and the small ducts.
  • the classification results of various sizes of breast ducts in digital slices can be obtained at the same time.
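A minimal sketch of that merging step (the dictionary-based bookkeeping is an assumption; the patent only states that the per-size results are merged):

```python
def merge_results(large_duct_results, small_duct_results):
    """Merge the per-size classification results into one mapping.

    Each argument maps a duct identifier to its predicted category.
    Identifiers are assumed unique across the two groups, since each
    duct is classified in exactly one of the two images.
    """
    merged = dict(large_duct_results)
    merged.update(small_duct_results)
    return merged
```

For example, merging `{"duct_1": "carcinoma-in-situ"}` with `{"duct_2": "normal"}` yields one result set covering every duct in the slide.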
  • In the above embodiments, the image processing device 100 obtains a digital slice containing objects to be classified of at least two sizes, and obtains images of the objects at at least two scales according to the digital slice, where the larger-scale image has a smaller image size and a higher image resolution than the smaller-scale image. The image processing device 100 then takes, from among the images at the at least two scales, the image whose image size matches the size of the object to be classified as that object's image to be classified.
  • In this way, the image processing device 100 can adaptively select the image to be classified according to the actual size of each object, classify the object using the classification model corresponding to the image resolution of that image, and finally merge the classification results, thereby accurately classifying the objects of various sizes contained in the digital slice and improving classification accuracy.
  • FIG. 5 is a schematic flowchart of the step of obtaining images of the object to be classified in at least two scales in an embodiment.
  • In step S202, obtaining images of the object to be classified at at least two scales according to the digital slice can be realized in the following ways:
  • Step S501 Obtain the physical size of the pixel in at least two scales as the physical size of the target pixel;
  • Step S502 Determine the image resolution of the image of the object to be classified in at least two scales as the target image resolution
  • step S503 the image size of the image containing the object to be classified in the digital slice is reduced to an image size corresponding to the physical size of the target pixel and the resolution of the target image to obtain an image of the object to be classified in at least two scales.
  • In this way, the image processing device 100 can reduce the image size of the image containing the object to be classified in the digital slice according to the pixel physical size and image resolution corresponding to the different scales, obtaining images of the object at at least two scales while keeping the size of the object consistent across the images at each scale, which improves classification accuracy.
  • The image processing device 100 can read the pixel physical size corresponding to each of the aforementioned at least two scales as the target pixel physical size, and the image resolution required for the image of the object to be classified at each of the at least two scales as the target image resolution.
  • The image processing device 100 can then select the image containing the object to be classified from the digital slice and reduce its image size to the image size corresponding to the target pixel physical size and the target image resolution, obtaining images of the object to be classified at at least two scales.
  • Before the image size of the image containing the object to be classified in the digital slice is reduced to the image size corresponding to the target pixel physical size and the target image resolution to obtain the images of the object at at least two scales, the image processing device 100 may also determine the image size of the image containing the object to be classified in the digital slice in the following manner. The specific steps include:
  • The pixel size of the largest original image of a digital slice generally differs, usually depending on the image scanning device. The solution of this embodiment can handle digital slices of various pixel sizes and output images at all scales in which the actual physical size of the object to be classified is the same.
  • The image processing device 100 can read the pixel size of the largest original image of the digital slice (such as a digital slice scanned with a 40x lens) as the original pixel physical size. After obtaining the target image resolution and the target pixel physical size, the image processing device 100 calculates the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution, and the target pixel physical size.
  • For the foregoing determination of the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution, and the target pixel physical size, the specific steps may include:
  • the pixel physical size ratio refers to the ratio of the physical size of the target pixel to the physical size of the original pixel.
  • the image processing device 100 may specifically determine the image size of the image containing the object to be classified in the digital slice according to the product of the target image resolution and the pixel physical size ratio.
  • In practice, images and pixels are generally rectangular; that is, an image size has a horizontal size and a vertical size, and a pixel physical size correspondingly has a horizontal physical size and a vertical physical size. Therefore, the aforementioned pixel physical size ratio may include a pixel horizontal physical size ratio and a pixel vertical physical size ratio; the target pixel physical size may include a target pixel horizontal physical size and a target pixel vertical physical size; the original pixel physical size may include an original pixel horizontal physical size and an original pixel vertical physical size; and the target image resolution may include a target image horizontal resolution and a target image vertical resolution.
  • the above step of obtaining the pixel physical size ratio may include:
  • The ratio of the target pixel horizontal physical size to the original pixel horizontal physical size is taken as the pixel horizontal physical size ratio;
  • The ratio of the target pixel vertical physical size to the original pixel vertical physical size is taken as the pixel vertical physical size ratio.
  • In this embodiment, the image processing device 100 may first determine the target pixel horizontal and vertical physical sizes and the original pixel horizontal and vertical physical sizes. The image processing device 100 then calculates the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size to obtain the pixel horizontal physical size ratio, and the ratio of the target pixel vertical physical size to the original pixel vertical physical size to obtain the pixel vertical physical size ratio.
  • the above step of determining the image size of the image containing the object to be classified in the digital slice based on the target image resolution and the pixel physical size ratio may include:
  • The following describes how the target image resolution and the pixel physical size ratio determine the image size of the image containing the object to be classified in the digital slice.
  • The product of the target image horizontal resolution and the pixel horizontal physical size ratio is taken as the horizontal size of the image containing the object to be classified in the digital slice, and the product of the target image vertical resolution and the pixel vertical physical size ratio is taken as the vertical size of that image.
  • The image processing device 100 may first determine the target image horizontal resolution and vertical resolution. It then multiplies the horizontal resolution by the pixel horizontal physical size ratio to obtain the horizontal size of the image containing the object to be classified in the digital slice, and multiplies the vertical resolution by the pixel vertical physical size ratio to obtain the vertical size of that image. In this way, the image processing device 100 obtains the image size of the image containing the object to be classified from the horizontal size and the vertical size.
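The horizontal/vertical products described above can be sketched as follows. The 0.25 µm/pixel figure for a 40x scan is taken from the example later in this document; the 2 µm target pixel size used below is an illustrative assumption (a 5x-equivalent scale):

```python
def crop_size_in_original_pixels(target_res, target_pixel_size, orig_pixel_size):
    """Size, in original-image pixels, of the region containing the object.

    For each axis: image size = target resolution x (target pixel physical
    size / original pixel physical size). `target_res` is (horizontal,
    vertical) resolution in pixels; the pixel sizes are (horizontal,
    vertical) in microns.
    """
    res_x, res_y = target_res
    tgt_w, tgt_h = target_pixel_size
    org_w, org_h = orig_pixel_size
    width = int(round(res_x * (tgt_w / org_w)))   # horizontal image size
    height = int(round(res_y * (tgt_h / org_h)))  # vertical image size
    return width, height
```

For a 1024 × 1024 target image with 2 µm pixels cut from a 40x scan with 0.25 µm pixels, the region spans 8192 × 8192 original pixels before being scaled down.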
  • the following uses a specific example to describe in detail the method of obtaining the image size of the image containing the object to be classified in the digital slice.
  • the image processing device 100 reads the pixel size of the largest original image of the digital slice.
  • the pixel size includes pixelsize_x and pixelsize_y, which represent the physical size of the pixel in the horizontal and vertical directions, respectively.
  • The pixel size (including pixelsize_x and pixelsize_y) of a digital slice image obtained by scanning with a 40x lens is about 0.25 microns, while that of a digital slice image obtained with a 20x lens is generally about 0.5 microns. Based on this, suppose the image corresponding to a certain scale has a vertical resolution of M pixels and a horizontal resolution of N pixels, and the vertical and horizontal physical sizes of its pixels are H and W microns, respectively.
  • The image processing device 100 can obtain an image with an area size of h_wsi × w_wsi from the image data of the digital slice through the openslide toolkit of Python, and then scale this image to a size of M × N pixels to obtain the image corresponding to that scale.
  • the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, wherein the first scale is smaller than the second scale;
  • In this case, the above step of using the image whose image size matches the size of the object to be classified as that object's image to be classified may specifically include:
  • When the size of the object to be classified is larger than the image size of the second image, the first image, whose image size is larger than the size of the object, is used as the object's image to be classified.
  • In implementation, the image processing device 100 may first obtain the size of the object to be classified and compare it with the image size of the second image. If the size of the object is larger than the image size of the second image, the object is not completely contained in the second image and cannot be accurately classified there. In this case, the image processing device 100 can classify the object in the first image, whose image size is larger than the size of the object, to improve classification accuracy.
  • If the image processing device 100 determines that the size of the object to be classified is smaller than the image size of the second image, then since the second image has a higher image resolution than the first image, the image processing device 100 can classify the object in the second image to improve classification accuracy.
  • In one embodiment, before the first image whose image size is larger than the size of the object to be classified is used as the object's image to be classified, the size of the object may be obtained through the following steps, including:
  • The image processing device 100 may obtain the contour feature of the object to be classified in the image and determine the size of the object according to the contour feature. The contour feature may include the outer contour point coordinates of the object to be classified.
  • the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, and the first scale is smaller than the second scale.
  • The image processing device 100 can obtain the contour point coordinates of the object to be classified on the first image and, from these coordinates, obtain the size of the circumscribed rectangle formed by the object's outer contour as the size of the object. Since the actual physical size of the object is the same in the images at each scale, a size determined on one image can be applied to images of other scales for size comparison.
  • a pre-trained segmentation model for the object to be classified may be used to obtain the contour feature of the object to be classified.
  • the above step of obtaining the contour feature of the object to be classified may specifically include:
  • the segmentation model corresponding to the first scale is used to obtain the segmentation result of the object to be classified on the first image; according to the segmentation result, the contour feature is obtained.
  • the image processing device 100 may use segmentation models corresponding to different scales to segment the objects to be classified in the images of corresponding sizes, and obtain their contour features according to the segmentation results.
  • The segmentation model can be applied to images of different scales to segment the object to be classified from the image background. Different scales can correspond to different segmentation models, so the image processing device 100 can train a segmentation model in advance for each scale using the corresponding training data, obtaining models that can identify and segment the object to be classified at each scale.
  • A fully convolutional densely connected network (FC-DenseNet) model can be used to implement the segmentation model of the object to be classified.
  • the structure of the FC-DenseNet network model is shown in Figure 6 below.
  • Figure 6 is a schematic diagram of the structure of a segmentation model in an embodiment, where DB stands for Dense Block (dense module), C stands for Convolution, TD stands for Transitions Down (downsampling), TU stands for Transitions Up (upsampling), CT stands for concatenation (merging), SC stands for Skip Connection (skip link), Input is the input image, and Output is the output segmentation classification result.
  • the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, and the first scale is smaller than the second scale.
  • the image processing device 100 can train the FC-DenseNet network model in the images corresponding to the first scale and the second scale based on the image data of the object to be classified to obtain the segmentation model corresponding to the first scale and the second scale.
  • The segmentation models corresponding to the first scale and the second scale respectively obtain the segmentation results of the object to be classified on the first image and the second image. Taking the large and small breast ducts as an example, refer to FIG. 7, a schematic diagram of a segmentation result in an embodiment, which shows the duct segmentation results of the first image of 1024 pixels × 1024 pixels under a 5x lens and the second image of 2048 pixels × 2048 pixels under a 20x lens. It can be seen that applying models at the two scales segments the large and small ducts in a complementary manner, improving the segmentation efficiency and accuracy of the object to be classified.
  • the image processing device 100 may use the segmentation model corresponding to the first scale to obtain the segmentation result of the object to be classified on the first image, and obtain the contour feature according to the segmentation result.
  • Since the first image has a smaller scale, it may completely contain the object to be classified; therefore, by segmenting the object on the first image, its contour feature can be obtained in full.
  • the above step of obtaining the size of the object to be classified according to the contour feature may specifically include:
  • the image range occupied by the object to be classified in the first image is obtained; the image range occupied by the object to be classified in the first image is taken as the size of the object to be classified.
  • The image processing device 100 can acquire the contour feature of the object to be classified on the first image, and the contour feature can include the outer contour point coordinates of the object on the first image, so that the image processing device 100 can determine, from these coordinates, the image range occupied by the object in the first image and use that range as the size of the object.
  • FIG. 8 is a schematic diagram of the image range occupied by the object to be classified in an embodiment.
  • the image range occupied by the object to be classified in the first image may include a horizontal image range and a vertical image range.
  • The horizontal and vertical image ranges occupied by the object frame the circumscribed rectangle of its outer contour, so the image processing device 100 can use the size of this circumscribed rectangle as the size of the object to be classified.
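A minimal sketch of deriving that circumscribed-rectangle size from the contour point coordinates (the function name and the list-of-tuples representation are assumptions for illustration):

```python
def object_size_from_contour(contour_points):
    """Horizontal/vertical extent of the circumscribed rectangle of a contour.

    `contour_points` is a sequence of (x, y) outer-contour coordinates of
    the object on the first image; the returned extents serve as the
    object's size for comparison against an image size.
    """
    xs = [x for x, _ in contour_points]
    ys = [y for _, y in contour_points]
    return max(xs) - min(xs), max(ys) - min(ys)
```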
  • In one embodiment, before the first image whose image size is larger than the size of the object to be classified is used as the object's image to be classified, the following steps can also be used to determine whether the size of the object is greater than the image size of the second image, including: when the horizontal image range occupied by the object in the first image is larger than the horizontal image size of the second image, and/or the vertical image range occupied by the object is larger than the vertical image size of the second image, the image processing device 100 may determine that the size of the object to be classified is larger than the image size of the second image.
  • That is, the image processing device 100 takes at least one of two conditions, namely that the horizontal image range occupied by the object in the first image is larger than the horizontal image size of the second image, or that the vertical image range occupied by the object is larger than the vertical image size of the second image, as the judgment condition that the size of the object exceeds the image size of the second image. When either condition is satisfied, the image processing device 100 determines that the size of the object is larger than the image size of the second image, which improves the efficiency of matching image sizes to object sizes.
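That either-axis judgment can be sketched as follows (the 2048-pixel defaults mirror the 20x-lens example in the text; extents and image sizes must be expressed at the same scale, which holds here because the physical size of the object is kept consistent across scales):

```python
def exceeds_second_image(object_w, object_h, image_w=2048, image_h=2048):
    """Judge whether an object's extent exceeds the second image.

    Satisfying EITHER condition, horizontal range larger than the image's
    horizontal size, or vertical range larger than its vertical size, is
    enough to conclude the object does not fit in the second image.
    """
    return object_w > image_w or object_h > image_h
```

An object that fails this test fits in the second image and is classified there; one that passes is routed to the first image instead.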
  • an image processing method is also provided, as shown in FIG. 9, which is a schematic flowchart of an image processing method in another embodiment, and the image processing method can be applied to the image processing shown in FIG. 1 Device 100, the method specifically includes:
  • Step S901 The image processing device 100 obtains a digital slice containing at least two sizes of objects to be classified;
  • Step S902 the image processing device 100 obtains the physical size of the pixel in at least two scales as the physical size of the target pixel;
  • Step S903 The image processing device 100 determines the image resolution of the image of the object to be classified in at least two scales as the target image resolution;
  • the larger-scale image has a smaller image size and higher image resolution than a smaller-scale image.
  • Step S904 the image processing device 100 obtains the original pixel physical size of the digital slice, and determines the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution and the target pixel physical size;
  • Step S905 the image processing device 100 reduces the image size of the image containing the object to be classified in the digital slice to an image size corresponding to the physical size of the target pixel and the resolution of the target image, to obtain an image of the object to be classified in at least two scales ;
  • step S906 the image processing device 100 uses the image whose image size matches the size of the object to be classified among the images of the object to be classified in at least two scales as the image to be classified of the object to be classified;
  • Step S907 When the image to be classified is the first image, the image processing device 100 uses the image semantic classification model to classify the object to be classified in the first image to obtain the classification result of the object to be classified of the corresponding size; when the image to be classified is In the case of the second image, the image processing device 100 uses the local feature classification model to classify the object to be classified in the second image to obtain a classification result of the object to be classified in a corresponding size;
  • the images of the object to be classified in at least two scales may include the first image corresponding to the first scale and the second image corresponding to the second scale, and the first scale is smaller than the second scale.
  • step S908 the image processing device 100 merges the classification results of the objects to be classified in various sizes to obtain classification results of the objects to be classified in at least two sizes in the digital slice.
  • The above image processing method can adaptively select the images to be classified according to the actual size of the objects, use the image semantic classification model and the local feature classification model in parallel on the corresponding images to classify objects of different sizes, and finally merge the classification results to obtain the classification results of objects of various sizes on the digital slice, achieving the technical effect of accurate classification.
  • Although the steps of the above flowchart are displayed in sequence according to the arrows, these steps are not necessarily executed in that order. Unless specifically stated herein, the execution order of these steps is not strictly limited, and they can be executed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which are not necessarily executed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
  • FIG. 10 is a schematic flow diagram of the image processing method in an application example. The specific steps include:
  • Step 1 Multi-scale data generation, including:
  • Input: WSI image file (WSI: Whole Slide Image, full-field digital pathology slice), the desired image pixel size M × N of the image at a certain scale, and the pixel physical sizes H and W.
  • Output: WSI patch images and the corresponding annotated images.
  • the pixel length of the largest original image of the WSI image is generally different, which mainly depends on different WSI image scanning devices.
  • The cells in the image have actual physical sizes, so in this application example all image pixels should correspond to the same physical size for segmentation and classification to be achieved correctly. In this step, WSIs of various pixel sizes can therefore be handled through the following sub-steps, outputting images at the various scales that correspond to a uniform physical size.
  • Read the pixel size of the WSI's largest original image, including pixelsize_x and pixelsize_y, which are the physical sizes of a pixel in the horizontal and vertical directions, respectively.
  • The pixelsize of a WSI image scanned with a 40x lens is about 0.25 microns, and that of a WSI image scanned with a 20x lens is generally about 0.5 microns.
  • The calculation method of the pixel length and width occupied by the region to be cropped on the original image is as follows:
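A hedged sketch of that calculation, reconstructed from the scale computation described earlier in this document (steps S501-S503 and the openslide crop-and-resize example); variable names follow the document's M, N, H, W, pixelsize_x, pixelsize_y:

```python
def wsi_crop_pixels(M, N, H, W, pixelsize_y, pixelsize_x):
    """Pixel height/width the crop occupies on the WSI's largest original image.

    An M x N target image whose pixels are H x W microns covers
    (M * H) x (N * W) microns of tissue; dividing by the original pixel
    physical size converts that span to original-image pixels. The crop
    is then scaled down to M x N pixels to yield the image at that scale.
    """
    h_wsi = int(round(M * H / pixelsize_y))
    w_wsi = int(round(N * W / pixelsize_x))
    return h_wsi, w_wsi
```

With M = N = 1024, H = W = 2 microns (a 5x-equivalent scale, an assumed value), and a 40x scan at 0.25 microns per pixel, the crop is 8192 × 8192 original pixels.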
  • The 2048 × 2048 of the second set of data here is equivalent to a 512-pixel image under the 5x lens; its field of view is thus one-fourth that of the first set of data, so some large ducts may exceed the size of a single image, which increases the difficulty of identification. This is the reason for also using the first set of scale data. However, the definition of the second set of data is twice that of the first set, so it is better suited to smaller ducts and to fine segmentation of cell regions.
  • The schematic diagram of obtaining the images corresponding to each scale from the original WSI map is as follows: for the WSI image at each scale, obtain the labeled image of the image area where the breast duct is located and, according to the deep learning training requirements, generate a multi-value or binary mask image. Figures 3(a) and 3(b) respectively show images of objects to be classified, and their corresponding labeled images, at the two scales cut out of the WSI image file.
  • Figure 3(a) corresponds to a 1024-pixel image under the 5x lens, and Figure 3(b) corresponds to a 2048-pixel image under the 20x lens.
  • Step 2 Duct segmentation:
  • Input: the image containing the breast ducts and the binarized image of the breast ducts;
  • Output: binarized image of the mammary duct contours.
  • Segmentation model 1 is trained on the first set of 5x-lens 1024 × 1024 image data and is aimed at segmenting large ducts; segmentation model 2 is trained on the second set of 20x-lens 2048 × 2048 image data and is aimed at fine segmentation of small ducts.
  • The segmentation network can use the fully convolutional densely connected network FC-DenseNet to implement the duct segmentation model. This method replaces the Conv block in the traditional U-net with a dense block. An advantage of FC-DenseNet is its relatively low parameter count, which gives better theoretical generalization, and it produces better results than the plain encoder-decoder structure. Refer to Figure 6 for the specific structure of the model.
  • Figure 7 shows the segmentation results obtained through the aforementioned two segmentation models. The segmentation models of the two scales complement each other so that both the large and the small ducts are segmented.
  • Step 3 Judgment of duct area:
  • From the duct segmentation results, the maximum length and maximum width of a single duct can be calculated. The duct pixel coordinates in the current image (a 1024 × 1024 image under the 5x lens or a 2048 × 2048 image under the 20x lens) are converted to the pixel coordinates of an equivalent 20x lens, and it is then determined whether the current duct range is greater than 2048 pixels. Since the classification process allows images to be obtained at various magnifications, a duct larger than 2048 pixels under the 20x lens must be defined as a large duct, so the conversion is performed first to determine whether a duct is large.
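That conversion-then-threshold judgment can be sketched as follows. The 2048-pixel threshold and the ~0.5 µm/pixel figure for a 20x lens come from the text; the 2 µm pixel size for the 5x image in the example below is an assumption (8x the 0.25 µm stated for a 40x scan):

```python
def is_large_duct(max_len_px, max_wid_px, pixel_size_um,
                  threshold_px=2048, lens20x_pixel_um=0.5):
    """Decide whether a duct counts as 'large' in equivalent-20x pixels.

    The duct extent, measured in pixels of the current image, is converted
    to the pixel grid of a 20x lens; a duct whose converted length or
    width exceeds 2048 such pixels is treated as large.
    """
    scale = pixel_size_um / lens20x_pixel_um
    return (max_len_px * scale > threshold_px or
            max_wid_px * scale > threshold_px)
```

For instance, a duct spanning 600 × 300 pixels on the 5x image converts to 2400 × 1200 equivalent-20x pixels, so it would be treated as large.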
  • Input: the result of judging whether a duct is large, and the images of the various breast ducts contained in the WSI image;
  • The large ducts can be processed first: using the FC-DenseNet network model and multi-category labels, the data is trained to obtain a multi-category segmentation model, from which the segmentation results are further obtained.
  • The results for the large ducts are retained as the large duct classification results; the output is a complete WSI image.
  • The WSI image is formed by stitching the single images containing breast ducts, which is convenient for viewing the results.
  • Figure 12 shows the results of multi-category segmentation based on the FC-DenseNet network model, including large ducts and small ducts.
  • The figure shows the prediction results, including a carcinoma-in-situ duct 1210, a normal duct 1220, and a UDH duct 1230 (UDH: usual ductal hyperplasia, i.e., typical ductal hyperplasia of the breast).
  • The remaining images are confirmed as the classification results of the large ducts; the three predicted large ducts are all carcinoma-in-situ, consistent with the results pre-marked by professional technicians.
  • An algorithm based on cell segmentation can be used to classify the small ducts. The specific steps may include performing cell segmentation on the small duct image and then extracting cell features from the segmentation result, thereby obtaining the classification result of the current duct.
  • This application example shows the image processing method provided by this application, which can be realized through a parallel strategy: the algorithm first segments the ducts and then classifies the large ducts and small ducts separately. The large duct classification method can use an end-to-end semantic segmentation algorithm, whose advantage is that the classification problem of a large duct can be handled at once in a larger field of view. In parallel, the small duct classification method performs cell segmentation and classification on a single small duct using the 2048-pixel image of the 20x lens; since this image has twice the definition of the image used by the large duct classification method, it can predict the lesion type of a small duct more accurately. In this way, the classification of large and small ducts is handled more accurately, quickly, and effectively.
  • FIG. 15 is a structural block diagram of an image processing apparatus in an embodiment, and an image processing apparatus 1500 is provided, and the apparatus 1500 includes:
  • the slice acquisition module 1501 is configured to acquire a digital slice containing at least two sizes of objects to be classified;
  • the image acquisition module 1502 is configured to acquire images of the object to be classified at at least two scales according to the digital slice; among the images of the object to be classified at the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image;
  • the image matching module 1503 is configured to use the image whose image size matches the size of the object to be classified among the images of the object to be classified in at least two scales as the image to be classified of the object to be classified;
  • the result obtaining module 1504 is configured to classify the objects to be classified in the image to be classified based on the classification model corresponding to the image resolution of the image to be classified, and obtain classification results of the objects to be classified in each size respectively;
  • the result fusion module 1505 is used for fusing classification results of objects to be classified of various sizes to obtain classification results of objects to be classified in at least two sizes in the digital slice.
  • the image acquisition module 1502 is further configured to: acquire the physical size of the pixel in at least two scales as the physical size of the target pixel; determine the image resolution of the image of the object to be classified in the at least two scales, As the target image resolution; the image size of the image containing the object to be classified in the digital slice is reduced to an image size corresponding to the physical size of the target pixel and the resolution of the target image to obtain an image of the object to be classified in at least two scales.
  • the image acquisition module 1502 is further configured to: before reducing the image size of the image containing the object to be classified in the digital slice to the image size corresponding to the target pixel physical size and the target image resolution to obtain the images of the object at at least two scales, obtain the original pixel physical size of the digital slice; and determine the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution, and the target pixel physical size.
  • the image acquisition module 1502 is further configured to: acquire the pixel physical size ratio, which is the ratio of the target pixel physical size to the original pixel physical size; and determine, according to the target image resolution and the pixel physical size ratio, the image size of the image containing the object to be classified in the digital slice.
  • the image acquisition module 1502 is further configured to: take the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size as the pixel horizontal physical size ratio, and the ratio of the target pixel vertical physical size to the original pixel vertical physical size as the pixel vertical physical size ratio, where the pixel physical size ratio includes the pixel horizontal physical size ratio and the pixel vertical physical size ratio, the target pixel physical size includes the target pixel horizontal physical size and the target pixel vertical physical size, and the original pixel physical size includes the original pixel horizontal physical size and the original pixel vertical physical size; and take the product of the target image horizontal resolution and the pixel horizontal physical size ratio as the image horizontal size of the image containing the object to be classified in the digital slice, and the product of the target image vertical resolution and the pixel vertical physical size ratio as the image vertical size of that image, where the image size includes the image horizontal size and the image vertical size;
  • the images of the object to be classified in at least two scales include a first image corresponding to the first scale and a second image corresponding to the second scale; the first scale is smaller than the second scale; the image matching module 1503. It is further configured to: when the size of the object to be classified is greater than the image size of the second image, use the first image whose image size is greater than the size of the object to be classified as the image to be classified of the object to be classified.
  • the image matching module 1503 is further configured to: obtain the contour feature of the object to be classified, and obtain the size of the object to be classified according to the contour feature, before the first image whose image size is greater than the size of the object is used as the object's image to be classified when the size of the object is greater than the image size of the second image.
  • the image matching module 1503 is further configured to: use the segmentation model corresponding to the first scale to obtain the segmentation result of the object to be classified on the first image; and obtain the contour feature according to the segmentation result.
  • the image matching module 1503 is further configured to: obtain the image range occupied by the object to be classified in the first image according to the contour feature; take the image range occupied by the object to be classified in the first image as the image range of the object to be classified size.
  • the image matching module 1503 is further configured to: before the first image is used as the image to be classified when the size of the object to be classified is greater than the image size of the second image, determine that the size of the object is greater than the image size of the second image when at least one of the following conditions is satisfied: the horizontal image range occupied by the object in the first image is greater than the horizontal image size of the second image, or the vertical image range occupied by the object in the first image is greater than the vertical image size of the second image; the image range occupied by the object in the first image includes a horizontal image range and a vertical image range.
  • the images of the object to be classified at at least two scales include a first image corresponding to the first scale and a second image corresponding to the second scale, the first scale being smaller than the second scale; the result obtaining module 1504 is further configured to: when the image to be classified is the first image, use the image semantic classification model as the classification model corresponding to the image resolution of the first image to classify the object in the first image, obtaining the classification result of the object in the first image; and when the image to be classified is the second image, use the local feature classification model as the classification model corresponding to the image resolution of the second image to classify the object in the second image, obtaining the classification result of the object in the second image.
  • a smart microscope is also provided, as shown in FIG. 16, which is a structural block diagram of the smart microscope in an embodiment.
  • the smart microscope 1600 may include: an image scanning device 1610 and an image analysis device 1620 ;in,
  • the image scanning device 1610 is used to scan the object to be classified to obtain a digital slice of the object to be classified, and transmit it to the image analysis device 1620;
  • the image analysis device 1620 is configured to execute the steps of the image processing method described in any of the above embodiments.
  • the smart microscope provided in the above embodiment can be applied to classifying objects to be classified such as breast ducts.
  • the image scanning device 1610 obtains digital slices containing breast ducts of various sizes and sends them to the image analysis device 1620 for classification.
  • the image analysis device 1620 may be equipped with a processor with image processing functions, by which the steps of the image processing method described in any of the above embodiments are executed to classify breast ducts of various sizes, so that the breast ducts of various sizes in the digital slice are accurately classified and the classification accuracy is improved.
  • Fig. 17 is a structural block diagram of a computer device in an embodiment.
  • the computer device may specifically be used as the image processing device 100 in FIG. 1.
  • the computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus.
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions which, when executed by the processor, cause the processor to implement the image processing method.
  • the internal memory may also store computer-readable instructions which, when executed by the processor, cause the processor to execute the image processing method.
  • the display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be an external keyboard, touchpad, or mouse.
  • FIG. 17 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer parts than shown in the figure, combine some parts, or have a different arrangement of parts.
  • a computer device including a memory and a processor, the memory stores computer readable instructions, and when the computer readable instructions are executed by the processor, the processor executes the steps of the image processing method described above.
  • the steps of the image processing method may be the steps in the image processing method of each of the foregoing embodiments.
  • a computer-readable storage medium which stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the processor executes the steps of the above-mentioned image processing method.
  • the steps of the image processing method may be the steps in the image processing method of each of the foregoing embodiments.
  • a computer-readable instruction product or computer-readable instruction is provided.
  • the computer-readable instruction product or computer-readable instruction includes a computer instruction, and the computer instruction is stored in a computer-readable storage medium.
  • the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the steps in the foregoing method embodiments.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM can be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.


Abstract

An image processing method and apparatus, a smart microscope, a storage medium, and a device. The method includes: acquiring a digital slice containing objects to be classified of at least two sizes (S201); acquiring, according to the digital slice, images of the objects to be classified at at least two scales (S202); taking, from among the images of an object to be classified at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified (S203); classifying the objects to be classified in each image to be classified based on the classification model corresponding to the image resolution of that image, to obtain classification results for the objects of each size respectively (S204); and fusing the classification results of the objects of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice (S205). The objects to be classified of various sizes contained in the digital slice are thereby classified precisely, improving classification accuracy.

Description

Image processing method and apparatus, smart microscope, readable storage medium, and device
This application claims priority to Chinese Patent Application No. 202010095182.0, entitled "Image processing method and apparatus, smart microscope, readable storage medium, and device", filed with the China National Intellectual Property Administration on February 14, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to an image processing method and apparatus, a smart microscope, a computer-readable storage medium, and a computer device.
Background
With the development of image processing technology, image classification techniques have emerged that can be applied to identifying and classifying objects to be classified, such as breast ducts, in digital slices. A digital slice is a high-resolution digital image obtained by scanning a traditional glass slide with, for example, a smart microscope; a digital slice can also undergo high-precision, multi-field stitching on a computer device.
However, the related image classification techniques currently applied to classifying objects in digital slices usually perform the classification at one specific scale. Because a digital slice generally contains objects to be classified of multiple sizes, among other factors, such techniques classify the objects in a digital slice with relatively low accuracy.
Summary
According to various embodiments provided in this application, an image processing method and apparatus, a smart microscope, a computer-readable storage medium, and a computer device are provided.
In one embodiment, an embodiment of the present invention provides an image processing method, executed by a computer device, including: acquiring a digital slice containing objects to be classified of at least two sizes; acquiring, according to the digital slice, images of the objects to be classified at at least two scales, where, among the images of the objects at the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; taking, from among the images of an object to be classified at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified; classifying the objects to be classified in each image to be classified based on the classification model corresponding to the image resolution of that image, to obtain classification results for the objects to be classified of each size respectively; and fusing the classification results of the objects of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
In one embodiment, an embodiment of the present invention provides an image processing apparatus, including: a slice acquisition module, configured to acquire a digital slice containing objects to be classified of at least two sizes; an image acquisition module, configured to acquire, according to the digital slice, images of the objects to be classified at at least two scales, where, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; an image matching module, configured to take, from among the images of an object to be classified at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified; a result acquisition module, configured to classify the objects to be classified in each image to be classified based on the classification model corresponding to the image resolution of that image, obtaining classification results for the objects of each size respectively; and a result fusion module, configured to fuse the classification results of the objects of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
In one embodiment, an embodiment of the present invention provides a smart microscope, including an image scanning device and an image analysis device, where the image scanning device is configured to scan an object to be classified to obtain a digital slice of the object and transmit it to the image analysis device, and the image analysis device is configured to perform the steps of the image processing method described above.
In one embodiment, an embodiment of the present invention provides one or more non-volatile storage media storing computer-readable instructions. When executed by one or more processors, the computer-readable instructions cause the processors to perform the following steps: acquiring a digital slice containing objects to be classified of at least two sizes; acquiring, according to the digital slice, images of the objects to be classified at at least two scales, where, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; taking, from among the images of an object to be classified at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified; classifying the objects to be classified in each image to be classified based on the classification model corresponding to the image resolution of that image, obtaining classification results for the objects of each size respectively; and fusing the classification results of the objects of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
In one embodiment, an embodiment of the present invention provides a computer device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps: acquiring a digital slice containing objects to be classified of at least two sizes; acquiring, according to the digital slice, images of the objects to be classified at at least two scales, where, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; taking, from among the images of an object to be classified at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified; classifying the objects to be classified in each image to be classified based on the classification model corresponding to the image resolution of that image, obtaining classification results for the objects of each size respectively; and fusing the classification results of the objects of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
Details of one or more embodiments of this application are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of this application will become apparent from the specification, the drawings, and the claims.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the drawings required for describing the embodiments. Apparently, the drawings described below are only some embodiments of this application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a diagram of an application environment of an image processing method in an embodiment;
FIG. 2 is a schematic flowchart of an image processing method in an embodiment;
FIG. 3(a) is a schematic diagram of an image at one scale in an embodiment;
FIG. 3(b) is a schematic diagram of an image at another scale in an embodiment;
FIG. 4 is a schematic flowchart of the step of obtaining classification results for objects to be classified of each size in an embodiment;
FIG. 5 is a schematic flowchart of the step of acquiring images of an object to be classified at at least two scales in an embodiment;
FIG. 6 is a schematic structural diagram of a segmentation model in an embodiment;
FIG. 7 is a schematic diagram of a segmentation result in an embodiment;
FIG. 8 is a schematic diagram of the image range occupied by an object to be classified in an embodiment;
FIG. 9 is a schematic flowchart of an image processing method in another embodiment;
FIG. 10 is a schematic flowchart of an image processing method in an application example;
FIG. 11 is a schematic diagram of a digital slice in an application example;
FIG. 12 is a schematic diagram of multi-class segmentation results in an application example;
FIG. 13 is a schematic diagram of large-duct segmentation results in an application example;
FIG. 14 is a schematic flowchart of small-duct classification in an application example;
FIG. 15 is a structural block diagram of an image processing apparatus in an embodiment;
FIG. 16 is a structural block diagram of a smart microscope in an embodiment; and
FIG. 17 is a structural block diagram of a computer device in an embodiment.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain this application and are not intended to limit it.
Artificial Intelligence (AI) refers to the theories, methods, technologies, and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can respond in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, giving machines the capabilities of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, involving both hardware-level and software-level technologies. Basic AI technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. AI software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Digital slides, also known as virtual slides, contain the lesion information of glass slides. A digital slice can be zoomed on a computer device such as a personal computer; using the digital slice on a computer device, any position on the glass slide can be observed, and the corresponding position can further be magnified, for example, 5x, 10x, 20x, or 40x, just as a glass slide is magnified or reduced under a traditional microscope.
The image processing method provided in this application can be applied to the application environment shown in FIG. 1, a diagram of the application environment of an image processing method in an embodiment. The application environment may include an image processing device 100, which may be a computer device with image processing capabilities such as image acquisition, analysis, and display; the computer device may specifically be at least one of a mobile phone, a tablet, a desktop computer, and a laptop. The image processing device 100 may also be a smart microscope, which integrates AI vision, speech, and natural-language-processing technologies as well as augmented reality (AR) technology. A user can issue control instructions, such as voice commands, to the smart microscope, which can then perform operations such as automatic identification, detection, quantitative computation, and report generation according to the instructions, and can display the detection results in real time in the field of view seen through the eyepiece, giving timely reminders without interrupting the user's slide-reading workflow, thereby improving processing efficiency and accuracy.
Specifically, in the image processing method provided in this application, the image processing device 100 may acquire, for example by scanning, a digital slice containing objects to be classified of at least two sizes, and acquire images of the objects at at least two scales according to the digital slice, where, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image. The image processing device 100 may then take, from among the images of an object at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified. Next, the device 100 may first obtain the classification models corresponding to the image resolutions of the images to be classified; these models may be preconfigured on the device 100. The device 100 can then classify the objects in each image to be classified based on the classification model corresponding to that image's resolution, obtaining classification results for the objects of each size respectively. Finally, the device 100 fuses the classification results of the objects of each size to obtain classification results for the objects of at least two sizes in the digital slice. The method can be applied to image processing devices such as computer devices and smart microscopes to precisely classify the objects of at least two sizes contained in a digital slice, improving the accuracy of classifying objects in digital slices.
In the application scenario shown in FIG. 1, the image processing device 100 may scan the object to be classified locally and classify the objects of multiple sizes in the resulting digital slice. Classification may also be completed via remote communication. For example, based on fifth-generation mobile communication technology (5G), classification can be performed off-site: a user may acquire a digital slice containing objects to be classified with a terminal device such as a mobile phone or tablet, and send the slice in real time over a 5G network to a remote image processing device 100, which classifies the objects in the slice and returns the classification results to the user's terminal over the 5G network, so that the user can view the results on the terminal. Thanks to the strong real-time performance of 5G, even when the remote device 100 performs the classification for a digital slice captured on site, the user can obtain the corresponding results on site in real time, which reduces the image-data-processing load on the user side while ensuring real-time performance.
The image processing method provided in this application is further described below with reference to the relevant drawings and embodiments.
In one embodiment, an image processing method is provided, as shown in FIG. 2, a schematic flowchart of an image processing method in an embodiment. This embodiment is mainly described using the method as applied to the image processing device 100 in FIG. 1 above; as explained for FIG. 1, the device 100 may specifically be a computer device such as a mobile phone, tablet, desktop, or laptop with image processing capabilities such as image acquisition, analysis, and display. Referring to FIG. 2, the image processing method specifically includes the following steps:
Step S201: acquire a digital slice containing objects to be classified of at least two sizes.
In this step, the image processing device 100 may scan the object to be classified with an image scanning device to obtain its digital slice. The image scanning device may be part of the device 100 or an external device. Under the control of the device 100, the image scanning device may scan the carrier of the object, such as a glass slide, obtain the object's digital slice, and transmit it to the device 100 for analysis. The digital slice may be a WSI (Whole Slide Image) image that can be arbitrarily zoomed in and out on the device 100.
The digital slice acquired by the device 100 generally contains objects to be classified of at least two sizes; after acquiring such a slice, the device 100 may label the objects of different sizes to distinguish them. Taking breast ducts as the objects to be classified, the ducts can be divided into two sizes by a size threshold: ducts larger than the threshold are treated as large ducts and those smaller as small ducts. For example, the device 100 may label a breast duct occupying 2048 pixels or more under a 20x objective as a large duct, and otherwise as a small duct.
Step S202: acquire, according to the digital slice, images of the objects to be classified at at least two scales.
In this step, the device 100 may obtain from the digital slice images of the object at at least two scales, where, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image. The actual physical size corresponding to each pixel is the same across the images of different scales, so that the object corresponds to the same physical size in the image at each scale.
Specifically, the device 100 may scale the digital slice to obtain images of the object at different scales. A scale may correspond to a microscope magnification: the larger the magnification, the larger the scale, and vice versa. For example, a 20x objective corresponds to a larger scale than a 5x objective.
Among the images at different scales acquired by the device 100, the larger-scale image has a smaller image size and a higher image resolution. Taking 5x and 20x as the two scales and breast ducts as the objects, refer to FIG. 3(a) and FIG. 3(b): the image of a breast duct under 5x is the first example image 310 in FIG. 3(a); the images under 20x are the third example images 331 to 334 in FIG. 3(b), any one of which can serve as the 20x image of the duct. The first example arrow 3110 and the third example arrow 3310 indicate the position of the breast duct in image 310 and in images 331 to 334 respectively. Images 331 to 334 can be stitched into one image whose image size equals that of image 310, while each of images 331 to 334 has a higher resolution than image 310; that is, the image corresponding to a larger field of view has a lower image resolution than the image corresponding to a smaller field of view.
In addition, the device 100 may annotate the objects in the images at each scale, for example by binary or multi-value labeling. As shown in FIG. 3(a), the second example image 320 is the binary annotation image corresponding to image 310, and the second example arrow 3210 indicates the binary annotation result of the breast duct under 5x. As shown in FIG. 3(b), the fourth example images 341 to 344 are the binary annotation images corresponding to images 331 to 334, and the arrow 3410 indicates the binary annotation result of the breast duct under 20x. Through binary annotation, the device 100 can segment the object from the background in the image at the corresponding scale, for subsequent classification of the segmented object. Besides binary annotation, multi-value annotation can also separate the object from the background; for example, objects of different sizes may be annotated in different colors, so that the size range an object belongs to can later be judged from its color.
Step S203: take, from among the images of the object at the at least two scales, the image whose image size matches the size of the object as the object's image to be classified.
In this step, the device 100 selects, according to the size of the object, the image matching that size as the image to be classified, so that subsequent steps can classify the object of the matching size in that image. For example, take a 1024x1024-pixel image under 5x and a 2048x2048-pixel image under 20x as the images at two scales, and suppose the objects include large and small ducts as defined above, i.e., the device 100 labels a breast duct occupying 2048 pixels or more under 20x as a large duct and otherwise as a small duct. In this case, the device 100 takes the 5x 1024x1024 image as the image matching the size of a large duct, and the 20x 2048x2048 image as the image matching the size of a small duct.
As for the choice of the image matching the small duct: the 5x 1024x1024 image also has an image size larger than the small duct and could therefore be used to classify it; but the 20x 2048x2048 image also satisfies the size-matching condition and has a higher resolution than the 5x 1024x1024 image, so the small duct's features can be captured more clearly, which is more conducive to classification. The device 100 therefore uses the 20x 2048x2048 image as the size-matched image for classifying the small duct.
Step S204: classify the objects in each image to be classified based on the classification model corresponding to the image resolution of that image, obtaining classification results for the objects of each size respectively.
In this step, the device 100 may be preconfigured with multiple classification models for the objects, corresponding to different image resolutions. After obtaining an image to be classified, the device 100 can select the model corresponding to its image resolution and input the image into that model, which classifies the objects in the image and outputs the corresponding results. Since different images to be classified are matched with objects of different sizes, objects of different sizes are classified by the corresponding models in their matching images; the device 100 collects the outputs of the models, thereby obtaining classification results for the objects of each size.
In this step, the device 100 selects the classification model according to the image resolution of the image to be classified. For a small image to be classified, the resolution is relatively high, the object's local features are easy to extract, and accurate classification is aided by a local-feature classification model. For a large image to be classified, the resolution is relatively low, and the object can be classified from overall features such as its contour size in the image, for example with an image semantic classification model.
For example, the images of the object at the at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second. As shown in FIG. 4, a schematic flowchart of the step of obtaining classification results for objects of each size in an embodiment, step S204 may specifically include:
Step S401: when the image to be classified is the first image, use an image semantic classification model as the classification model corresponding to the first image's resolution to classify the object in the first image, obtaining the classification result of the object in the first image;
Step S402: when the image to be classified is the second image, use a local-feature classification model as the classification model corresponding to the second image's resolution to classify the object in the second image, obtaining the classification result of the object in the second image.
In this embodiment, when the object's size matches the image size of the first image, the device 100 identifies the object on the first image, i.e., the image to be classified is the first image; when it matches the second image's size, the device identifies the object on the second image, i.e., the image to be classified is the second image. Since the first scale is smaller than the second, the first image's resolution is lower than the second image's. For the lower-resolution first image, the device 100 uses an image semantic classification model as its classification model to classify the object on the first image; this model can classify the object based on its overall contour features in the first image and may be a semantic segmentation network implemented with a network model such as FC-DenseNet. For the higher-resolution second image, the device 100 may use a local-feature classification model, which segments the parts of the object and extracts the local features of each part, thereby classifying the object.
Taking the large and small ducts of breast ducts as a concrete example: the device 100 may take the 5x 1024x1024 image as the first image and the 20x 2048x2048 image as the second image. For the first image, the device 100 may perform semantic classification of large ducts based on the FC-DenseNet network model and multi-class labels, obtaining the classification result from the corresponding class label. For the second image, the device 100 may segment the cells of the small duct in the image and then obtain the small duct's classification result through cell-feature extraction.
With the scheme provided in this embodiment, the device 100 can use classification models suited to the image resolutions to precisely classify the objects in the corresponding images to be classified, obtaining classification results for objects of various sizes at the same time and improving classification efficiency.
Step S205: fuse the classification results of the objects of each size to obtain classification results for the objects of at least two sizes in the digital slice.
In this step, the device 100 fuses the classification results of each size, thereby obtaining the classification results for objects of all sizes in the digital slice. Taking breast ducts as an example, which include large and small ducts: the device 100 can classify large and small ducts in different images to be classified and finally fuse their classification results, obtaining the classification results for breast ducts of all sizes in the digital slice at once.
In the image processing method above, the device 100 acquires a digital slice containing objects of at least two sizes and acquires images of the objects at at least two scales from the slice, where a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image. The device 100 then takes the image whose image size matches the object's size as the object's image to be classified; it can thus adaptively select the image to be classified according to the object's actual size and classify the object with the classification model corresponding to that image's resolution. Finally, the device 100 fuses the classification results of each size, achieving precise classification of the objects of various sizes contained in the digital slice and improving the accuracy of classifying objects in digital slices.
In one embodiment, as shown in FIG. 5, a schematic flowchart of the step of acquiring images of the object at at least two scales in an embodiment, the acquisition of images at at least two scales in step S202 may be implemented as follows:
Step S501: obtain the pixel physical sizes at the at least two scales as the target pixel physical sizes;
Step S502: determine the image resolutions of the images of the object at the at least two scales as the target image resolutions;
Step S503: reduce the image size of the image containing the object in the digital slice to the image size corresponding to the target pixel physical size and the target image resolution, obtaining the images of the object at the at least two scales.
Since the objects in the images have actual physical sizes, the pixels of the images at different scales should correspond to the same physical size for segmentation and classification to be performed correctly. Based on this, in this embodiment the device 100 can reduce the image size of the image containing the object in the digital slice according to the pixel physical sizes and image resolutions corresponding to the different scales, obtaining images at at least two scales in which the size of the object stays consistent, improving the accuracy of classifying the object.
Specifically, the device 100 may read the pixel physical sizes corresponding to the aforementioned at least two scales as the target pixel physical sizes, and read the image resolutions required for the object's images at those scales as the target image resolutions. Using the target pixel physical size and target image resolution, the device 100 can select the image containing the object from the digital slice and reduce its image size to the size corresponding to the target pixel physical size and target image resolution, thereby obtaining the images of the object at the at least two scales.
In some embodiments, before reducing the image size of the image containing the object in the digital slice to the size corresponding to the target pixel physical size and target image resolution to obtain the images at the at least two scales, the device 100 may further determine the image size of the image containing the object in the slice as follows:
obtain the original pixel physical size of the digital slice, and determine the image size of the image containing the object in the slice according to the original pixel physical size, the target image resolution, and the target pixel physical size.
The pixel size of a digital slice's largest original image generally varies, usually depending on the image scanning device. The scheme of this embodiment can, for digital slices of various pixel sizes, output images at each scale in which the object's actual physical size is consistent.
Specifically, the device 100 may read the pixel size of the slice's largest original image (e.g., a digital slice scanned at 40x) as the original pixel physical size; then, given the target image resolution and target pixel physical size, the device 100 can compute the corresponding image size of the image containing the object in the slice from the original pixel physical size, the target image resolution, and the target pixel physical size.
In one embodiment, determining the image size of the image containing the object in the slice according to the original pixel physical size, the target image resolution, and the target pixel physical size may specifically include:
obtaining the pixel physical size ratio, and determining the image size of the image containing the object in the slice according to the target image resolution and the pixel physical size ratio.
The pixel physical size ratio is the ratio of the target pixel physical size to the original pixel physical size. In this embodiment, the device 100 may specifically determine the image size of the image containing the object in the slice from the product of the target image resolution and the pixel physical size ratio.
Images and pixels are generally rectangular: an image size has a horizontal component and a vertical component, and a pixel physical size correspondingly has a horizontal component and a vertical component. Thus the aforementioned pixel physical size ratio may include a pixel horizontal physical size ratio and a pixel vertical physical size ratio; the target pixel physical size may include a target pixel horizontal physical size and a target pixel vertical physical size; the original pixel physical size may include an original pixel horizontal physical size and an original pixel vertical physical size; and the target image resolution may include a target image horizontal resolution and a target image vertical resolution.
Based on this, in some embodiments, the step of obtaining the pixel physical size ratio may include:
taking the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size as the pixel horizontal physical size ratio, and taking the ratio of the target pixel vertical physical size to the original pixel vertical physical size as the pixel vertical physical size ratio.
In this embodiment, the device 100 may first determine the target pixel horizontal and vertical physical sizes and the original pixel horizontal and vertical physical sizes; it can then compute the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size, obtaining the pixel horizontal physical size ratio, and the ratio of the target pixel vertical physical size to the original pixel vertical physical size, obtaining the pixel vertical physical size ratio.
In some embodiments, the step of determining the image size of the image containing the object in the slice according to the target image resolution and the pixel physical size ratio may include:
taking the product of the target image horizontal resolution and the pixel horizontal physical size ratio as the image horizontal size of the image containing the object in the slice, and taking the product of the target image vertical resolution and the pixel vertical physical size ratio as the image vertical size of that image.
Specifically, the device 100 may first determine the target image horizontal resolution and the target image vertical resolution. Then the device 100 multiplies the target image horizontal resolution by the pixel horizontal physical size ratio, obtaining the image horizontal size of the image containing the object in the slice, and multiplies the target image vertical resolution by the pixel vertical physical size ratio, obtaining the image vertical size. From this horizontal size and vertical size, the device 100 obtains the image size of the image containing the object in the digital slice.
The following specific example details how the image size of the image containing the object in the digital slice is obtained.
First, the device 100 reads the pixel size of the slice's largest original image; the pixel size includes pixelsize_x and pixelsize_y, the physical sizes of a pixel in the horizontal and vertical directions respectively. Generally, for a digital slice scanned at 40x, the pixel size pixelsize (both pixelsize_x and pixelsize_y) is about 0.25 micrometers; similarly, for a slice scanned at 20x, pixelsize is generally about 0.5 micrometers. Suppose an image at some scale is required with vertical resolution M pixels and horizontal resolution N pixels, and with pixel vertical and horizontal physical sizes of H and W micrometers respectively. Then the image size of the image containing the object that must be read from the digital slice is computed as: required crop image vertical size h_wsi = M x H / pixelsize_y; required crop image horizontal size w_wsi = N x W / pixelsize_x. In a concrete implementation, the device 100 can use Python's openslide toolkit to read a region of size h_wsi x w_wsi from the largest-scale image (the Level=0 image) of the slice's image data, and then scale that image down to M x N pixels, completing the acquisition of the image at the given scale.
For example, suppose a 1024x1024 image under 5x is required: the device 100 reads H = 2 micrometers, W = 2 micrometers, M = 1024, N = 1024. If the slice's largest-scale image was scanned at 40x, i.e., the slice's pixel physical size pixelsize is about 0.25 micrometers, then the image region to read from the slice should be w_wsi = h_wsi = 1024 x 2 / 0.25 = 8192 pixels; the device 100 then scales the 8192x8192 image containing the object down to 1024x1024 pixels, obtaining the 5x 1024x1024 image. Likewise, if the slice's largest-scale image was scanned at 20x, the device 100 reads a 4096x4096-pixel region from the slice and scales it to a 1024x1024-pixel image.
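The crop-size formula above can be expressed as a small helper. This is a minimal sketch with assumed function and parameter names (not from the patent); in practice the returned region size would be passed to openslide's `read_region` at Level 0 and the result downscaled to M x N pixels:

```python
# Given the desired output resolution (M, N), the desired pixel physical size
# (H, W) in micrometers, and the slice's original pixel physical sizes, return
# the region size (in pixels) to read from the Level-0 image before downscaling.
def crop_size(m_px, n_px, h_um, w_um, pixelsize_y, pixelsize_x):
    h_wsi = round(m_px * h_um / pixelsize_y)  # vertical crop size: M * H / pixelsize_y
    w_wsi = round(n_px * w_um / pixelsize_x)  # horizontal crop size: N * W / pixelsize_x
    return h_wsi, w_wsi

# 5x 1024x1024 target (2 um/pixel) from a 40x slice (~0.25 um/pixel):
print(crop_size(1024, 1024, 2.0, 2.0, 0.25, 0.25))  # (8192, 8192)
# Same target from a 20x slice (~0.5 um/pixel):
print(crop_size(1024, 1024, 2.0, 2.0, 0.5, 0.5))    # (4096, 4096)
```

Both printed values reproduce the worked example: an 8192x8192 region from a 40x slice, or a 4096x4096 region from a 20x slice, each downscaled to 1024x1024.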
In some embodiments, the images of the object at the at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second. Taking the size-matched image as the image to be classified in step S203 may specifically include:
when the size of the object is larger than the image size of the second image, taking the first image, whose image size is larger than the object's size, as the object's image to be classified.
In this embodiment, the device 100 may first obtain the object's size and compare it with the second image's image size. If the object's size exceeds the second image's size, the object is not fully contained in the second image and the device 100 cannot classify it accurately there; in that case the device 100 can classify the object in the first image, whose image size exceeds the object's size, improving classification accuracy. Conversely, when the device 100 determines that the object's size is smaller than the second image's size, it may classify the object in the second image, which has a higher image resolution than the first image, to improve classification accuracy.
Further, before taking the first image as the image to be classified when the object's size exceeds the second image's image size, the object's size may be obtained through the following steps:
obtain the contour features of the object, and obtain the object's size according to the contour features.
Before classifying the object in the current image, the device 100 may obtain the object's contour features in the image and determine its size from them; the contour features may include the coordinates of the object's contour points in the image, from which the device 100 can compute the object's size.
Specifically, the images of the object at the at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second. Since the first image has a larger image size than the second image and can fully contain the object, the device 100 may obtain the coordinates of the object's contour points on the first image and from them obtain the object's size, for example the bounding rectangle formed by its outer contour. Since the object corresponds to the same actual physical size in the image at each scale, the size determined on the first image can be used for size comparisons across images of different scales.
In one embodiment, a pretrained segmentation model for the object may be used to obtain its contour features. The step of obtaining the contour features may specifically include:
using the segmentation model corresponding to the first scale to obtain the segmentation result of the object on the first image, and obtaining the contour features according to the segmentation result.
In this embodiment, the device 100 may use segmentation models corresponding to different scales to segment the object in the image of the corresponding size and obtain its contour features from the segmentation result. A segmentation model can be applied to separate the object from the image background at a given scale, and different scales may correspond to different segmentation models; thus, for each required scale, the device 100 can pretrain, with the corresponding training data, a segmentation model able to identify and segment the object at that scale.
In a specific application, the fully convolutional densely connected network FC-DenseNet may implement the segmentation model for the object. The structure of the FC-DenseNet network model is shown in FIG. 6, a schematic structural diagram of a segmentation model in an embodiment, where DB denotes Dense Block, C denotes Convolution, TD denotes Transitions Down (downsampling), TU denotes Transitions Up (upsampling), CT denotes concatenation, SC denotes Skip Connection, Input denotes the input image, and output denotes the output segmentation/classification result.
Specifically, the device 100 may train the FC-DenseNet network model on the object's image data in the images corresponding to the first and second scales, obtaining segmentation models for the two scales, and may use them to obtain the object's segmentation results on the first image and the second image respectively. Taking the large and small breast ducts as an example, refer to FIG. 7, a schematic diagram of a segmentation result in an embodiment: FIG. 7 shows the duct segmentation results for a 5x 1024x1024 first image and a 20x 2048x2048 second image. The models at the two scales segment the large and small ducts in a complementary manner, improving the efficiency and accuracy of segmenting the object.
In this embodiment, the device 100 may use the segmentation model corresponding to the first scale to obtain the object's segmentation result on the first image and derive the contour features from it. Since the first image can fully contain the object, segmenting the object on the first image yields its complete contour features.
In some embodiments, the step of obtaining the object's size according to the contour features may specifically include:
obtaining, according to the contour features, the image range the object occupies in the first image, and taking that image range as the object's size.
In this embodiment, the device 100 may obtain the object's contour features on the first image, which may include the coordinates of the object's outer contour points in that image; from these coordinates the device 100 can determine the image range the object occupies in the first image and take that range as the object's size.
Referring to FIG. 8, a schematic diagram of the image range occupied by an object to be classified in an embodiment: the range the object occupies in the first image may include a horizontal image range and a vertical image range, from which the bounding rectangle of the object's outer contour can be framed; the device 100 may then take the size of this bounding rectangle as the object's size.
Further, in some embodiments, whether the object's size exceeds the second image's image size can be judged from the horizontal and vertical image ranges the object occupies in the first image. Before the step of taking the first image as the image to be classified when the object's size exceeds the second image's image size, the judgment may proceed as follows:
when at least one of the following conditions is satisfied — the horizontal image range the object occupies in the first image is greater than the second image's horizontal image size, or the vertical image range the object occupies in the first image is greater than the second image's vertical image size — the device 100 may judge that the object's size is greater than the second image's image size.
In this embodiment, the device 100 takes these two conditions as the criteria for the object's size exceeding the second image's image size; when either condition is satisfied, the device 100 judges that the object's size is greater than the second image's image size, improving the efficiency of matching image sizes against object sizes.
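The either-direction judgment above reduces to a short routing rule. A minimal sketch, with assumed names and the 20x 2048x2048 second image from the examples as the default; this is illustrative only, not the patent's implementation:

```python
# An object whose bounding box exceeds the second (higher-resolution) image in
# either the horizontal or the vertical direction is routed to the first
# (larger field-of-view) image; otherwise it is classified on the second image.
def pick_image(obj_w, obj_h, second_w=2048, second_h=2048):
    if obj_w > second_w or obj_h > second_h:
        return "first"   # e.g. 5x 1024x1024 image, semantic classification
    return "second"      # e.g. 20x 2048x2048 image, cell-level classification

print(pick_image(3000, 1500))  # first  (horizontal range exceeds 2048)
print(pick_image(900, 1200))   # second (fits in both directions)
```

Only one of the two conditions needs to hold for the object to be treated as too large for the second image, which is why the first example is routed to the first image even though its vertical extent fits.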
In one embodiment, another image processing method is provided, as shown in FIG. 9, a schematic flowchart of an image processing method in another embodiment. The method can be applied to the image processing device 100 shown in FIG. 1 and specifically includes:
Step S901: the device 100 acquires a digital slice containing objects to be classified of at least two sizes;
Step S902: the device 100 obtains the pixel physical sizes at at least two scales as the target pixel physical sizes;
Step S903: the device 100 determines the image resolutions of the object's images at the at least two scales as the target image resolutions;
among the object's images at the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image.
Step S904: the device 100 obtains the slice's original pixel physical size and determines, according to the original pixel physical size together with the target image resolution and target pixel physical size, the image size of the image containing the object in the slice;
Step S905: the device 100 reduces the image size of the image containing the object in the slice to the size corresponding to the target pixel physical size and target image resolution, obtaining the object's images at the at least two scales;
Step S906: the device 100 takes, from among the object's images at the at least two scales, the image whose image size matches the object's size as the object's image to be classified;
Step S907: when the image to be classified is the first image, the device 100 classifies the object in it with an image semantic classification model, obtaining the classification result for objects of the corresponding size; when the image to be classified is the second image, the device 100 classifies the object in it with a local-feature classification model, obtaining the classification result for objects of the corresponding size;
the object's images at the at least two scales may include the aforementioned first image corresponding to a first scale and second image corresponding to a second scale, the first scale being smaller than the second.
Step S908: the device 100 fuses the classification results of the objects of each size, obtaining classification results for the objects of at least two sizes in the digital slice.
With the image processing method above, the image to be classified can be selected adaptively according to the object's actual size, while the image semantic classification model and the local-feature classification model simultaneously classify the objects of different sizes in their respective images to be classified; finally, fusing the per-size classification results yields the classification results for objects of all sizes on the digital slice, achieving the technical effect of precise classification.
It should be understood that although the steps in the flowcharts above are displayed in sequence as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, their execution order is not strictly limited, and they may be executed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different times; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
The application of the image processing method of this application to lesion classification of large and small breast ducts is described below with reference to FIG. 10, a schematic flowchart of an image processing method in an application example. The specific steps include:
Step 1: multi-scale data generation, specifically:
Input: a WSI image file (WSI: Whole Slide Image), the desired image pixel size M x N of the image at a given scale, and the pixel physical sizes H and W.
Output: WSI patch images and the corresponding annotation images.
In this step, note that the pixel size of a WSI's largest original image generally varies, depending mainly on the WSI scanning device. However, the cells in the image have actual physical sizes; therefore, in this application example, the pixels of all images should correspond to the same physical size for segmentation and classification to be performed correctly. The following steps thus handle WSIs of various pixel sizes and output images at each scale with a unified physical size. Specifically, first read the pixel sizes of the WSI's largest original image, pixelsize_x and pixelsize_y, the physical lengths of a pixel in the horizontal and vertical directions respectively. For a WSI scanned at 40x, pixelsize is about 0.25 micrometers; for a WSI scanned at 20x, pixelsize is generally about 0.5 micrometers.
For example, if the image to be output at a given scale has height M pixels and width N pixels, and the pixels of that image are expected to have physical height H and physical width W micrometers, the pixel height and width of the region to crop from the original WSI image are computed as follows:
crop image pixel height h_wsi = M x H / pixelsize_y; crop image pixel width w_wsi = N x W / pixelsize_x. Via Python's openslide toolkit, a region of h_wsi x w_wsi can be read from the largest-scale image (the Level=0 image) of the WSI image data and then scaled to M x N pixels, yielding the cropped image at the required pixel dimensions.
For example, when a 1024x1024 image under 5x is needed, input H = 2 micrometers, W = 2 micrometers, M = 1024, N = 1024. If the WSI's largest-scale image was scanned at 40x, i.e., pixelsize_x is about 0.25 micrometers, the image region to read from the WSI should be w_wsi = h_wsi = 1024 x 2 / 0.25 = 8192 pixels, and the 8192x8192 image is finally scaled to a 1024x1024-pixel image. Likewise, if the WSI's largest-scale image was scanned at 20x, a 4096x4096-pixel region is read from the original WSI image and scaled to a 1024x1024-pixel image.
The above steps produce two groups of images at different scales:
Group 1: 1024x1024-pixel images at 5x (i.e., H = W = 2 micrometers), usable for large-duct segmentation and image-based lesion classification;
Group 2: 2048x2048-pixel images at 20x (i.e., H = W = 0.5 micrometers), used for smaller-duct segmentation and cell-based lesion classification.
A 20x 2048-pixel image in group 2 is equivalent to a 512-pixel image at 5x, so its field of view is a quarter of group 1's; some large ducts may therefore exceed a single image, increasing recognition difficulty, which is why the group 1 scale is used. But group 2's image sharpness is twice that of group 1, making it more suitable for fine segmentation of smaller ducts and cell regions.
For each scale's WSI image, the annotation image of the region containing the breast duct is obtained, and multi-value or binary mask images are generated according to the deep-learning training requirements. FIG. 3(a) and FIG. 3(b) show, for the two scales cropped from the WSI image file, the images containing the objects to be classified and their corresponding annotation images; FIG. 3(a) corresponds to 1024 pixels at 5x, and FIG. 3(b) to 2048 pixels at 20x.
Step 2: duct segmentation.
Input: images containing breast ducts and their binary duct annotation images;
Output: binary images of breast-duct contours.
This step trains two segmentation models on the data from step 1. Segmentation model 1 uses group 1's 5x 1024x1024 image data, targeting large-duct segmentation; segmentation model 2 uses group 2's 20x 2048x2048 image data, targeting fine segmentation of small ducts. The segmentation network may use the fully convolutional densely connected network FC-DenseNet to implement the duct segmentation model; this method replaces the Conv blocks of a traditional U-Net with Dense blocks. FC-DenseNet's advantages are a relatively low parameter count, hence theoretically better generalization, and better results than a plain encoder-decoder structure; the specific structure of the model is shown in FIG. 6.
FIG. 7 shows the segmentation results obtained by the two segmentation models above; the models at the two scales segment both the large and small ducts in a complementary manner.
Step 3: duct size judgment.
Input: the duct segmentation results from step 2;
Output: duct size judgment results.
As shown in FIG. 8, based on step 2, the maximum pixel length and width of each individual duct can be computed. The duct's pixel coordinates in the current image (a 5x 1024x1024 image or a 20x 2048x2048 image) are then converted to equivalent 20x pixel coordinates, and whether the duct's extent exceeds 2048 pixels is judged. Since images at various magnifications may be acquired during classification, and a duct of 2048 pixels under 20x is here defined as a large duct, the conversion must be performed before determining whether a duct is a large duct.
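The conversion-then-threshold check in step 3 can be sketched as follows. This is an illustrative sketch with assumed names; the source defines the threshold at 2048 pixels under the 20x objective, and magnifications scale pixel extents linearly:

```python
# Rescale a duct's pixel extent from the current magnification to 20x-equivalent
# pixels, then apply the 2048-pixel large-duct rule.
def is_large_duct(extent_px, current_mag, ref_mag=20.0, threshold=2048):
    extent_20x = extent_px * ref_mag / current_mag  # 20x-equivalent extent
    return extent_20x >= threshold

# A 600-pixel extent measured in a 5x image spans 2400 pixels at 20x -> large duct.
print(is_large_duct(600, 5.0))    # True
# A 1500-pixel extent measured directly at 20x stays below the threshold.
print(is_large_duct(1500, 20.0))  # False
```

The rescaling step is what makes measurements from the 5x 1024x1024 images and the 20x 2048x2048 images comparable under a single threshold.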
Step 4:
Input: the large-duct size judgment results and the WSI images containing the various breast ducts;
Output: duct class judgments.
In this step, the large ducts can be processed first: using the FC-DenseNet network model with multi-class labels, a multi-class segmentation model can be trained on the data, from which segmentation results are further obtained; among these results, the large-duct results are kept as the large-duct classification results. FIG. 11 shows the complete WSI image, formed by stitching the single images containing breast ducts, which is convenient for viewing the results. FIG. 12 shows the FC-DenseNet-based multi-class segmentation results, covering both large and small ducts; the figure shows the prediction results, including carcinoma-in-situ duct 1210, normal duct 1220, and UDH duct 1230 (UDH, usual ductal hyperplasia).
经过导管尺寸判断后,如图13所示,留下来的确认作为大导管分类结果的图像,该三个预测出来的大导管均为原位癌,与专业领域技术人员预先标注的标 注结果一致。
另一方面,如图14所示,针对每一个小导管,可以使用基于细胞分割的算法来实现对小导管的分类,具体步骤可以包括,对小导管图像进行细胞分割,基于细胞分割结果进行细胞特征提取,从而得到当前导管分类结果。
Combining the above four steps, every duct region in a WSI image can be identified, and the category of ducts of each size can further be recognized. This application example shows that the image processing method provided in this application can, through a parallel strategy, address the classification of intraductal breast lesions with an algorithm that first segments the ducts and then classifies large and small ducts separately. The large-duct classification can use an end-to-end semantic segmentation algorithm, whose advantage is that the classification of a large duct is handled in a single pass on a larger image containing that duct. In parallel, the small-duct classification performs cell segmentation and classification on individual small ducts using the 20× 2048 images; since the 20× 2048 image has twice the sharpness of the images used for large-duct classification, it predicts the lesion category of small ducts more accurately, so that the classification of both large and small ducts is handled more precisely, quickly and effectively.
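The routing implied by steps two through four can be summarized as a small dispatch sketch. The two model callables are stand-ins (assumptions); only the large/small routing mirrors the text:

```python
def classify_ducts(ducts, semantic_model, cell_pipeline):
    """Route each segmented duct to the branch suited to its size.

    ducts: iterable of dicts carrying a 'large' flag from the size judgment
    semantic_model: end-to-end multi-class segmentation branch (large ducts)
    cell_pipeline: cell segmentation + feature classification (small ducts)
    """
    results = []
    for duct in ducts:
        if duct["large"]:
            results.append(semantic_model(duct))
        else:
            results.append(cell_pipeline(duct))
    return results  # per-duct results, fused over the whole WSI

labels = classify_ducts(
    [{"large": True}, {"large": False}],
    semantic_model=lambda d: "semantic-branch",
    cell_pipeline=lambda d: "cell-branch",
)
# labels == ["semantic-branch", "cell-branch"]
```

Because the two branches are independent, they can also run concurrently, which is the "parallel strategy" the text describes.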
In one embodiment, as shown in Fig. 15, a structural block diagram of an image processing apparatus, an image processing apparatus 1500 is provided, the apparatus 1500 comprising:
a slice acquisition module 1501, configured to acquire a digital slice containing objects to be classified of at least two sizes;
an image acquisition module 1502, configured to obtain, according to the digital slice, images of the objects to be classified at at least two scales, wherein, among the images of an object to be classified at the at least two scales, an image at a larger scale has a smaller image size and a higher image resolution than an image at a smaller scale;
an image matching module 1503, configured to take, among the images of an object to be classified at the at least two scales, the image whose image size matches the size of the object as the to-be-classified image of that object;
a result acquisition module 1504, configured to classify the object in the to-be-classified image based on the classification model corresponding to the image resolution of the to-be-classified image, respectively obtaining a classification result for the objects of each size; and
a result fusion module 1505, configured to fuse the classification results of the objects of each size to obtain the classification results of the objects of at least two sizes in the digital slice.
In one embodiment, the image acquisition module 1502 is further configured to: obtain the pixel physical sizes at the at least two scales as target pixel physical sizes; determine the image resolutions of the images of the object at the at least two scales as target image resolutions; and shrink the image size of the image in the digital slice that contains the object to the image size corresponding to the target pixel physical size and the target image resolution, obtaining the images of the object at the at least two scales.
In one embodiment, the image acquisition module 1502 is further configured to: before shrinking the image size of the image in the digital slice that contains the object to the image size corresponding to the target pixel physical size and the target image resolution to obtain the images of the object at the at least two scales, obtain the original pixel physical size of the digital slice; and determine, according to the original pixel physical size together with the target image resolution and the target pixel physical size, the image size of the image in the digital slice that contains the object.
In one embodiment, the image acquisition module 1502 is further configured to: obtain a pixel physical size ratio, the pixel physical size ratio being the ratio of the target pixel physical size to the original pixel physical size; and determine, according to the target image resolution and the pixel physical size ratio, the image size of the image in the digital slice that contains the object.
In one embodiment, the image acquisition module 1502 is further configured to: take the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size as the pixel horizontal physical size ratio, and take the ratio of the target pixel vertical physical size to the original pixel vertical physical size as the pixel vertical physical size ratio, wherein the pixel physical size ratio comprises the pixel horizontal physical size ratio and the pixel vertical physical size ratio, the target pixel physical size comprises the target pixel horizontal physical size and the target pixel vertical physical size, and the original pixel physical size comprises the original pixel horizontal physical size and the original pixel vertical physical size; and take the product of the target image horizontal resolution and the pixel horizontal physical size ratio as the image horizontal size of the image in the digital slice that contains the object, and the product of the target image vertical resolution and the pixel vertical physical size ratio as the image vertical size of that image, wherein the image size comprises the image horizontal size and the image vertical size, and the target image resolution comprises the target image horizontal resolution and the target image vertical resolution.
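The per-axis computation this module performs can be sketched as follows (names are illustrative assumptions): the crop extent along each axis is the target resolution multiplied by the ratio of target to original pixel physical size, and treating the axes independently handles anisotropic scanner pixels.

```python
def crop_extent(target_resolution_px, target_um_per_px, original_um_per_px):
    # image size = target resolution x (target pixel size / original pixel size)
    ratio = target_um_per_px / original_um_per_px
    return round(target_resolution_px * ratio)

# Horizontal and vertical axes computed separately, with a slightly
# different pixel pitch on each axis:
w = crop_extent(1024, 2.0, 0.25)  # horizontal
h = crop_extent(1024, 2.0, 0.26)  # vertical
```

With equal pitches this reduces to the single-formula case of the worked example earlier in the description.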
In one embodiment, the images of the object to be classified at the at least two scales comprise a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; the image matching module 1503 is further configured to: when the size of the object is larger than the image size of the second image, take the first image, whose image size is larger than the size of the object, as the to-be-classified image of the object.
In one embodiment, the image matching module 1503 is further configured to: before taking the first image, whose image size is larger than the size of the object, as the to-be-classified image when the size of the object is larger than the image size of the second image, obtain the contour features of the object to be classified; and obtain the size of the object according to the contour features.
In one embodiment, the image matching module 1503 is further configured to: obtain, using the segmentation model corresponding to the first scale, the segmentation result of the object on the first image; and obtain the contour features according to the segmentation result.
In one embodiment, the image matching module 1503 is further configured to: obtain, according to the contour features, the image extent occupied by the object in the first image; and take the image extent occupied by the object in the first image as the size of the object.
In one embodiment, the image matching module 1503 is further configured to: before taking the first image as the to-be-classified image when the size of the object is larger than the image size of the second image, determine that the size of the object is larger than the image size of the second image when at least one of the following conditions is satisfied: the horizontal image extent occupied by the object in the first image is larger than the horizontal image size of the second image, and the vertical image extent occupied by the object in the first image is larger than the vertical image size of the second image; wherein the image extent occupied by the object in the first image comprises the horizontal image extent and the vertical image extent.
In one embodiment, the images of the object to be classified at the at least two scales comprise a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; the result acquisition module 1504 is further configured to: when the to-be-classified image is the first image, classify the object in the first image using an image semantic classification model as the classification model corresponding to the image resolution of the first image, obtaining the classification result of the object in the first image; and when the to-be-classified image is the second image, classify the object in the second image using a local-feature classification model as the classification model corresponding to the image resolution of the second image, obtaining the classification result of the object in the second image.
In one embodiment, an intelligent microscope is also provided. As shown in Fig. 16, a structural block diagram of an intelligent microscope, the intelligent microscope 1600 can include an image scanning device 1610 and an image analysis device 1620, wherein:
the image scanning device 1610 is configured to scan an object to be classified, obtain a digital slice of the object, and transmit it to the image analysis device 1620; and
the image analysis device 1620 is configured to perform the steps of the image processing method of any one of the above embodiments.
The intelligent microscope provided by the above embodiment can be applied to lesion classification of objects to be classified such as breast ducts. The image scanning device 1610 acquires a digital slice containing breast ducts of various sizes and passes it to the image analysis device 1620 for classification. The image analysis device 1620 can be equipped with a processor having image processing capability; by performing the steps of the image processing method of any one of the above embodiments, the processor carries out lesion classification of breast ducts of various sizes, achieving accurate classification of the breast ducts of all sizes contained in the digital slice and improving classification accuracy.
Fig. 17 is a structural block diagram of a computer device in one embodiment. The computer device may specifically be the image processing device 100 in Fig. 1. As shown in Fig. 17, the computer device includes a processor, a memory, a network interface, an input apparatus and a display screen connected through a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions which, when executed by the processor, cause the processor to implement the image processing method. The internal memory may also store computer-readable instructions which, when executed by the processor, cause the processor to perform the image processing method. The display screen of the computer device may be a liquid-crystal display or an electronic-ink display; the input apparatus of the computer device may be a touch layer covering the display screen, a button, trackball or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse or the like.
Those skilled in the art will understand that the structure shown in Fig. 17 is merely a block diagram of the partial structure relevant to the solution of this application and does not limit the computer devices to which the solution may be applied; a specific computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the above image processing method. Here the steps of the image processing method may be the steps of the image processing method of any of the above embodiments.
In one embodiment, a computer-readable storage medium is provided, storing computer-readable instructions which, when executed by a processor, cause the processor to perform the steps of the above image processing method. Here the steps of the image processing method may be the steps of the image processing method of any of the above embodiments.
In one embodiment, a computer-readable instruction product or computer-readable instructions are provided; the product or instructions comprise computer instructions stored in a computer-readable storage medium. The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, causing the computer device to perform the steps of the method embodiments above.
Those of ordinary skill in the art will understand that all or part of the procedures of the above method embodiments can be carried out by computer-readable instructions instructing the relevant hardware. The computer-readable instructions can be stored in a non-volatile computer-readable storage medium and, when executed, may include the procedures of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical memory, among others. Volatile memory may include random access memory (RAM) or an external cache. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity of description, not all possible combinations of the technical features of the above embodiments have been described; however, as long as a combination of these technical features involves no contradiction, it should be considered within the scope of this specification.
The above embodiments express only several implementations of this application, and their description is relatively specific and detailed, but they should not therefore be understood as limiting the scope of this patent application. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of this application, all of which fall within the scope of protection of this application. The scope of protection of this patent application shall therefore be subject to the appended claims.

Claims (15)

  1. An image processing method, performed by a computer device, the method comprising:
    acquiring a digital slice containing objects to be classified of at least two sizes;
    obtaining, according to the digital slice, images of the objects to be classified at at least two scales, wherein, among the images of an object to be classified at the at least two scales, an image at a larger scale has a smaller image size and a higher image resolution than an image at a smaller scale;
    taking, among the images of the object to be classified at the at least two scales, an image whose image size matches the size of the object to be classified as a to-be-classified image of the object;
    classifying the objects in the to-be-classified images based on classification models corresponding to the image resolutions of the to-be-classified images, respectively obtaining a classification result for the objects of each size; and
    fusing the classification results of the objects of each size to obtain classification results of the objects of the at least two sizes in the digital slice.
  2. The method according to claim 1, wherein obtaining, according to the digital slice, the images of the object to be classified at the at least two scales comprises:
    obtaining pixel physical sizes at the at least two scales as target pixel physical sizes;
    determining image resolutions of the images of the object at the at least two scales as target image resolutions; and
    shrinking the image size of an image in the digital slice that contains the object to be classified to an image size corresponding to the target pixel physical size and the target image resolution, to obtain the images of the object at the at least two scales.
  3. The method according to claim 2, wherein before shrinking the image size of the image in the digital slice that contains the object to the image size corresponding to the target pixel physical size and the target image resolution to obtain the images of the object at the at least two scales, the method further comprises:
    obtaining an original pixel physical size of the digital slice; and
    determining, according to the original pixel physical size together with the target image resolution and the target pixel physical size, the image size of the image in the digital slice that contains the object.
  4. The method according to claim 3, wherein determining, according to the original pixel physical size together with the target image resolution and the target pixel physical size, the image size of the image in the digital slice that contains the object comprises:
    obtaining a pixel physical size ratio, the pixel physical size ratio being the ratio of the target pixel physical size to the original pixel physical size; and
    determining, according to the target image resolution and the pixel physical size ratio, the image size of the image in the digital slice that contains the object.
  5. The method according to claim 4, wherein
    obtaining the pixel physical size ratio comprises:
    taking the ratio of a target pixel horizontal physical size to an original pixel horizontal physical size as a pixel horizontal physical size ratio, and taking the ratio of a target pixel vertical physical size to an original pixel vertical physical size as a pixel vertical physical size ratio, wherein the pixel physical size ratio comprises the pixel horizontal physical size ratio and the pixel vertical physical size ratio, the target pixel physical size comprises the target pixel horizontal physical size and the target pixel vertical physical size, and the original pixel physical size comprises the original pixel horizontal physical size and the original pixel vertical physical size; and
    determining, according to the target image resolution and the pixel physical size ratio, the image size of the image in the digital slice that contains the object comprises:
    taking the product of a target image horizontal resolution and the pixel horizontal physical size ratio as an image horizontal size of the image in the digital slice that contains the object, and taking the product of a target image vertical resolution and the pixel vertical physical size ratio as an image vertical size of that image, wherein the image size comprises the image horizontal size and the image vertical size, and the target image resolution comprises the target image horizontal resolution and the target image vertical resolution.
  6. The method according to claim 1, wherein the images of the object to be classified at the at least two scales comprise a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; and
    taking, among the images of the object at the at least two scales, the image whose image size matches the size of the object as the to-be-classified image of the object comprises:
    when the size of the object to be classified is larger than the image size of the second image, taking the first image, whose image size is larger than the size of the object, as the to-be-classified image of the object.
  7. The method according to claim 6, wherein before taking the first image, whose image size is larger than the size of the object, as the to-be-classified image when the size of the object is larger than the image size of the second image, the method further comprises:
    obtaining a contour feature of the object to be classified; and
    obtaining the size of the object according to the contour feature.
  8. The method according to claim 7, wherein obtaining the contour feature of the object to be classified comprises:
    obtaining, using a segmentation model corresponding to the first scale, a segmentation result of the object on the first image; and
    obtaining the contour feature according to the segmentation result.
  9. The method according to claim 7, wherein obtaining the size of the object according to the contour feature comprises:
    obtaining, according to the contour feature, the image extent occupied by the object in the first image; and
    taking the image extent occupied by the object in the first image as the size of the object.
  10. The method according to claim 9, wherein before taking the first image, whose image size is larger than the size of the object, as the to-be-classified image when the size of the object is larger than the image size of the second image, the method further comprises:
    determining that the size of the object is larger than the image size of the second image when at least one of the following conditions is satisfied: the horizontal image extent occupied by the object in the first image is larger than the horizontal image size of the second image, and the vertical image extent occupied by the object in the first image is larger than the vertical image size of the second image; wherein the image extent occupied by the object in the first image comprises the horizontal image extent and the vertical image extent.
  11. The method according to claim 1, wherein the images of the object to be classified at the at least two scales comprise a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; and
    classifying the object in the to-be-classified image based on the classification model corresponding to the image resolution of the to-be-classified image, respectively obtaining a classification result for the objects of each size, comprises:
    when the to-be-classified image is the first image, classifying the object in the first image using an image semantic classification model as the classification model corresponding to the image resolution of the first image, obtaining the classification result of the object in the first image; and
    when the to-be-classified image is the second image, classifying the object in the second image using a local-feature classification model as the classification model corresponding to the image resolution of the second image, obtaining the classification result of the object in the second image.
  12. An intelligent microscope, comprising an image scanning device and an image analysis device, wherein:
    the image scanning device is configured to scan an object to be classified, obtain a digital slice of the object to be classified, and transmit it to the image analysis device; and
    the image analysis device is configured to perform the steps of the image processing method according to any one of claims 1 to 11.
  13. An image processing apparatus, comprising:
    a slice acquisition module, configured to acquire a digital slice containing objects to be classified of at least two sizes;
    an image acquisition module, configured to obtain, according to the digital slice, images of the objects to be classified at at least two scales, wherein, among the images of an object to be classified at the at least two scales, an image at a larger scale has a smaller image size and a higher image resolution than an image at a smaller scale;
    an image matching module, configured to take, among the images of the object to be classified at the at least two scales, the image whose image size matches the size of the object to be classified as a to-be-classified image of the object;
    a result acquisition module, configured to classify the object in the to-be-classified image based on a classification model corresponding to the image resolution of the to-be-classified image, respectively obtaining a classification result for the objects of each size; and
    a result fusion module, configured to fuse the classification results of the objects of each size to obtain classification results of the objects of the at least two sizes in the digital slice.
  14. One or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the processors to perform the steps of the method according to any one of claims 1 to 11.
  15. A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the steps of the method according to any one of claims 1 to 11.
PCT/CN2020/127037 2020-02-14 2020-11-06 Image processing method and apparatus, intelligent microscope, readable storage medium and device WO2021159778A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010095182.0 2020-02-14
CN202010095182.0A CN111325263B (zh) 2020-02-14 Image processing method and apparatus, intelligent microscope, readable storage medium and device

Publications (1)

Publication Number Publication Date
WO2021159778A1 true WO2021159778A1 (zh) 2021-08-19

Family

ID=71168938

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127037 WO2021159778A1 (zh) 2020-02-14 2020-11-06 Image processing method and apparatus, intelligent microscope, readable storage medium and device

Country Status (2)

Country Link
CN (1) CN111325263B (zh)
WO (1) WO2021159778A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111325263B (zh) 2020-02-14 2023-04-07 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, intelligent microscope, readable storage medium and device
CN117011550B (zh) 2023-10-08 2024-01-30 超创数能科技有限公司 Method and apparatus for identifying impurities in electron microscope photographs

Citations (4)

Publication number Priority date Publication date Assignee Title
EP2336972A1 (en) * 2007-05-25 2011-06-22 Definiens AG Generating an anatomical model using a rule-based segmentation and classification process
CN109034208A (zh) 2018-07-03 2018-12-18 怀光智能科技(武汉)有限公司 Cervical cell pathology slide classification method combining high and low resolution
CN110310253A (zh) 2019-05-09 2019-10-08 杭州迪英加科技有限公司 Digital slice classification method and apparatus
CN111325263A (zh) 2020-02-14 2020-06-23 Tencent Technology (Shenzhen) Company Limited Image processing method and apparatus, intelligent microscope, readable storage medium and device

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9881234B2 (en) * 2015-11-25 2018-01-30 Baidu Usa Llc. Systems and methods for end-to-end object detection
CN109214403B (zh) 2017-07-06 2023-02-28 斑马智行网络(香港)有限公司 Image recognition method, apparatus and device, and readable medium
CN109166107A (zh) 2018-04-28 2019-01-08 北京市商汤科技开发有限公司 Medical image segmentation method and apparatus, electronic device and storage medium
CN110533120B (zh) 2019-09-05 2023-12-12 Tencent Technology (Shenzhen) Company Limited Image classification method, apparatus, terminal and storage medium for organ nodules


Also Published As

Publication number Publication date
CN111325263A (zh) 2020-06-23
CN111325263B (zh) 2023-04-07

Similar Documents

Publication Publication Date Title
CN109389129B (zh) Image processing method, electronic device and storage medium
CN112017189B (zh) Image segmentation method and apparatus, computer device and storage medium
US11373305B2 (en) Image processing method and device, computer apparatus, and storage medium
WO2021164322A1 (zh) Artificial-intelligence-based object classification method and apparatus, and medical imaging device
CN110428432B (zh) Deep neural network algorithm for automatic segmentation of colon gland images
CN111445478B (zh) Automatic detection system and detection method for intracranial aneurysm regions in CTA images
JP7026826B2 (ja) Image processing method, electronic device and storage medium
CN110348294A (zh) Method, apparatus and computer device for locating charts in PDF documents
CN110974306B (zh) System for identifying and locating pancreatic neuroendocrine tumors under endoscopic ultrasound
WO2023130648A1 (zh) Image data augmentation method and apparatus, computer device and storage medium
WO2021159778A1 (zh) Image processing method and apparatus, intelligent microscope, readable storage medium and device
CN109086777A (zh) Saliency map refinement method based on global pixel features
WO2021057148A1 (zh) Neural-network-based brain tissue layering method and apparatus, and computer device
CN113902945A (zh) Multimodal breast magnetic resonance image classification method and system
CN108537109B (zh) Monocular-camera sign language recognition method based on OpenPose
CN113298018A (zh) Fake-face video detection method and apparatus based on optical flow fields and facial muscle motion
CN115240119A (zh) Deep-learning-based method for detecting small pedestrian targets in video surveillance
CN117197763A (zh) Road crack detection method and system based on a cross-attention-guided feature alignment network
US20220319208A1 (en) Method and apparatus for obtaining feature of duct tissue based on computer vision, and intelligent microscope
CN109829484B (zh) Clothing classification method, device and computer-readable storage medium
CN114565035A (zh) Tongue image analysis method, terminal device and storage medium
CN113255787B (zh) Few-shot object detection method and system based on semantic features and metric learning
US20220309610A1 (en) Image processing method and apparatus, smart microscope, readable storage medium and device
CN113592807A (zh) Training method, image quality determination method and apparatus, and electronic device
CN115908363B (zh) Tumor cell counting method, apparatus, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20918313

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20918313

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 01.02.2023)
