WO2021159778A1 - Image processing method, apparatus, smart microscope, readable storage medium and device - Google Patents
Image processing method, apparatus, smart microscope, readable storage medium and device
- Publication number: WO2021159778A1 (application PCT/CN2020/127037)
- Authority: WO (WIPO, PCT)
- Prior art keywords: image, classified, size, pixel, physical size
Classifications
- G06F18/24 — Pattern recognition; Analysing; Classification techniques
- G06F18/25 — Pattern recognition; Analysing; Fusion techniques
- G06T3/4038 — Geometric image transformations; Scaling; Image mosaicing, e.g. composing plane images from plane sub-images
- G06T7/11 — Image analysis; Segmentation; Region-based segmentation
- G06T2207/10056 — Image acquisition modality; Microscopic image
- G06T2207/10061 — Image acquisition modality; Microscopic image from scanning electron microscope
- G06T2207/20081 — Special algorithmic details; Training; Learning
- G06T2207/20084 — Special algorithmic details; Artificial neural networks [ANN]
- G06T2207/30021 — Biomedical image processing; Catheter; Guide wire
- G06T2207/30068 — Biomedical image processing; Mammography; Breast
Definitions
- This application relates to the field of image processing technology, and in particular to an image processing method, device, smart microscope, computer-readable storage medium, and computer equipment.
- Image classification technology can be applied to identify and classify the objects to be classified, such as breast ducts, in digital slices.
- A digital slice is a high-resolution digital image obtained by scanning a traditional glass slide with a smart microscope.
- A digital slice can also undergo high-precision, multi-field stitching on a computer device.
- Related image classification technology currently classifies the objects to be classified in a digital slice at a single, specific scale. However, because a digital slice generally contains objects to be classified of a variety of sizes, the accuracy with which this technology classifies those objects is low.
- To address this, embodiments of the present application provide an image processing method, device, smart microscope, computer-readable storage medium, and computer equipment for processing digital slices.
- An embodiment of the present invention provides an image processing method, executed by a computer device, which includes: obtaining a digital slice containing objects to be classified of at least two sizes; obtaining, according to the digital slice, images of the objects to be classified at at least two scales, where, among these images, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; using, among the images at the at least two scales, the image whose image size matches the size of an object to be classified as the to-be-classified image of that object; classifying the objects to be classified in each to-be-classified image based on the classification model corresponding to the image resolution of that image, to obtain classification results for the objects to be classified of each size; and fusing the classification results of the objects to be classified of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
- An embodiment of the present invention provides an image processing device, which includes: a slice acquisition module for acquiring a digital slice containing objects to be classified of at least two sizes; and an image acquisition module for obtaining, according to the digital slice, images of the objects to be classified at at least two scales, where a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image;
- an image matching module for using, among the images at the at least two scales, the image whose image size matches the size of an object to be classified as the to-be-classified image of that object;
- a result acquisition module for classifying the objects to be classified in each to-be-classified image based on the classification model corresponding to the image resolution of that image, to obtain classification results for the objects to be classified of each size;
- and a result fusion module for fusing the classification results of the objects to be classified of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
- An embodiment of the present invention provides a smart microscope, including an image scanning device and an image analysis device, where the image scanning device is used to scan the object to be classified, obtain a digital slice of the object to be classified, and transmit it to the image analysis device; the image analysis device is used to perform the steps of the image processing method described above.
- An embodiment of the present invention provides one or more non-volatile storage media storing computer-readable instructions. When the computer-readable instructions are executed by one or more processors, the processor(s) perform the following steps: obtaining a digital slice containing objects to be classified of at least two sizes; obtaining, according to the digital slice, images of the objects to be classified at at least two scales, where a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; using, among the images at the at least two scales, the image whose image size matches the size of an object to be classified as the to-be-classified image of that object; classifying the objects to be classified in each to-be-classified image based on the classification model corresponding to the image resolution of that image, to obtain classification results for the objects to be classified of each size; and fusing the classification results of the objects to be classified of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
- An embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions. When the computer-readable instructions are executed by the processor, the processor performs the following steps: acquiring a digital slice containing objects to be classified of at least two sizes; obtaining, according to the digital slice, images of the objects to be classified at at least two scales, where a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; using, among the images at the at least two scales, the image whose image size matches the size of an object to be classified as the to-be-classified image of that object; classifying the objects to be classified in each to-be-classified image based on the classification model corresponding to the image resolution of that image, to obtain classification results for the objects to be classified of each size; and fusing the classification results of the objects to be classified of each size to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
- FIG. 1 is an application environment diagram of an image processing method in an embodiment;
- FIG. 2 is a schematic flowchart of an image processing method in an embodiment;
- FIG. 3(a) is a schematic diagram of an image at one scale in an embodiment;
- FIG. 3(b) is a schematic diagram of an image at another scale in an embodiment;
- FIG. 4 is a schematic flowchart of the step of obtaining classification results of objects to be classified of various sizes in an embodiment;
- FIG. 5 is a schematic flowchart of the step of obtaining images of an object to be classified at at least two scales in an embodiment;
- FIG. 6 is a schematic structural diagram of a segmentation model in an embodiment;
- FIG. 7 is a schematic diagram of a segmentation result in an embodiment;
- FIG. 8 is a schematic diagram of the image range occupied by an object to be classified in an embodiment;
- FIG. 9 is a schematic flowchart of an image processing method in another embodiment;
- FIG. 10 is a schematic flowchart of an image processing method in an application example;
- FIG. 11 is a schematic diagram of a digital slice in an application example;
- FIG. 12 is a schematic diagram of a multi-category segmentation result in an application example;
- FIG. 13 is a schematic diagram of the segmentation result of a large duct in an application example;
- FIG. 14 is a schematic diagram of a small duct classification process in an application example;
- FIG. 15 is a structural block diagram of an image processing device in an embodiment;
- FIG. 16 is a structural block diagram of a smart microscope in an embodiment;
- FIG. 17 is a structural block diagram of a computer device in an embodiment.
- Artificial Intelligence (AI)
- artificial intelligence is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a similar way to human intelligence.
- Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
- Artificial intelligence technology is a comprehensive discipline, covering a wide range of fields, including both hardware-level technology and software-level technology.
- Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation and interaction systems, and mechatronics.
- Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning and deep learning.
- Digital slides, also known as virtual slides, contain the lesion information on glass slides.
- A digital slice can be zoomed on computer equipment such as a personal computer.
- A digital slice can be used on computer equipment to observe any position on the glass slide, and the corresponding position can be further enlarged to, for example, 5x, 10x, 20x, or 40x magnification; observation at a given magnification is equivalent to magnifying or reducing the glass slide on a traditional microscope.
- FIG. 1 is an application environment diagram of the image processing method in an embodiment.
- the processing device 100 may be a computer device with image processing capabilities such as image acquisition, analysis, and display.
- The computer device may specifically be at least one of a mobile phone, a tablet computer, a desktop computer, and a notebook computer. In addition, the image processing device 100 may also be a smart microscope, which incorporates artificial intelligence vision, speech, and natural language processing technologies, as well as augmented reality (AR) technology.
- The user can issue control instructions, such as voice commands, to the smart microscope, and the smart microscope can perform operations such as automatic identification, detection, quantitative calculation, and report generation based on the instruction. It can also display the detection results in real time in the field of view of the user's eyepiece, giving timely prompts without disturbing the user's review process, which can improve processing efficiency and accuracy.
- The image processing device 100 may obtain, by scanning or the like, a digital slice containing objects to be classified of at least two sizes, and obtain images of the objects to be classified at at least two scales according to the digital slice, where the larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image.
- The image processing device 100 may then use, among the images at the at least two scales, the image whose image size matches the size of an object to be classified as the to-be-classified image of that object, and obtain the classification models corresponding to the image resolutions of the images to be classified.
- These classification models can be pre-configured in the image processing device 100. Based on the classification model corresponding to the image resolution of each image to be classified, the image processing device 100 classifies the objects to be classified in that image, obtaining classification results for the objects to be classified of each size; finally, the device fuses these classification results to obtain classification results for the objects to be classified of at least two sizes in the digital slice.
- This method can be applied to image processing equipment such as computer equipment and smart microscopes to accurately classify the objects to be classified of at least two sizes contained in a digital slice, improving the accuracy of classifying objects to be classified in digital slices.
- the image processing device 100 can be used to scan the object to be classified locally, and perform classification processing on the objects to be classified in various sizes in the scanned digital slice.
- the classification of objects to be classified can also be completed by means of remote communication.
- For example, the classification of non-local objects to be classified can be realized based on fifth-generation mobile networks (5G).
- For example, the user can obtain a digital slice containing the object to be classified through a terminal device such as a mobile phone, tablet, or computer, and then send the digital slice in real time to the remote image processing device 100 over a 5G communication network. The image processing device 100 classifies the objects to be classified in the digital slice and transmits the classification result back to the user's terminal device through the 5G network, so that the user can view the result on the terminal.
- Thanks to the strong real-time characteristics of 5G communication technology, even when the remote image processing device 100 classifies the objects to be classified in digital slices collected by users on site, the users can still obtain the corresponding classification results in real time on site, and the image data processing load on the user side is reduced while real-time performance is ensured.
- FIG. 2 is a schematic flowchart of an image processing method in an embodiment. This embodiment is mainly described by taking the application of the method to the image processing device 100 in FIG. 1 above as an example.
- The image processing device 100 may specifically be computer equipment with image acquisition, analysis, and display capabilities, such as a mobile phone, tablet computer, desktop computer, or notebook computer.
- the image processing method specifically includes the following steps:
- Step S201: Acquire a digital slice containing objects to be classified of at least two sizes.
- the image processing device 100 may scan the object to be classified by the image scanning device to obtain a digital slice of the object to be classified.
- the image scanning device can be used as a part of the image processing device 100, and can also be used as an external device of the image processing device 100.
- the image scanning device may be controlled by the image processing device 100 to scan a carrier such as a slide glass waiting to be classified to obtain a digital slice of the object to be classified and transmit it to the image processing device 100 for analysis and processing.
- The digital slice may be a WSI (Whole Slide Image, a full-field digital pathology slide), which can be arbitrarily zoomed in and out on the image processing device 100.
- the digital slice obtained by the image processing device 100 generally contains at least two sizes of objects to be classified.
- After obtaining a digital slice containing objects to be classified of at least two sizes, the image processing device 100 may mark the objects to be classified of different sizes to distinguish them. Taking breast ducts as the objects to be classified as an example, breast ducts can be divided into two sizes according to a size threshold: a breast duct larger than the threshold is regarded as a large duct, and one smaller than the threshold is regarded as a small duct.
- For example, the image processing device 100 may mark a breast duct that occupies 2048 pixels or more under a 20x lens as a large duct, and otherwise mark it as a small duct.
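The size-threshold rule above can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function name, bounding-box representation, and constant name are assumptions.

```python
# Illustrative sketch of the size rule above: a duct whose pixel extent at
# 20x is >= 2048 is marked "large", otherwise "small". Names are assumed.
SIZE_THRESHOLD_PX = 2048  # pixel extent under the 20x lens

def label_duct(bounding_box):
    """bounding_box = (x0, y0, x1, y1) in 20x-lens pixel coordinates."""
    extent = max(bounding_box[2] - bounding_box[0],
                 bounding_box[3] - bounding_box[1])
    return "large" if extent >= SIZE_THRESHOLD_PX else "small"

print(label_duct((0, 0, 3000, 1500)))  # large
print(label_duct((0, 0, 512, 400)))    # small
```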
- Step S202: Obtain images of the object to be classified at at least two scales according to the digital slice.
- The image processing device 100 may obtain images of the object to be classified at at least two scales from the digital slice, where, among these images, the larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image. The actual physical size corresponding to each pixel is the same across the images at different scales, so that the object to be classified corresponds to the same physical size in the image at each scale.
- the image processing device 100 may perform scaling processing on the digital slice to obtain images of the object to be classified at different scales.
- The scale can correspond to the magnification of the microscope: the larger the magnification, the larger the scale, and vice versa.
- a 20x lens has a larger scale than a 5x lens.
- an image with a larger scale has a smaller image size and a higher image resolution.
- For example, the image of a breast duct under the 5x lens obtained by the image processing device 100 is shown as the first example image 310 in FIG. 3(a), and the images of the breast duct under the 20x lens are shown as the third example images 331 to 334 in FIG. 3(b), where any one of the third example images 331 to 334 can be used as an image of the breast duct under the 20x lens.
- The first example arrow 3110 and the third example arrow 3310 show the position of the breast duct in the first example image 310 and the third example images 331 to 334, respectively.
- The third example images 331 to 334 can be stitched into an image with the same image size as the first example image 310, and the resolution of the third example images 331 to 334 is higher than that of the first example image 310; that is, the image resolution of the image corresponding to the larger field of view is lower than that of the image corresponding to the smaller field of view.
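The stitching relationship above can be checked numerically. The sketch below, with tile sizes assumed from the embodiment's figures, shows why four 2048-pixel 20x tiles cover the same field of view as one 1024-pixel 5x image:

```python
# Illustrative sketch (not from the patent): a 1024x1024 region at 5x covers
# the same physical area as a 4096x4096 region at 20x, i.e. a 2x2 grid of
# 2048-pixel tiles, matching example images 331 to 334.

def tiles_at_higher_mag(region_px, low_mag, high_mag, tile_px):
    """Tiles per side needed at high_mag to cover a region_px-wide
    region captured at low_mag."""
    scale = high_mag / low_mag              # 20 / 5 = 4
    high_mag_extent = region_px * scale     # 1024 * 4 = 4096
    return int(high_mag_extent // tile_px)  # 4096 // 2048 = 2 per side

per_side = tiles_at_higher_mag(1024, low_mag=5, high_mag=20, tile_px=2048)
print(per_side * per_side)  # 4
```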
- the image processing device 100 can also label the objects to be classified in the images of various scales.
- Binarized or multi-valued labeling can be used to label the objects to be classified in the images at each scale. As shown in FIG. 3(a), the second example image 320 is the binarized annotated image corresponding to the first example image 310, and the second example arrow 3210 shows the binarization result of the mammary duct under the 5x lens; as shown in FIG. 3(b), the fourth example images 341 to 344 are the binarized annotated images corresponding to the third example images 331 to 334, and the fourth example arrow 3410 shows the binarization result of the mammary duct under the 20x lens.
- the image processing device 100 can segment the object to be classified and the background in the image of the corresponding scale by means of binarization labeling, so as to subsequently classify the object to be classified obtained by the segmentation.
- multi-valued labeling can also be used to segment the object to be classified from the background.
- The multi-valued labeling method may specifically label the objects to be classified with different colors according to their sizes, so that the size range of an object to be classified can be determined from its color.
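A hypothetical sketch of such multi-valued labeling follows: each segmented duct is written into the mask with a value encoding its size class, so the size range can later be read back from the label value. The value assignments, helper name, and box format are illustrative assumptions, not the patent's API.

```python
# Hypothetical multi-valued labeling: give each segmented duct a mask value
# according to its size class. Values and helper names are assumptions.
BACKGROUND, SMALL_DUCT, LARGE_DUCT = 0, 1, 2

def multivalue_mask(binary_mask, duct_boxes, threshold_px=2048):
    """binary_mask: list of rows (1 = duct pixel); duct_boxes: (x0, y0, x1, y1)."""
    h, w = len(binary_mask), len(binary_mask[0])
    labeled = [[BACKGROUND] * w for _ in range(h)]
    for (x0, y0, x1, y1) in duct_boxes:
        size_class = LARGE_DUCT if max(x1 - x0, y1 - y0) >= threshold_px else SMALL_DUCT
        for y in range(y0, y1):
            for x in range(x0, x1):
                if binary_mask[y][x] == 1:
                    labeled[y][x] = size_class
    return labeled

mask = [[1 if 1 <= x < 4 and 1 <= y < 4 else 0 for x in range(8)] for y in range(8)]
out = multivalue_mask(mask, [(1, 1, 4, 4)], threshold_px=2)
print(out[2][2], out[0][0])  # 2 0
```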
- Step S203: Among the images of the object to be classified at at least two scales, use the image whose image size matches the size of the object to be classified as the image to be classified of that object.
- The image processing device 100 selects an image matching the size of the object to be classified as the image to be classified according to the size of the object, so that subsequent steps can classify the object based on the image to be classified.
- In this embodiment, the objects to be classified include large ducts and small ducts.
- For the definition of large and small ducts, refer to the above: the image processing device 100 can mark breast ducts occupying 2048 pixels or more under the 20x lens as large ducts, and otherwise mark them as small ducts.
- For example, the image processing apparatus 100 takes the 1024×1024-pixel image under the 5x lens as the image matching the size of a large duct in the breast, and the 2048×2048-pixel image under the 20x lens as the image matching the size of a small duct.
- The image size of the 1024×1024-pixel image under the 5x lens is also larger than the size of a small duct, so that image could likewise be used to classify the small duct. However, the 2048×2048-pixel image under the 20x lens also satisfies the size-matching condition and has a higher resolution than the 1024×1024-pixel image under the 5x lens, so the relevant features of the small duct can be captured more clearly, which is more conducive to its classification. The image processing device 100 therefore uses the 2048×2048-pixel image under the 20x lens as the image matching the size of the small duct.
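As a hedged sketch of this matching rule: among the candidate views, keep those whose tile can physically contain the duct, then prefer the one with the highest magnification (and therefore highest resolution). The candidate table and function name are illustrative assumptions; physical coverage is expressed in 20x-equivalent pixels (a 1024-pixel 5x tile covers 4096 such pixels).

```python
# Hedged sketch: pick the highest-magnification candidate view whose tile
# can still contain the duct. Candidate values are assumed from the example.
# Each entry: (magnification, tile_px, coverage in 20x-equivalent pixels).
CANDIDATES = [(5, 1024, 4096), (20, 2048, 2048)]

def matching_image(duct_extent_20x_px):
    """duct_extent_20x_px: duct extent measured in pixels under the 20x lens."""
    fitting = [c for c in CANDIDATES if c[2] >= duct_extent_20x_px]
    return max(fitting, key=lambda c: c[0])  # highest magnification that fits

print(matching_image(3000))  # (5, 1024, 4096): only the 5x view holds a large duct
print(matching_image(900))   # (20, 2048, 2048): small duct, prefer 20x detail
```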
- Step S204: Based on the classification model corresponding to the image resolution of the image to be classified, classify the objects to be classified in the image to be classified, and obtain classification results for the objects to be classified of each size.
- the image processing device 100 may be pre-configured with multiple classification models for the object to be classified, and the multiple classification models may correspond to different image resolutions. Therefore, after the image processing device 100 obtains the image to be classified, it can select a corresponding classification model according to the image resolution of the image to be classified, and then the image processing device 100 inputs the image to be classified into the classification model, and the classification model can The objects to be classified in the image to be classified are classified, and the corresponding classification results are obtained and output.
- the image processing device 100 selects the corresponding classification model based on the image resolution of the image to be classified to classify the object to be classified on it.
- When the image resolution is relatively high, the local features of the object to be classified are easy to extract, which is more conducive to classifying it accurately, so a local feature classification model can be used.
- When the image resolution is relatively low, the object to be classified can be classified based on its overall characteristics in the image, such as its contour and size; for example, an image semantic classification model can be used.
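The resolution-to-model dispatch described above might be sketched as follows. The model names and the magnification cutoff are placeholders chosen for illustration, not the patent's implementation:

```python
# Illustrative dispatch: choose a classifier by the magnification (and hence
# resolution) of the image to be classified. Names and cutoff are assumed.

def choose_classifier(magnification):
    """Low-magnification images -> semantic (whole-contour) model;
    high-magnification images -> local-feature model."""
    if magnification >= 20:          # high resolution: local features extractable
        return "local_feature_model"
    return "image_semantic_model"    # low resolution: classify on overall contour

print(choose_classifier(5))   # image_semantic_model
print(choose_classifier(20))  # local_feature_model
```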
- the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, wherein the first scale is smaller than the second scale.
- FIG. 4 is a schematic flowchart of the step of obtaining the classification results of the objects to be classified of various sizes in an embodiment.
- The above step S204, in which the objects to be classified are classified based on the classification model corresponding to the image resolution of the image to be classified to obtain classification results for objects of various sizes, may specifically include:
- Step S401: When the image to be classified is the first image, use the image semantic classification model as the classification model corresponding to the image resolution of the first image to classify the object to be classified in the first image, obtaining the classification result of that object;
- Step S402: When the image to be classified is the second image, use the local feature classification model as the classification model corresponding to the image resolution of the second image to classify the object to be classified in the second image, obtaining the classification result of that object.
- When the size of the object to be classified matches the image size of the first image, the image processing device 100 recognizes the object to be classified on the first image, that is, the image to be classified is the first image; when the size of the object matches the image size of the second image, the device recognizes the object on the second image, that is, the image to be classified is the second image.
- The first scale of the first image is smaller than the second scale of the second image, so the image resolution of the first image is lower than that of the second image.
- The image processing device 100 uses the image semantic classification model as the classification model for the first image, where the image semantic classification model can classify the object to be classified based on its overall contour features on the first image.
- the image semantic classification model can be based on a semantic segmentation network model implemented by a network model such as FC-DenseNet.
- the image processing device 100 may use a local feature classification model as the classification model for the second image to classify the object to be classified on the second image, where the local feature classification model segments each part of the object to be classified and extracts the local features of each part, so as to realize the classification of the object to be classified.
- the image processing device 100 can take an image of 1024 pixels × 1024 pixels under a 5x lens as the first image, and an image of 2048 pixels × 2048 pixels under a 20x lens as the second image.
- the image processing device 100 may perform semantic classification on the large catheter based on the FC-DenseNet network model and the multi-category tags, and obtain the classification results according to the corresponding category tags.
- the image processing device 100 may segment the cells of the small catheter in the second image, and then extract the cell feature to obtain the classification result of the small catheter.
- the image processing device 100 can accurately classify the objects to be classified in the corresponding images by using a classification model adapted to the image resolution, and obtain classification results of objects of various sizes at the same time, thereby improving classification efficiency.
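- the scale-dependent choice of classification model described above can be sketched as a simple dispatch. The function and the returned model names below are hypothetical placeholders for the image semantic classification model and the local feature classification model; this is an illustrative sketch, not the patented implementation.

```python
# Hypothetical dispatch: choose a classifier by the lens scale of the
# image to be classified (5x -> semantic model, 20x -> local-feature model).
def select_classifier(magnification):
    """Return the (hypothetical) classifier name for a given lens scale."""
    dispatch = {
        5: "image_semantic_classification",   # whole-contour semantics, e.g. FC-DenseNet
        20: "local_feature_classification",   # per-part segmentation + local features
    }
    if magnification not in dispatch:
        raise ValueError(f"unsupported magnification: {magnification}")
    return dispatch[magnification]
```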
- step S205 the classification results of the objects to be classified in various sizes are merged to obtain classification results of the objects to be classified in at least two sizes in the digital slice.
- the image processing device 100 merges the classification results of the objects to be classified of various sizes, so as to obtain the classification results of the objects to be classified of various sizes in the digital slice.
- the breast ducts may include large ducts and small ducts.
- the image processing device 100 can classify the large ducts and the small ducts in different images to be classified, and finally merge the classification results of the large ducts and the small ducts.
- the classification results of various sizes of breast ducts in digital slices can be obtained at the same time.
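- the merge in step S205 can be sketched minimally as follows, assuming (hypothetically) that each per-size result is a mapping from a duct identifier to a category label; the patent only specifies that the per-size results are merged, not their structure.

```python
def merge_classification_results(*per_size_results):
    """Merge per-size classification results (duct_id -> label) into a
    single mapping covering all ducts in the digital slice."""
    merged = {}
    for results in per_size_results:
        merged.update(results)  # duct identifiers are assumed unique per size
    return merged
```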
- the image processing device 100 obtains a digital slice containing objects to be classified of at least two sizes, and obtains images of the object to be classified in at least two scales according to the digital slice, wherein, compared with a smaller-scale image, a larger-scale image has a smaller image size and a higher image resolution. Then, among the images of the object to be classified in the at least two scales, the image processing device 100 takes the image whose image size matches the size of the object to be classified as the image to be classified of that object.
- in this way, the image processing device 100 can adaptively select the image to be classified according to the actual size of the object to be classified, and use the classification model corresponding to the image resolution of the image to be classified to classify the object. Finally, the classification results of the objects to be classified are merged, achieving the effect of accurately classifying the objects of various sizes contained in the digital slice and improving the accuracy of classifying the objects to be classified in the digital slice.
- FIG. 5 is a schematic flowchart of the step of obtaining images of the object to be classified in at least two scales in an embodiment.
- step S202, in which images of the object to be classified in at least two scales are obtained according to the digital slice, can be realized in the following ways:
- Step S501 Obtain the physical size of the pixel in at least two scales as the physical size of the target pixel;
- Step S502 Determine the image resolution of the image of the object to be classified in at least two scales as the target image resolution
- step S503 the image size of the image containing the object to be classified in the digital slice is reduced to an image size corresponding to the physical size of the target pixel and the resolution of the target image to obtain an image of the object to be classified in at least two scales.
- the image processing device 100 can reduce the image size of the image containing the object to be classified in the digital slice according to the pixel physical size and the image resolution corresponding to different scales, to obtain images of the object to be classified in at least two scales, while keeping the size of the object to be classified consistent across the images of each scale, which improves the accuracy of classifying the object to be classified.
- the image processing device 100 can obtain and read the physical size of the pixel corresponding to the aforementioned at least two scales as the target pixel physical size, and read the image resolution required for the image of the object to be classified in the at least two scales as the target image resolution.
- specifically, the image processing device 100 can select the image containing the object to be classified from the digital slice, and reduce its image size to the image size corresponding to the target pixel physical size and the target image resolution, so as to obtain images of the object to be classified in at least two scales.
- in one embodiment, before the image size of the image containing the object to be classified in the digital slice is reduced to the image size corresponding to the target pixel physical size and the target image resolution to obtain images of the object to be classified in at least two scales, the image processing device 100 may also determine the image size of the image containing the object to be classified in the digital slice in the following manner. The specific steps include:
- the pixel size of the largest original image of a digital slice generally differs, usually depending on the image scanning device used. The solution of this embodiment can handle digital slices of various pixel sizes and output images at all scales in which the object to be classified has the same actual physical size.
- the image processing device 100 can read the pixel size of the largest original image of the digital slice (such as a digital slice scanned by a 40x lens) as the original pixel physical size; then, after obtaining the target image resolution and the target pixel physical size, the image processing device 100 calculates the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution, and the target pixel physical size.
- for the foregoing determination of the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution, and the target pixel physical size, the specific steps may include:
- the pixel physical size ratio refers to the ratio of the physical size of the target pixel to the physical size of the original pixel.
- the image processing device 100 may specifically determine the image size of the image containing the object to be classified in the digital slice according to the product of the target image resolution and the pixel physical size ratio.
- the shape of an image and of a pixel is generally rectangular; that is, an image size has an image horizontal size and an image vertical size, and a pixel physical size correspondingly has a pixel horizontal physical size and a pixel vertical physical size. Therefore, the aforementioned pixel physical size ratio may include the pixel horizontal physical size ratio and the pixel vertical physical size ratio; the target pixel physical size may include the target pixel horizontal physical size and the target pixel vertical physical size; the original pixel physical size may include the original pixel horizontal physical size and the original pixel vertical physical size; and the target image resolution may include the target image horizontal resolution and the target image vertical resolution.
- the above step of obtaining the pixel physical size ratio may include:
- the ratio of the horizontal physical size of the target pixel to the horizontal physical size of the original pixel is taken as the pixel horizontal physical size ratio;
- the ratio of the vertical physical size of the target pixel to the vertical physical size of the original pixel is taken as the pixel vertical physical size ratio.
- the image processing device 100 may first determine the horizontal and vertical physical sizes of the target pixel and the horizontal and vertical physical sizes of the original pixel; then, the image processing device 100 may calculate the ratio of the horizontal physical size of the target pixel to the horizontal physical size of the original pixel to obtain the pixel horizontal physical size ratio, and calculate the ratio of the vertical physical size of the target pixel to the vertical physical size of the original pixel to obtain the pixel vertical physical size ratio.
- the above step of determining the image size of the image containing the object to be classified in the digital slice based on the target image resolution and the pixel physical size ratio may include:
- the following describes determining the image size of the image containing the object to be classified in the digital slice based on the target image resolution and the pixel physical size ratio.
- the product of the horizontal resolution of the target image and the pixel horizontal physical size ratio is taken as the horizontal size of the image containing the object to be classified in the digital slice, and the product of the vertical resolution of the target image and the pixel vertical physical size ratio is taken as the vertical size of the image containing the object to be classified in the digital slice.
- the image processing device 100 may first determine the horizontal resolution and the vertical resolution of the target image. Then, the image processing device 100 multiplies the horizontal resolution of the target image by the pixel horizontal physical size ratio to obtain the image horizontal size of the image containing the object to be classified in the digital slice; the image processing device 100 can also multiply the vertical resolution of the target image by the pixel vertical physical size ratio to obtain the image vertical size of that image. In this way, the image processing device 100 obtains the image size of the image containing the object to be classified in the digital slice from the image horizontal size and the image vertical size.
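- the horizontal and vertical computations above reduce to two ratio-and-product steps; a minimal sketch with hypothetical parameter names (physical sizes in microns per pixel, resolutions in pixels):

```python
def image_size_in_original_pixels(target_res_x, target_res_y,
                                  target_px_w, target_px_h,
                                  orig_px_w, orig_px_h):
    """Image size, in original-slide pixels, of the region whose downscaled
    version has the target resolution and target pixel physical size."""
    ratio_x = target_px_w / orig_px_w   # pixel horizontal physical size ratio
    ratio_y = target_px_h / orig_px_h   # pixel vertical physical size ratio
    return round(target_res_x * ratio_x), round(target_res_y * ratio_y)
```

For example, with a 40x original (about 0.25 microns per pixel) and a target of 2048 × 2048 pixels at 0.5 microns per pixel, the region to extract spans 4096 × 4096 original pixels.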
- the following uses a specific example to describe in detail the method of obtaining the image size of the image containing the object to be classified in the digital slice.
- the image processing device 100 reads the pixel size of the largest original image of the digital slice.
- the pixel size includes pixelsize_x and pixelsize_y, which represent the physical size of the pixel in the horizontal and vertical directions, respectively.
- the pixel size (including pixelsize_x and pixelsize_y) of the digital slice image obtained by scanning with a 40x lens is about 0.25 microns.
- the pixel size of a digital slice image obtained by scanning with a 20x lens is generally about 0.5 microns. Based on this, suppose for example that the vertical resolution of the image corresponding to a certain scale is M pixels and the horizontal resolution is N pixels, and that the vertical physical size and horizontal physical size of the pixels of that image are H and W microns, respectively.
- in this case, the image processing device 100 can obtain an image with an area size of h_wsi × w_wsi pixels from the image data of the digital slice through the openslide toolkit of python, and then scale this image to the size of M × N pixels, thereby obtaining the image corresponding to that scale.
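- a sketch of this extraction step using the python openslide toolkit mentioned above (third-party package openslide-python, assumed installed); the helper restates the size computation so the sketch is self-contained, and all names are illustrative:

```python
def wsi_region_size(m, n, h_um, w_um, base_um=0.25):
    """Pixels to read at level 0 so that, after scaling to M x N pixels,
    each output pixel spans w_um x h_um microns (base_um: level-0 microns
    per pixel, e.g. about 0.25 for a 40x scan)."""
    return round(m * h_um / base_um), round(n * w_um / base_um)

def read_scaled_patch(slide_path, origin, m, n, h_um, w_um, base_um=0.25):
    """Read an h_wsi x w_wsi region from the WSI and scale it to M x N."""
    import openslide  # third-party; import deferred so the helper above runs alone
    h_wsi, w_wsi = wsi_region_size(m, n, h_um, w_um, base_um)
    slide = openslide.OpenSlide(slide_path)
    patch = slide.read_region(origin, 0, (w_wsi, h_wsi)).convert("RGB")
    return patch.resize((n, m))  # PIL resize to N x M pixels
```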
- the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, wherein the first scale is smaller than the second scale;
- the image whose image size matches the size of the object to be classified as the image to be classified of the object to be classified may specifically include:
- when the size of the object to be classified is larger than the image size of the second image, the first image, whose image size is larger than the size of the object to be classified, is used as the image to be classified of the object to be classified.
- the image processing device 100 may first obtain the size of the object to be classified and then compare it with the image size of the second image; if the size of the object to be classified is larger than the image size of the second image, the object is not completely contained in the second image, so the image processing device 100 cannot accurately classify it there. In this case, the image processing device 100 can classify the object to be classified in the first image, whose image size is larger than the size of the object, to improve classification accuracy.
- if the image processing device 100 determines that the size of the object to be classified is smaller than the image size of the second image, then, since the second image has a higher image resolution than the first image, the image processing device 100 can classify the object to be classified in the second image, so as to improve the accuracy of classification.
- before the first image whose image size is larger than the size of the object to be classified is used as the image to be classified, the size of the object to be classified can be obtained through the following steps, including:
- the image processing device 100 may obtain the contour feature of the object to be classified in the image, and determine the size of the object to be classified according to the contour feature.
- the contour feature may include the coordinates of the outer contour points of the object to be classified.
- the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, and the first scale is smaller than the second scale.
- the image processing device 100 can obtain the contour point coordinates of the object to be classified on the first image; from these contour point coordinates, the size of the object to be classified can be obtained as the circumscribed rectangle formed by its outer contour. Since the actual physical size of the object to be classified is the same in the images at each scale, the size determined on one image can be applied to images of other scales for size comparison.
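- the circumscribed rectangle can be computed directly from the outer contour point coordinates; a minimal sketch with a hypothetical point format:

```python
def circumscribed_rectangle(contour_points):
    """Axis-aligned bounding rectangle (x, y, width, height) of the outer
    contour; contour_points is an iterable of (x, y) pixel coordinates."""
    xs = [x for x, _ in contour_points]
    ys = [y for _, y in contour_points]
    return min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)
```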
- a pre-trained segmentation model for the object to be classified may be used to obtain the contour feature of the object to be classified.
- the above step of obtaining the contour feature of the object to be classified may specifically include:
- the segmentation model corresponding to the first scale is used to obtain the segmentation result of the object to be classified on the first image; according to the segmentation result, the contour feature is obtained.
- the image processing device 100 may use segmentation models corresponding to different scales to segment the objects to be classified in the images of corresponding sizes, and obtain their contour features according to the segmentation results.
- the segmentation model can be applied to images of different scales to segment the object to be classified from the background of the image. Different scales can correspond to different segmentation models; the image processing device 100 can train, in advance, a segmentation model for each scale using training data of the corresponding scale, so as to obtain segmentation models that can identify and segment the object to be classified at the various scales.
- the fully convolutional densely connected network FC-DenseNet model can be used to realize the segmentation model of the object to be classified.
- the structure of the FC-DenseNet network model is shown in Figure 6 below.
- Figure 6 is a schematic diagram of the structure of a segmentation model in an embodiment, where DB stands for Dense Block (dense module), C for Convolution, TD for Transitions Down (downsampling), TU for Transitions Up (upsampling), CT for concatenation (merging), SC for Skip Connection (skip link), Input for the input image, and Output for the output segmentation result.
- the images of the object to be classified in at least two scales may include a first image corresponding to a first scale and a second image corresponding to a second scale, and the first scale is smaller than the second scale.
- the image processing device 100 can train the FC-DenseNet network model in the images corresponding to the first scale and the second scale based on the image data of the object to be classified to obtain the segmentation model corresponding to the first scale and the second scale.
- the segmentation models corresponding to the first scale and the second scale respectively obtain the segmentation results of the object to be classified on the first image and the second image. Take the large and small ducts of the breast duct as an example to describe the segmentation results in detail. Refer to FIG. 7, which is a schematic diagram of a segmentation result in an embodiment.
- FIG. 7 shows the duct segmentation results of the first image of 1024 pixels × 1024 pixels under a 5x lens and of the second image of 2048 pixels × 2048 pixels under a 20x lens; it can be seen that applying models at the two scales segments the large ducts and small ducts in a complementary manner, thereby improving the efficiency and accuracy of segmenting the object to be classified.
- the image processing device 100 may use the segmentation model corresponding to the first scale to obtain the segmentation result of the object to be classified on the first image, and obtain the contour feature according to the segmentation result.
- the first image may completely contain the object to be classified. Therefore, by segmenting the object to be classified on the first image, its contour feature can be obtained in full.
- the above step of obtaining the size of the object to be classified according to the contour feature may specifically include:
- the image range occupied by the object to be classified in the first image is obtained; the image range occupied by the object to be classified in the first image is taken as the size of the object to be classified.
- the image processing device 100 can acquire the contour feature of the object to be classified on the first image, and the contour feature can include the coordinates of the outer contour points of the object to be classified on the first image, so that the image processing device 100 can determine, from the outer contour point coordinates, the image range occupied by the object to be classified in the first image and use that image range as the size of the object to be classified.
- FIG. 8 is a schematic diagram of the image range occupied by the object to be classified in an embodiment.
- the image range occupied by the object to be classified in the first image may include a horizontal image range and a vertical image range.
- the horizontal image range and the vertical image range occupied by the object to be classified can frame the circumscribed rectangle of its outer contour, so that the image processing device 100 can use the size of the circumscribed rectangle as the size of the object to be classified.
- in one embodiment, before the first image is used as the image to be classified when the size of the object to be classified is larger than the image size of the second image, the following steps can be used to determine whether the size of the object to be classified is greater than the image size of the second image, including:
- the image processing device 100 may determine, as follows, that the size of the object to be classified is larger than the image size of the second image.
- the image processing device 100 can take, as the judgment condition that the size of the object to be classified is greater than the image size of the second image, at least one of the following: the horizontal image range occupied by the object to be classified in the first image is larger than the horizontal image size of the second image, or the vertical image range occupied by the object to be classified in the first image is larger than the vertical image size of the second image. When the image processing device 100 determines that either condition is satisfied, it can determine that the size of the object to be classified is larger than the image size of the second image, which improves the efficiency of matching the image size with the size of the object to be classified.
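- the judgment condition can be written as a single boolean test; the names are illustrative, and the object's ranges are assumed to already be expressed at the second image's scale:

```python
def exceeds_second_image(obj_h_range, obj_v_range, img_h_size=2048, img_v_size=2048):
    """True when the object's horizontal OR vertical image range exceeds
    the corresponding size of the second image."""
    return obj_h_range > img_h_size or obj_v_range > img_v_size
```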
- an image processing method is also provided, as shown in FIG. 9, which is a schematic flowchart of an image processing method in another embodiment, and the image processing method can be applied to the image processing shown in FIG. 1 Device 100, the method specifically includes:
- Step S901 The image processing device 100 obtains a digital slice containing at least two sizes of objects to be classified;
- Step S902 the image processing device 100 obtains the physical size of the pixel in at least two scales as the physical size of the target pixel;
- Step S903 The image processing device 100 determines the image resolution of the image of the object to be classified in at least two scales as the target image resolution;
- the larger-scale image has a smaller image size and higher image resolution than a smaller-scale image.
- Step S904 the image processing device 100 obtains the original pixel physical size of the digital slice, and determines the image size of the image containing the object to be classified in the digital slice according to the original pixel physical size, the target image resolution and the target pixel physical size;
- Step S905 the image processing device 100 reduces the image size of the image containing the object to be classified in the digital slice to an image size corresponding to the physical size of the target pixel and the resolution of the target image, to obtain an image of the object to be classified in at least two scales ;
- step S906 the image processing device 100 uses the image whose image size matches the size of the object to be classified among the images of the object to be classified in at least two scales as the image to be classified of the object to be classified;
- Step S907 When the image to be classified is the first image, the image processing device 100 uses the image semantic classification model to classify the object to be classified in the first image to obtain the classification result of the object to be classified of the corresponding size; when the image to be classified is the second image, the image processing device 100 uses the local feature classification model to classify the object to be classified in the second image to obtain the classification result of the object to be classified of the corresponding size;
- the images of the object to be classified in at least two scales may include the first image corresponding to the first scale and the second image corresponding to the second scale, and the first scale is smaller than the second scale.
- step S908 the image processing device 100 merges the classification results of the objects to be classified in various sizes to obtain classification results of the objects to be classified in at least two sizes in the digital slice.
- the above image processing method can adaptively select the images to be classified according to the actual size of the objects to be classified, use the image semantic classification model and the local feature classification model in parallel in the corresponding images to classify objects of different sizes, and finally merge the classification results of the objects of various sizes to obtain the classification results of the objects to be classified of various sizes on the digital slice, achieving the technical effect of accurate classification.
- although the steps of the above flowcharts are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless specifically stated herein, the execution order of these steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages. These sub-steps or stages are not necessarily completed at the same time but may be executed at different times, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
- FIG. 10 is a schematic flow diagram of the image processing method in an application example. The specific steps include:
- Step 1 Multi-scale data generation, including:
- Input: a WSI image file (WSI: Whole Slide Image, full-field digital pathology slice), the desired pixel size M × N of the image at a certain scale, and the pixel physical sizes H and W.
- Output: WSI patch images and the corresponding annotated images.
- the pixel length of the largest original image of a WSI generally differs, depending mainly on the WSI image scanning device.
- the cells in the image have actual physical lengths, so in this application example all image pixels should correspond to the same physical size in order to correctly achieve segmentation and classification. Therefore, in this step, WSI files of various pixel sizes can be handled through the following steps, outputting images of the various scales with a uniform physical size.
- read the pixel length of the largest original image of the WSI, including pixelsize_x and pixelsize_y, which are the physical lengths of a pixel in the horizontal and vertical directions, respectively.
- the pixelsize of a WSI image scanned with a 40x lens is about 0.25 microns;
- the pixelsize of a WSI image scanned with a 20x lens is generally about 0.5 microns.
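- from the two data points above (40x is about 0.25 microns, 20x about 0.5 microns), the pixel size scales roughly inversely with magnification; as an illustrative rule of thumb only (real scanners vary):

```python
def approx_pixel_size_um(magnification):
    """Approximate microns per pixel for a given scan magnification,
    anchored at 40x = 0.25 um; illustrative only, not a scanner spec."""
    return 0.25 * 40 / magnification
```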
- the calculation method of the pixel length and width occupied by the image to be cropped on the image is as follows:
- the 2048 × 2048 second set of data here is equivalent to a 512-pixel image under the 5x lens; its field of view is therefore one quarter of that of the first set of data, so some large ducts may exceed the size of a single image, which increases the difficulty of identification. This is also the reason for using the first set of scale data; however, the definition of the second set of data is twice that of the first set, so it is more suitable for smaller ducts and for fine segmentation of cell regions.
- the schematic diagram of obtaining the images corresponding to each scale from the original WSI map is as follows. For the WSI image at each scale, obtain the labeled image of the image area where the breast duct is located and, according to deep-learning training requirements, generate a multi-value or binary mask image; Figures 3(a) and 3(b) respectively show images of objects to be classified and their corresponding labeled images at the two scales cut out of the WSI image file.
- Figure 3(a) corresponds to the 1024-pixel size under a 5x lens;
- Figure 3(b) corresponds to the 2048-pixel size under a 20x lens.
- Step 2 Catheter segmentation:
- Input: the image containing the breast duct and the binarized labeled image of the breast duct;
- Output: the binarized image of the breast duct contour.
- segmentation model 1, using the first set of 5x-lens 1024 × 1024 image data, is aimed at segmenting large ducts;
- segmentation model 2, using the second set of 20x-lens 2048 × 2048 image data, is aimed at fine segmentation of small ducts.
- the segmentation network can use the fully convolutional densely connected network FC-DenseNet to implement the duct segmentation model; this method uses dense blocks to replace the Conv blocks in the traditional U-net.
- the advantage of FC-DenseNet is that its parameter count is relatively low, which gives better theoretical generalization, and it achieves better results than a plain encoder-decoder structure. Refer to Figure 6 for the specific structure of the model.
- Figure 7 shows the segmentation results obtained through the aforementioned two segmentation models.
- the segmentation models of the two scales complement each other, so that both the large ducts and the small ducts are segmented.
- Step 3 Judgment of duct area:
- the maximum length and maximum width of a single duct can be calculated, and the duct pixel coordinates in the current image (a 1024 × 1024 image under the 5x lens or a 2048 × 2048 image under the 20x lens) can be converted into the pixel coordinates of the equivalent 20x lens; it is then determined whether the current duct range is greater than 2048 pixels. Since the classification process allows images to be obtained under various magnifications, a duct exceeding 2048 pixels under the 20x lens must be defined as a large duct, so the conversion must be performed first to determine whether the duct is a large duct.
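- the conversion to equivalent 20x pixels and the large-duct test can be sketched as follows (pixel size is assumed to scale inversely with lens power, consistent with the pixelsize figures above; function names are illustrative):

```python
def equivalent_20x_pixels(extent_px, magnification):
    """Convert a duct extent measured at `magnification` into the
    equivalent pixel count under a 20x lens."""
    return extent_px * 20 / magnification

def is_large_duct(max_length_px, max_width_px, magnification, threshold=2048):
    # Per the text: a duct exceeding 2048 pixels under the 20x lens is large.
    return (equivalent_20x_pixels(max_length_px, magnification) > threshold or
            equivalent_20x_pixels(max_width_px, magnification) > threshold)
```

For example, a duct spanning 1024 pixels in a 5x image corresponds to 4096 equivalent 20x pixels and is therefore judged large.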
- Input: the result of judging whether each duct is a large duct, and the WSI images containing the various breast ducts;
- the large ducts can be processed first; that is, using the FC-DenseNet network model and multi-category labels, a multi-category segmentation model can be trained on the data, and the segmentation results can then be obtained from it.
- the result of the large duct is retained as the large duct classification result, and the output is a complete WSI image.
- the WSI image is an image formed by stitching the single images containing breast ducts, which is convenient for viewing the results.
- Figure 12 shows the multi-category segmentation results based on the FC-DenseNet network model, including large ducts and small ducts.
- the figure shows the prediction results, including carcinoma-in-situ duct 1210, normal duct 1220, and UDH duct 1230 (UDH, normal ductal hyperplasia, typical ductal hyperplasia of the breast).
- The remaining images are confirmed as the classification results of the large ducts.
- The three predicted large ducts are all carcinoma in situ, consistent with the results pre-labeled by the professional technicians.
- An algorithm based on cell segmentation can be used to classify the small ducts.
- The specific steps may include performing cell segmentation on the small-duct image and extracting features based on the cell segmentation result, so as to obtain the classification result of the current duct.
- This application example shows that the image processing method provided by this application can be implemented through a parallel strategy: the algorithm first segments the ducts, and then classifies the large ducts and the small ducts separately.
- The large-duct classification method can use an end-to-end semantic segmentation algorithm, whose advantage is that it handles the large-duct classification problem in a single pass over a larger range. In parallel, the small-duct classification method performs cell segmentation and classification on individual small ducts using the 2048×2048 image from the 20x lens. Because this image has twice the definition of the image used by the large-duct classification method, it can predict the lesion type of a small duct more accurately, so the classification of large ducts and small ducts is handled more accurately, quickly, and effectively.
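The parallel strategy above can be summarized as a small routing sketch. The two classifier callables stand in for the semantic-segmentation model (large ducts) and the cell-segmentation pipeline (small ducts); they are placeholders, not the application's actual models.

```python
# Route each segmented duct to the classification branch matching its size
# (illustrative sketch; the duct dictionary layout is an assumption).

def classify_ducts(ducts, is_large, classify_large, classify_small):
    """Apply the large-duct or small-duct classifier to each duct."""
    results = {}
    for duct_id, duct in ducts.items():
        branch = classify_large if is_large(duct) else classify_small
        results[duct_id] = branch(duct)
    return results

ducts = {"d1": {"extent_20x_px": 3000}, "d2": {"extent_20x_px": 500}}
out = classify_ducts(
    ducts,
    is_large=lambda d: d["extent_20x_px"] > 2048,
    classify_large=lambda d: "semantic-segmentation",
    classify_small=lambda d: "cell-segmentation",
)
print(out)  # {'d1': 'semantic-segmentation', 'd2': 'cell-segmentation'}
```

Because the two branches are independent per duct, they can run in parallel, which is the performance point the application example makes.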
- FIG. 15 is a structural block diagram of an image processing apparatus in an embodiment. An image processing apparatus 1500 is provided; the apparatus 1500 includes:
- the slice acquisition module 1501 is configured to acquire a digital slice containing at least two sizes of objects to be classified;
- The image acquisition module 1502 is configured to acquire, according to the digital slice, images of the object to be classified at at least two scales; among these images, the larger-scale image has a smaller image size and a higher image resolution than the smaller-scale image;
- the image matching module 1503 is configured to use the image whose image size matches the size of the object to be classified among the images of the object to be classified in at least two scales as the image to be classified of the object to be classified;
- the result obtaining module 1504 is configured to classify the objects to be classified in the image to be classified based on the classification model corresponding to the image resolution of the image to be classified, and obtain classification results of the objects to be classified in each size respectively;
- the result fusion module 1505 is used for fusing classification results of objects to be classified of various sizes to obtain classification results of objects to be classified in at least two sizes in the digital slice.
- The image acquisition module 1502 is further configured to: acquire the pixel physical sizes at the at least two scales as target pixel physical sizes; determine the image resolutions of the images of the object to be classified at the at least two scales as target image resolutions; and reduce the image size of the image containing the object to be classified in the digital slice to the image size corresponding to the target pixel physical size and the target image resolution, to obtain images of the object to be classified at at least two scales.
- The image acquisition module 1502 is further configured to: before reducing the image size of the image containing the object to be classified in the digital slice to the image size corresponding to the target pixel physical size and the target image resolution, obtain the original pixel physical size of the digital slice; and determine, according to the original pixel physical size, the target image resolution, and the target pixel physical size, the image size of the image containing the object to be classified in the digital slice.
- The image acquisition module 1502 is further configured to: acquire a pixel physical size ratio, the pixel physical size ratio being the ratio of the target pixel physical size to the original pixel physical size; and determine, according to the target image resolution and the pixel physical size ratio, the image size of the image containing the object to be classified in the digital slice.
- The image acquisition module 1502 is further configured to: take the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size as the pixel horizontal physical size ratio, and the ratio of the target pixel vertical physical size to the original pixel vertical physical size as the pixel vertical physical size ratio; where the pixel physical size ratio includes the pixel horizontal physical size ratio and the pixel vertical physical size ratio, the target pixel physical size includes the target pixel horizontal physical size and the target pixel vertical physical size, and the original pixel physical size includes the original pixel horizontal physical size and the original pixel vertical physical size; the product of the target image horizontal resolution and the pixel horizontal physical size ratio is taken as the image horizontal size of the image containing the object to be classified in the digital slice, and the product of the target image vertical resolution and the pixel vertical physical size ratio is taken as the image vertical size of that image; the image size includes the image horizontal size and the image vertical size;
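The per-axis size computation performed by module 1502 can be sketched as follows, under the stated rule: the crop size in the digital slice equals the target image resolution multiplied by the ratio of target to original pixel physical size, per axis. The function and parameter names are ours, not the patent's.

```python
# Compute the image size of the region of the digital slice that, once
# downscaled to the target resolution, has the target pixel physical size.

def pixel_size_ratio(target_um, original_um):
    """Ratio of target pixel physical size to original pixel physical size."""
    return target_um / original_um

def slice_image_size(target_res, target_px_um, original_px_um):
    """(horizontal, vertical) image size in the digital slice; each axis is
    target resolution times that axis's pixel physical size ratio."""
    w = round(target_res[0] * pixel_size_ratio(target_px_um[0], original_px_um[0]))
    h = round(target_res[1] * pixel_size_ratio(target_px_um[1], original_px_um[1]))
    return w, h

# A 1024x1024 target image whose pixels are 4x the physical size of the
# slice's native pixels is obtained by shrinking a 4096x4096 region:
print(slice_image_size((1024, 1024), (2.0, 2.0), (0.5, 0.5)))  # (4096, 4096)
```

Keeping the two axes separate matches the claims, which define horizontal and vertical ratios independently.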
- The images of the object to be classified at at least two scales include a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; the image matching module 1503 is further configured to: when the size of the object to be classified is greater than the image size of the second image, take the first image, whose image size is greater than the size of the object to be classified, as the image to be classified of the object.
- The image matching module 1503 is further configured to: before taking the first image, whose image size is greater than the size of the object to be classified, as the image to be classified of the object when the size of the object to be classified is greater than the image size of the second image, obtain the contour feature of the object to be classified; and obtain the size of the object to be classified according to the contour feature.
- the image matching module 1503 is further configured to: use the segmentation model corresponding to the first scale to obtain the segmentation result of the object to be classified on the first image; and obtain the contour feature according to the segmentation result.
- The image matching module 1503 is further configured to: obtain, according to the contour feature, the image range occupied by the object to be classified in the first image; and take the image range occupied by the object to be classified in the first image as the size of the object to be classified.
- The image matching module 1503 is further configured to: before taking the first image, whose image size is greater than the size of the object to be classified, as the image to be classified of the object when the size of the object to be classified is greater than the image size of the second image, determine that the size of the object to be classified is greater than the image size of the second image when at least one of the following conditions is satisfied: the horizontal image range occupied by the object to be classified in the first image is greater than the horizontal image size of the second image, or the vertical image range occupied by the object to be classified in the first image is greater than the vertical image size of the second image; wherein the image range occupied by the object to be classified in the first image includes the horizontal image range and the vertical image range.
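The matching rule above reduces to a per-axis overflow test: if the object's occupied range in the first (coarser) image exceeds the second image's size in either axis, the second image cannot contain it, so the first image is used. The identifiers below are illustrative.

```python
# Choose which scale's image to classify, per the "at least one axis
# exceeds" condition (illustrative sketch).

def exceeds_second_image(obj_range, second_size):
    """True when the object's horizontal OR vertical range is larger than
    the second image's corresponding size."""
    (obj_w, obj_h), (img_w, img_h) = obj_range, second_size
    return obj_w > img_w or obj_h > img_h

def pick_image_to_classify(first_image, second_image, obj_range, second_size):
    """Large objects fall back to the coarser first image; others use the
    higher-resolution second image."""
    return first_image if exceeds_second_image(obj_range, second_size) else second_image

# An object spanning 2500x1800 pixels overflows a 2048x2048 second image
# horizontally, so the first image is selected:
print(pick_image_to_classify("first(5x)", "second(20x)", (2500, 1800), (2048, 2048)))
```

Note the disjunction: overflow in a single axis already disqualifies the second image, which is why the claim phrases it as "at least one of the conditions".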
- The images of the object to be classified at at least two scales include a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; the result obtaining module 1504 is further configured to: when the image to be classified is the first image, use an image semantic classification model as the classification model corresponding to the image resolution of the first image to classify the object to be classified in the first image, obtaining the classification result of the object to be classified in the first image; and when the image to be classified is the second image, use a local feature classification model as the classification model corresponding to the image resolution of the second image to classify the object to be classified in the second image, obtaining the classification result of the object to be classified in the second image.
- a smart microscope is also provided, as shown in FIG. 16, which is a structural block diagram of the smart microscope in an embodiment.
- The smart microscope 1600 may include an image scanning device 1610 and an image analysis device 1620; wherein,
- the image scanning device 1610 is used to scan the object to be classified to obtain a digital slice of the object to be classified, and transmit it to the image analysis device 1620;
- the image analysis device 1620 is configured to execute the steps of the image processing method described in any of the above embodiments.
- The smart microscope provided in the above embodiment can be applied to classifying breast ducts.
- the image scanning device 1610 obtains digital slices containing breast ducts of various sizes, and sends them to the image analysis device 1620 for classification.
- The image analysis device 1620 may be equipped with a processor with image processing functions, by which the steps of the image processing method described in any of the above embodiments are executed to classify breast ducts of various sizes in the digital slice, so that breast ducts of various sizes can be accurately classified and the classification accuracy is improved.
- Fig. 17 is a structural block diagram of a computer device in an embodiment.
- The computer device may specifically be the image processing device 100 in FIG. 1.
- The computer device includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus.
- the memory includes a non-volatile storage medium and an internal memory.
- The non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions which, when executed by the processor, cause the processor to implement the image processing method.
- the internal memory may also store computer-readable instructions, and when the computer-readable instructions are executed by the processor, the processor can execute the image processing method.
- The display screen of the computer device may be a liquid crystal display or an electronic ink display; the input device may be an external keyboard, touchpad, or mouse.
- FIG. 17 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine some components, or arrange the components differently.
- A computer device is provided, including a memory and a processor; the memory stores computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the image processing method described above.
- the steps of the image processing method may be the steps in the image processing method of each of the foregoing embodiments.
- a computer-readable storage medium which stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the processor executes the steps of the above-mentioned image processing method.
- the steps of the image processing method may be the steps in the image processing method of each of the foregoing embodiments.
- A computer-readable instruction product or computer-readable instruction is also provided; the computer-readable instruction product or computer-readable instruction includes a computer instruction stored in a computer-readable storage medium.
- the processor of the computer device reads the computer instruction from the computer-readable storage medium, and the processor executes the computer instruction, so that the computer device executes the steps in the foregoing method embodiments.
- Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
- Volatile memory may include random access memory (RAM) or external cache memory.
- RAM can be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.
Claims (15)
- An image processing method, executed by a computer device, comprising: acquiring a digital slice containing objects to be classified of at least two sizes; acquiring, according to the digital slice, images of the objects to be classified at at least two scales, wherein, among the images of the objects to be classified at the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; taking, from the images of the objects to be classified at the at least two scales, the image whose image size matches the size of the object to be classified as the image to be classified of that object; classifying the objects to be classified in the images to be classified based on the classification models corresponding to the image resolutions of the images to be classified, to obtain classification results of the objects to be classified of each size respectively; and fusing the classification results of the objects to be classified of the various sizes to obtain classification results of the objects to be classified of at least two sizes in the digital slice.
- The method according to claim 1, wherein the acquiring, according to the digital slice, images of the object to be classified at at least two scales comprises: acquiring the pixel physical sizes at the at least two scales as target pixel physical sizes; determining the image resolutions of the images of the object to be classified at the at least two scales as target image resolutions; and reducing the image size of the image containing the object to be classified in the digital slice to the image size corresponding to the target pixel physical size and the target image resolution, to obtain the images of the object to be classified at the at least two scales.
- The method according to claim 2, wherein, before the reducing the image size of the image containing the object to be classified in the digital slice to the image size corresponding to the target pixel physical size and the target image resolution to obtain the images of the object to be classified at the at least two scales, the method further comprises: acquiring the original pixel physical size of the digital slice; and determining, according to the original pixel physical size, the target image resolution, and the target pixel physical size, the image size of the image containing the object to be classified in the digital slice.
- The method according to claim 3, wherein the determining, according to the original pixel physical size, the target image resolution, and the target pixel physical size, the image size of the image containing the object to be classified in the digital slice comprises: acquiring a pixel physical size ratio, the pixel physical size ratio being the ratio of the target pixel physical size to the original pixel physical size; and determining, according to the target image resolution and the pixel physical size ratio, the image size of the image containing the object to be classified in the digital slice.
- The method according to claim 4, wherein the acquiring a pixel physical size ratio comprises: taking the ratio of the target pixel horizontal physical size to the original pixel horizontal physical size as a pixel horizontal physical size ratio, and taking the ratio of the target pixel vertical physical size to the original pixel vertical physical size as a pixel vertical physical size ratio; wherein the target pixel physical size ratio comprises the pixel horizontal physical size ratio and the pixel vertical physical size ratio, the target pixel physical size comprises the target pixel horizontal physical size and the target pixel vertical physical size, and the original pixel physical size comprises the original pixel horizontal physical size and the original pixel vertical physical size; and the determining, according to the target image resolution and the pixel physical size ratio, the image size of the image containing the object to be classified in the digital slice comprises: taking the product of the target image horizontal resolution and the pixel horizontal physical size ratio as the image horizontal size of the image containing the object to be classified in the digital slice, and taking the product of the target image vertical resolution and the pixel vertical physical size ratio as the image vertical size of the image containing the object to be classified in the digital slice; wherein the image size comprises the image horizontal size and the image vertical size, and the target image resolution comprises the target image horizontal resolution and the target image vertical resolution.
- The method according to claim 1, wherein the images of the object to be classified at at least two scales comprise a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; and the taking, from the images of the object to be classified at the at least two scales, the image whose image size matches the size of the object to be classified as the image to be classified of the object comprises: when the size of the object to be classified is greater than the image size of the second image, taking the first image, whose image size is greater than the size of the object to be classified, as the image to be classified of the object.
- The method according to claim 6, wherein, before the taking the first image, whose image size is greater than the size of the object to be classified, as the image to be classified of the object when the size of the object to be classified is greater than the image size of the second image, the method further comprises: acquiring a contour feature of the object to be classified; and acquiring the size of the object to be classified according to the contour feature.
- The method according to claim 7, wherein the acquiring a contour feature of the object to be classified comprises: obtaining, by using a segmentation model corresponding to the first scale, a segmentation result of the object to be classified on the first image; and obtaining the contour feature according to the segmentation result.
- The method according to claim 7, wherein the acquiring the size of the object to be classified according to the contour feature comprises: acquiring, according to the contour feature, the image range occupied by the object to be classified in the first image; and taking the image range occupied by the object to be classified in the first image as the size of the object to be classified.
- The method according to claim 9, wherein, before the taking the first image, whose image size is greater than the size of the object to be classified, as the image to be classified of the object when the size of the object to be classified is greater than the image size of the second image, the method further comprises: determining that the size of the object to be classified is greater than the image size of the second image when at least one of the following conditions is satisfied: the horizontal image range occupied by the object to be classified in the first image is greater than the horizontal image size of the second image, or the vertical image range occupied by the object to be classified in the first image is greater than the vertical image size of the second image; wherein the image range occupied by the object to be classified in the first image comprises the horizontal image range and the vertical image range.
- The method according to claim 1, wherein the images of the object to be classified at at least two scales comprise a first image corresponding to a first scale and a second image corresponding to a second scale, the first scale being smaller than the second scale; and the classifying the object to be classified in the image to be classified based on the classification model corresponding to the image resolution of the image to be classified, to obtain the classification results of the objects to be classified of each size respectively, comprises: when the image to be classified is the first image, classifying the object to be classified in the first image by using an image semantic classification model as the classification model corresponding to the image resolution of the first image, to obtain the classification result of the object to be classified in the first image; and when the image to be classified is the second image, classifying the object to be classified in the second image by using a local feature classification model as the classification model corresponding to the image resolution of the second image, to obtain the classification result of the object to be classified in the second image.
- A smart microscope, comprising an image scanning device and an image analysis device, wherein the image scanning device is configured to scan an object to be classified to obtain a digital slice of the object to be classified and transmit it to the image analysis device; and the image analysis device is configured to execute the steps of the image processing method according to any one of claims 1 to 11.
- An image processing apparatus, comprising: a slice acquisition module, configured to acquire a digital slice containing objects to be classified of at least two sizes; an image acquisition module, configured to acquire, according to the digital slice, images of the objects to be classified at at least two scales, wherein, among the images of the objects to be classified at the at least two scales, a larger-scale image has a smaller image size and a higher image resolution than a smaller-scale image; an image matching module, configured to take, from the images of the objects to be classified at the at least two scales, the image whose image size matches the size of the object to be classified as the image to be classified of the object; a result obtaining module, configured to classify the objects to be classified in the images to be classified based on the classification models corresponding to the image resolutions of the images to be classified, to obtain classification results of the objects to be classified of each size respectively; and a result fusion module, configured to fuse the classification results of the objects to be classified of the various sizes to obtain classification results of the objects to be classified of at least two sizes in the digital slice.
- One or more non-volatile storage media storing computer-readable instructions which, when executed by one or more processors, cause the processors to execute the steps of the method according to any one of claims 1 to 11.
- A computer device, comprising a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to execute the steps of the method according to any one of claims 1 to 11.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010095182.0 | 2020-02-14 | ||
CN202010095182.0A CN111325263B (zh) | 2020-02-14 | 2020-02-14 | Image processing method and apparatus, smart microscope, readable storage medium and device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021159778A1 (zh) | 2021-08-19 |
Family
ID=71168938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/127037 WO2021159778A1 (zh) | Image processing method and apparatus, smart microscope, readable storage medium and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111325263B (zh) |
WO (1) | WO2021159778A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111325263B (zh) * | 2020-02-14 | 2023-04-07 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method and apparatus, smart microscope, readable storage medium and device |
CN117011550B (zh) * | 2023-10-08 | 2024-01-30 | 超创数能科技有限公司 | Method and device for identifying impurities in electron microscope photographs |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2336972A1 (en) * | 2007-05-25 | 2011-06-22 | Definiens AG | Generating an anatomical model using a rule-based segmentation and classification process |
CN109034208A (zh) * | 2018-07-03 | 2018-12-18 | 怀光智能科技(武汉)有限公司 | 一种高低分辨率组合的宫颈细胞病理切片分类方法 |
CN110310253A (zh) * | 2019-05-09 | 2019-10-08 | 杭州迪英加科技有限公司 | 数字切片分类方法和装置 |
CN111325263A (zh) * | 2020-02-14 | 2020-06-23 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、智能显微镜、可读存储介质和设备 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9881234B2 (en) * | 2015-11-25 | 2018-01-30 | Baidu Usa Llc. | Systems and methods for end-to-end object detection |
CN109214403B (zh) * | 2017-07-06 | 2023-02-28 | 斑马智行网络(香港)有限公司 | 图像识别方法、装置及设备、可读介质 |
CN109166107A (zh) * | 2018-04-28 | 2019-01-08 | 北京市商汤科技开发有限公司 | 一种医学图像分割方法及装置、电子设备和存储介质 |
CN110533120B (zh) * | 2019-09-05 | 2023-12-12 | 腾讯科技(深圳)有限公司 | 器官结节的图像分类方法、装置、终端及存储介质 |
- 2020-02-14: CN application CN202010095182.0A filed; patent CN111325263B granted (status: active)
- 2020-11-06: PCT application PCT/CN2020/127037 (WO2021159778A1) filed (Application Filing)
Also Published As
Publication number | Publication date |
---|---|
CN111325263A (zh) | 2020-06-23 |
CN111325263B (zh) | 2023-04-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20918313 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20918313 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 01.02.2023) |
|