WO2021148465A1 - Method for outputting a focused image through a microscope - Google Patents

Method for outputting a focused image through a microscope

Info

Publication number
WO2021148465A1
Authority
WO
WIPO (PCT)
Prior art keywords
specimen
image
images
axis
optical device
Application number
PCT/EP2021/051199
Other languages
French (fr)
Inventor
Henning Johannes FALK
Tim RACH
Stefan Günther
Markus Turber
Maximilian HANS
Christoph Witte
Original Assignee
Intuity Media Lab GmbH
Application filed by Intuity Media Lab GmbH
Publication of WO2021148465A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • G02B7/38Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals measured at different points on the optical axis, e.g. focussing on two or more planes and comparing image data
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/24Base structure
    • G02B21/241Devices for focusing
    • G02B21/244Devices for focusing using image analysis techniques

Definitions

  • The document teaches a microscope arrangement which comprises an optical device which is arranged in a detection beam path and is moveable in a first axis (Z-axis).
  • An object table supports a specimen for imaging.
  • The specimen is in one aspect a biological specimen.
  • At least one of the object table or the detection beam path is movable in a plane (XY-plane) that is substantially normal to the first axis, i.e., the Z-axis.
  • The top surface of the specimen is at a distance d from the optical device.
  • A detector is arranged in the detection beam path for capturing a plurality of images of an area of the specimen at different ones of the distances d, and the captured images are stored in a storage medium.
  • A processor is adapted to determine a sharpness value (V) for the stored plurality of images and to select the sharpest image of the plurality of images.
  • The processor aborts determination of the sharpness value on identification of the sharpest image.
  • The processor does not therefore have to continue processing of the images when the processor has determined the image with the best focus. It is not necessary for the exact distance d to be established. The best focused or sharpest one of the plurality of images is chosen.
  • The determination of the sharpness value (V) is carried out by determination of the change of contrast in the image. This enables the determination of the edges of features in the image of the specimen.
  • The processor further comprises a classifying system for classifying one or more of the plurality of images or identified objects in the sharpest image.
  • This uses an image and/or object classifying database.
  • The microscope arrangement can further include a slide stainer to enable staining of specimens to improve the contrast. It is also possible to add a fluorescent agent to the specimen to enable features in the biological specimen to be highlighted and, in this case, the detector can image at two different spectra.
  • The microscope arrangement can be automated to process the specimens automatically and can include a slide delivery arrangement.
  • This document also teaches a method of determining a focused or sharp image of a specimen comprising placing the specimen on an object table and arranging an optical device at the specimen to enable imaging of an area of the specimen.
  • The optical device can be moved along a first axis (Z-axis) and a plurality of images of the specimen captured using a detector.
  • The method further comprises analysing the plurality of images to determine a sharpness value for ones of the plurality of images and selecting the sharpest image from the plurality of images with the best sharpness value.
  • The method is aborted after determination of the sharpest image to reduce the amount of processing required.
  • The optical device is oscillated along the first axis about a position of the maximum sharpness value.
  • The specimen is moved on the object table to capture an image of a different area of the specimen.
  • The document also teaches a computer-implemented method of classifying biological specimens, for example, sputum with bacteria. This method comprises imaging the specimens as set out above and subsequently selecting the focused image from the plurality of images with the best sharpness value, followed by classifying the image using a classification algorithm.
  • The classification algorithm is developed using a classification database of labelled images to enable classification of biological specimens.
  • The classification database is constructed by labeling the focused images with features and storing the labelled image and the features in a database.
  • A classification algorithm is developed by analyzing the stored labelled images and the stored features.
  • Figure 1 shows schematically the structure of the movable elements of a microscope with an autofocus unit according to the prior art.
  • Figure 2 shows an implementation of an image-based autofocus method according to the prior art.
  • Figure 3 shows a schematic representation of an exemplary embodiment of the method according to the invention with decoupled image acquisition.
  • Figure 4 shows diagrams to illustrate the oscillation of the specimen positioning around the focus area during the acquisition of several sharp images with abort criterion.
  • Figure 5 shows an example of the microscope assembly of this document.
  • Fig. 6 shows a diagram to illustrate the logical combination of a sharpness value-dependent abort criterion (S) and a time-dependent abort criterion (T) to increase robustness of specimen positioning around the focus area.
  • Fig. 7 shows a micrograph of a Ziehl-Neelsen stained sputum sample acquired at 100x magnification. Examples of single acid-fast bacteria (AFB) 71, clusters of AFB 71 and different background structures 73 are highlighted.
  • Fig. 8 shows an image portion of a Ziehl-Neelsen stained sputum sample that was manually labeled for the generation of a classification database.
  • Fig. 9 shows an outline of the method.

Detailed Description of the invention
  • The aim of the invention is to provide an inexpensive method and apparatus for capturing one or more images around the position of optimal focus by a microscope.
  • The method and arrangement enable the acquisition of sharp images at different locations in an XY-plane of the specimen in rapid succession.
  • Fig. 5 shows an example of the microscope assembly 10 according to this document.
  • The microscope assembly 10 includes an objective lens 12 which images a specimen 18.
  • The specimen 18 is mounted on an object table 14.
  • A 0.5x adapter 53 is located in a detection beam path 16 between the objective lens 12 and a detector 11 (i.e., a camera).
  • An illumination source 52 illuminates the specimen 18 along an illumination path 51 - in this case from behind the specimen 18, but this is not limiting of the invention.
  • The detector 11, the objective lens 12, the adapter 53 and the illumination beam path 51 collectively form an illumination and detection assembly 54 which can be moved independently of the object assembly 17 along the X, Y and Z-axes.
  • The method comprises capturing a plurality of images at different positions along the Z-axis and determining a contrast-based sharpness value which is then compared by a control device.
  • The image with the highest sharpness value is determined to be the best focused image and is displayed/output for further analysis.
  • The method differs from the prior art in that capturing of the images of the specimen and the positioning of the specimen relative to the detection optics through which the images are captured are decoupled from each other.
  • Fig. 3 shows the effect of the actuator 19 (such as a positioning device, e.g., a stepper motor - known in Fig. 1) which moves the specimen 18 and the optical device 12 at a fixed XY-position relative to each other along the Z-axis either in a stepwise manner or continuously from a start (Pstart) to an end position (Pend) while the detector 11 (e.g., a high-frame rate video camera) captures images (B) of the specimen 18 at the fixed XY-position either as separate snapshots or as a video stream.
  • The optical device 12 includes a detection objective for imaging the specimen 18.
  • The sharpness value (V) of the image is determined by the control unit 20 and assigned to the corresponding image (B).
  • The best focused image (B3 in the example illustrated in Figure 3) is directly selected from a sequence of images captured during Z-movement and output. This contrasts with the two-step process of (I.) focus-position determination and (II.) image acquisition at the focus position of known autofocus implementations (as known from the prior art and shown in Figure 2).
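The selection loop implied by Fig. 3 can be sketched in a few lines. The following is only an illustration: the camera and Z-drive objects and their method names (grab_frame, start_sweep, is_moving) are placeholders rather than an interface defined in this document, and sharpness_value stands for the contrast measure described further below.

```python
# Minimal sketch of the decoupled, single-pass selection of Fig. 3: frames are
# grabbed while the Z drive is already moving, and the sharpest frame seen so
# far is kept. Camera/actuator objects and method names are placeholders.
def capture_sharpest_during_z_sweep(camera, z_drive, sharpness_value):
    best_image, best_value = None, float("-inf")

    z_drive.start_sweep()                 # begin continuous motion Pstart -> Pend
    while z_drive.is_moving():
        frame = camera.grab_frame()       # snapshot or frame from a video stream
        value = sharpness_value(frame)    # contrast-based sharpness (see below)
        if value > best_value:
            best_image, best_value = frame, value

    return best_image, best_value
```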
  • The sharpness value to determine image sharpness is calculated using a mathematical method to determine and enhance edges in the sequence of images.
  • An edge is defined as a set of contiguous pixel positions in the image at which an abrupt change of intensity (gray or color) values occurs between neighbouring pixels. Such an abrupt change of intensity will be referred to as "contrast" in the following.
  • In out-of-focus images the edges are blurred, which means that the change in the intensity values of the pixels is spread out over a larger number of pixels.
  • Focused ones of the images are sharper and thus the contrast changes over a smaller number of the pixels. In this manner, a focused image can be identified by the steepness of change in the intensity values at the image edges.
  • Step 932 - convolution of the image or a subregion of the image with a 1-dimensional or 2-dimensional derivative filter kernel (e.g., Laplace operator, Sobel filter, Canny edge detector, Prewitt filter); Step 934 - summing up the contrast values of the pixels in the convolved image or image subregion.
  • The sharpness value is the sum of the contrast values.
  • A central region of interest covering 20% of the total image area (to reduce required computing power) is convolved with a 2-dimensional Laplacian filter (kernel size 5).
  • The squared sum of pixel values in the convolved region of interest is the sharpness value (step 936).
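A minimal sketch of steps 932-936 as described above, assuming OpenCV: a central region of interest of roughly 20% of the image area is convolved with a 5x5 Laplacian kernel and the squared filter responses are summed. The choice of the green channel and the reading of "squared sum" as a sum of squares are assumptions made for this illustration.

```python
import cv2
import numpy as np

def sharpness_value(image_bgr: np.ndarray, roi_fraction: float = 0.20) -> float:
    """Return a contrast-based sharpness value for one captured frame."""
    # Work on a single channel (here: green) to keep the computation cheap.
    gray = image_bgr[:, :, 1].astype(np.float32)

    # Crop a central region of interest covering ~20% of the image area.
    h, w = gray.shape
    scale = roi_fraction ** 0.5          # side-length factor for ~20% of the area
    rh, rw = int(h * scale), int(w * scale)
    y0, x0 = (h - rh) // 2, (w - rw) // 2
    roi = gray[y0:y0 + rh, x0:x0 + rw]

    # 2D Laplacian (second-derivative) filter with kernel size 5 as edge enhancer.
    lap = cv2.Laplacian(roi, cv2.CV_32F, ksize=5)

    # Sum of squared responses; larger values indicate steeper intensity edges.
    return float(np.sum(lap ** 2))
```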
  • The step of outputting the determined image includes, for example, a display of the image on a display device 31 or the (temporary) storage of the corresponding image file 34 on a storage medium 33.
  • The exact position of the specimen 18 relative to the optical device 12 along the Z-axis during image acquisition need not be known, so that precise positioning or distance control can be dispensed with. As a result, the method can be implemented with inexpensive, comparatively simple components.
  • The distances between the specimen 18 and the optical device 12 are adjusted within a preset range that lies between the start position Pstart and the end position Pend.
  • The control unit 20 changes the distance between the specimen 18 and the optical device 12 along the Z-axis from a previously defined start position to a previously defined end position.
  • The range in which a sharp or optimally focused image can be expected lies between these two positions (defined start position and defined end position).
  • The two positions can be marked, for example, by mechanical end stops or Hall sensors.
  • The end position Pend in which the detection objective in the optical device 12 is brought closest to the specimen 18 is set by a mechanical end stop switch mechanically connected to a spring-loaded tip of the detection objective in the optical device 12.
  • The specimen 18 and the detection objective are continuously or stepwise moved closer to each other until the detection objective touches the specimen 18, marking the start position Pstart of the focus scan range.
  • The spring-loaded objective tip prevents damage to the detection objective and the specimen 18.
  • The method for setting the start position Pstart in the described fashion may be performed at each position in the XY-plane at which a focused image is to be acquired.
  • The start position Pstart is determined once per specimen 18 and is then stored.
  • The method and arrangement are developed to support and replace manual operation of an optical microscope in the process of examination of the specimen 18.
  • The processing time per specimen with the microscope arrangement 10 should not exceed the processing time of manual microscopy for the given type of microscopic examination.
  • The constraints on processing speed will be illustrated on the example of sputum microscopy for tuberculosis diagnosis. It will be appreciated, however, that the method and arrangement may also be used in another context in which other parameters may apply.
  • The World Health Organization recommends the examination of a minimum of 100 fields of view (FOV) on each of the specimens 18 to classify the specimen 18 as being smear negative. Highly positive ones of the specimens 18, in contrast, may be classified upon examination of a smaller number of FOV. Trained personnel spend roughly 10 minutes on the specimen 18 to cover the 100 FOV. Consequently, an automatic device equipped with 3-axis automation and autofocus functionality should also be able to cover the same area of the specimen 18 in no more than 10 minutes.
  • The optical devices 12 with a large FOV and good optical properties at the margins (i.e., Plan-Apochromats) and the associated detectors 11, e.g., cameras with large sensor chips to capture the large FOV, are generally expensive.
  • The arrangement with comparatively inexpensive components and a smaller FOV is exemplarily illustrated in the following: In an acceptable embodiment shown in Fig. 5, a 100x 1.25 NA plan oil-immersion objective with a 160 mm fixed tube length (PA100X-V300, AmScope) is combined with a 0.5x eyepiece to C-Mount adapter (FMA050) and a USB 3.0 digital camera featuring the Aptina AR0330 CMOS (Color) sensor (resolution: 2048 x 1536 pixels, pixel size: 2.2 µm x 2.2 µm).
  • This configuration results in a FOV which is three to four times smaller than the FOV through the eyepiece of a standard optical microscope with a 100x objective and 10x eyepiece.
  • The 300-400 FOV must be imaged with the microscope 10 in a maximum of 10 minutes (i.e., in 1.5-2 seconds per different imaging position in the XY-plane of the specimen 18).
  • The aspects described in the following section aim at the fulfillment of the 1.5 sec/focused image criterion.
  • The determination of the sharpness values is carried out during the movement of the optical device 12 along the Z-axis.
  • High-frequency image capture by the detector 11 and determination of the corresponding sharpness values by the control device 20 is thus carried out during the passage of the different distances between the specimen 18 and the optical device 12, i.e., parallel to the passage of the different focal planes, so that the image output procedure can be carried out very quickly.
  • The sharpness of the captured images is thus calculated in a processor 36 in parallel with the image capture, and the relatively sharpest image is determined and output immediately.
  • The adjustment of the different distances between the specimen 18 and the optical device 12 can be performed continuously or stepwise.
  • At least one image must be acquired while passing through the focus area to ensure that a sharp image can be provided.
  • The quotient of the velocity (v) at which the control unit 20 moves the specimen 18 and the optical device 12 relative to each other along the Z-axis and the frame rate of image capture by the detector (f) is less than the depth of field (Δd) or (for a very thin specimen) the thickness (dx) of the specimen: v/f < Δd or v/f < dx, respectively.
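The sampling condition can be turned into a trivial helper: the axial distance travelled between two consecutive frames, v/f, must stay below the focus extent (depth of field Δd or specimen thickness dx). The numeric example below uses the 53 fps figure mentioned further on and an assumed 1 µm specimen thickness.

```python
# Small helper illustrating the condition v/f < dx (thin specimen) or
# v/f < delta_d (depth of field): the Z distance travelled between two
# consecutive frames must be smaller than the axial extent of the focus region.
# The numbers below are illustrative assumptions, not values from the patent.
def max_axial_velocity(frame_rate_hz: float, focus_extent_um: float) -> float:
    """Largest Z velocity (um/s) that still guarantees one frame inside focus."""
    return frame_rate_hz * focus_extent_um

# Example: 53 fps and a 1 um thin smear -> stay below ~53 um/s along Z.
print(max_axial_velocity(53.0, 1.0))
```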
  • The method does not require an exact assignment of the image to a certain position of the specimen 18 relative to the optical device 12, as an optimally focused image is captured "as it passes by".
  • The Aptina AR0330 CMOS (Color) sensor is an acceptable detector 11 to use in this arrangement.
  • The sensor delivers 53 frames per second (fps) at bin size 2 (1024x770 pixels/image) when powered in video capture mode and connected to the processor 36 in a computer via a USB 3.0 interface.
  • Execution of parallel image acquisition and sharpness value determination requires a fast, preferably GPU-based processor 36 in, for example, the control unit 20.
  • The NVIDIA Jetson Nano 2GB Developer Kit with 128-core NVIDIA Maxwell™ GPU is an acceptable processor for the purpose.
  • The sharpness value calculation may be executed on a 20% central region of interest of each image on one of the 3 RGB color channels, resulting in a processing rate of 45 fps on the Nvidia GPU.
  • The determination of the sharpest image and/or the movement of the specimen 18 and the optical device 12 relative to each other along the Z-axis is aborted if at least one specific sharpness value satisfies an abort criterion.
  • The abort criterion indicates that the optimum distance between the specimen 18 and the optical device 12 has been reached or exceeded.
  • The time per focus cycle is reduced by the fact that the determination of the sharpest image runs parallel to the image acquisition and has a termination criterion, upon fulfillment of which the movement of the actuator 19 (e.g., stepper motor) is terminated.
  • The abort criterion should be selected so that the abort criterion takes effect as soon as possible after an optimally focused image has been acquired.
  • The abort criterion is logically connected to an algorithm for detecting local or global maxima. When the abort criterion is reached, the routine in progress is aborted and the sharpest image acquired during the focus cycle is output.
  • An abort criterion that detects local maxima is described in the following.
  • The sharpness values Vn for the n images are calculated and (if above the activation threshold A) compared to each other.
  • The maximum sharpness value Vmax is determined and continuously updated.
  • When a number d of further images has been captured without exceeding it, the maximum sharpness value Vmax is considered a sharpness peak and the focus scan is aborted.
  • The abort criterion thus terminates the focus scan after finding the number of d images after the local maximum sharpness value Vmax.
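A possible implementation of this local-maximum abort criterion, using the names A (activation threshold), d (number of images after the peak) and Vmax from the description; the class structure itself is an illustrative assumption, not part of the patent.

```python
# Hedged sketch of the local-maximum abort criterion: sharpness values above an
# activation threshold A are compared, the running maximum Vmax is updated, and
# the scan is aborted once d further images have been seen after the image that
# holds the current maximum.
class LocalMaxAbortCriterion:
    def __init__(self, activation_threshold: float, images_after_peak: int):
        self.A = activation_threshold
        self.d = images_after_peak
        self.v_max = float("-inf")
        self.images_since_max = 0

    def update(self, sharpness: float) -> bool:
        """Feed one sharpness value; return True when the scan should abort."""
        if sharpness < self.A:
            return False                      # below threshold: ignore the frame
        if sharpness > self.v_max:
            self.v_max = sharpness            # new candidate peak
            self.images_since_max = 0
        else:
            self.images_since_max += 1        # one more image past the peak
        return self.images_since_max >= self.d
```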
  • The direction of movement of the specimen 18 and the optical device 12 relative to each other along the Z-axis is reversed when at least one predetermined or adaptively calculated abort criterion is reached.
  • The direction of traversing the focus area is thereby changed each time another position of the specimen 18 in the XY-plane is reached.
  • The relative position of the specimen 18 to the optical device 12 "oscillates" (along the Z-axis) with a small amplitude around the position of optimal focus and enables sharp images to be captured in particularly rapid succession.
  • The robustness of the oscillation of the optical device 12 relative to the specimen 18 is increased by a combination of a sharpness value-dependent abort criterion (S) and a time-dependent abort criterion (T) (as shown in Fig. 6).
  • The scan time tmax may be increased until the preset start position Pstart and/or the end position Pend is reached during one focus cycle.
  • This aspect facilitates a robust focusing result at different positions in the XY-plane of the specimen 18 because the focus search range and the oscillation amplitude are progressively adapted to the autofocus result at each XY-position. If the focus plane varies significantly along the Z-axis across a given specimen 18, or physical perturbations of the optical device 12 (such as vibrations, movement, heat expansion, etc.) temporarily or permanently alter the position of the detection optics in the optical device 12 and the specimen 18 relative to each other, the method prevents the focusing routine from permanently losing the area of optimal focus for consecutive focus cycles.
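One plausible reading of the combined criteria of Fig. 6, written as a sketch: a focus cycle ends when either the sharpness-dependent criterion S fires or the scan time exceeds tmax; if only the time limit fired, the scan time is widened for the next cycle, up to the full Pstart-Pend range. The hardware objects, their method names and the widening factor are assumptions, and the sharpness criterion reuses the sketch shown above.

```python
import time

def focus_cycle(camera, z_drive, sharpness_value, abort_s, t_max):
    best_image, best_value = None, float("-inf")
    t0 = time.monotonic()
    peak_found = False

    z_drive.start_sweep()
    while z_drive.is_moving():
        frame = camera.grab_frame()
        value = sharpness_value(frame)
        if value > best_value:
            best_image, best_value = frame, value
        if abort_s.update(value):            # sharpness-dependent criterion S
            peak_found = True
            break
        if time.monotonic() - t0 > t_max:    # time-dependent criterion T
            break
    z_drive.stop()
    z_drive.reverse_direction()              # oscillate around the focus area

    if not peak_found:
        # widen the search for the next cycle, up to the full Pstart-Pend range
        t_max = min(t_max * 1.5, z_drive.full_range_time())
    return best_image, t_max
```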
  • The method and apparatus are designed to image thin biological and medical specimens 18, prepared as smears of body fluids or thin sections of tissues. Examples are blood, sputum and stool smears, smears of cultured bacteria or fungi and histological sections. For examination, these specimens are typically applied to a glass microscope slide, stained to generate specific contrast, and fixed chemically or with heat.
  • The arrangement and method have been developed and tested for the imaging of sputum specimens stained with a Ziehl-Neelsen or similar staining protocol for brightfield imaging or an auramine-rhodamine stain or similar stain for fluorescence imaging, but may also be used for similarly prepared other types of specimens.
  • The method needs a relatively homogeneous distribution of high-contrast features in the specimen 18. To allow reliable focus detection at a given XY-position of the specimen 18, the FOV must not be entirely uniform.
  • The specimen 18 is enriched with a contrast agent during specimen preparation or staining.
  • The contrast agent may be a stain coloring specific background elements or beads/particles that distribute evenly on the specimen without impacting the examination of the relevant image features.
  • Fluorescent beads may be used for contrast generation.
  • The fluorescent beads and the stain for the relevant image features can be spectrally differentiated by their emission spectrum and imaged simultaneously on two different color channels.
  • The spectral channel in which the beads are visible is used for the autofocusing and for determining the best focused image, while the corresponding signal from the other spectral channel carries the information about the relevant image features and is used for examination of the specimen 18.
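A small sketch of this two-channel idea: sharpness is scored only on the channel carrying the beads, while the feature channel of the sharpest frame is what gets returned. The channel indices and the frame source are assumptions made for illustration.

```python
def best_feature_image(frames, sharpness_of_plane, bead_channel=0, feature_channel=1):
    """Pick the feature-channel image of the frame whose bead channel is sharpest.

    frames: iterable of HxWxC arrays; sharpness_of_plane: any function scoring a
    single-channel image (e.g., a Laplacian-based measure as sketched earlier).
    """
    best_feature, best_value = None, float("-inf")
    for frame in frames:
        value = sharpness_of_plane(frame[:, :, bead_channel])   # focus on beads
        if value > best_value:
            best_value = value
            best_feature = frame[:, :, feature_channel]          # keep stain signal
    return best_feature
```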
  • An image stack of more than one consecutively recorded image around the best focused image, as determined by the sharpness values, is stored in the storage medium 33 and can be output and/or displayed on the display device 31 for each focus cycle.
  • The image stack generated in this manner includes image information from directly above and below the focus plane of the specimen 18 and may be valuable for the examination/interpretation of the specimen.
  • The focus stack may be presented to the user as an image stack through which the user can browse (similar to manual refocusing using the Z-drive during live observation of a specimen 18 under a conventional optical microscope). This procedure may be helpful if certain aspects of the specimen 18 appear in different focal planes.
  • The focus stack may be fused into one image using an algorithm that compares and combines areas of the images in the stack in an advantageous manner, e.g., based on a local contrast-based sharpness measure.
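One common way to implement such a local-contrast fusion is to pick, for every pixel, the stack slice with the largest smoothed Laplacian response; the sketch below shows this approach and is not necessarily the exact algorithm intended by the authors.

```python
import cv2
import numpy as np

def fuse_focus_stack(stack):
    """stack: list of single-channel images (same shape) around the focus plane."""
    responses = []
    for img in stack:
        lap = np.abs(cv2.Laplacian(img.astype(np.float32), cv2.CV_32F, ksize=5))
        responses.append(cv2.GaussianBlur(lap, (9, 9), 0))   # local, smoothed contrast

    best_slice = np.argmax(np.stack(responses), axis=0)      # sharpest slice per pixel
    fused = np.zeros_like(stack[0], dtype=np.float32)
    for i, img in enumerate(stack):
        fused[best_slice == i] = img.astype(np.float32)[best_slice == i]
    return fused
```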
  • Such a procedure may be employed to compensate for spherical aberrations as they occur in simple optical components or for a tilt in the specimen's focus plane.
  • The method steps described above are performed by means of a control device.
  • The control device comprises the control unit 20 for moving the specimen 18 and the optical device 12 relative to each other along the Z-axis.
  • The control device further comprises at least one processor 36 that performs the method steps by interacting with the storage medium 33.
  • The processor 36 and the storage medium 33 can be integrated directly into the microscope.
  • The NVIDIA Jetson Nano 2GB Developer Kit with 128-core NVIDIA Maxwell™ GPU is an acceptable processor for the purpose.
  • The processor 36 and the storage medium 33 may be, e.g., part of a mobile device (e.g., laptop, notebook, tablet) and/or a more stationary device (e.g., personal computer).
  • The invention also relates to a computer program comprising computer-executable instructions that cause a computer to perform the method described above when the program is executed on the computer.
  • The computer program is preferably stored and executed on the control device.
  • The invention also relates to a graphical user interface (GUI) by which the user of such a device can control the method according to the invention and evaluate the output images on the display device 31.
  • This GUI may be delivered as an application to be installed and executed on, e.g., a PC or laptop or as a server/web-based application.
  • The microscope arrangement comprises at least one further actuator 19' for moving the optical device 12 and the specimen 18 relative to each other in the XY-plane.
  • The control unit 20 (or a further control unit) enables the navigation in the XY-plane.
  • The movement along the Z-axis and in the XY-plane is coordinated in such a way that the autofocus method to output at least one sharp image may be performed at different positions in the XY-plane of the specimen 18.
  • The detection beam path 16 comprises at least the optical device 12 and at least one detector, such as the electronic camera, which are both mounted on at least one movable axis to allow for positioning of the detection beam path 16 relative to the specimen 18 residing on the object table 14 along the Z-axis, and also the X- and Y-axes of the assembly.
  • The illumination beam path 51 including the illumination light source 52 is mounted on the movable axis/axes together with the detection beam path as illustrated in Figure 5.
  • The object table 14 may be an entity entirely separate from the microscope assembly as part of another device, e.g., a device for conducting certain steps of sample preparation or a device for storing and automatically delivering the specimens to the microscope assembly (multi-slide handling, automatic slide feeding).
  • The object table 14 may include a band conveyor or a rotating platform transporting the specimens 18 to the microscope 10.
  • The object table 14 may be simply the benchtop of a laboratory bench and the microscope 10 is placed over the specimen(s) 18.
  • The microscope 10 may be integrated as a subassembly into a larger assembly of components in which the specimens 18 are shuttled from one station to another to automate more complex specimen preparation and analysis pipelines (e.g., a combined slide stainer and slide imager).
  • The microscope 10 and/or the control device according to the invention can further comprise wireless communication modules (transmitting/receiving devices) and/or electronic components of organic and printed electronics as well as microprocessors or miniaturized sensors and actuators.
  • The transmission of data between the microscope 10 and the control device can thereby be carried out in a wireless or cable-free manner, e.g., by means of Bluetooth, WLAN, ZigBee, NFC, Wibree or WiMAX.
  • The time per image cycle is shortened by the fact that the algorithm implemented in the control unit 20 runs in parallel with the image capture and has the abort criterion described above.
  • The fulfillment of the abort criterion aborts the setting of the different distances between the specimen 18 and the optical device 12 by means of the actuator 19.
  • In Fig. 4, the upper diagram shows the time sequence of image captures for the respective distances ("specimen positions relative to detection") around the gray shaded "focus area". Each point represents one captured image.
  • Focused images (outlined points) can be recognized as maxima in the sharpness value diagram (lower diagram).
  • The abort criterion ensures that the direction of movement of the actuator 19 is reversed after a sharpness maximum has been reached.
  • The abort criterion is preferably selected so that it takes effect immediately after the acquisition of an optimally focused image.
  • The method according to the invention can thus be designed, for example, in such a way that the relative position of the specimen 18 to the optical device 12 "oscillates" with low amplitude around the position of optimum focusing, thus enabling sharp images to be acquired in particularly rapid succession.
  • The microscope 10 and the method for the capture of sharp images may be integrated in a system for the automatic imaging and classification of sputum or similar thin biological/medical specimens to support routine diagnostics, e.g., of tuberculosis.
  • The microscope outputs sharp images of different areas of the stained sputum specimen 18 as shown in Fig. 7, which serve as input for an algorithm to analyse the image content.
  • The algorithm may run on the control device, a separate local computer or a cloud platform.
  • The algorithm evaluates some/all of the following image criteria:
  • The information from the image quality is used to judge the success of image capture. If this step of quality assessment is passed (all questions above answered with "no") the image features are evaluated. Based on these numbers, the analysis results are displayed in the GUI to support the specimen classification decision.
  • The GUI may inform about the number of total images taken/XY-positions scanned, the number of low-quality images acquired and the total number (or a semi-quantitative representation) of AFB, AFB clusters and ambiguous structures. Further, the GUI may offer to revisit the images taken and/or portions of images that contain above-mentioned features to review the performance of the system and to support the diagnosis decision made by the user.
  • The computer program contains classical image analysis algorithms (e.g., to calculate brightness and color distribution histograms) and a deep neural network (DNN) to evaluate the more complex image features.
  • YOLOv3, a state-of-the-art real-time object detection system, was trained with an image training dataset of Ziehl-Neelsen-stained sputum samples containing Mycobacterium tuberculosis to produce a classification algorithm.
  • The training dataset is acquired with the microscope assembly as described in this document and the image and features on the image are manually labeled three times by independent people. Examples of the labelling are shown in Fig. 8.
  • The labels of the three people are merged to generate a ground-truth dataset; the majority vote wins for labels that are placed inconsistently among the three people. Labels are made as bounding boxes on small portions of the micrographs as shown in Figure 8. In the example, images were labeled with three categories: AFB ("Tuberculosis"), AFB clusters ("Cluster") and ambiguous structures that need further analysis ("Unclear").
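The majority-vote merge could, for instance, be implemented by matching bounding boxes of the same class across annotators via their intersection over union (IoU) and keeping boxes confirmed by at least two of the three people. The IoU threshold and the data layout are assumptions for this sketch, not details taken from the document.

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def majority_vote(annotations, iou_threshold=0.5):
    """annotations: list (one entry per annotator) of [(label, box), ...]."""
    ground_truth = []
    for i, person in enumerate(annotations):
        for label, box in person:
            votes = 1
            for j, other in enumerate(annotations):
                if j == i:
                    continue
                if any(l == label and iou(box, b) >= iou_threshold for l, b in other):
                    votes += 1
            # keep a confirmed box once, from the first annotator that drew it
            if votes >= 2 and not any(
                l == label and iou(box, b) >= iou_threshold for l, b in ground_truth
            ):
                ground_truth.append((label, box))
    return ground_truth
```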
  • Fig. 9 shows an outline of the method used for determining the focused image of a specimen 18.
  • The method comprises placing, in step 905, the specimen 18 on an object table 14.
  • This placement 905 could be automatic using, for example, a slide feeder.
  • The optical device 12 is arranged in step 910 at the specimen 18 to enable imaging of an area of the specimen 18. This could be the whole area of the specimen 18 or a limited area of the specimen.
  • The optical device 12 is moved along a first axis (Z-axis) and a plurality of images of the specimen 18 is captured in step 920 using the detector.
  • The plurality of images is analysed in step 930 to determine a sharpness value for ones of the plurality of images and the sharpest image is selected in step 950 from the plurality of images with the best sharpness value.
  • The method is either aborted after determination of the sharpest image or the direction of movement along the first axis is reversed in step 955 to move along the first axis in the inverse direction to capture more of the images.
  • A focus stack of a plurality of images about the image with the best sharpness value can be generated and output in step 980. It is possible, after reversing the direction of movement in step 955, to oscillate the optical device along the first axis (Z-axis) about a position of the maximum sharpness value to minimize the movement along the first axis and increase the frequency of delivering output stacks 980.
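Extracting the stack around the best image (step 980) from the frames collected during one Z sweep can be as simple as the following; the half-width of the stack and the data layout are illustrative assumptions.

```python
def focus_stack_around_best(frames_with_values, half_width=2):
    """frames_with_values: list of (frame, sharpness) collected during one Z sweep."""
    values = [v for _, v in frames_with_values]
    i_best = values.index(max(values))                 # index of the sharpest frame
    lo = max(0, i_best - half_width)
    hi = min(len(frames_with_values), i_best + half_width + 1)
    return [f for f, _ in frames_with_values[lo:hi]]   # slices above and below focus
```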
  • The specimen 18 on the object table 14 can be moved in step 965 if more areas of the specimen 18 are to be imaged 952 or images of different specimens 18 are to be captured 953.
  • It is possible to stain the specimens 18 in step 907 using a contrast agent and/or a fluorescent agent.
  • It is possible to classify objects or features of the focused images in step 970 using the classification algorithm.
  • The classification algorithm is developed by labelling features in step 972 and then running an algorithm in step 974.

Abstract

A method for producing focused microscope images of at least one specimen by a microscope is described. The microscope comprises at least one optical device for imaging the specimen, at least one electronic camera device for capturing at least one image of the specimen, at least one actuator for adjusting the distance between the specimen and the optical device, and at least one control device. The control device comprises a control unit which controls an actuator, by means of which a distance between the specimen and the optical device is varied between at least two different positions and more than one image is recorded during movement between the positions. The sharpness of the images is determined by means of the control device and is assigned to the image in the form of a sharpness value. The invention further relates to a microscope arrangement for carrying out the method and to a control device for controlling the microscope arrangement.

Description

METHOD FOR OUTPUTTING A FOCUSED IMAGE THROUGH A MICROSCOPE
Field of the invention
[0001] The invention relates to a method and microscope arrangement for producing focused microscope images of at least one object by a microscope.
State of the art
[0002] A microscope is a scientific instrument that is used for the visualization of objects or specimens, which have details that are too small to be resolved by the naked eye.
[0003] Microscopes serve as an important tool for viewing and analyzing microstructures in biology, medicine, and materials science. There are many types of microscopes available on the market and these microscopes employ different physical principles for the magnification effect. One example is optical microscopes, which use light as a source of illumination to produce the magnified images using light refraction on glass lenses. In so-called bright-field microscopy, the contrast required for producing a magnified image is generated by colored or dark structures in the illuminated object or specimen. The contrast of a specimen can be artificially enhanced by additional coloring of the specimen. In transmitted-light microscopy, the illumination is passed through the specimen (trans-illumination) before being collected by the microscope objective. Consequently, transparent or thinly sectioned specimens are required for this transmitted-light microscopy.
[0004] In fluorescence microscopy, fluorescent properties of the specimen are used to generate contrast. Fluorescence may be an inherent feature of the specimen or may be introduced artificially by adding fluorescent dyes to the specimen. In fluorescence microscopy, the illumination light and the spectrally discrete fluorescent emission signal are separated either by means of wavelength-specific optical filters or by the direction of illumination/detection. Thus, in contrast to brightfield microscopy, the illumination in fluorescence microscopy can be installed in a trans-arrangement or in an epi-arrangement, in which the illumination objective and the detection objective are positioned on the same side of the specimen to be imaged.
[0005] The image from the optical microscope can be either viewed through an eyepiece or, more commonly nowadays, captured by a light-sensitive camera sensor to generate a so-called micrograph. There is a wide range of sensors available to capture the images. Non-limiting examples are charge-coupled devices (CCD) and complementary metal-oxide semiconductor (CMOS) based technologies, which are widely used. These sensors allow the capture and storage of digital images to the computer. Typically, there is a subsequent processing of these images in the computer to obtain the desired information.
[0006] Fig. 1 shows a setup of a microscope 10 as known in the art. The setup shown in Figure 1 will be assumed for simplification of the further description. A detection beam path 16 comprises a detector (e.g., a digital camera device) 11 and an optical device (e.g., an objective lens) 12. For simplicity, the illumination beam path is not shown in Fig. 1. A specimen 18 is mounted on a microscope slide 13 which resides on an object table 14. The microscope slide 13 (without the specimen) and the object table 14 collectively form the object assembly 17. Different areas of the specimen 18 can be examined by moving the object assembly 17 in an XY-plane perpendicular to an optical axis 15 of the objective lens (in the Z-axis). To adjust the focus, the detection beam path 16 and the object assembly 17 are moved relative to each other along the Z-axis. This can be done, for example, automatically using an actuator 19, such as a stepper motor.
[0007] A plurality of biological specimens (including patient samples) is traditionally examined by light microscopy. Traditionally, the patient samples are prepared as smears of body fluids or thin sections of tissues, stained, and fixated on glass microscope slides for microscopic examination. To evaluate the specimen, e.g., to identify pathogens or abnormal cell/tissue structures, a trained microscopist examines different areas of the prepared specimen. Due to various factors, including thermal expansion, mechanical backlash, and lack of object planarity, the microscope must be repeatedly "focused" when viewing different areas/structures of the specimen or viewing the same area of the specimen over an extended period of time to generate consistently "sharp", i.e., focused images. The focusing involves aligning the specimen to be imaged and the detection beam path with respect to each other in a manner that produces a sharp, focused image of the specimen on the detector. The detector can be, for example, the retina of the observer, a light-sensitive film, or a sensor, such as a camera chip of a digital camera.
[0008] The purpose of an autofocus unit is generally to automate the process of the focusing to produce consistently sharp images. The autofocus unit usually comprises three components: 1) a sensor, usually optical, that reads out a feature of the system that can be used to determine the focus position; 2) a signal processor that uses an autofocus algorithm to interpret data from the sensor; and 3) an adjustment device, such as a motorized axis, that is used to move parts of the optics and the specimen being viewed relative to each other, thereby producing a sharp image on the detector.
[0009] There are different methods for determining an optimum focus point. Some known autofocus systems use light reflections that occur when a (glass) slide is illuminated to determine the position of the object in relation to the detection optics. Other known autofocus methods directly use the image information of the imaging system, which is captured by the detector, for example a digital camera. In this case, the autofocus algorithm calculates a focus value for the captured image based on image contrast or image frequencies. The advantage of such image-based autofocus methods is their relatively simple implementation. Those microscopes that are already equipped with a digital image detector do not require any additional sensors or light sources, since the focus sensor and the image detector are the same component.
[0010] Known implementations of the image-based autofocus methods generally use a two-step process to acquire a sharp image. Firstly, the relative position of the specimen to the detection optics is adjusted stepwise by means of an actuator. In each step, an image or image section is captured, and a focus value is calculated. From the change in the focus values as a function of a position of the object, the position at which an optimally focused image of the object can be captured is determined. In the second step, the specimen and the detection optics are moved to the determined position and an optimally focused image is captured. The method is shown in Figure 2: In a first step (I.), a section along the Z-axis (as known in Fig. 1) is scanned in n steps. One scan step consists of: 1. moving the specimen to a specific position X along the Z-axis (Px), 2. capturing an image for focus analysis at position X (Bx), 3. calculating the focus value (Vx) for the image captured at position X. In the second step (II.) the position of optimal focus (Pf) is determined by a focus algorithm (F) and the specimen is moved to this position Pf. Subsequently, a focused image (Bf) is captured at the position of optimal focus Pf. The image Bf is displayed and/or output for further analysis.
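For comparison with the single-pass method described later in this document, this classical two-step procedure of Figure 2 can be sketched as follows; the stage and camera objects and their method names are placeholders, not an API defined by any of the cited documents.

```python
# Sketch of the prior-art two-step autofocus: (I.) step through n Z positions
# and score each image, (II.) return to the best position and capture the
# final, optimally focused image.
def two_step_autofocus(stage, camera, focus_value, z_positions):
    # Step I: scan the Z section and record a focus value per position.
    scores = []
    for z in z_positions:
        stage.move_to(z)                          # P_x
        image = camera.grab_frame()               # B_x
        scores.append((focus_value(image), z))    # V_x

    # Step II: move to the position of optimal focus and capture the output image.
    _, z_best = max(scores)                       # P_f
    stage.move_to(z_best)
    return camera.grab_frame()                    # B_f
```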
[0011] From DE 101 13 084 A1 a microscope system and a method are known in which the focusing is performed using software by the control and readout unit of a camera module. An electronic camera unit with an image sensor for recording the microscope image is arranged in the beam path of the microscope. The image signals of the camera unit are read via a control and readout unit into a microprocessor system with a signal processor. The microprocessor system calculates according to an algorithm whether the current focus position is "in focus" or whether the microscope is not in focus. If the microscope is not in focus, the microprocessor system generates an actuating signal upon which the control unit of the microscope brings the microscope into focus.
[0012] The known autofocus methods require the use of a control unit and actuator with high absolute positioning precision and high repeatability to allow precise positioning of the assembly at the position of optimum focus. Such precise control units are cost-intensive and therefore unsuitable for relatively simple, low-cost digital microscopes. For this reason, the microscopes equipped with autofocus can only be found in medium to high price segments. In addition, the need to focus before capturing an image slows down the entire image capture process, limiting the ability to capture sharp images in rapid succession.
[0013] US Patent Nr. 8,786,694 B2 (Xie Min, assigned to Abbott Point of Care, Inc.) teaches a method and apparatus for imaging a biologic fluid sample quiescently residing within a chamber. The method includes the steps of: a) positioning the chamber at a Z-axis position relative to an objective lens having a lens axis, wherein the Z-axis is parallel to the lens axis; b) moving one or both of the chamber and the objective lens relative to one another at a constant velocity along the Z-axis; and c) creating one or more images of the biologic fluid sample as one or both of the chamber and the objective lens are moving at a velocity relative to one another within a focus search range along the Z-axis. A sharp image is detected by determining the change in sharpness values between two or more of the images and then stopping the detection when the slope of the sharpness values reaches a minimum or a maximum, i.e., there is a change in the sign of the derivative of the slope. Alternatively, a polynomial equation can be derived to fit the data points and a maximum or minimum position is determined. This requires knowledge of the exact position of the objective lens along the Z-axis.
[0014] US Patent No. 7,417,213 B2 (Krief, assigned to Tripath Imaging) teaches a method of capturing a focused image of a continuously moving slide/objective arrangement. A frame grabber device is triggered to capture an image of the slide through a 20x objective at a first focus level as the slide continuously moves laterally relative to the objective. In alternation with triggering the frame grabber device, the objective is triggered to move to a second focus level after capture of the image of the slide. The objective moves in discrete steps, oscillating between minimum and maximum focus levels. The frame grabber device is triggered at a frequency such that, as the slide continuously moves laterally relative to the objective, multiple images at different focus levels overlap, whereby a slide portion is common to each. The image having the maximum contrast value within the overlapping images represents the optimum focus level for the slide portion, and thus the focused image.
[0015] The focus method used in this patent only works when the focus range and/or the depth of field of the objective (as is the case for a 20x objective) is quite large. It is not suitable for thin samples (e.g., fixed and dried sputum or blood samples are between 0.5 and 5 µm in thickness) or large magnifications, in which the focus easily moves out of the focus range. The movement of the slide/objective arrangement requires multiple images to be captured at different degrees of focus, which takes considerable time to carry out and to process. There is no provision to abort the procedure when the correct focus has been determined.
[0016] DE 10 2014 104704 (Neumeier et al., assigned to EMCO-TEST Prüfmaschinen) teaches a method for the computer-aided evaluation of hardness tests of a test body with the following steps: providing an electronic camera system with at least one camera lens, preferably a micro camera lens, for taking photographs, preferably microscopic photographs, on a test stand for analyzing hardness test impressions on test specimens. A series of photographs of the test body with the same image section is made, and the distance between the camera lens and the test body and/or the focus setting of the camera system is changed for each photograph. The images are saved in compressed form. The file sizes of the individual recorded images are compared with each other and a search is made for the image with the largest file size, whereby the image with the largest file size is taken to be the sharpest image. This patent application does not teach the use of the method for a biological specimen.
Summary of the Invention
[0017] This document teaches a method for producing focused microscope images of at least one specimen by a microscope. The microscope comprises at least one optical device for imaging the specimen, at least one electronic camera device for capturing at least one image of the specimen, at least one actuator for adjusting the distance between the specimen and the optical device, and at least one control device. The control device comprises a control unit which controls an actuator, by means of which the distance between the specimen and the optical device is varied between at least two different positions, and more than one image is recorded during the movement between the positions. The sharpness of the images is determined by means of the control device and is assigned to each image in the form of a sharpness value. The invention further relates to a microscope arrangement for carrying out the method and to a control device for controlling the microscope arrangement.
[0018] The document teaches a microscope arrangement which comprises an optical device which is arranged in a detection beam path and is moveable along a first axis (Z-axis). An object table supports a specimen for imaging. The specimen is, in one aspect, a biological specimen. At least one of the object table or the detection beam path is movable in a plane (XY-plane) that is substantially normal to the first axis, i.e., the Z-axis. The top surface of the specimen is at a distance d from the optical device. A detector is arranged in the detection beam path for capturing a plurality of images of an area of the specimen at different ones of the distances d, and the captured images are stored in a storage medium. A processor is adapted to determine a sharpness value (V) for the stored plurality of images and to select the sharpest image of the plurality of images. In one aspect, the processor aborts the determination of the sharpness value on identification of the sharpest image. The processor therefore does not have to continue processing the images once it has determined the image with the best focus. It is not necessary for the exact distance d to be established. The best focused or sharpest one of the plurality of images is chosen.
[0019] The determination of the sharpness value (V) is carried out by determination of change of contrast in the image. This enables the determination of the edges of features in the image of the specimen.
[0020] In one aspect, the processor further comprises a classifying system for classifying one or more of the plurality of images or identified objects in the sharpest image. This uses an image and/or object classifying database. The microscope arrangement can further include a slide stainer to enable staining of specimens to improve the contrast. It is also possible to add a fluorescent agent to the specimen to enable features in the biological specimen to be highlighted and, in this case, the detector can image at two different spectra.
[0021] The microscope arrangement can be automated to process the specimens automatically and can include a slide delivery arrangement.
[0022] This document also teaches a method of determining a focused or sharp image of a specimen comprising placing the specimen on an object table and arranging an optical device at the specimen to enable imaging of an area of the specimen. The optical device can be moved along a first axis (Z-axis) and a plurality of images of the specimen captured using a detector. The method further comprises analysing the plurality of images to determine a sharpness value for ones of the plurality of images and selecting the sharpest image from the plurality of images with the best sharpness value. The method is aborted after determination of the sharpest image to reduce the amount of processing required.
[0023] In one aspect, the optical device is oscillated along the first axis about a position of the maximum sharpness value. In a further aspect, the specimen is moved on the object table to capture an image of a different area of the specimen.

[0024] The document also teaches a computer-implemented method of classifying biological specimens, for example sputum with bacteria. This method comprises imaging the specimens as set out above, subsequently selecting the focused image from the plurality of images with the best sharpness value, and then classifying the image using a classification algorithm.
[0025] The classification algorithm uses a classification database of labelled images for developing a classification algorithm to enable classification of biological specimens. The classification database is constructed by labelling the focused images with features and storing the labelled images and the features in a database. A classification algorithm is developed by analyzing the stored labelled images and the stored features.
Description of the Drawings
[0026] Figure 1 shows schematically the structure of the movable elements of a microscope with an autofocus unit according to the prior art.
[0027] Figure 2 shows an implementation of an image-based autofocus method according to the prior art.
[0028] Figure 3 shows a schematic representation of an exemplary embodiment of the method according to the invention with decoupled image acquisition.

[0029] Figure 4 shows diagrams to illustrate the oscillation of the specimen positioning around the focus area during the acquisition of several sharp images with an abort criterion.
[0030] Figure 5 shows an example of the microscope assembly of this document.
[0031] Fig. 6 shows a diagram to illustrate the logical combination of a sharpness value-dependent abort criterion (S) and a time-dependent abort criterion (T) to increase the robustness of specimen positioning around the focus area.
[0032] Fig. 7 shows a micrograph of a Ziehl-Neelsen stained sputum sample acquired at 100x magnification. Examples of single acid-fast bacteria (AFB) 71, clusters of AFB 71 and different background structures 73 are highlighted.

[0033] Fig. 8 shows an image portion of a Ziehl-Neelsen stained sputum sample that was manually labeled for the generation of a classification database.
[0034] Fig. 9 shows an outline of the method.

Detailed Description of the Invention
[0035] The invention will now be described on the basis of the drawings. It will be understood that the embodiments and aspects of the invention described herein are only examples and do not limit the protective scope of the claims in any way. The invention is defined by the claims and their equivalents. It will be understood that features of one aspect or embodiment of the invention can be combined with a feature of a different aspect or aspects and/or embodiments of the invention.
[0036] The aim of the invention is to provide an inexpensive method and apparatus for capturing one or more images around the position of optimal focus by a microscope. The method and arrangement enable the acquisition of sharp images at different locations in an XY-plane of the specimen in rapid succession. Fig. 5 shows an example of the microscope assembly 10 according to this document. The microscope assembly 10 includes an objective lens 12 which images a specimen 18. The specimen 18 is mounted on an object table 14. A 0.5x adapter 53 is located in a detection beam path 16 between the objective lens 12 and a detector 11 (i.e., a camera). An illumination source 52 illuminates the specimen 18 along an illumination path 51 - in this case from behind the specimen 18, but this is not limiting of the invention. The detector 11, the objective lens 12, the adapter 53 and the illumination beam path 51 collectively form an illumination and detection assembly 54 which can be moved independently of the object assembly 17 along the X-, Y- and Z-axes.
[0037] The method comprises capturing a plurality of images at different positions along the Z-axis and determining a contrast-based sharpness value which is then compared by a control device. The image with the highest sharpness value is determined to be the best focused image and is displayed/output for further analysis. The method differs from the prior art in that the capturing of the images of the specimen and the positioning of the specimen relative to the detection optics through which the images are captured are decoupled from each other.
[0038] This is illustrated in Figure 3. Fig. 3 shows the effect of the actuator 19 (such as a positioning device, e.g., a stepper motor, as shown in Fig. 1) which moves the specimen 18 and the optical device 12 at a fixed XY-position relative to each other along the Z-axis, either in a stepwise manner or continuously, from a start position (Pstart) to an end position (Pend), while the detector 11 (e.g., a high frame-rate video camera) captures images (B) of the specimen 18 at the fixed XY-position either as separate snapshots or as a video stream. The optical device 12 includes a detection objective for imaging the specimen 18. The sharpness value (V) of each image is determined by the control unit 20 and assigned to the corresponding image (B). The best focused image (B3 in the example illustrated in Figure 3) is directly selected from the sequence of images captured during the Z-movement and output. This contrasts with the two-step process of (I.) focus-position determination and (II.) image acquisition at the focus position of known autofocus implementations (as known from the prior art and shown in Figure 2).
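A minimal Python sketch of this decoupled, single-pass acquisition is shown below. The `actuator`, `camera` and `sharpness` objects are assumed interfaces introduced for the sketch; no Z-position read-back or precise positioning is used, mirroring Figure 3.

```python
def single_pass_focus(actuator, camera, sharpness):
    """Capture frames while the Z-actuator sweeps from Pstart to Pend and keep
    the sharpest one. `actuator`, `camera` and `sharpness` are assumed interfaces."""
    best_image, best_value = None, float("-inf")
    actuator.start_sweep()                    # continuous movement Pstart -> Pend
    while actuator.is_moving():
        frame = camera.grab()                 # B_n: next frame of the video stream
        value = sharpness(frame)              # V_n: sharpness value of the frame
        if value > best_value:                # keep the running best-focused frame
            best_image, best_value = frame, value
    return best_image                         # e.g. B3 in the example of Figure 3
```

The selection happens on the fly, so the sweep can be stopped as soon as an abort criterion (described further below) is satisfied.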
[0039] The sharpness value used to determine image sharpness is calculated using a mathematical method to determine and enhance edges in the sequence of images. An edge is defined as a set of contiguous pixel positions in the image at which an abrupt change of intensity (gray or color) values occurs between neighbouring pixels. Such an abrupt change of intensity will be referred to as "contrast" in the following. In unfocused ones of the images, the edges are blurred, which means that the change in the intensity values of the pixels is spread out over a larger number of pixels. Focused ones of the images, on the other hand, are sharper and thus the contrast changes over a smaller number of pixels. In this manner, a focused image can be identified by the steepness of the change in the intensity values at the image edges.
[0040] An implementation of the sharpness value determination method is shown in Fig. 9 and involves the following steps: Step 932 - convolution of the image or a subregion of the image with a 1-dimensional or 2-dimensional derivative filter kernel (e.g., Laplace operator, Sobel filter, Canny edge detector, Prewitt filter); Step 934 - summing up the contrast values of the pixels in the convolved image or image subregion. The sharpness value is the sum of the contrast values. In the exemplary implementation, a central region of interest of 20% of the total image area (to reduce the required computing power) is convolved with a 2-dimensional Laplacian filter (kernel size 5). The squared sum of pixel values in the convolved region of interest is the sharpness value (step 936).

[0041] The step of outputting the determined image includes, for example, a display of the image on a display device 31 or the (temporary) storage of the corresponding image file 34 on a storage medium 33. The exact position of the specimen 18 relative to the optical device 12 along the Z-axis during image acquisition need not be known, so that precise positioning or distance control can be dispensed with. As a result, the method can be implemented with inexpensive, comparatively simple components. Since the two-step process of (I.) focus-position determination and (II.) image acquisition at the focus position of known autofocus implementations is reduced to a single-step process, the focusing and imaging process is accelerated, so that sharp images can be recorded in rapid succession.
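Referring back to the sharpness value determination of paragraph [0040] (steps 932-936), a minimal sketch using OpenCV and NumPy is given below. The choice of the green channel and the way the 20% area fraction is converted into a per-axis scale are assumptions made for the sketch.

```python
import cv2
import numpy as np

def sharpness_value(image, roi_area_fraction=0.2):
    """Steps 932-936: Laplacian convolution (kernel size 5) of a central region
    of interest covering ~20% of the image area, then the squared sum of the result."""
    if image.ndim == 3:
        image = image[:, :, 1]                    # one color channel (assumption: green)
    h, w = image.shape
    side = roi_area_fraction ** 0.5               # per-axis scale for ~20% of the area
    rh, rw = int(h * side), int(w * side)
    roi = image[(h - rh) // 2:(h + rh) // 2, (w - rw) // 2:(w + rw) // 2]

    # Step 932: 2-dimensional derivative filter (Laplace operator, kernel size 5).
    lap = cv2.Laplacian(roi.astype(np.float64), cv2.CV_64F, ksize=5)

    # Steps 934/936: squared sum of the contrast values in the convolved region.
    return float(np.sum(lap ** 2))
```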
[0042] In one aspect, the distances between the specimen 18 and the optical device 12 are adjusted within a preset range that lies between the start position Pstart and the end position Pend. In this aspect, the control unit 20 changes the distance between the specimen 18 and the optical device 12 along the Z-axis from a previously defined start position to a previously defined end position. The range in which a sharp or optimally focused image can be expected lies between these two positions (defined start position and defined end position). The two positions can be marked, for example, by mechanical end stops or Hall sensors.
[0043] In another aspect, the end position Pend, in which the detection objective in the optical device 12 is brought closest to the specimen 18, is set by a mechanical end stop switch mechanically connected to a spring-loaded tip of the detection objective in the optical device 12. At a given position in the XY-plane of the specimen 18 at which a sharp image is to be acquired, the specimen 18 and the detection objective are continuously or stepwise moved closer to each other until the detection objective touches the specimen 18, marking the start position Pstart of the focus scan range. The spring-loaded objective tip prevents damage to the detection objective and the specimen 18. The method for setting the start position Pstart in the described fashion may be performed at each position in the XY-plane at which a focused image is to be acquired. Alternatively, the start position Pstart is determined once per specimen 18 and is then stored. The control unit 20 uses the stored start position Pstart as reference for any following measurement on the same specimen 18. This embodiment is especially advantageous when short working distance (= high NA) objectives are used in combination with specimens 18 of various thickness. This may be the case if microscope slides 13 of different thicknesses are used or the specimens 18 are prepared with or without a cover slip.
[0044] The method and arrangement are developed to support and replace manual operation of an optical microscope in the process of examination of the specimen 18. To offer an alternative to manual microscope operation, e.g., to medical and research laboratories, the processing time per specimen with the microscope arrangement 10 should not exceed the processing time of manual microscopy for the given type of microscopic examination. The constraints on processing speed will be illustrated using the example of sputum microscopy for tuberculosis diagnosis. It will be appreciated, however, that the method and arrangement may also be used in another context in which other parameters may apply.
[0045] For sputum microscopy, the World Health Organization (WHO) recommends the examination of a minimum of 100 fields of view (FOV) on each of the specimens 18 to classify the specimen 18 as being smear negative. Highly positive ones of the specimens 18, in contrast, may be classified upon examination of a smaller number of FOV. Trained personnel spend roughly 10 minutes on the specimen 18 to cover the 100 FOV. Consequently, an automatic device equipped with 3-axis automation and autofocus functionality should also be able to cover the same area of the specimen 18 in no more than 10 minutes.
[0046] Optical devices 12 with a large FOV and good optical properties at the margins (i.e., Plan-Apochromats) and the associated detectors 11, e.g., cameras with large sensor chips to capture the large FOV, are generally expensive. With the aim of reducing costs, an arrangement with comparatively inexpensive components and a smaller FOV is exemplarily illustrated in the following: In an acceptable embodiment shown in Fig. 5, a 100x 1.25 NA plan oil-immersion objective with a 160 mm fixed tube length (PA100X-V300, AmScope) is combined with a 0.5x eyepiece to C-Mount adapter (FMA050) and a USB 3.0 digital camera featuring the Aptina AR0330 CMOS (color) sensor (resolution: 2048 x 1536 pixels, pixel size: 2.2 µm x 2.2 µm). Depending on the exact distances between the three components (adapter, the optical device with the detection objective, and camera) along the optical axis, this configuration results in a FOV which is three to four times smaller than the FOV through the eyepiece of a standard optical microscope with a 100x objective and 10x eyepiece. Consequently, to cover the same area of the specimen 18 as in manual smear microscopy, 300-400 FOV must be imaged with the microscope 10 in a maximum of 10 minutes (i.e., in 1.5-2 seconds per imaging position in the XY-plane of the specimen 18). The aspects described in the following section aim at the fulfillment of this 1.5 s/focused image criterion.
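A quick back-of-envelope check of this time budget, as a small sketch:

```python
# Time budget per field of view for a 10-minute examination, as in the text above.
total_time_s = 10 * 60              # 10 minutes of examination time
for n_fov in (300, 400):            # smaller FOV of the low-cost setup -> 300-400 positions
    print(f"{n_fov} FOV -> {total_time_s / n_fov:.1f} s per focused image")
# 300 FOV -> 2.0 s per focused image
# 400 FOV -> 1.5 s per focused image
```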
[0047] The determination of the sharpness values is carried out during the movement of the optical device 12 along the Z-axis. High-frequency image capture by the detector 11 and determination of the corresponding sharpness values by the control device 20 are thus carried out during the passage through the different distances between the specimen 18 and the optical device 12, i.e., in parallel with the passage through the different focal planes, so that the image output procedure can be carried out very quickly. The sharpness of the captured images is thus calculated in a processor 36 in parallel with the image capture, and the relatively sharpest image is determined and output immediately.
[0048] The adjustment of the different distances between the specimen 18 and the optical device 12 can be performed continuously or stepwise.
[0049] At least one image must be acquired while passing through the focus area to ensure that a sharp image can be provided. In one aspect, the quotient of the velocity (v) at which the control unit 20 moves the specimen 18 and the optical device 12 relative to each other along the Z-axis and the frame rate of image capture by the detector (f) is less than the depth of field (Δd) or (for a very thin specimen) the thickness (dx) of the specimen: v/f < Δd or v/f < dx, respectively. The method does not require an exact assignment of an image to a certain position of the specimen 18 relative to the optical device 12, as an optimally focused image is captured "as it passes by".
[0050] As mentioned above, the Aptina AR0330 CMOS (color) sensor is an acceptable detector 11 to use in this arrangement. The sensor delivers 53 frames per second (fps) at bin size 2 (1024 x 770 pixels/image) when operated in video capture mode and connected to the processor 36 in a computer via a USB 3.0 interface. Execution of parallel image acquisition and sharpness value determination requires a fast, preferably GPU-based processor 36 in, for example, the control unit 20. The NVIDIA Jetson Nano 2GB Developer Kit with a 128-core NVIDIA Maxwell™ GPU is an acceptable processor for the purpose. As an example, the sharpness value calculation may be executed on a 20% central region of interest of each image on one of the three RGB color channels, resulting in a processing rate of 45 fps on the NVIDIA GPU. Assuming a typical depth of field for a 100x 1.25 NA objective of 0.5 µm, the control unit 20 may move the specimen 18 and the optical device 12 relative to each other at a velocity of up to 45 fps * 0.5 µm = 22.5 µm/s without compromising focusing quality.
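The constraint v/f < Δd (or < specimen thickness) of paragraph [0049] and the velocity figure above can be expressed as a small sketch; the function name and parameterization are illustrative only.

```python
def max_z_velocity(frame_rate_hz, depth_of_field_um, thickness_um=None):
    """Largest Z-velocity that still places at least one frame inside the focus
    region: v/f must stay below the depth of field Δd (or, for very thin
    specimens, below the specimen thickness)."""
    limit = depth_of_field_um if thickness_um is None else min(depth_of_field_um, thickness_um)
    return frame_rate_hz * limit          # µm per second

print(max_z_velocity(45, 0.5))            # 22.5 µm/s, as in the example above
```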
[0051] The determination of the sharpest image and/or the movement of the specimen 18 and the optical device 12 relative to each other along the Z-axis is aborted if at least one specific sharpness value satisfies an abort criterion. The abort criterion indicates that the optimum distance between the specimen 18 and the optical device 12 has been reached or exceeded. In this aspect, the time per focus cycle is reduced by the fact that the determination of the sharpest image runs in parallel with the image acquisition and has a termination criterion, upon fulfillment of which the movement of the actuator 19 (e.g., stepper motor) is terminated. The abort criterion should be selected so that it takes effect as soon as possible after an optimally focused image has been acquired. In one aspect, the abort criterion is logically connected to an algorithm for detecting local or global maxima. When the abort criterion is reached, the routine in progress is aborted and the sharpest image acquired during the focus cycle is output.
[0052] One non-limiting example of an abort criterion that detects local maxima is described in the following. The sharpness value V1 of the first (out-of-focus) image, captured in one focus cycle as described above, is used to calculate an activation threshold A = V1*c, with c being an empirically determined constant. Only those sharpness values above the activation threshold A are considered for the peak detection. In parallel with the image capture, the sharpness values Vn for the n images are calculated and (if above the activation threshold A) compared with each other. The maximum sharpness value Vmax is determined and continuously updated. If, after a new maximum sharpness value Vmax was found, more than a predefined number d of consecutive sharpness values are smaller than the maximum sharpness value Vmax, then the maximum sharpness value Vmax is considered a sharpness peak and the focus scan is aborted. Thus, the abort criterion terminates the focus scan after finding d images after the local maximum sharpness value Vmax. Experiments have shown that d = 15 is a good compromise between speed and accuracy for sputum samples.
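A minimal sketch of this abort criterion follows. The `camera`, `actuator` and `sharpness` objects are assumed interfaces; the value of c is a placeholder (the text only states that c is determined empirically), while d = 15 follows the sputum example above.

```python
def focus_scan_with_abort(camera, actuator, sharpness, c=2.0, d=15):
    """Single-pass focus scan that aborts once a sharpness peak is confirmed.
    Activation threshold A = V1 * c; c and d are empirical parameters.
    Returns None if no value ever exceeded the activation threshold."""
    actuator.start_sweep()
    v1 = sharpness(camera.grab())             # V1 of the first (out-of-focus) image
    threshold = v1 * c                         # activation threshold A
    best_image, best_value, below_count = None, float("-inf"), 0

    while actuator.is_moving():
        frame = camera.grab()
        value = sharpness(frame)
        if value <= threshold:
            continue                           # ignore values below the activation threshold
        if value > best_value:
            best_image, best_value, below_count = frame, value, 0   # new Vmax
        else:
            below_count += 1
            if below_count >= d:               # Vmax confirmed as a sharpness peak
                actuator.stop()                # abort the focus scan
                break
    return best_image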
[0053] In a further aspect, when several positions in the XY-plane are to be imaged automatically, the direction of movement of the specimen 18 and the optical device 12 relative to each other along the Z-axis is reversed when at least one predetermined or adaptively calculated abort criterion is reached. Preferably, the direction of traversing the focus area is thereby changed each time another position of the specimen 18 in the XY-plane is reached. In this way, the relative position of the specimen 18 to the optical device 12 "oscillates" (along the Z-axis) with a small amplitude around the position of optimal focus and enables sharp images to be captured in particularly rapid succession.
[0054] In one aspect, the robustness of the oscillation of the optical device 12 relative to the specimen 18 is increased by a combination of a sharpness value-dependent abort criterion (S) and a time-dependent abort criterion (T) (as shown in Fig. 6):
- if S is reached within a predefined time window tmax, the direction of movement is reversed;
- if S is not reached within tmax, T takes effect, the direction of movement is reversed at t = tmax, and tmax is multiplied by a factor n;
- if S is reached within n * tmax in the consecutive focus cycle, the direction of movement is reversed and the factor n is reset to its starting value nstart;
- if S is not reached within n * tmax in the consecutive focus cycle, T takes effect, the direction of movement is reversed, and tmax is again multiplied by n (n * n * tmax).
[0055] The scan time tmax may be increased until the preset start position Pstart and/or the end position Pend is reached during one focus cycle. This aspect facilitates a robust focusing result at different positions in the XY-plane of the specimen 18 because the focus search range and the oscillation amplitude are progressively adapted to the autofocus result at each XY-position. If the focus plane varies significantly along the Z-axis across a given specimen 18, or if physical perturbations of the optical device 12 (such as vibrations, movement, heat expansion, etc.) temporarily or permanently alter the position of the detection optics in the optical device 12 and the specimen 18 relative to each other, the method prevents the focusing routine from permanently losing the area of optimal focus for consecutive focus cycles.
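The combination of S and T from paragraphs [0054] and [0055] can be sketched as a simple controller. This is one simplified reading of the rules above; `run_focus_cycle`, `move_xy` and `reverse_direction` are assumed callables, and n is the empirical scaling factor.

```python
def oscillating_focus(xy_positions, move_xy, run_focus_cycle, reverse_direction,
                      t_max, n=2.0):
    """Oscillate around the focus area over consecutive focus cycles (Fig. 6).
    `run_focus_cycle(timeout)` is assumed to return True if the sharpness-dependent
    abort criterion S was met within `timeout` seconds."""
    timeout = t_max
    for pos in xy_positions:            # one focus cycle per XY-position
        move_xy(pos)
        s_reached = run_focus_cycle(timeout)
        reverse_direction()             # reverse the Z-movement whether S or T fired
        if s_reached:
            timeout = t_max             # S met in time: reset the scan window
        else:
            timeout *= n                # T took effect: widen the next scan window
            # widening may stop once Pstart or Pend is reached (paragraph [0055])
```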
[0056] The method and apparatus are designed to image thin biological and medical specimens 18, prepared as smears of body fluids or thin sections of tissues. Examples are blood, sputum and stool smears, smears of cultured bacteria or fungi, and histological sections. For examination, these specimens are typically applied to a glass microscope slide, stained to generate specific contrast, and fixed chemically or with heat.
[0057] The arrangement and method have been developed and tested for the imaging of sputum specimens stained with a Ziehl-Neelsen or similar staining protocol for brightfield imaging, or an auramine-rhodamine stain or similar stain for fluorescence imaging, but may also be used for similarly prepared other types of specimens. The method needs a relatively homogeneous distribution of high-contrast features in the specimen 18. To allow reliable focus detection at a given XY-position of the specimen 18, the FOV must not be entirely uniform. This criterion is generally fulfilled in Ziehl-Neelsen-stained sputum specimens, which are marked by a background stained in an irregular blue while acid-fast bacteria (AFB) like Mycobacterium tuberculosis, the pathogen causing tuberculosis, appear as red rods as shown in Fig. 7. Even in the absence of AFB, the specimen 18 provides sufficient contrast for autofocusing due to the background stain. In other ones of the specimens 18, e.g., fluorescently labeled sputum specimens prepared with an auramine-rhodamine stain, the background appears black. Thus, AFB-low or negative specimens present a challenge to image-contrast-based autofocus methods.
[0058] In one aspect, the specimen 18 is enriched with a contrast agent during specimen preparation or staining. The contrast agent may be a stain coloring specific background elements, or beads/particles that distribute evenly on the specimen without impacting the examination of the relevant image features.
[0059] For fluorescence microscopy, fluorescent beads may be used for contrast generation. In one aspect, the fluorescent beads and the stain for the relevant image features can be spectrally differentiated by their emission spectrum and imaged simultaneously on two different color channels. In this configuration, the spectral channel in which the beads are visible is used for the autofocusing and determining the best focused image, while the corresponding signal from the other spectral channel carries the information about the relevant image features and is used for examination of the specimen 18.
[0060] In one aspect, an image stack of more than one consecutively recorded image around the best focused image as determined by the sharpness values is stored in the storage medium 33 and can be output and/or displayed on the display device 31 for each focus cycle. The image stack generated in this manner includes image information from directly above and below the focus plane of the specimen 18 and may be valuable for the examination/interpretation of the specimen. The focus stack may be presented to the user as an image stack through which the user can browse (similar to manual refocusing using the Z-drive during live observation of a specimen 18 under a conventional optical microscope). This procedure may be helpful if certain aspects of the specimen 18 appear in different focal planes. Alternatively, the focus stack may be fused into one image using an algorithm that compares and combines areas of the images in the stack in an advantageous manner, e.g., based on a local contrast-based sharpness measure. Such a procedure may be employed to compensate for spherical aberrations as they occur in simple optical components or for a tilt in the specimen's focus plane.
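One possible realisation of such a local contrast-based fusion of the focus stack is sketched below; this is a common focus-stacking approach, not necessarily the specific algorithm intended by the document, and the smoothing parameter is an assumption.

```python
import cv2
import numpy as np

def fuse_focus_stack(stack):
    """Fuse a list of same-sized grayscale images around the best-focused image
    into one image by choosing, per pixel, the frame with the highest local contrast."""
    frames = [img.astype(np.float64) for img in stack]
    # Local contrast: absolute Laplacian response, smoothed to suppress pixel noise.
    contrast = [cv2.GaussianBlur(np.abs(cv2.Laplacian(f, cv2.CV_64F, ksize=3)),
                                 (0, 0), sigmaX=3) for f in frames]
    index = np.argmax(np.stack(contrast, axis=0), axis=0)      # best frame per pixel
    fused = np.take_along_axis(np.stack(frames, axis=0), index[None, ...], axis=0)[0]
    return fused
```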
[0061] The method steps described above are performed by means of a control device. The control device comprises the control unit 20 for moving the specimen 18 and the optical device 12 relative to each other along the Z-axis. The control device further comprises at least one processor 36 that performs the method steps by interacting with the storage medium 33. The processor 36 and the storage medium 33 can be integrated directly into the microscope. The NVIDIA Jetson Nano 2GB Developer Kit with a 128-core NVIDIA Maxwell™ GPU is an acceptable processor for the purpose. Alternatively, the processor 36 and the storage medium 33 may be, e.g., part of a mobile device (e.g., laptop, notebook, tablet) and/or a more stationary device (e.g., personal computer).

[0062] The invention also relates to a computer program comprising computer-executable instructions that cause a computer to perform the method described above when the program is executed on the computer. The computer program is preferably stored and executed on the control device.
[0063] The invention also relates to a graphical user interface (GUI) by which the user of such a device can control the method according to the invention and evaluate the output images on the display device 31. This GUI may be delivered as an application to be installed and executed on, e.g., a PC or laptop, or as a server/web-based application.
[0064] In another aspect, the microscope arrangement comprises at least one further actuator 19' for moving the optical device 12 and the specimen 18 relative to each other in the XY-plane. The control unit 20 (or a further control unit) enables the navigation in the XY-plane. The movement along the Z-axis and in the XY-plane are coordinated in such a way that the autofocus method to output at least one sharp image may be performed at different positions in the XY-plane of the specimen 18.
[0065] In a further aspect, the detection beam path 16 comprises at least the optical device 12 and at least one detector, such as the electronic camera, which are both mounted on at least one movable axis to allow for positioning of the detection beam path 16 relative to the specimen 18 residing on the object table 14 along the Z-axis, and also the X- and Y-axes of the assembly. In one aspect of the invention, the illumination beam path 51, including the illumination light source 52, either in trans- or epi-illumination configuration, is also mounted on the movable axis/axes together with the detection beam path, as illustrated in Figure 5.
[0066] This further aspect of the invention allows for a high degree of flexibility in the design of the object table 14. For instance, the object table 14 may be an entity entirely separate from the microscope assembly as part of another device, e.g., a device for conducting certain steps of sample preparation or a device for storing and automatically delivering the specimens to the microscope assembly (multi-slide handling, automatic slide feeding). For instance, the object table 14 may include a conveyor belt or a rotating platform transporting the specimens 18 to the microscope 10. Alternatively, the object table 14 may simply be the benchtop of a laboratory bench, and the microscope 10 is placed over the specimen(s) 18. Further, the microscope 10 may be integrated as a subassembly into a larger assembly of components in which the specimens 18 are shuttled from one station to another to automate more complex specimen preparation and analysis pipelines (e.g., a combined slide stainer and slide imager).
[0067] The microscope 10 and/or the control device according to the invention can further comprise wireless communication modules (transmitting/receiving devices) and/or electronic components of organic and printed electronics, as well as microprocessors or miniaturized sensors and actuators. The transmission of data between the microscope 10 and the control device can thereby be carried out in a wireless or cable-free manner, e.g., by means of Bluetooth, WLAN, ZigBee, NFC, Wibree or WiMAX.
[0068] In Figure 4, the time per image cycle is shortened by the fact that the algorithm implemented in the control unit 20 runs in parallel with the image capture and has the abort criterion described above. The fulfillment of the abort criterion aborts the setting of the different distances between the specimen 18 and the optical device 12 by means of the actuator 19. The upper diagram shows the time sequence of image captures for the respective distances ("specimen positions relative to detection") around the gray shaded "focus area". Each point represents one captured image. Ideally focused images (outlined points) can be recognized as maxima in the sharpness value diagram (lower diagram). The abort criterion ensures that the direction of movement of the actuator 19 is reversed after a sharpness maximum has been reached. The abort criterion is preferably selected so that it takes effect immediately after the acquisition of an optimally focused image. The method according to the invention can thus be designed, for example, in such a way that the relative position of the specimen 18 to the optical device 12 "oscillates" with low amplitude around the position of optimum focusing, thus enabling sharp images to be acquired in particularly rapid succession.
[0069] The microscope 10 and the method for the capture of sharp images may be integrated in a system for the automatic imaging and classification of sputum or similar thin biological/medical specimens to support routine diagnostics, e.g., of tuberculosis. According to this aspect, the microscope outputs sharp images of different areas of the stained sputum specimen 18, as shown in Fig. 7, which serve as input for an algorithm to analyse the image content. The algorithm may run on the control device, a separate local computer or a cloud platform. The algorithm evaluates some or all of the following image criteria:
[0070] 1. Image quality:
- Is the staining too light / too dark for further analysis?
- Are there obstacles (e.g. air bubbles in the immersion oil) that prevent examination of the image?

[0071] 2. Image features:
- Number of AFB
- Number of AFB clusters
- Ambiguous structures that need further analysis
[0072] The information from the image quality assessment is used to judge the success of the image capture. If this quality assessment step is passed (all of the questions above answered with "no"), the image features are evaluated. Based on these numbers, the analysis results are displayed in the GUI to support the specimen classification decision.
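A minimal sketch of this two-stage evaluation per focused image is given below. The brightness limits, the label names and the `detections`/`has_obstacles` interfaces are assumptions introduced for the sketch, not values taken from the document.

```python
def evaluate_image(image, detections, brightness_limits=(40, 220),
                   has_obstacles=lambda img: False):
    """Stage 1: quality gate (staining too light/dark, obstacles).
    Stage 2: count detected features for display in the GUI.
    `detections` is assumed to be a list of (label, score, box) tuples from a
    trained detector; `has_obstacles` is an assumed helper."""
    mean_brightness = float(image.mean())
    quality_ok = (brightness_limits[0] <= mean_brightness <= brightness_limits[1]
                  and not has_obstacles(image))
    if not quality_ok:
        return {"quality_ok": False}

    counts = {"AFB": 0, "AFB cluster": 0, "ambiguous": 0}
    for label, score, box in detections:
        if label in counts:
            counts[label] += 1
    return {"quality_ok": True, **counts}
```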
[0073] The GUI may inform about the number of total images taken/XY-positions scanned, the number of low-quality images acquired and the total number (or a semi-quantitative representation) of AFB, AFB clusters and ambiguous structures. Further, the GUI may offer to revisit the images taken and/or portions of images that contain the above-mentioned features to review the performance of the system and to support the diagnosis decision made by the user.
[0074] The computer program contains classical image analysis algorithms (e.g. to calculate brightness and color distribution histograms) and a deep neural network (DNN) to evaluate the more complex image features. In an exemplary implementation, YOLOv3, a state-of-the-art real-time object detection system, was trained with an image training dataset of Ziehl-Neelsen-stained sputum samples containing Mycobacterium tuberculosis to produce a classification algorithm. The training dataset is acquired with the microscope assembly as described in this document, and the images and the features on the images are manually labeled three times by independent people. Examples of the labelling are shown in Fig. 8. The labels of the three people are merged to generate a ground-truth dataset; the majority vote wins for labels that are placed inconsistently among the three people. Labels are made as bounding boxes on small portions of the micrographs as shown in Figure 8. In the example, images were labeled with three categories: AFB ("Tuberculosis"), AFB clusters ("Cluster") and ambiguous structures that need further analysis ("Unclear").
[0075] Fig. 9 shows an outline of the method used for determining the focused image of a specimen 18. As can be seen in the figure, the method comprises placing, in step 905, the specimen 18 on an object table 14. As noted above, this placement 905 could be automatic using, for example, a slide feeder. The optical device 12 is arranged in step 910 at the specimen 18 to enable imaging of an area of the specimen 18. This could be the whole area of the specimen 18 or a limited area of the specimen. In step 915, the optical device 12 is moved along a first axis (Z-axis) and a plurality of images of the specimen 18 is captured in step 920 using the detector. The plurality of images is analysed in step 930 to determine a sharpness value for ones of the plurality of images, and the sharpest image is selected in step 950 from the plurality of images with the best sharpness value. The method is either aborted after determination of the sharpest image or the direction of movement along the first axis is reversed in step 955 to move along the first axis in the inverse direction to capture more of the images.
[0076] At the end of the imaging process, a focus stack of a plurality of images about the image with the best sharpness value can be generated and output in step 980. It is possible, after reversing the direction of movement in step 955, to oscillate the optical device along the first axis (Z-axis) about a position of the maximum sharpness value to minimize the movement along the first axis and increase the frequency of delivering output stacks 980.
[0077] Once the area of the specimen 18 has been imaged, the specimen 18 on the object table 14 can be moved in step 965 if more areas of the specimen 18 are to be imaged 952 or images of different specimens 18 are to be captured 953.
[0078] As noted above, it is possible to stain, in step 907, the specimens 18 using a contrast agent and/or a fluorescent agent.
[0079] In one application, outlined above, it is possible to classify, in step 970, objects or features of the focused images using the classification algorithm. The classification algorithm is developed by labelling features in step 972 and then running an algorithm in step 974.
Reference Numerals
10 Microscope
11 Detector
12 Optical Device
13 Microscope slide
14 Object table
15 Optical axis
16 Detection beam path
17 Object assembly
18 Specimen
19 Actuator
19’ Actuator
20 Control unit
31 Display device
33 Storage medium
34 Image file
36 Processor
51 Illumination beam path
52 Illumination light source
53 Adapter
54 Illumination and detection assembly

Claims

Claims
1. A microscope arrangement (10) comprising: an optical device (12) arranged in a detection beam path (16) and moveable in a first axis (Z-axis); an object table (14) for supporting a specimen (18), wherein at least one of the object table (14) or the detection beam path (16) is movable in a plane (XY-plane) substantially normal to the first axis, and wherein the top surface of the specimen (18) is at a distance d from the optical device (12); a detector (11) arranged in the detection beam path (16) for capturing a plurality of images of an area of the specimen (18) at different ones of the distances d; a storage medium (33) for storing the captured plurality of images; and a processor (36) adapted to determine a sharpness value (V) for the stored plurality of images and selecting the sharpest image of the plurality of images and aborting determination of the sharpness value on identification of the sharpest image and/or aborting the image capture.
2. The microscope arrangement (10) of claim 1, wherein the determination of the sharpness value (V) is carried out by determination of differing contrasts of neighbouring pixels in the image.
3. The microscope arrangement (10) of claim 1 or 2, wherein the processor (36) further comprises a classifying system for classifying one or more of the plurality of images or identified objects in the sharpest image.
4. The microscope arrangement (10) of claim 3, further comprising an image and/or object classifying algorithm.
5. The microscope arrangement (10) of any one of the above claims, further comprising a slide stainer and/or a slide delivery arrangement.
6. The microscope arrangement (10) of any of the above claims, wherein the detector (11) is adapted to image at two different spectra.
7. A method of determining a focussed image of a specimen (18) comprising
- placing (905) the specimen (18) on an object table (14);
- arranging (910) an optical device (12) at the specimen (18) to enable imaging of an area of the specimen (18);
- moving (915) the optical device (12) along a first axis (Z-axis);
- capturing (920) a plurality of images of the specimen (18) using a detector (11);
- analysing (930) the plurality of images to determine a sharpness value for ones of the plurality of images;
- selecting (950) the sharpest image from the plurality of images with the best sharpness value, wherein the method is either aborted (950) after determination of the sharpest image or the direction of movement along the first axis is reversed (955) in order to capture more images (920) at the same or a different position of the sample (952).
8. The method of claim 7, wherein the determination of the sharpness value is carried out by determining (932, 934, 936) changes in contrast values of pixels in the image.
9. The method of claim 7 or 8, further comprising generating (980) a focus stack of a plurality of images about the image with the best sharpness value.
10. The method of claim 9, further comprising oscillating (955) the optical device (12) along the first axis (Z-axis) about a position of the maximum sharpness value.
11. The method of any one of claims 7-10, further comprising moving (965) the specimen (18) on the object table (14) to capture an image of a different area of the specimen (18).
12. The method of one of claims 7-11, further comprising staining (907) the specimens (18) using a contrast agent.
13. The method of claim 12, wherein the specimen (18) is stained using a fluorescent agent.
14. The method of claim 13, further comprising capturing the plurality of images at different channels for the contrast agent and the fluorescent agent.
15. The method of one of claims 7-14, further comprising classifying (970) the focussed images.
16. The method of one of claims 7-14, further comprising automatically feeding (905) ones of the specimen (18) to the object table (14).
17. A computer-implemented method of classifying biological specimens comprising
- placing the biological specimen (18) on an object table (14);
- arranging an optical device (12) at the specimen (18) to enable imaging of an area of the specimen (18);
- moving the optical device along a first axis (Z-axis);
- capturing a plurality of images of the specimen (18) using a detector (11);
- analysing the plurality of images to determine a sharpness value for ones of the plurality of images;
- selecting the focussed image from the plurality of images with the best sharpness value; and
- classifying (970) the image using a classification algorithm.
18. The method of claim 17, wherein the biological specimens are sputum specimens.
19. The method of claim 17 or 18, further comprising staining the biological specimens before capturing the plurality of images.
20. The method of one of claims 17 to 19 further comprising applying a fluorescent agent to the biological specimens (18).
21. A method of establishing a classification algorithm for classification of biological specimens comprising:
- placing the biological specimen (18) on an object table (14);
- arranging an optical device (12) at the specimen (18) to enable imaging of an area of the specimen (18);
- moving the optical device (12) along a first axis (Z-axis);
- capturing a plurality of images of the specimen (18) using a detector (11);
- analysing the plurality of images to determine a sharpness value for ones of the plurality of images;
- selecting the focussed image from the plurality of images with the best sharpness value;
- labelling (972) the focussed image with features;
- storing the labelled image and the features in a database; and
- applying (974) an algorithm to the stored labelled image and the stored features to generate the classification algorithm.
PCT/EP2021/051199 2020-01-22 2021-01-20 Method for outputting a focused image through a microscope WO2021148465A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP20153021 2020-01-22
EP20153021.9 2020-01-22

Publications (1)

Publication Number Publication Date
WO2021148465A1 true WO2021148465A1 (en) 2021-07-29

Family

ID=69187598

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/051199 WO2021148465A1 (en) 2020-01-22 2021-01-20 Method for outputting a focused image through a microscope

Country Status (1)

Country Link
WO (1) WO2021148465A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113960778A (en) * 2021-09-29 2022-01-21 成都西图科技有限公司 Dynamic step focusing method based on intermediate frequency filtering
SE2250140A1 (en) * 2022-02-11 2023-08-12 Cellink Bioprinting Ab Imaging apparatus and method for determning a focal point of a well-plate

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001036939A2 (en) * 1999-11-04 2001-05-25 Meltec Multi-Epitope-Ligand-Technologies Gmbh Method for the automatic analysis of microscope images
DE10113084A1 (en) 2001-03-17 2002-09-19 Leica Microsystems Autofocus microscope system
US20070069106A1 (en) * 2005-06-22 2007-03-29 Tripath Imaging, Inc. Apparatus and Method for Rapid Microscope Image Focusing
US20080151097A1 (en) * 2006-12-22 2008-06-26 Industrial Technology Research Institute Autofocus searching method
US20110157344A1 (en) * 2009-12-31 2011-06-30 Abbott Point Of Care, Inc. Method and apparatus for fast focus imaging biologic specimens
US20110248166A1 (en) * 2010-04-09 2011-10-13 Northeastern University Tunable laser-based infrared imaging system and method of use thereof
DE102014104704A1 (en) 2013-04-10 2014-10-16 EMCO-TEST Prüfmaschinen GmbH Method for the computer-assisted evaluation of hardness tests of a test specimen

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001036939A2 (en) * 1999-11-04 2001-05-25 Meltec Multi-Epitope-Ligand-Technologies Gmbh Method for the automatic analysis of microscope images
DE10113084A1 (en) 2001-03-17 2002-09-19 Leica Microsystems Autofocus microscope system
US20070069106A1 (en) * 2005-06-22 2007-03-29 Tripath Imaging, Inc. Apparatus and Method for Rapid Microscope Image Focusing
US7417213B2 (en) 2005-06-22 2008-08-26 Tripath Imaging, Inc. Apparatus and method for rapid microscopic image focusing having a movable objective
US20080151097A1 (en) * 2006-12-22 2008-06-26 Industrial Technology Research Institute Autofocus searching method
US20110157344A1 (en) * 2009-12-31 2011-06-30 Abbott Point Of Care, Inc. Method and apparatus for fast focus imaging biologic specimens
US8786694B2 (en) 2009-12-31 2014-07-22 Abbott Point Of Care, Inc. Method and apparatus for fast focus imaging biologic specimens
US20110248166A1 (en) * 2010-04-09 2011-10-13 Northeastern University Tunable laser-based infrared imaging system and method of use thereof
DE102014104704A1 (en) 2013-04-10 2014-10-16 EMCO-TEST Prüfmaschinen GmbH Method for the computer-assisted evaluation of hardness tests of a test specimen



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21700948

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21700948

Country of ref document: EP

Kind code of ref document: A1