CN114205519A - Rapid parfocal method and device of amplification imaging system - Google Patents


Info

Publication number
CN114205519A
CN114205519A (application CN202111319974.2A)
Authority
CN
China
Prior art keywords
image
light source
sample
value
point light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111319974.2A
Other languages
Chinese (zh)
Inventor
马朔昕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Terry Technologies Nanjing Co ltd
Original Assignee
Terry Technologies Nanjing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Terry Technologies Nanjing Co ltd filed Critical Terry Technologies Nanjing Co ltd
Priority to CN202111319974.2A
Publication of CN114205519A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00Microscopes
    • G02B21/24Base structure
    • G02B21/241Devices for focusing

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Microscopes, Condenser (AREA)

Abstract

The invention relates to a rapid parfocal method and device for a magnifying imaging system. The method comprises the following steps: acquiring an image A and an image B, corresponding to two monochrome channels, of a sample slide illuminated by a pair of different-colored point light sources whose illumination points lie on opposite sides of the optical axis of the imaging element; calculating an offset p between the two images from image A and image B, and from the offset p calculating the defocus distance l of the current sample from the target focal plane of the magnifying unit, i.e. a target movement value h; and, via the motion unit, placing the sample at the target focal plane of the magnifying unit according to the target movement value h. With this method and device the parfocal plane can be detected rapidly: the distance to move up or down toward the parfocal plane is computed from the offset, and accurate focusing is finally achieved.

Description

Rapid parfocal method and device of amplification imaging system
Technical Field
The invention relates to the field of digital pathology imaging, and in particular to a rapid parfocal method and device for a magnifying imaging system.
Background
In the field of digital pathology, one important goal is to replace manual operation with the automatic, distortion-free capture and recording of microstructures at high magnification.
As shown in fig. 1 and 2, the purpose of fast parfocality in a magnifying imaging system is to place the observed sample accurately on the object plane of the optical magnifying system along the optical axis (usually vertical), so that a camera fixed at the designed image plane captures a clear, sharp projection (i.e. is parfocal) without a change in magnification. If the sample leaves the object plane, the image captured by the camera is not only blurred by diffraction; because of the information loss, a correct image cannot be reconstructed computationally either. To achieve higher optical resolution, a microscope must use an objective lens with a very high numerical aperture, which makes the depth of field very shallow and places very high demands on the motion precision of focusing.
The existing solutions generally apply one of the following principles:
1. Maximum-contrast method. In the parfocal state the diffraction effect, and hence the degree of "blurring", is minimal, i.e. the contrast is highest. By comparing the contrast at different positions, the parfocal position can therefore be found. Representative inventions are CN201610508675.6 and CN201510961654.5. However, at any single position the degree of "blurring" reveals neither the distance nor the direction to the parfocal position, so much time is often spent exploring and comparing sharpness on many planes before the parfocal position is found.
2. Diffraction-effect estimation. The defocus distance is estimated from some index of the distribution of the concentric point spread function caused by diffraction. A representative invention is CN201510330496.3. In practice the diffraction pattern is heavily disturbed by the image of the sample itself, making the estimation difficult; moreover, at even moderate defocus the blur becomes uniform and continuous and no estimate can be made.
3. Distance-measurement method. The distance from the sample to a predetermined position (typically a point along the optical path) is measured with ultra-high-precision ranging tools, which are expensive and have a limited precision ceiling; examples are CN201610589541.1 and CN201510239075.X.
4. Phase-difference method. The optical path is split into multiple copies by half-mirrors, each copy having the same designed image plane. Placing a secondary imaging element slightly in front of or behind the designed image plane of each copy is approximately equivalent to measuring contrast at multiple distances simultaneously. For a uniform, continuous sample, the variation of its parfocal distance is also uniform and continuous. At parfocality the main imaging element shows higher contrast than the secondary ones; a certain degree of defocus makes one secondary imaging element show higher contrast than the main and the other elements. By comparing the contrast of the imaging elements, changes in the parfocal distance can therefore be monitored and compensating movements made. This greatly increases the complexity of the optical system, raises consistency problems among the "mirrored" optical paths, and is costly.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a parfocal method and device that let a magnifying imaging system reach parfocality quickly without adding expensive external aids.
The first technical scheme provided by the invention to solve this problem is a fast parfocal method for a magnifying imaging system. The magnifying imaging system comprises a sample-bearing unit, a magnifying unit and an image-acquisition unit sharing a common optical axis, and further comprises a motion unit adapted to drive the sample-bearing unit and/or the magnifying unit back and forth along that optical axis.
the method comprises the following characteristic steps:
acquiring an image A and an image B, corresponding to two monochrome channels, of the sample slide illuminated by a pair of different-colored point light sources, the illumination points of the pair lying on opposite sides of the optical axis of the imaging element;
calculating an offset p between the two images from image A and image B, and from the offset p calculating the defocus distance l of the current sample from the target focal plane of the magnifying unit, i.e. a target movement value h;
and, via the motion unit, placing the sample at the target focal plane of the magnifying unit according to the target movement value h.
Further, when the offset p is calculated, the image likelihood values y of image A and image B are computed exhaustively after shifting image A horizontally toward image B (or image B toward image A) by 1 to n pixels; every computed y value is recorded, and the horizontal shift x (in pixels) corresponding to the highest y value is taken as the offset p.
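The exhaustive shift-and-score search just described can be sketched as follows. This is a minimal illustration, not the patent's implementation: the scoring function here is a simple negative sum-of-absolute-differences stand-in rather than the mutual-information or feature-matching scores defined later, and all names are invented. The sign convention here is that p is the shift applied to image B to align it with image A.

```python
def best_offset(img_a, img_b, score, n):
    """Exhaustively shift img_b horizontally by -n..n pixels against img_a
    and return the shift x whose likelihood score y is highest (the offset p)."""
    best_x, best_y = 0, float("-inf")
    for x in range(-n, n + 1):          # signed shift: direction gives the sign of p
        y = score(img_a, shift(img_b, x))
        if y > best_y:
            best_x, best_y = x, y
    return best_x

def shift(img, x):
    """Horizontally shift a row-major 2-D list of pixels by x, zero-filling."""
    w = len(img[0])
    out = []
    for row in img:
        if x >= 0:
            out.append([0] * x + row[: w - x])
        else:
            out.append(row[-x:] + [0] * (-x))
    return out

def neg_sad(a, b):
    """Toy likelihood: negative sum of absolute differences (higher = more alike)."""
    return -sum(abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))
```

Under this convention, if image B is image A shifted right by 2 pixels, the search returns −2; the sign then tells the motion unit which direction to move.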
Further, the likelihood value y is computed either by a mutual-information method or by a feature-point-matching method.
In the mutual-information method:
consider grayscale images A and B. For each pixel value y1, the proportion of pixels with value y1 in image A is pA(y1); for each pixel value y2, the proportion of pixels with value y2 in image B is pB(y2).
For each pair of pixel values (y1, y2), the proportion of coordinate-matched pixel pairs whose value is y1 in image A and y2 in image B is pA,B(y1, y2). The mutual information of the two images is
I(A; B) = Σ_y1 Σ_y2 pA,B(y1, y2) · log( pA,B(y1, y2) / ( pA(y1) · pB(y2) ) )
For each candidate shift, the mutual information between image A and image B after shifting all of B's pixels is computed with the same formula; repeating this across the whole preset offset range yields a deviation-likelihood curve.
the gray level image input by the mutual information calculation may be an original gray level image of a single color channel, or an image obtained by performing gradient filtering or similar filtering operation on the gray level image, and a typical gradient filtering method is to use a laplacian operator, i.e. a matrix
Figure BDA0003345222760000032
Performing convolution operation with the original image;
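The mutual-information score can be sketched with the standard library alone. This is a minimal illustration over co-located pixel values of two equal-sized grayscale images; names are illustrative, and in practice the inputs may first be quantized and/or Laplacian-filtered as the text describes.

```python
from collections import Counter
from math import log

def mutual_information(img_a, img_b):
    """Mutual information between two equal-sized grayscale images,
    estimated from the joint histogram of co-located pixel values."""
    pairs = [(pa, pb) for ra, rb in zip(img_a, img_b)
             for pa, pb in zip(ra, rb)]
    n = len(pairs)
    p_ab = Counter(pairs)                 # joint distribution pA,B(y1, y2)
    p_a = Counter(pa for pa, _ in pairs)  # marginal pA(y1)
    p_b = Counter(pb for _, pb in pairs)  # marginal pB(y2)
    mi = 0.0
    for (ya, yb), c in p_ab.items():
        pj = c / n
        mi += pj * log(pj / ((p_a[ya] / n) * (p_b[yb] / n)))
    return mi
```

Evaluating this score at every candidate shift of image B produces the deviation-likelihood curve described above.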
In the feature-point-matching method:
a SURF or SIFT feature-point algorithm is applied to image A and image B respectively, using feature-point description vectors to record the local time-frequency characteristics of the pixels neighbouring each feature point;
after translating all feature points of image B by the candidate shift, the squared difference between each translated feature point's description vector and that of the neighbouring feature point in image A is computed (if no feature point is adjacent, the square of the description vector itself is taken); the mean of these squared differences is computed and its reciprocal taken to obtain the feature-matching score of the two images. Repeating this across the whole preset offset range yields the deviation-likelihood curve.
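The translate-and-match scoring can be illustrated without a real SURF/SIFT implementation, using plain coordinate/descriptor pairs. The fallback for unmatched points (squaring the descriptor itself) follows one reading of the text, and all names, the `radius` parameter, and the nearest-neighbour pairing rule are illustrative assumptions.

```python
def match_likelihood(feats_a, feats_b, dx, radius=2.0):
    """Translate image-B feature points horizontally by dx, pair each with the
    nearest image-A feature point within `radius`, and score the alignment as
    the reciprocal of the mean squared descriptor difference.
    feats_* : list of ((x, y), descriptor-vector) pairs."""
    sq_errs = []
    for (xb, yb), db in feats_b:
        xb += dx
        best, best_d2 = None, radius * radius
        for (xa, ya), da in feats_a:    # nearest neighbour in image A
            d2 = (xa - xb) ** 2 + (ya - yb) ** 2
            if d2 <= best_d2:
                best, best_d2 = da, d2
        if best is None:
            # no adjacent feature point: penalize with the descriptor's own norm
            sq_errs.append(sum(v * v for v in db))
        else:
            sq_errs.append(sum((u - v) ** 2 for u, v in zip(best, db)))
    mean = sum(sq_errs) / len(sq_errs)
    return 1.0 / mean if mean > 0 else float("inf")
```

Sweeping dx over the preset offset range and recording the score at each shift yields the deviation-likelihood curve.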
Further, the defocus distance l is obtained from a preset lookup table.
To build the table, the parfocal plane is first found by a conventional method such as maximum sharpness; the focusing system is then moved a known distance away from the focal plane, the point light sources are activated, the sample is imaged and the offset is calculated, and the known defocus distance l is recorded in the table together with the calculated offset p. Repeating this process builds up the correspondence table.
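The table lookup can be sketched as follows. This is an illustrative sketch under the assumption, supported by the near-linear offset/defocus relation described elsewhere in the text, that linear interpolation between calibration entries is reasonable; the function name, table format, and units are invented.

```python
def defocus_from_offset(table, p):
    """Look up the defocus distance l for a measured offset p.
    `table` is a calibration list of (offset, defocus) pairs recorded at known
    defocus distances; values between entries are linearly interpolated and
    values outside the table are clamped to the nearest entry."""
    table = sorted(table)
    if p <= table[0][0]:
        return table[0][1]
    if p >= table[-1][0]:
        return table[-1][1]
    for (p0, l0), (p1, l1) in zip(table, table[1:]):
        if p0 <= p <= p1:
            t = (p - p0) / (p1 - p0)
            return l0 + t * (l1 - l0)
```

With a monotonic table, the returned l is directly usable as the target movement value h for the motion unit.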
Further, the system comprises several pairs of different-colored point light sources laid out at different spacings. After the offset p is calculated, the spatial distribution of the light sources is evaluated and optional subsequent steps are taken according to the result. During the evaluation:
if the absolute value |p| of the offset p lies between a preset lower threshold μmin and a preset upper threshold μmax, the activated point-light-source pair is suitable;
if |p| is below the lower threshold μmin, the current defocus distance is known to be small, so a pair of different-colored point light sources with a larger spacing is selected and all the steps are executed again;
if |p| is above the upper threshold μmax, the current defocus distance is known to exceed the measurement range, so a pair with a smaller spacing is selected and all the steps are executed again.
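The spacing evaluation amounts to a three-way threshold decision on |p|. A minimal sketch follows; the threshold names track the text, while the function name and return strings are illustrative.

```python
def evaluate_spacing(p, mu_min, mu_max):
    """Decide whether the active point-light-source pair spacing suits the
    measured offset p, per the threshold test described above."""
    if abs(p) < mu_min:
        return "switch to wider pair"     # small defocus: widen spacing for resolution
    if abs(p) > mu_max:
        return "switch to narrower pair"  # defocus beyond range: narrow spacing
    return "spacing ok"
```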
Further, the system comprises several pairs of different-colored point light sources with different contrasting color combinations.
The method also evaluates whether the sample matches the colors of the current light sources O and P, and takes optional subsequent steps according to the result. During the evaluation:
for n y values from the record, obtain several local maxima { SMax,iGet the median S by statisticsMedAnd the standard fourth-order central moment (i.e. the ratio of the fourth-order central moment to the square of the variance) Skrt
When computing the local maxima S_Max,i: for each 1 < x < n − 1, if f(x − 1) < f(x) and f(x + 1) < f(x), then f(x) is a local maximum.
When computing the median S_Med: sort the y values in ascending order; if n is odd, take the y at position (n + 1)/2 as the median; if n is even, take the mean of the two y values at positions n/2 and n/2 + 1.
in the calculation of the standard fourth-order central moment SkrtWhen y ═ f (x) is regarded as an edge probability distribution curve, the standard fourth-order center distance is calculated, the average value is calculated first,
Figure BDA0003345222760000041
recalculate the variance
Figure BDA0003345222760000051
Finally, calculating the standard fourth-order center distance,
Figure BDA0003345222760000052
get { SMax,1Maximum value S inMax,0Second maximum value SMax,1Calculating a first sample adaptation degree
Figure BDA0003345222760000053
And second sample adaptation degree
Figure BDA0003345222760000054
If both the first and second sample adaptation degrees exceed a preset threshold, no light-source switch is performed;
if either adaptation degree is below the preset threshold and an alternative light-source pair combination exists, the alternative is activated and all the steps are executed again;
if either adaptation degree is below the preset threshold and no alternative exists, the light-source pair combination with the highest historical first and second sample adaptation degrees is activated and the remaining steps of the fast parfocal method are carried out.
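The curve statistics used in the evaluation above (local maxima, median, and standardized fourth central moment, treating the deviation-likelihood curve as a distribution over shift indices) can be sketched as follows. This is one illustrative reading of the text, not the patent's exact formulas, and the names are invented.

```python
def curve_stats(y):
    """Return (local maxima sorted descending, median, standardized fourth
    central moment) of a deviation-likelihood curve y[0..n-1]."""
    n = len(y)
    maxima = sorted((y[x] for x in range(1, n - 1)
                     if y[x - 1] < y[x] and y[x + 1] < y[x]), reverse=True)
    s = sorted(y)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    total = sum(y)  # treat the curve as an unnormalized distribution over x
    mean = sum(x * y[x] for x in range(n)) / total
    var = sum((x - mean) ** 2 * y[x] for x in range(n)) / total
    m4 = sum((x - mean) ** 4 * y[x] for x in range(n)) / total
    return maxima, med, m4 / (var * var)
```

A sharply peaked curve (one dominant maximum, high peakedness) suggests the current light-source colors suit the sample; a flat or multi-peaked curve suggests switching to an alternative pair.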
The second technical scheme provided by the invention to solve this problem is a fast parfocal device for a magnifying imaging system, comprising a sample-bearing unit, a magnifying unit, an image-acquisition unit, a motion unit and a central control unit, the first three sharing a common optical axis; the motion unit is adapted to drive the sample-bearing unit and/or the magnifying unit back and forth along that optical axis, and the controlled ends of the image-acquisition unit and the motion unit are connected to the control end of the central control unit;
the fast parfocal device comprises a point light source unit,
the point light source unit comprises at least one pair of different-colored point light sources; each point light source emits monochromatic light, the emission colors of a pair are different and contrasting, and the illumination points of the pair lie on opposite sides of the optical axis of the imaging element; the point light source unit also has a point-light-source control module adapted to switch the point light sources on and off;
the central control unit comprises a deviation degree calculation module
The central control unit is suitable for controlling the image acquisition unit to acquire an image A and an image B of two corresponding single-color channels of the sample under the irradiation of the pair of different-color point light sources;
the deviation degree calculation module is suitable for calculating the offset p between the two images through the image A and the image B, and calculating the defocusing distance 1 of the current sample from the target defocusing plane, namely a target movement value h, according to the offset p;
the central control unit is suitable for controlling the motion unit to place the sample at the parfocal plane of the amplification unit according to the target movement value h.
Further, when calculating the offset p, the deviation-degree calculation module exhaustively computes the image likelihood values y of image A and image B after shifting image A horizontally toward image B (or image B toward image A) by 1 to n pixels, records every computed y value, and takes the horizontal shift x (in pixels) corresponding to the highest y value as the offset; the calculation is based on either a mutual-information method or a feature-point-matching method.
In the mutual-information method:
consider grayscale images A and B. For each pixel value y1, the proportion of pixels with value y1 in image A is pA(y1); for each pixel value y2, the proportion of pixels with value y2 in image B is pB(y2).
For each pair of pixel values (y1, y2), the proportion of coordinate-matched pixel pairs whose value is y1 in image A and y2 in image B is pA,B(y1, y2). The mutual information of the two images is
I(A; B) = Σ_y1 Σ_y2 pA,B(y1, y2) · log( pA,B(y1, y2) / ( pA(y1) · pB(y2) ) )
For each candidate shift, the mutual information between image A and image B after shifting all of B's pixels is computed with the same formula; repeating this across the whole preset offset range yields a deviation-likelihood curve.
the gray level image input by the mutual information calculation may be an original gray level image of a single color channel, or an image obtained by performing gradient filtering or similar filtering operation on the gray level image, and a typical gradient filtering method is to use a laplacian operator, i.e. a matrix
Figure BDA0003345222760000062
Performing convolution operation with the original image;
In the feature-point-matching method:
a SURF or SIFT feature-point algorithm is applied to image A and image B respectively, using feature-point description vectors to record the local time-frequency characteristics of the pixels neighbouring each feature point;
after translating all feature points of image B by the candidate shift, the squared difference between each translated feature point's description vector and that of the neighbouring feature point in image A is computed (if no feature point is adjacent, the square of the description vector itself is taken); the mean of these squared differences is computed and its reciprocal taken to obtain the feature-matching score of the two images. Repeating this across the whole preset offset range yields the deviation-likelihood curve.
Further, the point light source unit comprises several pairs of different-colored point light sources laid out at different spacings,
and the central control unit further comprises a light-source spatial-distribution evaluation module adapted to evaluate where the absolute value |p| of the calculated offset p falls relative to a preset lower threshold μmin and a preset upper threshold μmax:
if |p| lies between the preset lower threshold μmin and upper threshold μmax, the activated point-light-source pair is suitable;
if |p| is below the lower threshold μmin, the current defocus distance is known to be small; the central control unit selects, via the point-light-source control module, a different-colored pair with a larger spacing and performs the parfocal-plane detection again;
if |p| is above the upper threshold μmax, the current defocus distance is known to exceed the measurement range; the central control unit selects, via the point-light-source control module, a different-colored pair with a smaller spacing and performs the parfocal-plane detection again.
Further, the point light source unit comprises several pairs of different-colored point light sources with different contrasting color combinations,
and the central control unit further comprises a sample/light-source color-matching evaluation module adapted to evaluate whether the sample matches the colors of the current light sources O and P. During the evaluation:
from the n recorded y values, the set of local maxima {S_Max,i} is obtained, and the median S_Med and the standardized fourth central moment (i.e. the ratio of the fourth central moment to the square of the variance) S_krt are computed by statistics.
When computing the local maxima S_Max,i: for each 1 < x < n − 1, if f(x − 1) < f(x) and f(x + 1) < f(x), then f(x) is a local maximum.
When computing the median S_Med: sort the y values in ascending order; if n is odd, take the y at position (n + 1)/2 as the median; if n is even, take the mean of the two y values at positions n/2 and n/2 + 1.
When computing the standardized fourth central moment S_krt, the curve y = f(x) is treated as a marginal probability distribution. First the mean is calculated,
μ = Σ_x x·f(x) / Σ_x f(x),
then the variance,
σ² = Σ_x (x − μ)²·f(x) / Σ_x f(x),
and finally the standardized fourth central moment,
S_krt = ( Σ_x (x − μ)⁴·f(x) / Σ_x f(x) ) / σ⁴.
get { SMax,iMaximum value S inMax,0Second maximum value SMax,1Calculating a first sample adaptation degree
Figure BDA0003345222760000084
And second sample adaptation degree
Figure BDA0003345222760000085
If both the first and second sample adaptation degrees exceed a preset threshold, no light-source switch is performed;
if either adaptation degree is below the preset threshold and an alternative light-source pair combination exists, the central control unit activates the alternative via the point-light-source control module and performs the parfocal-plane detection again;
if either adaptation degree is below the preset threshold and no alternative exists, the light-source pair combination with the highest historical first and second sample adaptation degrees is activated and the parfocal-plane detection is performed again.
The invention has the following beneficial effects:
With this method and system, the parfocal plane can be detected quickly and the sample quickly placed at the focal position of the magnifying imaging system by the motion mechanism. In hardware, components such as the microscope objective and the imaging element are unchanged, as is the white light source used for acquiring microscopic images; only monochromatic point light sources and a corresponding control module need to be added. The system can be used in many kinds of microscopy systems: upright or inverted, transmission or reflection. The pixel arrangement of the imaging element may be an ordinary rectangle.
The technical scheme of the invention exploits the fact that, when out of focus, a sample illuminated by an off-axis point light source forms a shifted image. Point light sources offset to the two sides of the imaging element's optical axis illuminate the sample simultaneously or in turn to produce images shifted to different degrees; from the offset, the distance to move up or down toward the parfocal plane (positive or negative according to the offset's sign) is calculated, and accurate focusing is finally performed.
Drawings
The fast parfocal method and apparatus of the magnifying imaging system of the present invention will be further described with reference to the accompanying drawings.
FIG. 1 is a block diagram of the structural modules of a classical magnification imaging system;
FIG. 2 is a schematic view of the enlarged imaging operation of the system of FIG. 1;
FIG. 3 is a schematic diagram of the offset of the point source to the left of the axis and the imaging position when the sample is below the parfocal plane;
FIG. 4 is a schematic diagram of the imaging position of a point source off-axis to the left of the axis with the sample in parfocal;
FIG. 5 is a schematic diagram of the offset of the point source to the left of the center axis and the position of the image when the sample is above the parfocal plane;
FIG. 6 is a schematic diagram of the offset of the point source to the right of the axis and the imaging position when the sample is below the parfocal plane;
FIG. 7 is a schematic diagram illustrating the shift in the image of a sample below the parfocal plane when a pair of heterochromatic point sources is used in accordance with one embodiment;
FIG. 8 is a schematic diagram of a simulation of the optical path of FIG. 7;
FIG. 9 is a schematic diagram illustrating the shift of the image of a sample above the parfocal plane when a pair of heterochromatic point light sources is used in the first embodiment;
FIG. 10 is a schematic diagram of a simulation of the optical path of FIG. 9;
FIG. 11 is a block diagram of a further optimized arrangement of the fast parfocal device;
FIG. 12 is a logic flow diagram of a fast parfocal method in accordance with one embodiment;
FIG. 13 is a logic flow diagram of light source spatial distribution estimation;
FIG. 14 is a flow chart of light source sample fitness evaluation logic;
FIG. 15 is a schematic diagram illustrating a distribution of point light sources in a plurality of pairs according to an embodiment;
FIG. 16 is a schematic diagram of a reflective magnifying imaging system;
fig. 17 is a schematic structural view of an inverted magnifying imaging system.
Detailed Description
Example one
This embodiment relates to a fast parfocal method for a magnifying imaging system. The magnifying imaging system comprises a sample-bearing unit, a magnifying unit and an image-acquisition unit (i.e. the imaging element) sharing an optical central axis (central axis or optical axis for short), and further comprises a motion unit adapted to drive the sample-bearing unit and/or the magnifying unit back and forth along that optical axis.
With the method of this embodiment, no hardware change is made to components such as the microscope objective and the imaging element, nor to the white light source used for acquiring microscopic images; only several monochromatic point light sources and a corresponding control module are added. These monochromatic light sources have narrow spectra but need not be as strictly single-frequency as a laser; they only need to satisfy the relative characteristics described below. To improve usability and simplify calculation, two monochromatic point light sources of different emission colors may be arranged as a pair whose illumination points lie on opposite sides of the optical axis of the imaging element, symmetric about the optical central axis, although this form is not strictly required. In this embodiment, as shown in fig. 15, there are 12 monochromatic point light sources arranged in 6 pairs (A1A2, B1B2, etc.), each pair distributed mirror-symmetrically about the central axis of the imaging element.
As shown in fig. 1 and 2, when a point light source with a wide emission angle illuminates a sample, an "image" of the sample is formed on the imaging element. The "image" is sharpest (i.e. in focus) when the sample is at the parfocal distance of the magnifying unit (i.e. conjugate to the imaging element). In this case, on the one hand, according to the basic principles of geometric optics, the position of the "image" relative to the central axis of the optical path does not change regardless of where the light source sits relative to that axis. On the other hand, a magnifying unit with constant magnification and constant image distance (the distance from the imaging element to the image-side principal point) has a constant parfocal distance.
The inventors have found that when the point light source is offset from the central axis at a fixed position and the sample moves away from the parfocal distance, the offset of the center of the "image" from the central axis is approximately linearly related to the defocus distance: as the defocus distance grows, the magnitude of the image offset grows, and with the sample above versus below the parfocal plane the image shifts in opposite directions, as shown in figs. 3, 4, 5 and 6. When the defocus distance is fixed but the position of the point source relative to the central axis changes, the offset of the image center changes accordingly; in particular, when the light source is biased to one side, the "image" is biased to the other side.
As shown in figs. 7, 8, 9 and 10, if two point light sources whose images do not interfere (typically red and green monochromatic lights, yielding a red and a green "image" respectively) are placed at different positions relative to the optical axis (e.g. on symmetric sides), then when both are turned on to image a defocused sample they form "images" deviated from the central axis; even if the two images overlap, they do not interfere thanks to the different color channels, and after channel separation two independent images are obtained. The imaging element collects the formed images, and from the relative deviation of the images the defocus distance of the sample can be calculated.
As shown in fig. 12, the method in this embodiment includes the following steps:
step i: and acquiring images A and images B of two single-color channels corresponding to the sample wafer under the irradiation of a pair of different-color point light sources, wherein the irradiation points of the pair of different-color point light sources are respectively positioned at two sides of the optical axis of the imaging element. In particular, the acquisition is performed by an imaging element. The different color means two monochromatic point light sources with different light emitting colors.
More specifically, the pair of different-color point light sources emit red and green light respectively, and the imaging element takes a photograph while the red and green light illuminate the sample. The captured total image is channel-separated into the grayscale images corresponding to the two color channels, namely image A and image B.
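The channel-separation step can be sketched in numpy; the function name `separate_channels` and the red/green channel indices are illustrative, assuming a standard H×W×3 RGB capture:

```python
import numpy as np

def separate_channels(total_image):
    """Split an H x W x 3 RGB capture into the grayscale images
    A (red channel) and B (green channel)."""
    image_a = total_image[:, :, 0].astype(np.float64)  # red  -> image A
    image_b = total_image[:, :, 1].astype(np.float64)  # green -> image B
    return image_a, image_b

# Tiny synthetic capture: red light on the left, green on the right.
frame = np.zeros((4, 6, 3), dtype=np.uint8)
frame[:, :3, 0] = 200   # red half
frame[:, 3:, 1] = 150   # green half
a, b = separate_channels(frame)
```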
Step ii: and calculating the offset p between the two images through the image A and the image B, and calculating the defocusing distance l of the current sample from the target focusing surface of the amplifying unit, namely the target movement value h according to the offset p.
Preferably, when calculating the offset p, the image likelihood values y of image A and image B are exhaustively computed after horizontally shifting image A toward image B, or image B toward image A, by 1 to n pixels; every computed y value is recorded, and the horizontal shift x in pixels corresponding to the highest y value is taken as the offset p. During the calculation, the sign of the offset p is determined by the shift direction, so that the motion unit can later be controlled according to that sign.
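The exhaustive shift search of step ii can be sketched as follows. This is a minimal illustration, not the patent's implementation: a simple negative sum-of-squared-differences stands in for the likelihood y, and the function name `estimate_offset` is hypothetical; any likelihood function can be plugged into the same loop.

```python
import numpy as np

def estimate_offset(image_a, image_b, n, likelihood):
    """Exhaustively shift image B by -n..n pixels horizontally,
    score each shift with `likelihood`, and return the shift x
    with the highest score as offset p (sign encodes direction)."""
    best_x, best_y = 0, -np.inf
    curve = {}
    for x in range(-n, n + 1):
        shifted = np.roll(image_b, x, axis=1)  # wrap-around shift, for simplicity
        y = likelihood(image_a, shifted)
        curve[x] = y                           # record every computed y value
        if y > best_y:
            best_x, best_y = x, y
    return best_x, curve

# Demo: B is A shifted right by 3 pixels, so the best shift is -3.
rng = np.random.default_rng(0)
a = rng.random((8, 32))
b = np.roll(a, 3, axis=1)
p, curve = estimate_offset(a, b, n=5,
                           likelihood=lambda u, v: -np.sum((u - v) ** 2))
```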
As a further preference, the likelihood value y is calculated either by a mutual-information method or by a feature-point matching method.
In the case of the mutual-information method:
Consider the grayscale images A and B. For each pixel value y1, the proportion of pixels in image A with value y1 is p_A(y1); for each pixel value y2, the proportion of pixels in image B with value y2 is p_B(y2).
For each pixel-value pair (y1, y2), the proportion of coordinate-matched pixel pairs whose value is y1 in image A and y2 in image B is p_{A,B}(y1, y2). The mutual information of the two images is

I(A;B) = \sum_{y_1}\sum_{y_2} p_{A,B}(y_1, y_2)\,\log\frac{p_{A,B}(y_1, y_2)}{p_A(y_1)\,p_B(y_2)}
All pixels of image B are horizontally shifted, and the mutual information of image A and the shifted image B is calculated with the same formula; repeating this for every shift in the preset offset range and arranging the results yields the deviation likelihood curve.
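A minimal numpy sketch of the mutual-information score for one candidate shift, built from a joint histogram as in the formula above; the function name and the bin count of 32 are illustrative choices, not taken from the patent:

```python
import numpy as np

def mutual_information(image_a, image_b, bins=32):
    """Mutual information I(A;B) of two equally sized grayscale images,
    estimated from their joint pixel-value histogram."""
    joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                 # joint distribution p_{A,B}
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal p_A
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal p_B
    nz = p_ab > 0                              # skip zero cells (0*log0 = 0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noise = rng.integers(0, 256, size=(64, 64)).astype(float)
# An image shares far more information with itself than with noise.
self_mi = mutual_information(img, img)
cross_mi = mutual_information(img, noise)
```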
The grayscale image input to the mutual-information calculation may be the original single-color-channel grayscale image, or that image after gradient filtering or a similar filtering operation. A typical gradient filter uses the Laplacian operator, i.e. convolves the original image with the matrix

\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}

(the standard 3×3 Laplacian kernel, reconstructed here in place of the source figure);
In the case of the feature-point matching method:
a SURF or SIFT feature-point processing algorithm is applied to image A and image B respectively, and feature-point descriptor vectors record the time-frequency-domain characteristics of the pixels neighboring each feature point.
All feature points of image B are horizontally shifted; for each shifted feature point, the squared difference between its descriptor vector and those of neighboring feature points in image A is computed (if a feature point has no neighbor, the square of its own descriptor is taken); the mean of these squared differences is computed, and its reciprocal gives the feature-matching likelihood of the two images. Repeating this for every shift in the preset offset range and arranging the results yields the deviation likelihood curve.
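A real implementation would take descriptors from SURF or SIFT (e.g. via an image-processing library); the self-contained sketch below uses synthetic feature points and descriptors purely to show the shift-score-maximize idea. All names (`match_likelihood`, `radius`) and the penalty for unmatched points are illustrative assumptions:

```python
import numpy as np

def match_likelihood(desc_a, pts_a, desc_b, pts_b, shift, radius=0.75):
    """Shift image-B feature points horizontally by `shift`, pair each with
    the nearest image-A feature point within `radius`, and return the
    reciprocal of the mean squared descriptor difference."""
    sq_errors = []
    for d_b, (x_b, y_b) in zip(desc_b, pts_b):
        pos = np.array([x_b + shift, y_b])
        dists = np.linalg.norm(pts_a - pos, axis=1)
        i = int(np.argmin(dists))
        if dists[i] <= radius:                     # neighboring feature found
            sq_errors.append(np.sum((desc_a[i] - d_b) ** 2))
        else:                                      # no neighbor: penalty term
            sq_errors.append(np.sum(d_b ** 2))
    return 1.0 / (np.mean(sq_errors) + 1e-12)

# Synthetic features: B is A displaced 4 px to the right, identical descriptors.
rng = np.random.default_rng(2)
pts_a = rng.uniform(0, 100, size=(10, 2))
desc_a = rng.random((10, 8))
pts_b = pts_a + np.array([4.0, 0.0])
scores = {s: match_likelihood(desc_a, pts_a, desc_a, pts_b, s) for s in range(-6, 7)}
best = max(scores, key=scores.get)   # shift that best realigns the features
```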
In theory the two algorithms give consistent results, but their computational profiles differ: the cost of the mutual-information algorithm is proportional to the search range and grows quickly, while the cost of the feature-point matching algorithm has one component independent of the search range and another that grows only slowly with it. Therefore, if the offset is known to be small, running the mutual-information algorithm over a small range saves computation; if the offset is uncertain or large, the feature-point matching algorithm saves computation.
With the definitions above, both algorithms yield a curve, which may be written y = f(x), where x is the relative shift in pixels and y is the likelihood for that shift. The resulting deviation likelihood curve should be roughly peak-shaped (like the Chinese character "人", an inverted V). The shift at which the curve's likelihood is highest is the relative offset of the "image".
It is also preferable that, when calculating the defocus distance l from the offset p, the defocus distance l is obtained by looking it up in a preset table.
To build the table, the parfocal plane is first found by a common method such as the maximum-sharpness method; the focusing system is then moved a known distance away from the focal plane, the paired point light sources are activated, an image is taken and the offset is calculated, and the known defocus distance l and the calculated offset p are recorded as a table entry. This process is repeated to establish the full lookup table.
The defocus distance can also be calculated by a pre-modeling method, i.e. fitting the lookup table to a model curve and then computing the defocus distance from the model curve and the measured offset. The model may be a linear model, which is not described further here.
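As a rough sketch of the pre-modeling variant, a linear model l = k·p + c can be fitted to the calibration table with numpy. The table values below are hypothetical stand-ins, not measured data from the patent:

```python
import numpy as np

# Hypothetical calibration table: known defocus distance l (µm) versus
# measured offset p (pixels), built as described above.
table_l = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
table_p = np.array([-8.1, -4.0, 0.1, 4.0, 7.9])

# Fit the linear model l = k * p + c once; invert it at runtime.
k, c = np.polyfit(table_p, table_l, deg=1)

def defocus_from_offset(p):
    """Defocus distance l (i.e. target movement h) for a measured offset p."""
    return k * p + c

l_est = defocus_from_offset(4.0)
```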
Step iii: and placing the sample into the target focal plane of the amplifying unit according to the target movement value h through the motion unit.
Preferably, after step ii, if the system includes multiple pairs of different-color point light sources with different spacings, then after the offset p is calculated the spatial distribution of the light sources is also evaluated, and the subsequent optional steps are carried out according to the evaluation result. As shown in fig. 13, during the evaluation:
if the absolute value |p| of the offset p lies between the preset lower threshold μmin and upper threshold μmax, the activated point-light-source pair is appropriate;
if |p| is smaller than the lower threshold μmin, the current defocus distance is known to be short; a different-color point-light-source pair with a larger spacing is selected, and all steps are executed again;
if |p| is larger than the upper threshold μmax, the current defocus distance is known to exceed the measurement range; a different-color point-light-source pair with a smaller spacing is selected, and all steps are executed again.
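The spacing evaluation above reduces to a simple threshold check. This sketch uses hypothetical names (`evaluate_spacing`, `mu_min`, `mu_max`) and returns an action string rather than driving real light-source hardware:

```python
def evaluate_spacing(p, mu_min, mu_max):
    """Classify a measured offset p against the working range of the
    currently active light-source pair."""
    a = abs(p)
    if a < mu_min:
        return "switch to a wider-spaced pair"      # defocus too short to resolve
    if a > mu_max:
        return "switch to a narrower-spaced pair"   # defocus beyond the range
    return "current pair is appropriate"

verdict = evaluate_spacing(3.2, mu_min=2.0, mu_max=30.0)
```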
Preferably, after step ii, if the system includes multiple pairs of different-color point light sources with different contrasting-color combinations,
the method further evaluates whether the sample matches the colors of the current light sources O and P, and the subsequent optional steps are carried out according to the evaluation result. As shown in fig. 14, during the evaluation, from the n recorded y values, the local maxima {S_Max,i} are obtained, and the median S_Med and the standardized fourth-order central moment (i.e. the ratio of the fourth central moment to the square of the variance) S_krt are computed.
When computing the local maxima {S_Max,i}: for each x with 1 < x < n, if f(x-1) < f(x) and f(x+1) < f(x), then x is a local maximum.
When computing the median S_Med: the y values are arranged from small to large; if n is odd, the y at position (n+1)/2 is the median; if n is even, the mean of the two y values at positions n/2 and n/2+1 is the median.
When computing the standardized fourth-order central moment S_krt, y = f(x) is regarded as a marginal probability distribution curve. First the mean is computed,

\mu = \frac{\sum_{x=1}^{n} x\,f(x)}{\sum_{x=1}^{n} f(x)}

then the variance,

\sigma^2 = \frac{\sum_{x=1}^{n} (x-\mu)^2\,f(x)}{\sum_{x=1}^{n} f(x)}

and finally the standardized fourth-order central moment,

S_{krt} = \frac{1}{\sigma^4}\cdot\frac{\sum_{x=1}^{n} (x-\mu)^4\,f(x)}{\sum_{x=1}^{n} f(x)}
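The three statistics above (local maxima, median, and the kurtosis-style S_krt of the likelihood curve) can be sketched in numpy as follows; the function name `curve_statistics` and the sample data are illustrative, and f is treated as an unnormalized density over x = 1..n, which assumes all likelihood values are positive:

```python
import numpy as np

def curve_statistics(y):
    """Local maxima, median, and S_krt of a likelihood curve y = f(x),
    x = 1..n, treating f as an (unnormalized) probability density."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Interior points strictly above both neighbors are local maxima.
    maxima = [y[i] for i in range(1, n - 1) if y[i - 1] < y[i] > y[i + 1]]
    median = float(np.median(y))     # mean of the two middle values if n is even
    x = np.arange(1, n + 1)
    w = y / y.sum()                  # normalize f into a distribution over x
    mu = float(np.sum(x * w))
    var = float(np.sum((x - mu) ** 2 * w))
    s_krt = float(np.sum((x - mu) ** 4 * w) / var ** 2)
    return maxima, median, s_krt

y = [0.1, 0.9, 0.2, 0.3, 0.8, 0.1]
maxima, med, s_krt = curve_statistics(y)
```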
get { SMax,iMaximum value S inMax,0Second maximum value SMax,1Calculating a first sample adaptation degree
Figure BDA0003345222760000144
And second sample adaptation degree
Figure BDA0003345222760000145
If both the first and second sample adaptation degrees are higher than the preset threshold, no light-source switching is performed.
If either sample adaptation degree is lower than the preset threshold and an alternative light-source pair combination exists, the alternative is activated and all steps are executed again.
If either sample adaptation degree is lower than the preset threshold and no alternative light-source pair combination exists, the light-source pair combination with the highest historical first and second sample adaptation degrees is activated, and the remaining steps of the rapid parfocal method are carried out.
In addition, as shown in fig. 16 and 17, transmission-type or reflection-type microscope systems in either upright or inverted configurations can carry out the rapid parfocal process by the method of this embodiment.
Example two
The embodiment relates to a fast parfocal device of an amplification imaging system on the basis of the technical scheme of the method disclosed by the embodiment I.
The magnification imaging system comprises a sample bearing unit, a magnification unit, an image acquisition unit, a motion unit and a central control unit, the first three sharing a common optical axis; the motion unit is adapted to drive the sample bearing unit and/or the magnification unit to reciprocate along that optical axis, and the controlled ends of the image acquisition unit and the motion unit are connected to the control end of the central control unit.
The image acquisition unit comprises an imaging element and an image acquisition and channel separation module. The motion unit comprises a motion mechanism and a motion control module.
The rapid parfocal device comprises a point-light-source unit. The point-light-source unit includes at least one pair of different-color point light sources; each point light source emits monochromatic light, the emission colors of a pair are different and contrasting, and the irradiation points of the pair are located on opposite sides of the optical axis of the imaging element. The point-light-source unit also has a point-light-source control module adapted to switch the point light sources on and off.
The central control unit is adapted to control the image acquisition unit to acquire an image A and an image B of the two single-color channels corresponding to the sample under the irradiation of the pair of different-color point light sources. More specifically, the pair of different-color point light sources emit red and green light respectively, and the imaging element takes a photograph while the red and green light illuminate the sample. The captured total image is channel-separated into the grayscale images corresponding to the two color channels, namely image A and image B.
The central control unit includes a deviation degree calculation module. The deviation degree calculation module is adapted to calculate the amount of deviation p between the two images from image a and image B.
The central control unit calculates, from the offset p, the defocus distance l of the sample from the target focal plane, i.e. the target movement value h; this calculation is performed by a defocus-distance calculation module in the central control unit, using the methods described in embodiment one, which are not repeated here.
The central control unit is adapted to control the motion unit to place the sample at the parfocal plane of the amplification unit in dependence on the target movement value h.
Preferably, when calculating the offset p, the deviation-degree calculation module exhaustively computes the image likelihood values y of image A and image B after horizontally shifting image A toward image B, or image B toward image A, by 1 to n pixels, records every computed y value, and takes the horizontal shift x in pixels corresponding to the highest y value as the offset.
Specifically, the calculation is based on either a mutual-information method or a feature-point matching method.
In the case of the mutual-information method:
Consider the grayscale images A and B. For each pixel value y1, the proportion of pixels in image A with value y1 is p_A(y1); for each pixel value y2, the proportion of pixels in image B with value y2 is p_B(y2).
For each pixel-value pair (y1, y2), the proportion of coordinate-matched pixel pairs whose value is y1 in image A and y2 in image B is p_{A,B}(y1, y2). The mutual information of the two images is

I(A;B) = \sum_{y_1}\sum_{y_2} p_{A,B}(y_1, y_2)\,\log\frac{p_{A,B}(y_1, y_2)}{p_A(y_1)\,p_B(y_2)}
All pixels of image B are horizontally shifted, and the mutual information of image A and the shifted image B is calculated with the same formula; repeating this for every shift in the preset offset range and arranging the results yields the deviation likelihood curve.
The grayscale image input to the mutual-information calculation may be the original single-color-channel grayscale image, or that image after gradient filtering or a similar filtering operation. A typical gradient filter uses the Laplacian operator, i.e. convolves the original image with the matrix

\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}

(the standard 3×3 Laplacian kernel, reconstructed here in place of the source figure);
In the case of the feature-point matching method:
a SURF or SIFT feature-point processing algorithm is applied to image A and image B respectively, and feature-point descriptor vectors record the time-frequency-domain characteristics of the pixels neighboring each feature point.
All feature points of image B are horizontally shifted; for each shifted feature point, the squared difference between its descriptor vector and those of neighboring feature points in image A is computed (if a feature point has no neighbor, the square of its own descriptor is taken); the mean of these squared differences is computed, and its reciprocal gives the feature-matching likelihood of the two images. Repeating this for every shift in the preset offset range and arranging the results yields the deviation likelihood curve.
Preferably, the point-light-source unit comprises multiple pairs of different-color point light sources with different spacings.
The central control unit further comprises a light-source spatial-distribution evaluation module adapted to compare the absolute value |p| of the calculated offset p against a preset lower threshold μmin and upper threshold μmax:
if |p| lies between the preset lower threshold μmin and upper threshold μmax, the activated point-light-source pair is appropriate;
if |p| is smaller than the lower threshold μmin, the current defocus distance is known to be short; the central control unit selects, via the point-light-source control module, a different-color point-light-source pair with a larger spacing and performs the parfocal-plane detection again;
if |p| is larger than the upper threshold μmax, the current defocus distance is known to exceed the measurement range; the central control unit selects, via the point-light-source control module, a different-color point-light-source pair with a smaller spacing and performs the parfocal-plane detection again.
Preferably, the point-light-source unit includes multiple pairs of different-color point light sources with different contrasting-color combinations,
and the central control unit further comprises a sample/light-source color-matching evaluation module adapted to evaluate whether the sample matches the colors of the current light sources O and P. During the evaluation,
from the n recorded y values, the local maxima {S_Max,i} are obtained, and the median S_Med and the standardized fourth-order central moment (i.e. the ratio of the fourth central moment to the square of the variance) S_krt are computed.
When computing the local maxima {S_Max,i}: for each x with 1 < x < n, if f(x-1) < f(x) and f(x+1) < f(x), then x is a local maximum;
when computing the median S_Med: the y values are arranged from small to large; if n is odd, the y at position (n+1)/2 is the median; if n is even, the mean of the two y values at positions n/2 and n/2+1 is the median;
when computing the standardized fourth-order central moment S_krt, y = f(x) is regarded as a marginal probability distribution curve. First the mean is computed,

\mu = \frac{\sum_{x=1}^{n} x\,f(x)}{\sum_{x=1}^{n} f(x)}

then the variance,

\sigma^2 = \frac{\sum_{x=1}^{n} (x-\mu)^2\,f(x)}{\sum_{x=1}^{n} f(x)}

and finally the standardized fourth-order central moment,

S_{krt} = \frac{1}{\sigma^4}\cdot\frac{\sum_{x=1}^{n} (x-\mu)^4\,f(x)}{\sum_{x=1}^{n} f(x)}
get { SMax,iMaximum value S inMax,0Second maximum value SMax,1Calculate the first sampleDegree of adaptation
Figure BDA0003345222760000184
And second sample adaptation degree
Figure BDA0003345222760000185
If both the first and second sample adaptation degrees are higher than the preset threshold, no light-source switching is performed;
if either sample adaptation degree is lower than the preset threshold and an alternative light-source pair combination exists, the central control unit activates the alternative via the point-light-source control module and performs the parfocal-plane detection again;
if either sample adaptation degree is lower than the preset threshold and no alternative light-source pair combination exists, the light-source pair combination with the highest historical first and second sample adaptation degrees is activated, and the parfocal-plane detection is performed again.
The present invention is not limited to the above embodiments; the technical solutions of the embodiments may be combined with one another to form new technical solutions, and all technical solutions formed by equivalent substitutions fall within the scope of the present invention.

Claims (10)

1. A rapid parfocal method of a magnification imaging system, the system comprising a sample bearing unit, a magnification unit and an image acquisition unit sharing a common optical axis, and further comprising a motion unit adapted to drive the sample bearing unit and/or the magnification unit to reciprocate along that optical axis,
the method comprises the following characteristic steps:
acquiring an image A and an image B of the two single-color channels corresponding to the sample under the irradiation of a pair of different-color point light sources, the irradiation points of the pair being located on opposite sides of the optical axis of the imaging element;
calculating an offset p between the two images from image A and image B, and calculating from the offset p the defocus distance l of the current sample from the target focal plane of the magnification unit, i.e. the target movement value h;
placing the sample at the target focal plane of the magnification unit by the motion unit according to the target movement value h.
2. The rapid parfocal method of a magnification imaging system according to claim 1, wherein: when calculating the offset p, the image likelihood values y of image A and image B after horizontally shifting image A toward image B, or image B toward image A, by 1 to n pixels are exhaustively computed, every computed y value is recorded, and the horizontal shift x in pixels corresponding to the highest y value is taken as the offset p.
3. The rapid parfocal method of a magnification imaging system according to claim 2, wherein: the likelihood value y is calculated either by a mutual-information method or by a feature-point matching method,
in the case of the mutual-information method:
considering the grayscale images A and B, for each pixel value y1, the proportion of pixels in image A with value y1 is p_A(y1); for each pixel value y2, the proportion of pixels in image B with value y2 is p_B(y2);
for each pixel-value pair (y1, y2), the proportion of coordinate-matched pixel pairs whose value is y1 in image A and y2 in image B is p_{A,B}(y1, y2), and the mutual information of the two images is

I(A;B) = \sum_{y_1}\sum_{y_2} p_{A,B}(y_1, y_2)\,\log\frac{p_{A,B}(y_1, y_2)}{p_A(y_1)\,p_B(y_2)}
horizontally shifting all pixels of image B, calculating the mutual information of image A and the shifted image B with the same formula, and repeating over all shifts in the preset offset range and arranging the results to obtain a deviation likelihood curve,
the grayscale image input to the mutual-information calculation being the original single-color-channel grayscale image or that image after a gradient filtering operation, a typical gradient filter using the Laplacian operator, i.e. convolving the original image with the matrix

\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}
;
in the case of the feature-point matching method:
applying a SURF or SIFT feature-point processing algorithm to image A and image B respectively, with feature-point descriptor vectors recording the time-frequency-domain characteristics of the pixels neighboring each feature point,
horizontally shifting all feature points of image B, computing for each the squared difference between its descriptor vector and those of neighboring feature points in image A (if a feature point has no neighbor, taking the square of its own descriptor), computing the mean of the squared differences and taking its reciprocal to obtain the feature-matching likelihood of the two images, and repeating over all shifts in the preset offset range and arranging the results to obtain the deviation likelihood curve,
the likelihood curve takes the y value as the ordinate value and the x value as the abscissa value.
4. The rapid parfocal method of a magnification imaging system according to claim 1, wherein: the defocus distance l is calculated by looking it up in a preset table,
the table being set up by first finding the parfocal plane by a common method, such as the maximum-sharpness method,
then operating the focusing system to move a known distance away from the focal plane, activating the point light sources, imaging and calculating the offset, recording the known defocus distance l and the calculated offset p in the table, and repeating this process to establish the full lookup table.
5. The rapid parfocal method of a magnification imaging system according to claim 1, wherein: the system comprises multiple pairs of different-color point light sources with different spacings; after the offset p is calculated, the spatial distribution of the light sources is also evaluated, and subsequent optional steps are carried out according to the evaluation result,
wherein, during the evaluation:
if the absolute value |p| of the offset p lies between the preset lower threshold μmin and upper threshold μmax, the activated point-light-source pair is appropriate;
if |p| is smaller than the lower threshold μmin, the current defocus distance is known to be short; a different-color point-light-source pair with a larger spacing is selected, and all steps are executed again;
if |p| is larger than the upper threshold μmax, the current defocus distance is known to exceed the measurement range; a different-color point-light-source pair with a smaller spacing is selected, and all steps are executed again.
6. The rapid parfocal method of a magnification imaging system according to any one of claims 2 to 5, wherein: the system includes multiple pairs of different-color point light sources with different contrasting-color combinations,
the method further comprising evaluating whether the sample matches the colors of the current light sources O and P, with subsequent optional steps carried out according to the evaluation result:
from the n recorded y values, the local maxima {S_Max,i} are obtained, and the median S_Med and the standardized fourth-order central moment (i.e. the ratio of the fourth central moment to the square of the variance) S_krt are computed;
when computing the local maxima {S_Max,i}: for each x with 1 < x < n, if f(x-1) < f(x) and f(x+1) < f(x), then x is a local maximum;
when computing the median S_Med: the y values are arranged from small to large; if n is odd, the y at position (n+1)/2 is the median; if n is even, the mean of the two y values at positions n/2 and n/2+1 is the median;
when computing the standardized fourth-order central moment S_krt, y = f(x) is regarded as a marginal probability distribution curve; first the mean is computed,

\mu = \frac{\sum_{x=1}^{n} x\,f(x)}{\sum_{x=1}^{n} f(x)}

then the variance,

\sigma^2 = \frac{\sum_{x=1}^{n} (x-\mu)^2\,f(x)}{\sum_{x=1}^{n} f(x)}

and finally the standardized fourth-order central moment,

S_{krt} = \frac{1}{\sigma^4}\cdot\frac{\sum_{x=1}^{n} (x-\mu)^4\,f(x)}{\sum_{x=1}^{n} f(x)}
get { SMax,iMaximum value S inMax,0Second maximum value SMax,1Calculating a first sample adaptation degree
Figure FDA0003345222750000034
And second sample adaptation degree
Figure FDA0003345222750000035
if both the first and second sample adaptation degrees are higher than the preset threshold, no light-source switching is performed;
if either sample adaptation degree is lower than the preset threshold and an alternative light-source pair combination exists, the alternative is activated and all steps are executed again;
if either sample adaptation degree is lower than the preset threshold and no alternative light-source pair combination exists, the light-source pair combination with the highest historical first and second sample adaptation degrees is activated, and the remaining steps of the rapid parfocal method are carried out.
7. A rapid parfocal device of a magnification imaging system, the system comprising a sample bearing unit, a magnification unit, an image acquisition unit, a motion unit and a central control unit, the first three sharing a common optical axis, the motion unit being adapted to drive the sample bearing unit and/or the magnification unit to reciprocate along that optical axis, and the controlled ends of the image acquisition unit and the motion unit being connected to the control end of the central control unit;
the method is characterized in that:
the rapid parfocal device comprises a point-light-source unit,
the point-light-source unit including at least one pair of different-color point light sources, each point light source emitting monochromatic light, the emission colors of a pair being different and contrasting, and the irradiation points of the pair being located on opposite sides of the optical axis of the imaging element; the point-light-source unit also having a point-light-source control module adapted to switch the point light sources on and off;
the central control unit comprises a deviation-degree calculation module;
the central control unit is adapted to control the image acquisition unit to acquire an image A and an image B of the two single-color channels corresponding to the sample under the irradiation of the pair of different-color point light sources;
the deviation-degree calculation module is adapted to calculate the offset p between the two images from image A and image B, and to calculate from the offset p the defocus distance l of the current sample from the target focal plane, i.e. the target movement value h;
the central control unit is adapted to control the motion unit to place the sample at the parfocal plane of the magnification unit according to the target movement value h.
8. The rapid parfocal device of claim 7, wherein: when calculating the offset p, the deviation-degree calculation module exhaustively computes the image likelihood values y of image A and image B after horizontally shifting image A toward image B, or image B toward image A, by 1 to n pixels, records every computed y value, and takes the horizontal shift x in pixels corresponding to the highest y value as the offset, based specifically on either a mutual-information method or a feature-point matching method,
in the case of the mutual-information method:
considering the grayscale images A and B, for each pixel value y1, the proportion of pixels in image A with value y1 is p_A(y1); for each pixel value y2, the proportion of pixels in image B with value y2 is p_B(y2);
for each pixel-value pair (y1, y2), the proportion of coordinate-matched pixel pairs whose value is y1 in image A and y2 in image B is p_{A,B}(y1, y2), and the mutual information of the two images is

I(A;B) = \sum_{y_1}\sum_{y_2} p_{A,B}(y_1, y_2)\,\log\frac{p_{A,B}(y_1, y_2)}{p_A(y_1)\,p_B(y_2)}
The mutual information of image A and the shifted image B is calculated with the same formula after all pixels of image B have been shifted horizontally; repeating this over the whole preset offset range and arranging the results yields a deviation likelihood curve.
The grayscale image input to the mutual information calculation is either the grayscale image of the original monochromatic channel or that image after a gradient filtering operation; a typical gradient filter is the Laplacian operator, i.e., the matrix
[ 0  1  0 ]
[ 1 -4  1 ]
[ 0  1  0 ]
which is convolved with the original image;
When the feature point matching method is used:
the SURF or SIFT feature point processing algorithm is applied to image A and image B respectively, and feature point description vectors are used to record the time-frequency domain characteristics of the pixels neighboring each feature point;
after all feature points of image B have been shifted horizontally, the squared difference between the description vector of each shifted feature point and that of any feature point of image A at an adjacent position is calculated; if no adjacent feature point exists, the square of the description vector itself is taken; the mean of these squared differences is then computed and its reciprocal taken, giving the feature matching score of the two images; repeating this over the whole preset offset range and arranging the results yields the deviation likelihood curve.
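The exhaustive shift-and-score search of claim 8, with mutual information as the likelihood, can be sketched as follows. This is an illustrative sketch, not the patented implementation: images are plain lists of pixel rows, the function names are invented, and the search scores every horizontal shift of image B against image A and returns the best shift together with the deviation likelihood curve.

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Mutual information of two equal-length sequences of grey values,
    following I(A,B) = sum p_AB * log(p_AB / (p_A * p_B))."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * math.log(c * n / (pa[y1] * pb[y2]))
               for (y1, y2), c in pab.items())

def best_offset(img_a, img_b, max_shift):
    """Exhaustively shift img_b horizontally by -max_shift..max_shift
    pixels, score each shift against img_a by mutual information over
    the overlapping columns, and return the best shift plus the
    deviation likelihood curve. Note: very small overlaps can inflate
    mutual information, so max_shift should stay well below the image
    width (a real system would also weight scores by overlap size)."""
    h, w = len(img_a), len(img_a[0])
    curve = {}
    for x in range(-max_shift, max_shift + 1):
        a_vals, b_vals = [], []
        for r in range(h):
            for c in range(w):
                if 0 <= c + x < w:
                    a_vals.append(img_a[r][c])
                    b_vals.append(img_b[r][c + x])
        curve[x] = mutual_information(a_vals, b_vals)
    return max(curve, key=curve.get), curve
```

A shift of image B by p pixels relative to image A then shows up as the peak of the returned curve, mirroring the deviation likelihood curve of the claim.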
9. The fast parfocal device of claim 7, wherein:
the point light source unit comprises a plurality of pairs of different-color point light sources arranged at different spacings,
the central control unit further comprises a light source spatial distribution evaluation module adapted to evaluate where the absolute value |p| of the calculated offset p falls relative to a preset lower threshold μmin and a preset upper threshold μmax:
if |p| lies between the preset lower threshold μmin and the preset upper threshold μmax, the activated point light source pair is appropriate;
if |p| is smaller than the lower threshold μmin, the current defocus distance is known to be small; the central control unit then selects, via the point light source control module, a different-color point light source pair combination with a larger spacing and performs the parfocal plane detection again;
if |p| is larger than the upper threshold μmax, the central control unit knows that the current defocus distance exceeds the measurement range; it then selects, via the point light source control module, a different-color point light source pair combination with a smaller spacing and performs the parfocal plane detection again.
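The interval test of claim 9 amounts to a small decision rule. A sketch under assumed data structures (light source pairs indexed by spacing, smallest first; names and layout are illustrative, not from the patent) might look like:

```python
def choose_light_pair(p, mu_min, mu_max, n_pairs, current):
    """Evaluate |p| against the thresholds of claim 9 and pick the
    light source pair index for the next parfocal detection. Pair
    indices are assumed sorted by spacing, smallest first; this data
    layout is an illustrative assumption."""
    ap = abs(p)
    if mu_min <= ap <= mu_max:
        return current                      # offset in range: keep the pair
    if ap < mu_min and current + 1 < n_pairs:
        return current + 1                  # defocus small: wider spacing
    if ap > mu_max and current > 0:
        return current - 1                  # out of range: narrower spacing
    return current                          # no alternative available
```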
10. The fast parfocal device of a magnifying imaging system according to claim 8 or 9, wherein:
the point light source unit comprises a plurality of pairs of different-color point light sources with different color combinations,
the central control unit further comprises a sample light source color matching evaluation module adapted to evaluate whether the sample matches the colors of the currently active light source pair; upon evaluation:
for the n recorded y values, several local maxima {SMax,i} are obtained, the median SMed is obtained statistically, and the standardized fourth-order central moment (i.e., the ratio of the fourth-order central moment to the square of the variance) Skrt is calculated;
when calculating the local maxima SMax,i: for each x with 1 < x < n−1, if f(x−1) < f(x) and f(x+1) < f(x), then x is a local maximum;
when calculating the median SMed: arrange the y values from small to large; if n is odd, take the y at the (n+1)/2-th position as the median; if n is even, take the mean of the two y values at the n/2-th and (n/2+1)-th positions as the median;
when calculating the standardized fourth-order central moment Skrt: y = f(x) is regarded as a marginal probability distribution curve and its standardized fourth-order central moment is calculated; first calculate the mean,
μ = (1/n) Σi yi
then the variance,
σ² = (1/n) Σi (yi − μ)²
and finally the standardized fourth-order central moment,
Skrt = [(1/n) Σi (yi − μ)⁴] / σ⁴
Take from {SMax,i} the maximum SMax,0 and the second maximum SMax,1, and calculate a first sample adaptation degree (formula given in the source only as image FDA0003345222750000074) and a second sample adaptation degree (formula given in the source only as image FDA0003345222750000075).
If both the first sample adaptation degree and the second sample adaptation degree are above a preset threshold, no light source switching operation is performed;
if either of the first and second sample adaptation degrees is below the preset threshold and an alternative light source pair combination exists, the central control unit activates the alternative light source scheme via the point light source control module and performs the parfocal plane detection again;
and if either of the first and second sample adaptation degrees is below the preset threshold and no alternative light source pair combination exists, the light source pair combination with the best historical first and second sample adaptation degrees is selected and activated, and the parfocal plane detection is performed again.
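The statistics that claim 10 extracts from the recorded y values (local maxima of the likelihood curve, median, and standardized fourth-order central moment) can be sketched as below. The two adaptation-degree formulas themselves appear only as images in the source, so they are not reproduced; the helpers here are plain sample statistics and the names are invented.

```python
def local_maxima(y):
    """Indices x with 1 <= x <= n-2 where f(x-1) < f(x) > f(x+1),
    matching the local-maximum test stated in the claim."""
    return [x for x in range(1, len(y) - 1)
            if y[x - 1] < y[x] and y[x + 1] < y[x]]

def median(y):
    """Middle value for odd n, mean of the two middle values for even n."""
    s, n = sorted(y), len(y)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def std_fourth_moment(y):
    """Ratio of the fourth central moment to the squared variance,
    i.e. the (non-excess) kurtosis of the sample."""
    n = len(y)
    mu = sum(y) / n
    var = sum((v - mu) ** 2 for v in y) / n
    m4 = sum((v - mu) ** 4 for v in y) / n
    return m4 / var ** 2
```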
CN202111319974.2A 2021-11-09 2021-11-09 Rapid parfocal method and device of amplification imaging system Pending CN114205519A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111319974.2A CN114205519A (en) 2021-11-09 2021-11-09 Rapid parfocal method and device of amplification imaging system

Publications (1)

Publication Number Publication Date
CN114205519A true CN114205519A (en) 2022-03-18

Family

ID=80647162

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111319974.2A Pending CN114205519A (en) 2021-11-09 2021-11-09 Rapid parfocal method and device of amplification imaging system

Country Status (1)

Country Link
CN (1) CN114205519A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130050516A1 (en) * 2011-08-31 2013-02-28 Daisuke HOJO Imaging device, imaging method and hand-held terminal device
CN106019550A (en) * 2016-07-12 2016-10-12 上海交通大学 High speed micro scanning dynamic focusing device and focusing tracking method
CN108051897A (en) * 2018-01-17 2018-05-18 宁波舜宇仪器有限公司 A kind of micro imaging system and real-time focusing method
CN108873241A (en) * 2018-09-06 2018-11-23 广东万濠精密仪器股份有限公司 A kind of rapid focus measurement method
CN211627931U (en) * 2020-03-16 2020-10-02 中国科学院深圳先进技术研究院 Real-time automatic focusing system for microscope
CN112541932A (en) * 2020-11-30 2021-03-23 西安电子科技大学昆山创新研究院 Multi-source image registration method based on different focal length transformation parameters of dual-optical camera


Similar Documents

Publication Publication Date Title
EP3374817B1 (en) Autofocus system for a computational microscope
US10477097B2 (en) Single-frame autofocusing using multi-LED illumination
CN107113370B (en) Image recording apparatus and method of recording image
RU2523028C2 (en) Image processing device, image capturing device and image processing method
US10755429B2 (en) Apparatus and method for capturing images using lighting from different lighting angles
US9426363B2 (en) Image forming apparatus image forming method and image sensor
CN103003665B (en) Stereo distance measurement apparatus
KR101824936B1 (en) Focus error estimation in images
CN107850754A (en) The image-forming assembly focused on automatically with quick sample
US10623627B2 (en) System for generating a synthetic 2D image with an enhanced depth of field of a biological sample
JP2011085594A (en) Multi-axis integration system and method
JP2015502566A (en) Multifunction autofocus system and method for automated microscope use
CN110824689B (en) Full-automatic microscopic image depth of field expanding system and method thereof
CN111429562B (en) Wide-field color light slice microscopic imaging method based on deep learning
JP2016099570A (en) Microscope system
JP2016528531A (en) Image acquisition method for a microscope system and corresponding microscope system
CN114424102A (en) Image processing apparatus and method for use in an autofocus system
CN107209061B (en) Method for determining complex amplitude of scene-dependent electromagnetic field
CN112367447A (en) Coded illumination real-time focusing scanning imaging device and method
CN113705298A (en) Image acquisition method and device, computer equipment and storage medium
CN114174791A (en) Optical imaging performance testing system and method
US8508589B2 (en) Imaging systems and associated methods thereof
JP2023543338A (en) Method and system for acquiring cytology images in cytopathology examination
CN114205519A (en) Rapid parfocal method and device of amplification imaging system
CN114967093B (en) Automatic focusing method and system based on microscopic hyperspectral imaging platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination