CN113362362B - Bright field microscope panoramic image alignment algorithm based on total variation area selection - Google Patents

Bright field microscope panoramic image alignment algorithm based on total variation area selection

Info

Publication number
CN113362362B
CN113362362B CN202110674960.6A CN202110674960A
Authority
CN
China
Prior art keywords
image
total variation
alignment
calculating
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110674960.6A
Other languages
Chinese (zh)
Other versions
CN113362362A (en)
Inventor
李小军
高崇军
Current Assignee
Yipusen Health Technology Shenzhen Co ltd
Original Assignee
Yipusen Health Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Yipusen Health Technology Shenzhen Co ltd
Priority to CN202110674960.6A
Publication of CN113362362A
Application granted
Publication of CN113362362B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/181 Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/60 Analysis of geometric attributes
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10056 Microscopic image


Abstract

The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, comprising the following steps: input a plurality of images to be stitched; extract the prior overlap region; extract the feature region by quadratic total variation; calculate the relative offset using the MSE mean square error; calculate the global offset of each image; and stitch the panoramic image. Region selection based on quadratic total variation finds a region with rich content and strong edge information to serve as the alignment template; the optimal matching position is then found from pixel information under the MSE measure, and the offset between adjacent images is determined by total-variation weighting, so that alignment in both directions is taken into account. Prior knowledge of the overlap between two adjacent field-of-view images is exploited, overcoming the high computational complexity of purely pixel-based alignment.

Description

Bright field microscope panoramic image alignment algorithm based on total variation area selection
Technical Field
The invention relates to the technical field of image processing, in particular to a bright field microscope panoramic image alignment algorithm based on total variation area selection.
Background
Image registration algorithms are mainly divided into two main categories: pixel-based registration algorithms and feature-based registration algorithms.
Registration algorithms based on pixel information fall roughly into three categories: cross-correlation methods (also called template matching), sequential similarity detection matching, and mutual information methods. Compared with other registration methods based on global information content, mutual information registration is flexible and accurate, and has become one of the most popular image registration methods. Frederik Maes and Andre Collignon applied mutual information to measure the statistical dependence, or information redundancy, between the image intensities of corresponding voxels in two images. Wang and Jie Tian proposed a mutual-information-based registration method that uses gradient information instead of pixel intensity information. Hartkens et al. introduced feature information into a voxel-based registration algorithm to integrate higher-level information about the expected deformation. Butz and Thiran revisited the general definition of mutual information and selected edge features for image registration. Maes et al. proposed a new histogram-based approach to estimate and maximize the mutual information between two multi-modal, possibly multi-band, signals. These methods combine image features directly with mutual information and inherit many of the advantages of both feature-based and intensity-based methods. Mutual-information-based methods nevertheless have their own limitations: when the resolution of the images is low, the images contain little information, or the overlap area is small, mutual information may lead to registration errors (Pluim et al.).
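As a concrete illustration of the mutual information measure discussed above, a minimal NumPy sketch via a joint histogram; the helper name and binning choice are our own assumptions, not from any cited work:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information between two equally sized gray images,
    estimated from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Registration then searches for the transform that maximizes this quantity over the overlap.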
Liu Yan et al. register images with a template matching method: two parallel line segments are selected from the reference image as the template, slid across the overlap region of the image to be registered, and the gray-level differences at corresponding positions are compared with a similarity measure to obtain the optimal registration position. Common similarity measures in template matching include SSD (equivalently MSE) and SAD. Because template matching uses the gray levels of pixel columns as the registration basis, the computation can be reduced greatly, but a high mismatch rate can result when the selected pixel columns carry too little information.
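For concreteness, the two similarity measures named above can be sketched in a few lines of Python with NumPy; the function names are illustrative, not from any cited work:

```python
import numpy as np

def ssd(patch, template):
    """Sum of squared differences; dividing by the pixel count gives the MSE."""
    d = patch.astype(np.float64) - template.astype(np.float64)
    return float(np.sum(d ** 2))

def sad(patch, template):
    """Sum of absolute differences."""
    d = patch.astype(np.float64) - template.astype(np.float64)
    return float(np.sum(np.abs(d)))
```

Template matching slides the template over the search region and keeps the position that minimizes the chosen measure.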
Registration algorithms based on image features usually extract features with distinctive properties, most commonly contours and corners, as the registration basis. Such methods have low time complexity and strong noise resistance. Lowe proposed a registration algorithm based on SIFT features, which builds feature vectors from gradient histograms in the neighborhood of feature points and assigns one or more orientations to each feature point; all subsequent operations on the image data are performed relative to the orientation, scale, and position of the keypoints. The method is therefore invariant to rotation, scaling, and brightness change, and remains fairly stable under viewpoint change, affine transformation, and noise, but its computational cost is very high. Wavelet transforms are another commonly used basis for feature-based registration: El-Ghazawi described wavelet-based image registration and realized it with parallel computation. Hala S. Own and Aboul Ella Hassanien proposed an efficient registration technique using the Q-shift complex wavelet transform (Q-shift CWT); experiments show that, compared with the classical wavelet transform, the algorithm improves computational efficiency and achieves robust, consistent registration. Azhar Quddus and Otman Basir proposed a novel, fully automatic, wavelet-based multi-stage registration technique for image retrieval, using a multi-scale wavelet representation with mutual information (MI) to match important anatomical structures at multiple resolutions; the application of this method in the multi-stage wavelet domain is innovative.
However, registration algorithms based on image features generally require preprocessing such as denoising to further refine and screen the feature information, and their computational complexity is high.
Because collected cell images must be magnified for physicians to observe and analyze, many field-of-view images have to be captured from a single slide. Owing to the limited precision of the hardware, two adjacent captured images can rarely be stitched seamlessly: part of the content overlaps or part is missing, i.e., there is misalignment up, down, left, and right. When many field-of-view images are stitched into one large image for reading, the visual quality is therefore poor. An alignment and stitching algorithm is needed that aligns all field-of-view images and removes the overlapped and misaligned parts, so that the images can be stitched seamlessly and the whole slide reads well.
This task presents considerable difficulty. First, the scanner shakes while scanning the slide, so the scanned cell images exhibit spatially varying defocus blur, which increases the difficulty of alignment. In addition, because of how the stain and the slide are handled during preparation, the scanned images contain large background areas and the overlap is small, so little image content is available for alignment. Furthermore, cell images contain many feature points with similar content (such as cell nuclei), which interferes with alignment. Finally, common image alignment handles two images (at most four), whereas this scheme must solve the stitching alignment of thousands of field-of-view images.
The prior art has the following difficult problems:
(1) spatially varying defocus blur increases the difficulty of image alignment;
(2) little image content information is available;
(3) cell images contain feature points with similar content, such as cell nuclei;
(4) thousands of field-of-view images must be stitched and aligned into a panorama, so the workload is large.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection. Region selection based on quadratic total variation finds a region with rich content and strong edge information as the alignment template, and the offset of the current image is determined by total-variation weighting over adjacent images, so that alignment in both directions is taken into account.
In order to achieve the purpose, the invention adopts the following specific scheme:
the invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, which comprises the following steps:
S1, inputting a plurality of images to be stitched (aligned); the number is greater than 2;
S2, extracting the boundary prior overlap region of the images to be stitched as the image alignment search region;
S3, extracting the feature region by quadratic total variation;
S4, calculating the relative offset using the MSE mean square error;
S5, calculating the global offset of each image from the relative offsets;
S6, stitching the panoramic image: cropping and translating the images according to the offset data calculated in step S5 to complete the stitching;
wherein step S3 specifically comprises the following steps:
S31, determining a region with rich content and strong edge information;
S32, making full use of the gray difference information (gradient) between each pixel and its upper, lower, left, and right neighbors, convolving the image with the Sobel operators to calculate the gradients in the x and y directions respectively;
S33, improving the sensitivity to strong edges by using the squared gradient, calculating the squared gradient magnitude at each point of the image;
S34, convolving the feature G (the squared gradient magnitude at each point of the image) to obtain the block quadratic total variation feature map (TV);
S35, in the reference image I, taking the region centered on the maximum point of the block quadratic total variation feature map (TV) (i.e., the region with the richest edge information) as the alignment template;
further, in step S4, calculating the relative offset using the MSE (mean square error) specifically comprises the following steps:
S41, confirming the prior overlap region (V) of the images to be aligned;
S42, sliding the alignment template (T) over the whole prior overlap region (V);
S43, calculating the similarity between the two from pixel information, measured by the mean square error (MSE);
S44, confirming the point with minimal MSE as the optimal alignment position, and calculating the relative offset from the coordinate difference between the initial position and the optimal alignment position.
Further, in step S2, the boundary prior overlap region of the images to be stitched (aligned) is extracted as the image alignment search region, which narrows the search range and reduces the amount of calculation.
Further, in step S32, the Sobel operators in the x and y directions are:

S_x = [ -1 0 1; -2 0 2; -1 0 1 ],  S_y = [ -1 -2 -1; 0 0 0; 1 2 1 ]
further, in step S33, the squared gradient is:

G = G_x^2 + G_y^2, with G_x = S_x * I and G_y = S_y * I (* denoting convolution),

where I is the reference image.
Further, in step S34, the block quadratic total variation feature map is

TV = U * G,

where U is the all-ones matrix operator of the alignment template size, i.e.

U = 1_{h×w} (an h × w matrix of ones),

where h × w is the height × width of the alignment template region.
Further, in step S43, the mean square error (MSE) is calculated as follows:

MSE(i, j) = (1 / (h·w)) Σ_{x=1..h} Σ_{y=1..w} [ T(x, y) − V(i + x, j + y) ]²

wherein:
(i, j) are the coordinates of each displacement point;
(x, y) are the corresponding position coordinates in the alignment template T and the prior overlap region V;
h and w are the height and width of the alignment template T, respectively.
Further, step S5 specifically comprises the following steps:
S51, calculating the left-right and up-down offsets between the field-of-view image and the image above it;
S52, calculating the left-right and up-down offsets between the field-of-view image and the image to its left;
S53, calculating the left-right and up-down offsets of the field-of-view image (i.e., the offsets required to complete its stitching) by total variation weighting.
Further, in step S53, the left-right and up-down offsets of the field-of-view image are calculated as follows:

left-right offset formula: Ls_{i,j} = (Ls1 · TV_T + Ls2 · TV_L) / (TV_T + TV_L);
up-down offset formula: Ts_{i,j} = (Ts1 · TV_T + Ts2 · TV_L) / (TV_T + TV_L);

wherein:
the left-right offset calculated between the field-of-view image I_{i,j} and the image above it, I_{i-1,j}, is Ls1 and the up-down offset is Ts1; the left-right offset calculated against the left image I_{i,j-1} is Ls2 and the up-down offset is Ts2; and, according to the formula

TV = U * G,

the total variations of the alignment templates of the top image and the left image are TV_T and TV_L, respectively.
By adopting the technical scheme, the invention achieves the following beneficial effects:
The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection: a plurality of images to be stitched are input, the prior overlap region is extracted, the feature region is extracted by quadratic total variation, the relative offset is calculated with the MSE mean square error, the global offset of each image is calculated, and the panoramic result is stitched. Region selection based on quadratic total variation finds a region with rich content and strong edge information as the alignment template; the optimal position is matched from pixel information under the MSE measure; and the offset between adjacent images is determined by total-variation weighting, so that alignment in both directions is taken into account. Prior knowledge of the overlap between two adjacent field-of-view images is exploited, overcoming the high computational complexity of pixel-based alignment.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the positions of images during the process of extracting feature areas according to the embodiment of the present invention;
fig. 3 is a schematic view of the field of view, left image and top image positions according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the following figures and specific examples.
The present invention will be described in detail with reference to FIGS. 1 to 3.
The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, which comprises the following steps:
S1, inputting a plurality of images to be stitched (aligned), the number being greater than 2;
S2, extracting the prior overlap region;
S3, extracting the feature region by quadratic total variation;
S4, calculating the relative offset using the MSE mean square error;
S5, calculating the global offset of each image from the relative offsets;
S6, stitching the panoramic result: cropping and translating the images according to the offset data calculated in step S5 to complete the stitching.
The specific method comprises the following steps:
(1) region selection based on quadratic total variation: find a region with rich content and strong edge information as the alignment template T;
(2) determine the offset of the current image by total-variation weighting over the adjacent (top and left) images, so that alignment in both directions is taken into account.
First, the scheme selects the feature region based on quadratic total variation. There are various ways to calculate the image gradient; here the image is convolved with the Sobel operators to calculate the gradients in the x and y directions respectively, making full use of the gray difference information (gradient) between each pixel and its upper, lower, left, and right neighbors. The Sobel operators in the x and y directions are, respectively:

S_x = [ -1 0 1; -2 0 2; -1 0 1 ],  S_y = [ -1 -2 -1; 0 0 0; 1 2 1 ]
Assuming the reference image is I, the gradient magnitude G' at each point of the image can be calculated as:

G' = sqrt(G_x^2 + G_y^2), with G_x = S_x * I and G_y = S_y * I (* denoting convolution).
To improve the sensitivity to strong edges, the squared gradient is used instead:

G = G_x^2 + G_y^2.
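As an illustration, the gradient feature described above can be sketched in Python with NumPy. The naive zero-padded convolution helper and the function names are our own assumptions for the sketch; a real implementation would use an optimized convolution routine.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = SOBEL_X.T

def conv2_same(image, kernel):
    """Naive 'same' 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    pad_h, pad_w = kh // 2, kw // 2
    padded = np.pad(image.astype(np.float64), ((pad_h, pad_h), (pad_w, pad_w)))
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.zeros_like(image, dtype=np.float64)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * flipped)
    return out

def squared_gradient(image):
    """G = Gx^2 + Gy^2, the squared gradient magnitude used as the TV feature."""
    gx = conv2_same(image, SOBEL_X)
    gy = conv2_same(image, SOBEL_Y)
    return gx ** 2 + gy ** 2
```

A flat image yields G = 0 everywhere, while a step edge yields large values along the edge, which is what makes G useful for locating content-rich regions.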
The block quadratic total variation is then calculated from the gradient feature. Assuming the alignment template region T has size h × w, the all-ones matrix operator of the template size,

U = 1_{h×w} (an h × w matrix of ones),

is convolved with the feature G (the squared gradient magnitude at each point of the image) to obtain the block quadratic total variation feature map TV:

TV = U * G.
In the reference image I, the h × w region centered on the maximum point of TV (i.e., the region with the richest edge information) is taken as the alignment template T.
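Because U is an all-ones operator, convolving it with G is just a sliding box sum, which can be sketched efficiently with 2-D prefix sums; `select_template` is an illustrative name, not from the patent.

```python
import numpy as np

def select_template(G, h, w):
    """Block quadratic total variation: TV = U * G with U an all-ones h x w
    operator, computed as a sliding h x w box sum over the squared-gradient
    feature G ('valid' positions only). Returns the top-left corner of the
    window with maximal TV, and that maximal value."""
    H, W = G.shape
    # 2-D prefix sums make every h x w box sum an O(1) lookup.
    S = np.zeros((H + 1, W + 1))
    S[1:, 1:] = np.cumsum(np.cumsum(G, axis=0), axis=1)
    tv = S[h:, w:] - S[:-h, w:] - S[h:, :-w] + S[:-h, :-w]
    r, c = np.unravel_index(np.argmax(tv), tv.shape)
    return (int(r), int(c)), float(tv[r, c])
```

The window maximizing the box sum is the region with the richest edge information, i.e. the alignment template T.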
Next, the alignment template T taken from the reference image I is matched against the image to be aligned. Assuming the prior overlap region of the image to be aligned is V, the template T slides over the whole of V (see FIG. 2 for the positional relationship), and the similarity between the two is calculated from pixel information; the scheme measures it with the mean square error (MSE), and the point with minimal MSE is the desired optimal alignment position. At each displacement point (i, j), the MSE is calculated as:

MSE(i, j) = (1 / (h·w)) Σ_{x=1..h} Σ_{y=1..w} [ T(x, y) − V(i + x, j + y) ]²

wherein (x, y) are the corresponding position coordinates in the alignment template T and the prior overlap region V, and h and w are the height and width of T, respectively.
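A minimal sketch of the exhaustive MSE search over the prior overlap region, assuming the template fits inside the region; the function name is illustrative.

```python
import numpy as np

def match_template_mse(T, V):
    """Slide template T over search region V and return the displacement
    (i, j) minimising the mean square error, plus that MSE."""
    T = T.astype(np.float64)
    h, w = T.shape
    H, W = V.shape
    best, best_mse = (0, 0), np.inf
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            patch = V[i:i + h, j:j + w].astype(np.float64)
            mse = np.mean((T - patch) ** 2)  # MSE(i, j) from the formula above
            if mse < best_mse:
                best_mse, best = mse, (i, j)
    return best, float(best_mse)
```

When T actually occurs inside V, the minimum MSE is exactly zero at the true displacement.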
Finally, the offset of the current image is determined by total-variation weighting over the adjacent (top and left) images, so that alignment in both directions is taken into account (see FIG. 3 for the positional relationship). Because the panorama is stitched by alignment, each field-of-view image must align not only with the image to its left but also with the image above it; neither direction alone suffices, so the offset of each field-of-view image must combine the offsets obtained by aligning in both directions. Suppose the left-right and up-down offsets calculated between the field-of-view image I_{i,j} and the image above it, I_{i-1,j}, are Ls1 and Ts1; those calculated against the left image I_{i,j-1} are Ls2 and Ts2; and the total variations of the alignment templates of the top and left images, obtained from the TV formula above, are TV_T and TV_L respectively. The left-right and up-down offsets of image I_{i,j} are then:

Ls_{i,j} = (Ls1 · TV_T + Ls2 · TV_L) / (TV_T + TV_L)
Ts_{i,j} = (Ts1 · TV_T + Ts2 · TV_L) / (TV_T + TV_L).
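The two weighted averages can be sketched directly; the helper name is ours.

```python
def weighted_offsets(Ls1, Ts1, TV_T, Ls2, Ts2, TV_L):
    """Combine the offsets against the top image (Ls1, Ts1) and the left
    image (Ls2, Ts2), weighting each by the total variation of the
    alignment template it was matched with."""
    Ls = (Ls1 * TV_T + Ls2 * TV_L) / (TV_T + TV_L)
    Ts = (Ts1 * TV_T + Ts2 * TV_L) / (TV_T + TV_L)
    return Ls, Ts
```

A template with higher total variation carries more edge information, so its match is trusted more in the average.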
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structural or flow modifications made using the contents of this specification and the accompanying drawings, or direct or indirect applications in other related technical fields, fall within the scope of protection of the present invention.

Claims (1)

1. A bright field microscope panoramic image alignment algorithm based on total variation area selection, characterized by comprising the following steps:
S1, inputting a plurality of images to be stitched and aligned;
S2, extracting the boundary prior overlap region of the images to be stitched as the image alignment search region;
S3, extracting the feature region by the secondary total variation;
S4, calculating the relative offset using the MSE mean square error;
S5, calculating the image global offset according to the relative offset;
S6, cropping and translating the images according to the offset data calculated in step S5 to complete panoramic image stitching;
wherein step S3 specifically comprises the following steps:
S31, determining a region with rich content and strong edge information;
S32, using the gray difference information between each pixel and its upper, lower, left, and right neighbors, convolving the image with the Sobel operator to calculate the gradients in the x and y directions respectively;
S33, improving the sensitivity to strong edges by using the squared gradient, calculating the gradient magnitude at each point of the image;
S34, convolving the square of the gradient magnitude at each point of the image to obtain the block secondary total variation feature map;
S35, taking the region centered on the maximum point of the block secondary total variation feature map in the reference image as the alignment template;
wherein calculating the relative offset using the MSE mean square error in step S4 specifically comprises the following steps:
S41, confirming the prior overlap region of the images to be aligned;
S42, sliding the alignment template over the whole prior overlap region;
S43, calculating the similarity between the two from pixel information, measured by the MSE mean square error;
S44, confirming the point with minimal MSE mean square error as the optimal alignment position, and calculating the relative offset according to the coordinate difference between the initial position and the optimal alignment position;
in step S32, the Sobel operators in the x and y directions are:

S_x = [ -1 0 1; -2 0 2; -1 0 1 ],  S_y = [ -1 -2 -1; 0 0 0; 1 2 1 ];

in step S33, the squared gradient is:

G = G_x^2 + G_y^2, with G_x = S_x * I and G_y = S_y * I,

wherein I is the reference image;
in step S34, the block secondary total variation feature map is

TV = U * G,

wherein U is the all-ones matrix operator of the alignment template size, i.e.

U = 1_{h×w} (an h × w matrix of ones),

wherein h × w is the height × width of the alignment template region;
in step S43, the MSE mean square error is calculated as follows:

MSE(i, j) = (1 / (h·w)) Σ_{x=1..h} Σ_{y=1..w} [ T(x, y) − V(i + x, j + y) ]²
wherein:
(i, j) are the coordinates of each displacement point;
(x, y) are the corresponding position coordinates in the alignment template T and the prior overlap region V;
h and w are the height and width of the alignment template T, respectively;
step S5 specifically comprises the following steps:
S51, calculating the left-right and up-down offsets between the field-of-view image and the image above it;
S52, calculating the left-right and up-down offsets between the field-of-view image and the image to its left;
S53, calculating the left-right and up-down offsets of the field-of-view image by total variation weighting;
in step S53, the formulas for calculating the left-right and up-down offsets of the field-of-view image are respectively as follows:

left-right offset formula: Ls_{i,j} = (Ls1 · TV_T + Ls2 · TV_L) / (TV_T + TV_L);
up-down offset formula: Ts_{i,j} = (Ts1 · TV_T + Ts2 · TV_L) / (TV_T + TV_L);

wherein:
the left-right offset calculated between the field-of-view image I_{i,j} and the image above it, I_{i-1,j}, is Ls1 and the up-down offset is Ts1; the left-right offset calculated against the left image I_{i,j-1} is Ls2 and the up-down offset is Ts2; and, according to the formula

TV = U * G,

the total variations of the alignment templates of the top image and the left image are TV_T and TV_L, respectively.
CN202110674960.6A 2021-06-17 2021-06-17 Bright field microscope panoramic image alignment algorithm based on total variation area selection Active CN113362362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674960.6A CN113362362B (en) 2021-06-17 2021-06-17 Bright field microscope panoramic image alignment algorithm based on total variation area selection


Publications (2)

Publication Number Publication Date
CN113362362A CN113362362A (en) 2021-09-07
CN113362362B true CN113362362B (en) 2022-06-14

Family

ID=77535020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674960.6A Active CN113362362B (en) 2021-06-17 2021-06-17 Bright field microscope panoramic image alignment algorithm based on total variation area selection

Country Status (1)

Country Link
CN (1) CN113362362B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744133A (en) * 2021-09-13 2021-12-03 烟台艾睿光电科技有限公司 Image splicing method, device and equipment and computer readable storage medium
CN114708206A (en) * 2022-03-24 2022-07-05 成都飞机工业(集团)有限责任公司 Method, device, equipment and medium for identifying placing position of autoclave molding tool

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236887A (en) * 2011-03-11 2011-11-09 贵州大学 Motion-blurred image restoration method based on rotary difference and weighted total variation
CN107274337B (en) * 2017-06-20 2020-06-26 长沙全度影像科技有限公司 Image splicing method based on improved optical flow
CN107403414B (en) * 2017-07-14 2018-11-02 华中科技大学 A kind of image area selecting method and system being conducive to fuzzy kernel estimates
CN109345496B (en) * 2018-09-11 2021-05-14 中国科学院长春光学精密机械与物理研究所 Image fusion method and device for total variation and structure tensor

Also Published As

Publication number Publication date
CN113362362A (en) 2021-09-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant