CN113362362A - Bright field microscope panoramic image alignment algorithm based on total variation area selection - Google Patents

Bright field microscope panoramic image alignment algorithm based on total variation area selection

Info

Publication number
CN113362362A
CN113362362A (application number CN202110674960.6A)
Authority
CN
China
Prior art keywords
image
total variation
alignment
panoramic image
area
Prior art date
Legal status
Granted
Application number
CN202110674960.6A
Other languages
Chinese (zh)
Other versions
CN113362362B (en)
Inventor
李小军
高崇军
Current Assignee
Yipusen Health Technology Shenzhen Co ltd
Original Assignee
Yipusen Health Technology Shenzhen Co ltd
Priority date
Filing date
Publication date
Application filed by Yipusen Health Technology Shenzhen Co ltd
Priority to CN202110674960.6A
Publication of CN113362362A
Application granted
Publication of CN113362362B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/181: Segmentation; Edge detection involving edge growing; involving edge linking
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/60: Analysis of geometric attributes
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10056: Microscopic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, which comprises the following steps: inputting a plurality of images to be stitched, extracting the a priori overlapping region, extracting the feature area by quadratic total variation, calculating the relative offset with the MSE (mean square error), calculating the global image offset, and stitching the panoramic image result. Area selection based on quadratic total variation finds regions with rich content and strong edge information to serve as the alignment template; the optimal position is matched from pixel information with the MSE measure; and the offset between adjacent images is determined with a bilateral total variation weighting scheme, so that alignment in both directions is taken into account. A priori knowledge of the overlap between two adjacent field-of-view images is exploited, which overcomes the high computational complexity of purely pixel-based alignment.

Description

Bright field microscope panoramic image alignment algorithm based on total variation area selection
Technical Field
The invention relates to the technical field of image processing, in particular to a bright field microscope panoramic image alignment algorithm based on total variation area selection.
Background
Image registration algorithms are mainly divided into two main categories: pixel-based registration algorithms and feature-based registration algorithms.
Registration algorithms based on pixel information fall roughly into three categories: cross-correlation (also called template matching), sequential similarity detection matching, and mutual information methods. Compared with other registration methods based on global information content, mutual-information-based registration is flexible and accurate and has become one of the most popular image registration approaches. Frederik Maes and Andre Collignon applied mutual information to measure the statistical dependence, or information redundancy, between the image intensities of corresponding voxels in two images. Wang and Jie Tian proposed, in their paper, a mutual-information-based registration method that uses gradient information instead of pixel intensity information. Hartkens et al. introduced feature information into a voxel-based registration algorithm to integrate higher-level information about the expected deformation. Butz and Thiran revisited the general definition of mutual information and selected edge features for image registration. Frederik Maes and Andre Collignon also proposed a new histogram-based approach to estimate and maximize the mutual information between two multi-modal and possibly multi-band signals. These methods combine image features directly with mutual information and inherit many of the advantages of both feature-based and intensity-based methods. However, mutual-information-based methods have their own limitations: when the image resolution is low, the images contain little information, or the overlap area is small, mutual information may lead to registration errors (Pluim et al.). Liu Yan et al. used a template matching method to register images: two parallel line segments are selected from the reference image as a template, slid over the overlapping area of the image to be registered, and the gray-level differences at corresponding positions are compared with a similarity measure to find the optimal registration position. Common similarity measures used in template matching include SSD (MSE), SAD, and the like. Template matching uses the gray information of pixel columns as the registration basis, which greatly reduces the amount of computation, but a high mismatch rate can result when the selected pixel columns carry too little information.
Registration algorithms based on image features usually extract features with special properties, most commonly contours and corners, as the registration basis. Such methods have low time complexity and strong noise resistance. Lowe proposed a registration algorithm based on SIFT features, which constructs feature vectors from statistics of gradient histograms in the neighborhood of feature points and assigns one or more orientations to each feature point; all subsequent operations on the image data are performed relative to the orientation, scale and position of the keypoints. The method is therefore invariant to rotation, scale and brightness changes and remains stable to a certain degree under viewpoint changes, affine transformations and noise, but its computational cost is excessive. The wavelet transform is a commonly used feature-based image registration tool; El-Hazawi described wavelet-based image registration and implemented it with parallel computation. Hala S. Own and Aboul Ella Hassanien proposed an efficient image registration technique using the Q-shift complex wavelet transform (Q-shift CWT); experimental results show that, compared with the classical wavelet transform, the algorithm improves computational efficiency and achieves robust and consistent image registration. Azhar Quddus and Otman Basir proposed a novel, fully automatic, wavelet-based multi-stage image registration technique for image retrieval; they use a multi-scale wavelet representation together with mutual information (MI) to match important anatomical structures at multiple resolutions, and the application of this method in the multi-stage wavelet domain is innovative. However, feature-based registration algorithms generally require preprocessing such as denoising to further refine and screen the feature information, and their computational complexity is high.
Because the collected cell images must be magnified to facilitate observation and analysis by physicians, many field-of-view images have to be acquired from a single slide. Owing to the precision limits of the hardware, two adjacent acquired images are difficult to stitch seamlessly: part of the content overlaps or part is missing, i.e. there is misalignment on the top, bottom, left and right. When many field-of-view images are stitched into one large image for physicians to view, the visual quality is poor. An alignment and stitching algorithm is therefore needed to align all field-of-view images and remove the overlapping and misaligned parts, so that the field-of-view images can be stitched seamlessly and the viewing experience of reading the whole image is improved.
This task involves considerable difficulty. First, because the scanner vibrates while scanning the slide, the scanned cell images contain spatially varying defocus blur, which increases the difficulty of image alignment. In addition, because of the staining and mounting operations during slide preparation, the scanned images contain a large amount of background and the overlapping portions are small, so little image content is available for alignment. Furthermore, cell images contain many feature points with similar content (such as cell nuclei), which can interfere with image alignment. Finally, common image alignment works on two images (at most four), whereas this scheme must solve the stitching and alignment of thousands of field-of-view images.
The prior art presents the following difficulties:
(1) spatially varying defocus blur increases the difficulty of image alignment;
(2) little image content information is available for alignment;
(3) cell images contain feature points with similar content, such as cell nuclei;
(4) thousands of megapixel field-of-view images must be stitched and aligned into a panorama, so the workload is large.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection. Through area selection based on quadratic total variation, a region with rich content and strong image edge information is found and used as the alignment template; the offset of the current image is then determined with a bilateral total variation weighting of the adjacent images, so that alignment in both directions is taken into account.
In order to achieve the purpose, the invention adopts the following specific scheme:
The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, which comprises the following steps:
S1, inputting a plurality of images to be stitched (aligned), the number being greater than two;
S2, extracting the a priori overlapping region;
S3, extracting the feature area by quadratic total variation;
S4, calculating the relative offset with the MSE (mean square error);
S5, calculating the global image offset;
S6, stitching the panoramic image result: cropping and translating the images according to the offset data calculated in step S5 to complete the stitching.
Further, in step S2, the a priori overlapping region at the boundaries of the images to be stitched (aligned) is extracted as the image alignment search region, which narrows the search range and reduces the amount of computation.
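As a concrete illustration of step S2 only, the following minimal Python/NumPy sketch crops the boundary strips of two horizontally adjacent field-of-view tiles that are expected to overlap, so that later matching searches only this narrow band. The function name, the default 10% overlap fraction and the horizontal layout are illustrative assumptions, not values taken from the patent; vertically adjacent tiles would be handled analogously with top and bottom strips.

import numpy as np

def prior_overlap_regions(left_img, right_img, overlap_frac=0.10):
    """Return the a priori overlap strips of two horizontally adjacent tiles:
    the right edge of the left tile and the left edge of the right tile."""
    w = left_img.shape[1]
    band = max(1, int(round(w * overlap_frac)))   # width of the expected overlap band
    left_strip = left_img[:, w - band:]           # right edge of the left tile
    right_strip = right_img[:, :band]             # left edge of the right tile
    return left_strip, right_strip

# usage: strips of two neighbouring 1024 x 1024 grayscale tiles
a = np.random.rand(1024, 1024).astype(np.float32)
b = np.random.rand(1024, 1024).astype(np.float32)
search_ref, search_mov = prior_overlap_regions(a, b, overlap_frac=0.08)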
Further, step S3 specifically comprises the following steps:
S31, determining an area with rich content and strong image edge information as the alignment template;
S32, making full use of the gray-difference (gradient) information of the upper, lower, left and right neighbors of each pixel, performing a convolution operation on the image with the Sobel operator, and calculating the gradients in the x and y directions respectively;
S33, adopting a squared-gradient calculation to improve sensitivity to strong edges, and calculating the gradient magnitude at each point of the image;
S34, performing a convolution operation on the feature G (the squared gradient magnitude at each point of the image) to obtain the block quadratic total variation feature map (TV);
S35, taking, in the reference image I, the region centered on the maximum-value position of the block quadratic total variation feature map (TV) (i.e., the region with the richest edge information) as the alignment template.
Further, in step S32, the Sobel operators in the x and y directions are:
S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
Further, in step S33, the formula for the squared gradient is:
G = (S_x * I)^2 + (S_y * I)^2
where I is the reference image and * denotes the convolution operation.
Further, in step S34, the block quadratic total variation feature map is
TV = U * G
wherein:
h × w is the height × width of the alignment template region;
U is the all-ones matrix operator of the alignment template size, i.e.
U = \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}_{h \times w}
Further, in step S4, the relative offset is calculated with the MSE (mean square error); this specifically comprises the following steps:
S41, confirming the a priori overlapping region (V) of the images to be aligned;
S42, letting the alignment template (T) slide over the a priori overlapping region (V);
S43, calculating the similarity between the two from the pixel information, measured by the mean square error (MSE);
S44, confirming the point with the minimum MSE as the required optimal alignment position.
Further, in step S43, the mean square error (MSE) is calculated as:
MSE(i, j) = \frac{1}{h w} \sum_{x=1}^{h} \sum_{y=1}^{w} [ T(x, y) - V(x+i, y+j) ]^2
wherein:
(i, j) are the coordinates of each displacement point;
(x, y) are the corresponding position coordinates in the alignment template T and the a priori overlapping region V;
h and w are the height and width of the alignment template T, respectively.
Further, step S5 specifically comprises the following steps:
S51, calculating the left-right and up-down offsets between the field-of-view image and the image above it;
S52, calculating the left-right and up-down offsets between the field-of-view image and the image to its left;
S53, calculating the left-right and up-down offsets of the field-of-view image (i.e., the offsets required to stitch the field-of-view image) by bilateral total variation weighting.
Further, in step S53, the left-right and up-down offsets of the field-of-view image are calculated as:
left-right offset: Ls_{i,j} = (Ls1 * TV_T + Ls2 * TV_L) / (TV_T + TV_L);
up-down offset: Ts_{i,j} = (Ts1 * TV_T + Ts2 * TV_L) / (TV_T + TV_L);
wherein:
Ls1 and Ts1 are the left-right and up-down offsets calculated between field-of-view image I_{i,j} and the image above it, I_{i-1,j}; Ls2 and Ts2 are the left-right and up-down offsets calculated between I_{i,j} and the image to its left, I_{i,j-1}; and, according to the block quadratic total variation formula TV = U * G, the total variations of the alignment templates of the top and left images are TV_T and TV_L, respectively.
By adopting the above technical scheme, the invention has the following beneficial effects:
The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, which inputs a plurality of images to be stitched, extracts the a priori overlapping region, extracts the feature area by quadratic total variation, calculates the relative offset with the MSE (mean square error), calculates the global image offset, and stitches the panoramic image result. Area selection based on quadratic total variation finds regions with rich content and strong edge information to serve as the alignment template; the optimal position is matched from pixel information with the MSE measure; and the offset between adjacent images is determined with a bilateral total variation weighting scheme, so that alignment in both directions is taken into account. A priori knowledge of the overlap between two adjacent field-of-view images is exploited, which overcomes the high computational complexity of purely pixel-based alignment.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the positions of images during the process of extracting feature areas according to the embodiment of the present invention;
FIG. 3 is a schematic diagram of the positions of the field-of-view image, the left image and the top image according to an embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the following figures and specific examples.
The present invention will be described in detail below with reference to FIGS. 1 to 3.
The invention provides a bright field microscope panoramic image alignment algorithm based on total variation area selection, which comprises the following steps:
S1, inputting a plurality of images to be stitched (aligned), the number being greater than two;
S2, extracting the a priori overlapping region;
S3, extracting the feature area by quadratic total variation;
S4, calculating the relative offset with the MSE (mean square error);
S5, calculating the global image offset;
S6, stitching the panoramic image result: cropping and translating the images according to the offset data calculated in step S5 to complete the stitching (see the sketch after this list).
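As an illustration of step S6 only, the following sketch pastes already-aligned tiles onto one canvas at their global pixel positions; the dictionary layout, the example positions and the simple overwrite of overlapping pixels (which effectively crops the redundant parts) are assumptions of this sketch, not details given in the patent.

import numpy as np

def stitch(tiles, positions):
    """Paste each field-of-view tile at its global (row, col) pixel position.
    Overlapping or misaligned parts are simply overwritten, i.e. cropped away."""
    h, w = next(iter(tiles.values())).shape
    max_r = max(r for r, _ in positions.values()) + h
    max_c = max(c for _, c in positions.values()) + w
    canvas = np.zeros((max_r, max_c), dtype=np.float32)
    for key, tile in tiles.items():
        r, c = positions[key]
        canvas[r:r + h, c:c + w] = tile
    return canvas

# usage: a 2 x 2 grid of 512 x 512 tiles whose global offsets are already known
tiles = {(i, j): np.random.rand(512, 512).astype(np.float32)
         for i in range(2) for j in range(2)}
positions = {(0, 0): (0, 0), (0, 1): (0, 500), (1, 0): (498, 0), (1, 1): (498, 500)}
pano = stitch(tiles, positions)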
The specific method comprises the following steps:
(1) through area selection based on quadratic total variation, finding a region with rich content and strong image edge information as the alignment template T;
(2) determining the offset of the current image by combining the bilateral total variation weighting of the adjacent (top and left) images, so that alignment in both directions is taken into account.
Firstly, the scheme selects the feature area based on quadratic total variation. There are various ways to compute image gradients; this method performs a convolution operation on the image with the Sobel operator and calculates the gradients in the x and y directions respectively, making full use of the gray-difference (gradient) information of the upper, lower, left and right neighbors of each pixel. The Sobel operators in the x and y directions are, respectively:
S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}    (1)
Assuming the reference image is I, the gradient magnitude G' at each point of the image can be calculated as:
G' = \sqrt{ (S_x * I)^2 + (S_y * I)^2 }    (2)
where * denotes the convolution operation. In order to improve sensitivity to strong edges, a squared-gradient calculation is adopted:
G = (S_x * I)^2 + (S_y * I)^2    (3)
The block quadratic total variation is then calculated from the gradient feature. Assuming the alignment template region T has size h × w, the all-ones matrix operator of the alignment template size,
U = \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}_{h \times w},
is convolved with the feature G (the squared gradient magnitude at each point of the image) to obtain the block quadratic total variation feature map TV:
TV = U * G    (4)
In the reference image I, the h × w region corresponding to the maximum-value position in TV (i.e., the region with the richest edge information) is taken as the alignment template T.
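The template selection just described can be sketched in Python (NumPy/SciPy) as below. The helper names, the integral-image shortcut for the block sum, and the border handling are assumptions of this sketch, not the patent's implementation; the integral image yields the same block sums as convolving G with the all-ones operator U of formula (4), restricted to windows that fit inside the image.

import numpy as np
from scipy.ndimage import convolve

# Sobel kernels of formula (1)
SX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float32)

def block_tv_map(g, h, w):
    """Sum of g over every h x w window, indexed by the window's top-left corner.
    Equivalent to convolving g with the all-ones operator U of formula (4)."""
    ii = np.pad(g, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)   # integral image
    return ii[h:, w:] - ii[:-h, w:] - ii[h:, :-w] + ii[:-h, :-w]

def select_alignment_template(ref, h, w):
    """Return the h x w region of `ref` with the largest block quadratic total
    variation, the (row, col) of its top-left corner, and its TV value."""
    gx = convolve(ref.astype(np.float32), SX, mode="nearest")        # x gradient
    gy = convolve(ref.astype(np.float32), SY, mode="nearest")        # y gradient
    g = gx ** 2 + gy ** 2                                            # formula (3)
    tv = block_tv_map(g, h, w)                                       # formula (4)
    top, left = np.unravel_index(np.argmax(tv), tv.shape)
    template = ref[top:top + h, left:left + w]
    return template, (int(top), int(left)), float(tv[top, left])

The squaring makes the sign of the Sobel response irrelevant, so the convolution/correlation distinction in scipy.ndimage does not affect the result.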
Then, the alignment template T taken from the reference image I is matched against the image to be aligned. Assuming the a priori overlapping region of the image to be aligned is V, the alignment template T is slid over the a priori overlapping region V (the positional relationship can be understood with reference to FIG. 2), and the similarity between the two is calculated from the pixel information. This scheme uses the mean square error (MSE) as the measure, and the point with the minimum MSE is the required optimal alignment position. At each displacement point (i, j), the MSE is calculated as:
MSE(i, j) = \frac{1}{h w} \sum_{x=1}^{h} \sum_{y=1}^{w} [ T(x, y) - V(x+i, y+j) ]^2    (5)
where (x, y) are the corresponding position coordinates in the alignment template T and the a priori overlapping region V, and h and w are the height and width of T, respectively.
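A brute-force sketch of the matching of formula (5) follows; the function name and the double loop are illustrative assumptions. An OpenCV template match with the TM_SQDIFF criterion would return the same minimiser much faster, since the 1/(h w) factor does not change the location of the minimum.

import numpy as np

def mse_match(template, overlap):
    """Slide `template` (T, h x w) over the a priori overlap region `overlap` (V)
    and return the displacement (i, j) minimising the mean square error of formula (5)."""
    t = template.astype(np.float64)
    v = overlap.astype(np.float64)
    h, w = t.shape
    H, W = v.shape
    best, best_mse = (0, 0), np.inf
    for i in range(H - h + 1):          # vertical displacement of T inside V
        for j in range(W - w + 1):      # horizontal displacement of T inside V
            diff = t - v[i:i + h, j:j + w]
            mse = np.mean(diff * diff)
            if mse < best_mse:
                best_mse, best = mse, (i, j)
    return best, best_mse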
Finally, the offset of the current image is determined by combining the bilateral total variation weighting of the adjacent (top and left) images, so that alignment in both directions is taken into account; the positional relationship can be understood with reference to FIG. 3. Because the panoramic image is aligned and stitched as a whole, each field-of-view image must be aligned with both the image to its left and the image above it, while a single match can only account for one direction, so the offset of each field-of-view image has to take both directions into account. Suppose the left-right and up-down offsets calculated between field-of-view image I_{i,j} and the image above it, I_{i-1,j}, are Ls1 and Ts1; the left-right and up-down offsets calculated between I_{i,j} and the image to its left, I_{i,j-1}, are Ls2 and Ts2; and the total variations of the alignment templates of the top and left images, obtained according to formula (4), are TV_T and TV_L, respectively. The left-right and up-down offsets of image I_{i,j} are then:
Ls_{i,j} = (Ls1 * TV_T + Ls2 * TV_L) / (TV_T + TV_L)
Ts_{i,j} = (Ts1 * TV_T + Ts2 * TV_L) / (TV_T + TV_L).
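A sketch of the bilateral total variation weighting above; the Offset container and the example numbers are illustrative assumptions of this sketch.

from dataclasses import dataclass

@dataclass
class Offset:
    ls: float  # left-right offset
    ts: float  # up-down offset

def weighted_offset(top, left, tv_top, tv_left):
    """Combine the offsets of tile I(i,j) measured against its top and left
    neighbours, weighting each by the total variation of that neighbour's
    alignment template (TV_T, TV_L)."""
    denom = tv_top + tv_left
    ls = (top.ls * tv_top + left.ls * tv_left) / denom
    ts = (top.ts * tv_top + left.ts * tv_left) / denom
    return Offset(ls, ts)

# usage: offsets measured against the top and left neighbours of one tile
top_meas = Offset(ls=3.0, ts=-1.0)    # Ls1, Ts1
left_meas = Offset(ls=2.0, ts=-2.0)   # Ls2, Ts2
print(weighted_offset(top_meas, left_meas, tv_top=850.0, tv_left=420.0))

Field-of-view images in the first row or first column have only one neighbour; presumably only that single measurement is used there, although the patent does not spell this case out.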
the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all modifications and equivalents of the present invention, which are made by the contents of the present specification and the accompanying drawings, or directly/indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A bright field microscope panoramic image alignment algorithm based on total variation area selection, characterized by comprising the following steps:
S1, inputting a plurality of images to be stitched and aligned;
S2, extracting the a priori overlapping region;
S3, extracting the feature area by quadratic total variation;
S4, calculating the relative offset with the MSE (mean square error);
S5, calculating the global image offset;
S6, stitching the panoramic image result.
2. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 1, characterized in that, in step S2, the a priori overlapping region at the boundaries of the images to be stitched is extracted as the image alignment search region.
3. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 1, characterized in that step S3 specifically comprises the following steps:
S31, determining an area with rich content and strong image edge information as the alignment template;
S32, using the gray-difference information of the upper, lower, left and right neighbors of each pixel, performing a convolution operation on the image with the Sobel operator to calculate the gradients in the x and y directions respectively;
S33, adopting a squared-gradient calculation to improve sensitivity to strong edges, and calculating the gradient magnitude at each point of the image;
S34, performing a convolution operation on the squared gradient magnitude at each point of the image to obtain the block quadratic total variation feature map;
S35, taking, in the reference image, the region centered on the maximum-value position of the block quadratic total variation feature map as the alignment template.
4. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 3, characterized in that, in step S32, the Sobel operators in the x and y directions are:
S_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \quad S_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
5. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 4, characterized in that, in step S33, the squared gradient is calculated as:
G = (S_x * I)^2 + (S_y * I)^2
where I is the reference image and * denotes the convolution operation.
6. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 5, characterized in that, in step S34, the block quadratic total variation feature map is
TV = U * G
wherein:
h × w is the height × width of the alignment template region;
U is the all-ones matrix operator of the alignment template size, i.e.
U = \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix}_{h \times w}
7. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 1, characterized in that step S4 calculates the relative offset with the MSE (mean square error) and specifically comprises the following steps:
S41, confirming the a priori overlapping region of the images to be aligned;
S42, letting the alignment template slide over the a priori overlapping region;
S43, calculating the similarity between the two from the pixel information, measured by the MSE;
S44, confirming the point with the minimum MSE as the required optimal alignment position.
8. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 7, characterized in that, in step S43, the MSE is calculated as:
MSE(i, j) = \frac{1}{h w} \sum_{x=1}^{h} \sum_{y=1}^{w} [ T(x, y) - V(x+i, y+j) ]^2
wherein:
(i, j) are the coordinates of each displacement point;
(x, y) are the corresponding position coordinates in the alignment template T and the a priori overlapping region V;
h and w are the height and width of the alignment template T, respectively.
9. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 1, characterized in that step S5 specifically comprises the following steps:
S51, calculating the left-right and up-down offsets between the field-of-view image and the image above it;
S52, calculating the left-right and up-down offsets between the field-of-view image and the image to its left;
S53, calculating the left-right and up-down offsets of the field-of-view image by bilateral total variation weighting.
10. The bright field microscope panoramic image alignment algorithm based on total variation area selection according to claim 9, characterized in that, in step S53, the left-right and up-down offsets of the field-of-view image are calculated as:
left-right offset: Ls_{i,j} = (Ls1 * TV_T + Ls2 * TV_L) / (TV_T + TV_L);
up-down offset: Ts_{i,j} = (Ts1 * TV_T + Ts2 * TV_L) / (TV_T + TV_L);
wherein:
Ls1 and Ts1 are the left-right and up-down offsets calculated between field-of-view image I_{i,j} and the image above it, I_{i-1,j}; Ls2 and Ts2 are the left-right and up-down offsets calculated between I_{i,j} and the image to its left, I_{i,j-1}; and, according to the block quadratic total variation formula TV = U * G, the total variations of the alignment templates of the top and left images are TV_T and TV_L, respectively.
CN202110674960.6A 2021-06-17 2021-06-17 Bright field microscope panoramic image alignment algorithm based on total variation area selection Active CN113362362B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110674960.6A CN113362362B (en) 2021-06-17 2021-06-17 Bright field microscope panoramic image alignment algorithm based on total variation area selection


Publications (2)

Publication Number Publication Date
CN113362362A (en) 2021-09-07
CN113362362B (en) 2022-06-14

Family

ID=77535020

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110674960.6A Active CN113362362B (en) 2021-06-17 2021-06-17 Bright field microscope panoramic image alignment algorithm based on total variation area selection

Country Status (1)

Country Link
CN (1) CN113362362B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102236887A (en) * 2011-03-11 2011-11-09 贵州大学 Motion-blurred image restoration method based on rotary difference and weighted total variation
CN107274337A (en) * 2017-06-20 2017-10-20 长沙全度影像科技有限公司 A kind of image split-joint method based on improvement light stream
WO2019010932A1 (en) * 2017-07-14 2019-01-17 华中科技大学 Image region selection method and system favorable for fuzzy kernel estimation
CN109345496A (en) * 2018-09-11 2019-02-15 中国科学院长春光学精密机械与物理研究所 A kind of image interfusion method and device of total variation and structure tensor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
葛仕明 et al.: "Seam elimination method based on gradient fields", Journal of Computer-Aided Design & Computer Graphics *
郑精灵 et al.: "Research on the application of the total variation algorithm in image inpainting", Journal of Computer-Aided Design & Computer Graphics *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744133A (en) * 2021-09-13 2021-12-03 烟台艾睿光电科技有限公司 Image splicing method, device and equipment and computer readable storage medium
CN114708206A (en) * 2022-03-24 2022-07-05 成都飞机工业(集团)有限责任公司 Method, device, equipment and medium for identifying placing position of autoclave molding tool

Also Published As

Publication number Publication date
CN113362362B (en) 2022-06-14

Similar Documents

Publication Publication Date Title
CN110223330B (en) Registration method and system for visible light and infrared images
Ahmed Comparative study among Sobel, Prewitt and Canny edge detection operators used in image processing
EP1901228B1 (en) Apparatus, method and program for image matching
CN113362362B (en) Bright field microscope panoramic image alignment algorithm based on total variation area selection
US8385687B1 (en) Methods for determining a transformation between images
US20090209833A1 (en) System and method for automatic detection of anomalies in images
CN107424142A (en) A kind of weld joint recognition method based on saliency detection
CN112991176B (en) Panoramic image splicing method based on optimal suture line
Ali et al. Illumination invariant optical flow using neighborhood descriptors
CN103258321A (en) Image stitching method
US20150254854A1 (en) Camera calibration method and apparatus using a color-coded structure
CN111652844A (en) X-ray defect detection method and system based on digital image region growth
Anzid et al. A new SURF-based algorithm for robust registration of multimodal images data
Yang et al. Alignment of challenging image pairs: Refinement and region growing starting from a single keypoint correspondence
Saalfeld Computational methods for stitching, alignment, and artifact correction of serial section data
CN117635421A (en) Image stitching and fusion method and device
Zhang et al. A simple yet effective image stitching with computational suture zone
CN117078726A (en) Different spectrum image registration method based on edge extraction
Duan et al. Automatic object and image alignment using Fourier descriptors
RU2647645C1 (en) Method of eliminating seams when creating panoramic images from video stream of frames in real-time
CN115082314A (en) Method for splicing optical surface defect images in step mode through self-adaptive feature extraction
Sarkar et al. A robust method for inter-marker whole slide registration of digital pathology images using lines based features
CN118015237B (en) Multi-view image stitching method and system based on global similarity optimal seam
Han et al. Guided filtering based data fusion for light field depth estimation with L0 gradient minimization
Acosta et al. Intensity-based matching and registration for 3D correlative microscopy with large discrepancies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant