CN110544204A - image splicing method based on block matching - Google Patents

Image splicing method based on block matching

Info

Publication number
CN110544204A
Authority
CN
China
Prior art keywords
image
matching
images
blocks
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910698570.5A
Other languages
Chinese (zh)
Inventor
黄茜
王林尧
胡志辉
师聪颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201910698570.5A priority Critical patent/CN110544204A/en
Publication of CN110544204A publication Critical patent/CN110544204A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/25 - Fusion techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image splicing method based on block matching, which comprises four stages: image acquisition and preprocessing; cutting and matching of the blocks to be matched; screening of the matching block pairs; and registration and fusion of the images. The image acquisition and preprocessing stage comprises acquiring an image sequence and preprocessing operations such as graying and filtering of the images. The cutting and matching stage comprises cutting the blocks to be matched and matching them. The screening stage comprises screening the best matching block pairs out of the matching block pairs, determining the precise matching block pairs from the best matching block pairs, and calculating the coordinate differences of the precise matching block pairs and the displacement of the moving target. The registration and fusion stage comprises determining the splicing line and the overlapping region, and splicing the images with a multi-resolution algorithm. The method can effectively remove the influence of a complex background on the splicing, and obtains a more accurate splicing result even when the features of the shot target are not distinctive.

Description

Image splicing method based on block matching
Technical Field
The invention relates to the field of computer vision and digital image processing, in particular to an image splicing method based on block matching.
Background
For long, large-scale moving objects such as trucks, motor-train units, high-speed trains and large ships, a fixed camera cannot capture the whole object in a single image because of its limited field of view. The moving object therefore has to be shot continuously, and after shooting the images are spliced in chronological order.
Existing image splicing techniques are generally based on feature-point extraction and matching. When the background does not change, however, the feature points of the background reappear in every image, and a feature-point-based method treats these repeated backgrounds as the overlapping regions to be fused, so the spliced image is severely distorted.
How to splice images accurately when a fixed camera shoots a moving target against an unchanged background therefore has important research significance and practical value.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an image splicing method based on block matching, which can effectively remove the influence of a complex background on the splicing, obtains an accurate splicing result even when the features of the shot object are not distinctive, and has good robustness.
The purpose of the invention is achieved by the following technical scheme. An image stitching method based on block matching comprises the following steps:
S1. Fix a camera, shoot an object moving horizontally in front of it, and store the shot original images in chronological order;
S2. Extract two images to be spliced from the original images and preprocess each to obtain a first image and a second image; divide the second image into m × n image blocks to be matched, all of the same size, and divide the first image into the corresponding m rows; match all blocks to be matched of each row of the second image, in sequence, against that row of the first image to obtain an initial set of matching block pairs;
S3. Eliminate wrongly matched block pairs and block pairs lying in the background, and record the best matching block pairs; screen the precise matching block pairs out of the best matching block pairs and calculate the displacement of the moving object;
S4. Determine the overlapping region from the displacement of the moving object and splice the images accordingly.
Preferably, the preprocessing comprises graying, median filtering and bilateral filtering, performed in sequence.
Preferably, in step S2 all blocks to be matched in each row of the second image are matched in sequence against that row of the first image with a matching algorithm based on the squared-difference metric, as follows:
S2-1. Read the first image H1 and the second image H2, and initialize a displacement array C;
S2-2. Set the minimum sum-of-squares error Smin to infinity;
S2-3. Select a template matching block Pi,j in the second image H2; in row i of the first image H1 select, with a step of several pixels, candidate blocks Qi,k of the same size as Pi,j; compute the sum-of-squares error S between Qi,k and Pi,j, and if S < Smin set Smin = S;
S2-4. Repeat step S2-3 until all candidate blocks Qi,k in row i of the first image H1 have been traversed; the block pair corresponding to Smin is the best matching block pair, and its coordinate difference, i.e. its displacement, is stored in array C;
S2-5. Repeat steps S2-3 and S2-4 until all blocks Pi,j of the second image H2 have been traversed; all best matching block pairs found in this process form the initial set of matching block pairs.
Preferably, in step S3 the wrongly matched block pairs and the block pairs in the background are eliminated as follows:
S3-1-1. Initialize an overlap-region array R;
S3-1-2. Read the displacement array C obtained in step S2;
S3-1-3. Check in sequence whether each value Ci,j of array C is smaller than a threshold a; if so, the pair is a wrong match or a background match and is removed;
S3-1-4. Take the pairwise absolute differences of the four values Ci,j, Ci+1,j, Ci,j+1 and Ci+1,j+1, i.e. |Ci,j − Ci,j+1|, |Ci,j − Ci+1,j+1|, |Ci,j+1 − Ci+1,j+1| and |Ci+1,j − Ci+1,j+1|, and check whether they are smaller than a threshold b; if so, merge the corresponding precise matching blocks in images H1 and H2 respectively and store the image pixel values in array R;
S3-1-5. Repeat S3-1-3 and S3-1-4 until all values of array C have been traversed.
Preferably, in step S3 the precise matching block pairs are screened out of the best matching block pairs and the displacement of the moving object is calculated as follows:
S3-2-1. Read the overlap-region array R;
S3-2-2. For each element of array R take out the corresponding precise matching block Ni,j of the first image H1, expand its boundary by c pixels horizontally and d pixels vertically, search with a normalized cross-correlation algorithm for the region Mi,j most similar to Ni,j within the region formed by all precise matching blocks of the second image H2, and compute the correlation coefficient p of Mi,j;
S3-2-3. If p is larger than the threshold, Mi,j and Ni,j are considered matched and the displacement between them is calculated and stored; otherwise the pair is discarded;
S3-2-4. Repeat steps S3-2-2 and S3-2-3 until all elements of array R have been traversed; discard the maximum and minimum of the displacements of the matched block pairs and take the mean of the remaining displacements as the displacement wg of the moving object.
Preferably, in step S4 the overlapping region is determined from the displacement of the moving object as follows:
S4-1-1. Keep columns W−wg+1 through the last column of the first original image I1 as image B1, where W is the total number of columns of I1;
S4-1-2. Keep columns 1 through wg of the second original image I2 as image B2;
S4-1-3. Columns 1 through W−wg of I1 form image G10 and columns wg+1 through the last column of I2 form image G20; G10 and G20 are the overlapping regions.
Further, the overlapping region determined in step S4 is fused with a multi-resolution fusion algorithm and the fused images are spliced, as follows:
S4-2-1. Perform an N-level Laplacian pyramid decomposition of G10 and G20 to obtain the decomposed images L10, L11, …, L1N and L20, L21, …, L2N;
S4-2-2. Generate a region image D0 of the same size as G10, filled white to the left of its center line and black to the right, and perform an N-level Gaussian pyramid decomposition to obtain images D1, …, DN;
S4-2-3. Fuse the Laplacian pyramid images of each level according to the formula LMl = Dl·L2l + (1 − Dl)·L1l;
S4-2-4. Reconstruct the fused image GM from the fused Laplacian pyramid images LMl, l ∈ {1, 2, …, N};
S4-2-5. The spliced image is [B2, GM, B1].
Preferably, when several images need to be spliced, the method comprises:
forming an image sequence from the shot images in chronological order;
extracting the two earliest images from the sequence and splicing them into a mosaic;
deleting the two images from the sequence and inserting the mosaic into the sequence according to the acquisition time of the two images;
splicing the images of the sequence in this way until only one image remains.
Compared with the prior art, the invention has the following advantages and beneficial effects.
The two images to be spliced are divided into blocks and matched to obtain an initial set of matching block pairs; the set is then screened by matching quality and by the characteristics of the features in the background image to obtain the best matching block pairs, and the precise matching block pairs of the overlapping region are obtained from the relation between each block and its neighbouring blocks. The influence of a complex background on the splicing is thus effectively removed, and an acceptable splicing result is obtained even when the features of the shot moving object are not distinctive. In addition, once the overlapping region is obtained it is fused with a multi-resolution fusion algorithm and the fused region is then used for splicing, so the spliced image has a higher degree of fidelity.
Drawings
FIG. 1 is a flow chart of the method of an embodiment of the present invention.
FIG. 2 is the series of pictures of a moving car taken in this embodiment.
FIG. 3 shows the preprocessing results of this embodiment.
FIG. 4 is a schematic diagram of the division of the grayscale images H1 and H2 in this embodiment.
FIG. 5 is a flowchart of the matching of the blocks to be matched in this embodiment.
FIG. 6 is a flowchart of deleting wrong and background matching blocks and screening the best matching block pairs in this embodiment.
FIG. 7 is a flowchart of screening the precise matching block pairs out of the best matching block pairs and calculating the displacement of the car in this embodiment.
FIG. 8 shows the pairs of precisely matched blocks obtained with the method of FIG. 7.
FIG. 9 shows the decomposed images obtained by Laplacian pyramid decomposition in this embodiment.
FIG. 10 shows the region image generated in the fusion process and its Gaussian pyramid decomposition.
FIG. 11 is a schematic diagram of the splicing in this embodiment.
FIG. 12 is the complete car image obtained by stitching the pictures.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Example 1
This embodiment takes an automobile as the object and provides an automobile image stitching method based on block matching. The method overcomes the influence of background feature points, achieves accurate splicing, and obtains an accurate splicing result even when the features of the shot object are not distinctive. It is described below with reference to the accompanying drawings.
The automobile image stitching method based on block matching comprises four stages; the overall flow is shown in FIG. 1, and the steps of each stage are described below with reference to the flow chart.
Stage 1: image acquisition and preprocessing
This stage comprises acquiring the image sequence and preprocessing the images, specifically:
S1. Fix a camera and shoot a group of images of a moving automobile. In this embodiment 6 images are collected, as shown in FIG. 2, and written into an image sequence in order of acquisition time.
S2. Preprocess the first image I1 and the second image I2 of the sequence by applying, in order, graying, median filtering and bilateral filtering to each; the preprocessed images are denoted H1 and H2, and the result is shown in FIG. 3.
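The graying and median-filtering part of this preprocessing can be sketched in plain NumPy. This is only an illustrative sketch: the patent does not name a specific graying formula (BT.601 weights are assumed here), and the bilateral filter is omitted because in practice a library routine would be used for it.

```python
import numpy as np

def to_gray(img):
    # ITU-R BT.601 luminance weights; the exact graying formula is an
    # assumption, as the patent only says "graying".
    return img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114

def median3(gray):
    # 3x3 median filter built from nine shifted views of the image;
    # the one-pixel border is left unfiltered for brevity.
    h, w = gray.shape
    stack = np.stack([gray[i:h - 2 + i, j:w - 2 + j]
                      for i in range(3) for j in range(3)])
    out = gray.copy()
    out[1:-1, 1:-1] = np.median(stack, axis=0)
    return out
```

A median filter of this kind removes the isolated impulse noise that would otherwise produce spurious block matches in the later stages.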
Stage 2: cutting and matching of the blocks to be matched
This stage comprises cutting the blocks to be matched and matching them, specifically:
S3. Divide the grayscale image H2 into m × n image blocks Pi,j of equal size. In this embodiment m and n are both 10, as shown on the right of FIG. 4. Divide the grayscale image H1 into the corresponding m rows, as shown on the left of FIG. 4.
S4. Match all blocks Pi,j of image H2, in sequence, against row i of image H1 with a matching algorithm based on the squared-difference metric. The algorithm, shown in FIG. 5, is as follows:
S4-1. Read the grayscale images H1 and H2 and initialize a displacement array C;
S4-2. Set the minimum sum-of-squares error Smin to infinity;
S4-3. Select a template matching block Pi,j in image H2; in row i of image H1 select, with a step of 2 pixels, candidate blocks Qi,k of the same size as Pi,j; compute the sum-of-squares error S between Qi,k and Pi,j, and if S < Smin set Smin = S;
S4-4. Repeat step S4-3 until all candidate blocks Qi,k in row i of image H1 have been traversed; the block pair corresponding to Smin is the best matching block pair, and its coordinate difference, i.e. its displacement, is stored in array C;
S4-5. Repeat steps S4-3 and S4-4 until all blocks Pi,j of H2 have been traversed; store the best matching block pairs and their coordinate differences in array C.
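The squared-difference search of steps S4-1 to S4-5 can be sketched as follows for a single template block against one row band; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def ssd_match_row(row_band, block, step=2):
    # Slide `block` along a row band of H1 with the given step (2 pixels
    # in this embodiment) and return the column offset with the minimum
    # sum-of-squared differences, together with that minimum (Smin).
    bh, bw = block.shape
    s_min, best_x = np.inf, 0
    for x in range(0, row_band.shape[1] - bw + 1, step):
        cand = row_band[:bh, x:x + bw].astype(np.float64)
        s = np.sum((cand - block) ** 2)
        if s < s_min:
            s_min, best_x = s, x
    return best_x, s_min
```

The displacement stored in array C is the difference between the template's column in H2 and `best_x` in H1.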
Stage 3: screening of the matching block pairs
This stage comprises screening the best matching block pairs out of the matching block pairs, determining the precise matching block pairs from them, and calculating the coordinate differences of the precise matching block pairs and the displacement of the automobile, specifically:
S5. Eliminate the wrongly matched block pairs and the block pairs in the background, and record the best matching block pairs. Referring to FIG. 6, this embodiment uses the following method:
S5-1. Initialize an overlap-region array R;
S5-2. Read the displacement array C obtained in step S4;
S5-3. Check in sequence whether each value Ci,j of array C is smaller than a threshold a (a = 150 in this embodiment); if so, the pair is a wrong match or a background match and is removed;
S5-4. Take the pairwise absolute differences of the four values Ci,j, Ci+1,j, Ci,j+1 and Ci+1,j+1, i.e. |Ci,j − Ci,j+1|, |Ci,j − Ci+1,j+1|, |Ci,j+1 − Ci+1,j+1| and |Ci+1,j − Ci+1,j+1|, and check whether they are smaller than a threshold b (b = 150 in this embodiment); if so, merge the corresponding precise matching blocks in images H1 and H2 respectively and store the gray values in two arrays;
S5-5. Repeat S5-3 and S5-4 until all values of array C have been traversed.
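The two screening tests of steps S5-3 and S5-4 can be sketched as a mask over the displacement array. Returning a boolean mask instead of the merged pixel arrays is a simplification for illustration; the thresholds default to the embodiment's example value of 150.

```python
import numpy as np

def screen_blocks(C, a=150, b=150):
    # C is the m x n array of per-block displacements from step S4.
    # Step S5-3: displacements below threshold a are treated as wrong or
    # background matches and dropped. Step S5-4: a 2x2 group of neighbouring
    # displacements is kept as precise-match candidates only if all pairwise
    # absolute differences are below threshold b.
    m, n = C.shape
    kept = C >= a
    precise = np.zeros((m, n), dtype=bool)
    for i in range(m - 1):
        for j in range(n - 1):
            quad = C[i:i + 2, j:j + 2].astype(np.float64).ravel()
            if kept[i:i + 2, j:j + 2].all() and \
               (np.abs(quad[:, None] - quad[None, :]) < b).all():
                precise[i:i + 2, j:j + 2] = True
    return precise
```

Blocks on the stationary background match themselves at near-zero displacement, which is why the "smaller than a" test removes them.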
S6. Screen the precise matching block pairs out of the best matching block pairs and calculate the automobile displacement wg. Referring to FIG. 7, this embodiment uses the following method:
S6-1. Read the array R obtained in step S5;
S6-2. For each element of array R take out the corresponding precise matching block Ni,j of H1, expand its boundary by c pixels horizontally and d pixels vertically (c = 50 and d = 10 in this embodiment), search with a normalized cross-correlation algorithm for the region Mi,j most similar to Ni,j within the region formed by all precise matching blocks of H2, and compute the correlation coefficient p of Mi,j;
S6-3. If p ≥ 0.6, the threshold set in this embodiment, Mi,j and Ni,j are considered matched and the displacement between them is calculated and stored; otherwise the pair is discarded;
S6-4. Repeat steps S6-2 and S6-3 until all elements of array R have been traversed; discard the maximum and minimum of the displacements of the matched block pairs, and the mean of the remaining displacements is the displacement wg of the automobile.
FIG. 8 shows the precisely matched block pairs calculated for the two images of FIG. 4.
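The normalized cross-correlation used in step S6-2 can be sketched as below; `best_ncc_shift` stands in for the search over the expanded region, and both names are illustrative.

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation coefficient of two equal-sized patches,
    # the similarity measure against which the threshold p >= 0.6 is tested.
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_ncc_shift(region, patch):
    # Exhaustively search `region` for the placement of `patch` with the
    # highest correlation coefficient; returns (dy, dx, p).
    ph, pw = patch.shape
    best = (0, 0, -1.0)
    for dy in range(region.shape[0] - ph + 1):
        for dx in range(region.shape[1] - pw + 1):
            p = ncc(region[dy:dy + ph, dx:dx + pw], patch)
            if p > best[2]:
                best = (dy, dx, p)
    return best
```

Unlike the squared-difference score of stage 2, NCC is invariant to affine brightness changes, which makes it a natural choice for this refinement step.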
Stage 4: registration and fusion of the images
This stage comprises determining the overlapping region and splicing the images with a multi-resolution fusion algorithm, specifically:
S7. Splice I1 and I2 according to the automobile displacement wg, as follows:
S7-1. Keep columns W−wg+1 through the last column of the first original image I1 as image B1, where W is the total number of columns of I1;
S7-2. Keep columns 1 through wg of the second original image I2 as image B2;
S7-3. Columns 1 through W−wg of I1 form image G10 and columns wg+1 through the last column of I2 form image G20; G10 and G20 are the overlapping regions.
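The column bookkeeping of steps S7-1 to S7-3 can be made explicit as follows, restated with 0-based half-open ranges and under the assumption that both original images are W columns wide (the function name is illustrative):

```python
def split_columns(W, wg):
    # Column ranges implied by steps S7-1..S7-3:
    #   B1  = I1[:, W-wg:W]   kept right part of the first image
    #   B2  = I2[:, 0:wg]     kept left part of the second image
    #   G10 = I1[:, 0:W-wg],  G20 = I2[:, wg:W]   the two overlap regions
    b1 = (W - wg, W)
    b2 = (0, wg)
    g10 = (0, W - wg)
    g20 = (wg, W)
    return b1, b2, g10, g20
```

Note that the final mosaic [B2, GM, B1] is wg + (W − wg) + wg = W + wg columns wide, i.e. the first image extended by the object's displacement.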
If fusion were not required, either of the two overlapping regions could simply be combined with B1 and B2 to obtain the spliced image. To improve the realism of the spliced image, however, this embodiment fuses the two overlapping regions with a multi-resolution fusion algorithm before splicing, as follows:
S7-3-1. Perform an N-level Laplacian pyramid decomposition of G10 and G20 to obtain the decomposed images L10, L11, …, L1N and L20, L21, …, L2N. In this embodiment N = 4; the decomposed images are shown in FIG. 9.
S7-3-2. Generate a region image D0 of the same size as G10. As shown in FIG. 10, the image is filled white to the left of its center line and black to the right; an N-level Gaussian pyramid decomposition yields images D1, …, DN.
S7-3-3. Fuse the Laplacian pyramid images of each level according to the formula LMl = Dl·L2l + (1 − Dl)·L1l.
S7-3-4. Reconstruct the fused image GM from the fused Laplacian pyramid images LMl, l ∈ {1, 2, …, N}.
After the fused image GM is obtained, referring to FIG. 11, the reserved B1 and B2 are combined with GM to obtain the image spliced from the current first and second images.
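Steps S7-3-1 to S7-3-4 can be sketched in NumPy as below. This is a simplified stand-in: 2×2 averaging and nearest-neighbour expansion replace the Gaussian reduce/expand filters of a true Laplacian pyramid, and the function names are illustrative.

```python
import numpy as np

def down(img):
    # 2x downsample by 2x2 averaging (stand-in for Gaussian reduce).
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def up(img, shape):
    # Nearest-neighbour expand back to `shape` (stand-in for Gaussian expand).
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def blend_pyramid(g10, g20, levels=4):
    # Multi-resolution blend of the two overlap images following
    # S7-3-1..S7-3-4: Laplacian pyramids of G10/G20, Gaussian pyramid of a
    # left-white / right-black mask D0, per-level fusion
    # LM_l = D_l*L2_l + (1-D_l)*L1_l, then reconstruction.
    mask = np.zeros_like(g10, dtype=np.float64)
    mask[:, :g10.shape[1] // 2] = 1.0
    gs1, gs2, gm = [g10.astype(np.float64)], [g20.astype(np.float64)], [mask]
    for _ in range(levels):
        gs1.append(down(gs1[-1])); gs2.append(down(gs2[-1])); gm.append(down(gm[-1]))
    fused = gm[-1] * gs2[-1] + (1 - gm[-1]) * gs1[-1]   # coarsest level
    for l in range(levels - 1, -1, -1):
        l1 = gs1[l] - up(gs1[l + 1], gs1[l].shape)      # Laplacian band of G10
        l2 = gs2[l] - up(gs2[l + 1], gs2[l].shape)      # Laplacian band of G20
        lm = gm[l] * l2 + (1 - gm[l]) * l1              # fused band LM_l
        fused = up(fused, gs1[l].shape) + lm            # reconstruct upward
    return fused
```

Blending band by band in this way hides the seam: low frequencies transition gradually across the mask while high frequencies switch sharply, which is the usual motivation for multi-resolution fusion.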
S8. Replace I1 and I2 in the sequence with the newly spliced image to form a new image sequence.
S9. Repeat steps S2 to S8 until the splicing of the complete automobile is finished. The image obtained after splicing the sequence of FIG. 2 is shown in FIG. 12. As the result shows, the splicing method has a high degree of fidelity and good robustness.
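The outer loop of steps S8 and S9 reduces to a simple fold over the sequence. In this sketch `stitch_pair` stands in for the whole S2-S7 pipeline and is an assumed callable, not something defined by the patent text.

```python
def stitch_sequence(frames, stitch_pair):
    # Repeatedly splice the two earliest frames and put the mosaic back
    # at their position until only one image remains (steps S8-S9).
    seq = list(frames)
    while len(seq) > 1:
        seq = [stitch_pair(seq[0], seq[1])] + seq[2:]
    return seq[0]
```

Because the mosaic is reinserted at the front, each pass always splices the current partial panorama with the next frame in acquisition order.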
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; the components and steps of the examples have been described above in terms of their functionality in order to illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
The technical solution of the present invention, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes media capable of storing program code, such as a USB disk, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.
While the invention has been described with reference to specific embodiments, it is not limited thereto, and various equivalent modifications and substitutions can easily be made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An image stitching method based on block matching, characterized by comprising the following steps:
S1. fixing a camera, shooting an object moving horizontally in front of it, and storing the shot original images in chronological order;
S2. extracting two images to be spliced from the original images and preprocessing each to obtain a first image and a second image; dividing the second image into m × n image blocks to be matched, all of the same size, and dividing the first image into the corresponding m rows; matching all blocks to be matched of each row of the second image, in sequence, against that row of the first image to obtain an initial set of matching block pairs;
S3. eliminating wrongly matched block pairs and block pairs lying in the background, and recording the best matching block pairs; screening the precise matching block pairs out of the best matching block pairs and calculating the displacement of the moving object;
S4. determining the overlapping region from the displacement of the moving object and splicing the images accordingly.
2. The block-matching-based image stitching method according to claim 1, wherein the preprocessing comprises graying, median filtering and bilateral filtering, performed in sequence.
3. The block-matching-based image stitching method according to claim 1, wherein in step S2 all blocks to be matched in each row of the second image are matched in sequence against that row of the first image with a matching algorithm based on the squared-difference metric, as follows:
S2-1. reading the first image H1 and the second image H2, and initializing a displacement array C;
S2-2. setting the minimum sum-of-squares error Smin to infinity;
S2-3. selecting a template matching block Pi,j in the second image H2; in row i of the first image H1 selecting, with a step of several pixels, candidate blocks Qi,k of the same size as Pi,j; computing the sum-of-squares error S between Qi,k and Pi,j, and if S < Smin setting Smin = S;
S2-4. repeating step S2-3 until all candidate blocks Qi,k in row i of the first image H1 have been traversed; the block pair corresponding to Smin is the best matching block pair, and its coordinate difference, i.e. its displacement, is stored in array C;
S2-5. repeating steps S2-3 and S2-4 until all blocks Pi,j of the second image H2 have been traversed; all best matching block pairs found in this process form the initial set of matching block pairs.
4. The block-matching-based image stitching method according to claim 3, wherein in step S3 the wrongly matched block pairs and the block pairs in the background are eliminated as follows:
S3-1-1. initializing an overlap-region array R;
S3-1-2. reading the displacement array C obtained in step S2;
S3-1-3. checking in sequence whether each value Ci,j of array C is smaller than a threshold a; if so, the pair is a wrong match or a background match and is removed;
S3-1-4. taking the pairwise absolute differences of the four values Ci,j, Ci+1,j, Ci,j+1 and Ci+1,j+1, i.e. |Ci,j − Ci,j+1|, |Ci,j − Ci+1,j+1|, |Ci,j+1 − Ci+1,j+1| and |Ci+1,j − Ci+1,j+1|, and checking whether they are smaller than a threshold b; if so, merging the corresponding precise matching blocks in images H1 and H2 respectively and storing the image pixel values in array R;
S3-1-5. repeating S3-1-3 and S3-1-4 until all values of array C have been traversed.
5. The block-matching-based image stitching method according to claim 4, wherein in step S3 the precise matching block pairs are screened out of the best matching block pairs and the displacement of the moving object is calculated as follows:
S3-2-1. reading the overlap-region array R;
S3-2-2. for each element of array R taking out the corresponding precise matching block Ni,j of the first image H1, expanding its boundary by c pixels horizontally and d pixels vertically, searching with a normalized cross-correlation algorithm for the region Mi,j most similar to Ni,j within the region formed by all precise matching blocks of the second image H2, and computing the correlation coefficient p of Mi,j;
S3-2-3. if p is larger than the threshold, considering Mi,j and Ni,j matched and calculating and storing the displacement between them; otherwise discarding the pair;
S3-2-4. repeating steps S3-2-2 and S3-2-3 until all elements of array R have been traversed; discarding the maximum and minimum of the displacements of the matched block pairs and taking the mean of the remaining displacements as the displacement wg of the moving object.
6. The block-matching-based image stitching method according to claim 5, wherein in step S4 the overlapping region is determined according to the displacement of the moving object, with the following steps:
S4-1-1. Retain columns W-wg+1 through the last column of the first original image I1 as image B1, where W is the total number of columns of I1;
S4-1-2. Retain columns 1 through wg of the second original image I2 as image B2;
S4-1-3. Columns 1 through W-wg of I1 form image G10, and columns wg+1 through the last column of I2 form image G20; G10 and G20 are the overlapping regions.
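The column bookkeeping of steps S4-1-1 through S4-1-3 can be made concrete with a small sketch. It assumes, as a simplification, that I1 and I2 have the same width W; column ranges are 1-based and inclusive, matching the claim's wording, and the function name is hypothetical.

```python
def split_columns(W, wg):
    """Steps S4-1-1..S4-1-3: given the total column count W and the moving
    object's displacement wg, return the 1-based inclusive column ranges of
    B1, B2 and the two overlap images G10, G20."""
    B1 = (W - wg + 1, W)   # S4-1-1: last wg columns of I1
    B2 = (1, wg)           # S4-1-2: first wg columns of I2
    G10 = (1, W - wg)      # S4-1-3: overlap region within I1
    G20 = (wg + 1, W)      # S4-1-3: overlap region within I2
    return B1, B2, G10, G20
```

Note that G10 and G20 each span W - wg columns, so the two overlap images have equal width, as required for the pixel-wise fusion of claim 7.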
7. The image stitching method based on block matching according to claim 6, wherein the overlapping region determined in step S4 is fused by a multi-resolution fusion algorithm and the fused images are stitched, with the following steps:
S4-2-1. Perform an N-layer Laplacian pyramid decomposition on G10 and G20 to obtain the decomposed images L10, L11, …, L1N and L20, L21, …, L2N;
S4-2-2. Generate a region image D0 of the same size as G10, filled white to the left of the image center line and black to the right, and perform an N-layer Gaussian pyramid decomposition on it to obtain images D1, …, DN;
S4-2-3. Fuse the Laplacian pyramid decomposition images of each layer according to the formula LMl = Dl·L2l + (1 - Dl)·L1l;
S4-2-4. Reconstruct the fused image GM from the fused Laplacian pyramid decomposition images LMl, l ∈ 1, 2, …, N;
S4-2-5. The stitched image is [B2, GM, B1].
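The per-level blending formula of step S4-2-3 can be sketched pixel-wise. This is only an illustration of the formula LMl = Dl·L2l + (1 - Dl)·L1l for one pyramid level, with images represented as nested lists; the full method also requires the pyramid decomposition and reconstruction of steps S4-2-1 and S4-2-4, which are not shown.

```python
def fuse_levels(L1, L2, D):
    """Step S4-2-3: blend one Laplacian level of each overlap image with
    the Gaussian-pyramid mask D of the same size, pixel-wise:
    LM = D * L2 + (1 - D) * L1."""
    return [[d * l2 + (1 - d) * l1
             for l1, l2, d in zip(row1, row2, rowd)]
            for row1, row2, rowd in zip(L1, L2, D)]
```

Where the mask is white (D = 1) the fused level takes the G20 detail, and where it is black (D = 0) it takes the G10 detail; the blurred Gaussian pyramid of the mask makes the transition between the two halves gradual at each resolution.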
8. The image stitching method based on block matching according to claim 1, wherein when a plurality of images need to be stitched, the method comprises the following steps:
forming the plurality of captured images into an image sequence in time order;
extracting the two temporally adjacent images at the front of the sequence and stitching them to obtain a stitched image;
deleting the two images from the image sequence and inserting the stitched image into the sequence according to the acquisition time of the two images;
extracting and stitching images from the sequence again until only one image remains.
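The iteration of claim 8 reduces to a simple loop over the time-ordered sequence. The sketch below takes the pairwise stitcher of claims 1-7 as an opaque parameter; the function names are hypothetical.

```python
def stitch_sequence(images, stitch_pair):
    """Claim 8: repeatedly take the two earliest images in the
    time-ordered sequence, stitch them with the pairwise method, and put
    the result back in their place, until one panorama remains."""
    seq = list(images)
    while len(seq) > 1:
        merged = stitch_pair(seq[0], seq[1])
        # The merged image replaces the pair at its acquisition-time slot.
        seq = [merged] + seq[2:]
    return seq[0]
```

Because the stitched image is reinserted at the pair's position in the sequence, each iteration shortens the sequence by one, so n images require n - 1 pairwise stitches.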
CN201910698570.5A 2019-07-31 2019-07-31 image splicing method based on block matching Pending CN110544204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910698570.5A CN110544204A (en) 2019-07-31 2019-07-31 image splicing method based on block matching


Publications (1)

Publication Number Publication Date
CN110544204A true CN110544204A (en) 2019-12-06

Family

ID=68710396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910698570.5A Pending CN110544204A (en) 2019-07-31 2019-07-31 image splicing method based on block matching

Country Status (1)

Country Link
CN (1) CN110544204A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190213434A1 (en) * 2007-06-21 2019-07-11 Fotonation Limited Image capture device with contemporaneous image correction mechanism
CN101782969A (en) * 2010-02-26 2010-07-21 浙江大学 Reliable image characteristic matching method based on physical positioning information
CN104796623A (en) * 2015-02-03 2015-07-22 中国人民解放军国防科学技术大学 Method for eliminating structural deviation of stitched video based on pyramid block matching and functional optimization

Non-Patent Citations (1)

Title
Qian Long: "Dynamic vehicle image stitching against a static background", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (7)

Publication number Priority date Publication date Assignee Title
CN111127353A (en) * 2019-12-16 2020-05-08 重庆邮电大学 High-dynamic image ghost removing method based on block registration and matching
CN111127353B (en) * 2019-12-16 2023-07-25 重庆邮电大学 High-dynamic image ghost-removing method based on block registration and matching
CN111161283A (en) * 2019-12-26 2020-05-15 可牛网络技术(北京)有限公司 Method and device for processing picture resources and electronic equipment
CN111161283B (en) * 2019-12-26 2023-08-04 可牛网络技术(北京)有限公司 Picture resource processing method and device and electronic equipment
CN111563867A (en) * 2020-07-14 2020-08-21 成都中轨轨道设备有限公司 Image fusion method for improving image definition
CN113205457A (en) * 2021-05-11 2021-08-03 华中科技大学 Microscopic image splicing method and system
WO2023029113A1 (en) * 2021-08-31 2023-03-09 广东艾檬电子科技有限公司 Image splicing method, terminal device, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
CN110544204A (en) image splicing method based on block matching
CN111063021B (en) Method and device for establishing three-dimensional reconstruction model of space moving target
CN112288875B (en) Rapid three-dimensional reconstruction method for unmanned aerial vehicle mine inspection scene
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
CN110163213B (en) Remote sensing image segmentation method based on disparity map and multi-scale depth network model
CN114782691A (en) Robot target identification and motion detection method based on deep learning, storage medium and equipment
CN109872278B (en) Image cloud layer removing method based on U-shaped network and generation countermeasure network
CN110223370B (en) Method for generating complete human texture map from single-view picture
CN111882668B (en) Multi-view three-dimensional object reconstruction method and system
CN111553858B (en) Image restoration method and system based on generation countermeasure network and application thereof
CN111539887A (en) Neural network image defogging method based on mixed convolution channel attention mechanism and layered learning
CN107369204B (en) Method for recovering basic three-dimensional structure of scene from single photo
CN110021065A (en) A kind of indoor environment method for reconstructing based on monocular camera
CN108416803A (en) A kind of scene depth restoration methods of the Multi-information acquisition based on deep neural network
CN112561909B (en) Fusion variation-based image countermeasure sample generation method
CN106530407A (en) Three-dimensional panoramic splicing method, device and system for virtual reality
CN112288788A (en) Monocular image depth estimation method
CN112163996A (en) Flat-angle video fusion method based on image processing
CN111105354A (en) Depth image super-resolution method and device based on multi-source depth residual error network
CN113962878B (en) Low-visibility image defogging model method
CN111105451A (en) Driving scene binocular depth estimation method for overcoming occlusion effect
CN108924434B (en) Three-dimensional high dynamic range image synthesis method based on exposure transformation
CN117115359B (en) Multi-view power grid three-dimensional space data reconstruction method based on depth map fusion
CN115578260B (en) Attention method and system for directional decoupling of image super-resolution
CN114004848A (en) Multi-view remote sensing topographic image point cloud reconstruction method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191206