CN112037134A - Image splicing method for background homogeneous processing, storage medium and terminal - Google Patents

Image splicing method for background homogeneous processing, storage medium and terminal Download PDF

Info

Publication number
CN112037134A
Authority
CN
China
Prior art keywords
image
spliced
background
points
feature points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010948902.3A
Other languages
Chinese (zh)
Other versions
CN112037134B (en)
Inventor
李伟斌
胡斌
原可义
魏东
王跃军
赵凡
马洪林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Computational Aerodynamics Institute of China Aerodynamics Research and Development Center
Original Assignee
Computational Aerodynamics Institute of China Aerodynamics Research and Development Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Computational Aerodynamics Institute of China Aerodynamics Research and Development Center
Priority to CN202010948902.3A
Publication of CN112037134A
Application granted
Publication of CN112037134B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an image stitching method with background homogenization processing, a storage medium and a terminal, belonging to the technical field of image processing. A first gray value of the background region of an image to be stitched is obtained, and this gray value is assigned to all pixel points of the background region, achieving pixel homogenization of the image background. The gray level of the background region thus stays consistent and contributes no feature points, which avoids false matching between background feature points and the blade feature points of the target region. Meanwhile, since no blur preprocessing is applied, loss of target-region feature points is prevented. The method effectively reduces the probability of feature-point mismatching, ensures a sufficient number of feature points, greatly improves the success rate, speed and precision of image stitching, and achieves the aim of obtaining a global high-definition image of the blade.

Description

Image splicing method for background homogeneous processing, storage medium and terminal
Technical Field
The invention relates to the technical field of image processing, and in particular to a fan blade image stitching method with background homogenization processing, a storage medium and a terminal.
Background
The blade is one of the key components by which a fan (wind turbine) effectively captures wind energy, so ensuring that its structure remains sound is essential. During normal in-service operation, the blade acts as a load-bearing structural member that must withstand strong wind loads while also suffering erosion from the external environment, such as rain wash, sand and stone impact and strong ultraviolet irradiation, so blade damage occurs easily; the main damage types include cracks, gel-coat aging damage, sand holes and the like. Research on fan blade damage detection is therefore of great significance for improving the stability, safety and wind-energy utilization efficiency of the fan, and has great economic value.
When detecting fan blade damage, damage information including small-scale cracks must be obtained, which requires a high-definition image of the blade. However, the resolution of current cameras can hardly meet this requirement, so image stitching technology must be applied to join multiple local images of the blade into a global high-definition image.
Image stitching mainly comprises three steps: image preprocessing, image registration and image fusion. The purpose of preprocessing is to eliminate the influence of noise on the result, and a popular method is blur preprocessing. However, for in-service fan blade images this preprocessing has the following problems: (1) the background and target of an in-service fan blade image are monotonous and the number of feature points is small, and blur preprocessing (mean smoothing and Gaussian blur) may erase feature point information, reducing the number of feature points captured; (2) blur preprocessing cannot increase the difference between similar feature points of the image background and the target, so false matches between the background and the target of the two images to be stitched may still occur. These problems directly affect the feature point matching result and the stitching precision, and in severe cases cause stitching to fail altogether. Therefore, to address the scarcity of blade-image feature points and the unsuitability of the existing feature-extraction preprocessing, an applicable stitching method needs to be developed that increases the number of extracted feature points and eliminates false matching between background and target.
Disclosure of Invention
The invention aims to solve the problem that the characteristic interference of a background on a target cannot be eliminated in the image splicing process of the existing fan blade, and provides an image splicing method, a storage medium and a terminal for background homogeneous processing.
The purpose of the invention is realized by the following technical scheme: an image stitching method with background homogenization processing, the method comprising:
acquiring a first gray value of a background area of an image to be spliced;
and assigning the first gray value to all pixel points of the background area to realize pixel homogenization treatment on the image background area so as to realize image splicing.
As an option, the first gray value is specifically a gray average value of all pixel points in the background region.
As an option, before the step of performing homogenization processing on the pixels in the image background area, the method further includes: and carrying out segmentation processing on the image based on the image gray value feature or the target boundary feature or the texture feature or the morphological feature to obtain a background area and a target area of the image.
As an option, the step of performing homogenization processing on the pixels in the image background area includes a feature point extraction step: and calculating the gray value change of pixels in the target area of the moving window in each direction, and determining whether angular points exist in the window according to the gray value change of the area in the window so as to extract the characteristic points.
As an option, after the extracting of the feature points, the method further includes: and matching the feature points of the images to be spliced according to the feature points of the right region of the images to be spliced of the front frame and the feature points of the left region of the images to be spliced of the rear frame.
As an option, the feature point matching includes: and carrying out gray level comparison on the feature points of the right region of the image to be spliced in the front frame and the feature points of the left region of the image to be spliced in the rear frame, and if the gray level difference is smaller than a threshold value, determining that the feature points are matched.
As an option, the feature point matching step specifically includes:
and calculating Euclidean distances of all feature points in the two images to be spliced, wherein the two feature points corresponding to the minimum Euclidean distances are matched feature points.
As an option, the step of image stitching is further included after the step of feature point matching: determining a transformation matrix of the images to be spliced according to the matching characteristic points in the images to be spliced of the front frame and the images to be spliced of the rear frame; and transforming the image to be spliced of the front frame or the image to be spliced of the rear frame according to the transformation matrix and carrying out image fusion, thereby realizing splicing of the image to be spliced of the front frame and the image to be spliced of the rear frame.
It should be further noted that the technical features corresponding to the above options can be combined with each other or replaced to form a new technical solution.
The invention also includes a storage medium having stored thereon computer instructions which, when executed, perform the steps of the above-described method of background homogeneous image stitching.
The invention also includes a terminal, which includes a memory and a processor, wherein the memory stores computer instructions capable of running on the processor, and the processor executes the steps of the image stitching method for background homogeneous processing when running the computer instructions.
Compared with the prior art, the invention has the beneficial effects that:
(1) A first gray value of the non-feature pixel points in the background region of the image to be stitched is acquired, and this first gray value is assigned to all pixel points of the background region, achieving pixel homogenization of the image background. The gray level of the background region stays consistent and free of feature point interference, so the background region does not disturb the blade feature points of the target region. Meanwhile, since no blur preprocessing is applied, loss of target-region feature points is prevented. The probability of feature-point mismatching is thus effectively reduced, the sufficiency of the number of feature points is guaranteed, the success rate, speed and precision of image stitching are greatly improved, and the purpose of obtaining a global high-definition image of the blade is achieved.
(2) After the extraction of the feature points is realized, the feature points of the image to be spliced are matched according to the feature points of the right region of the image to be spliced of the front frame and the feature points of the left region of the image to be spliced of the rear frame, so that the calculated amount of feature matching can be reduced, the image splicing speed is increased, and the image splicing efficiency is improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention.
FIG. 1 is a flowchart of a method of example 1 of the present invention;
FIG. 2 is a schematic diagram of an image to be stitched of a previous frame;
FIG. 3 is a diagram of a previous frame to be stitched after background homogenization processing;
FIG. 4 is a diagram illustrating a segmentation result of a background and a target region of an image to be stitched in a previous frame;
FIG. 5 is a schematic diagram of feature points in an image to be stitched of a previous frame;
FIG. 6 is a diagram showing a segmentation result of a background and a target region of an image to be spliced in a later frame;
FIG. 7 is a background-homogenized image of a subsequent frame to be stitched;
FIG. 8 is a schematic diagram of the matching results of feature points of the images to be spliced of the front frame and the rear frame;
fig. 9 is a schematic diagram of image stitching results.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it should be understood that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the description of the present invention, it should be noted that directions or positional relationships indicated by "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", and the like are directions or positional relationships described based on the drawings, and are only for convenience of description and simplification of description, and do not indicate or imply that the device or element referred to must have a specific orientation, be configured and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly stated or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
In addition, the technical features involved in the different embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
In the prior art, the image stitching process requires preprocessing of the image such as mean smoothing and Gaussian blur. However, since the fan blade has few feature points, blur preprocessing may erase the small amount of feature point information on the blade, so the sufficiency of the number of feature points cannot be guaranteed, which hampers the later stitching. Secondly, the mean smoothing adopted in the prior art serves to segment the background from the foreground target; it cannot solve the false matching between the background and the target of the images to be stitched that arises when feature points of the background region resemble those of the target region. Aiming at the problems that blur preprocessing in existing image processing may erase feature point information and cannot increase the difference between similar feature points of the image background and the target, the invention provides a preprocessing method that removes the background feature points. Specifically, based on the idea of reducing the influence of the background region, an image segmentation technique is first adopted to separate the background region from the target region, and the background region is then homogenized so that its gray level stays consistent and contains no interfering feature points. In addition, to reduce the mutual influence of feature points in the target area and increase the image stitching speed, only the feature points of the right part of the image to be stitched of the previous frame and the left part of the image to be stitched of the next frame are considered.
And finally, matching the extracted feature points, and completing image splicing so as to achieve the purpose of obtaining a high-definition image of the fan blade.
Example 1
As shown in fig. 1, in embodiment 1, an image stitching method for background homogenization processing includes the steps of:
S011: acquire a first gray value of the background region of the image to be stitched. Specifically, the first gray value is the gray value of a pixel point that is not a feature point. It can be determined either by directly reading the gray value of a non-feature pixel point in the background region, or by obtaining a second gray value of the pixel points corresponding to feature points in the target region and taking the first gray value as any gray value other than the second gray value. As an option, if the second gray value is close to gray level 255, a first gray value close or equal to 0 is preferred, since it best suppresses residual feature points in the background region. As an option, the first gray value is the gray average of all pixel points in the background region, with the following calculation formula:
$$\bar{u}=\frac{1}{|\Omega\setminus\omega|}\sum_{x\in\Omega\setminus\omega}u(x)$$

In the above formula, $\bar{u}$ denotes the gray-level average, $\Omega\setminus\omega$ denotes the background region, $u$ denotes the gray value of a pixel point, and $x$ denotes a pixel point.
S012: and assigning the first gray value to all pixel points of the background area to realize pixel homogenization treatment on the image background area so as to realize image splicing. As shown in fig. 3, reference numeral 1 in fig. 3 denotes a boundary curve of a background region and a target region of an image to be stitched of a previous frame.
Specifically, the invention performs homogenization on the pixels of the background region of the image to be stitched of the previous frame; a schematic diagram of this image is shown in fig. 2. The homogenization keeps the gray level of the background region of the previous-frame image consistent, so that it contains no interfering feature points and does not disturb the blade feature points of the target region. Meanwhile, since no blur preprocessing is applied, loss of target-region feature points is prevented. This effectively reduces the probability of feature-point mismatching, ensures a sufficient number of feature points, greatly improves the success rate, speed and precision of image stitching, and achieves the aim of obtaining a global high-definition image of the blade.
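As a concrete illustration, steps S011 and S012 (mean gray value of the background, assigned back to every background pixel) can be sketched in a few lines of Python/NumPy. This is a hypothetical sketch, not the patent's implementation; the function name, the toy 4x4 image and the mask layout are assumptions for illustration only:

```python
import numpy as np

def homogenize_background(gray, background_mask):
    """Assign the background's mean gray value (the 'first gray value')
    to every background pixel, leaving the target region untouched.

    gray: 2-D uint8 grayscale image.
    background_mask: boolean array, True on the background region
    (Omega minus omega in the patent's notation).
    """
    out = gray.copy()
    mean_gray = gray[background_mask].mean()           # u-bar over the background
    out[background_mask] = np.uint8(round(mean_gray))  # homogenize the background
    return out

# Toy demo: a 4x4 image whose border is "background" and whose
# interior 2x2 block is the "target region".
img = np.arange(16, dtype=np.uint8).reshape(4, 4)
mask = np.ones((4, 4), dtype=bool)
mask[1:3, 1:3] = False
flat = homogenize_background(img, mask)
```

After the call, every background pixel carries the single mean value while the interior target pixels are unchanged, which is exactly the "no feature points in the background" property the method relies on.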
Further, before the step of performing homogenization processing on the pixels in the image background area, the method further comprises the following steps:
S00: image segmentation processing. Specifically, the image to be stitched of the previous frame is segmented based on the image gray-value feature, the target boundary feature, the texture feature or the morphological feature to obtain the background region and the target region of the image, as shown in fig. 4. As a specific embodiment, the invention implements image segmentation based on image gray-value features, obtaining the background region and the target region by solving the following minimization problem:

$$\min_{\phi}\;\mu\int_{\Omega}|\nabla\phi|\,dx\;+\;\upsilon\,\big|\{x \mid \phi(x)\geq 0\}\big|\;+\;\alpha\int_{\{x \mid \phi(x)<0\}}\big(u(x)-c\big)^{2}\,dx$$

In the above formula, $\phi$ is a function on the domain $\Omega$; $\mu>0$, $\upsilon\geq 0$ and $\alpha>0$ are constants; and $c$ is the gray-level average of $u(x)$ over the region $\{x \mid \phi(x)<0\}$. Denoting the solution by $\phi^{*}$, the target region and the background region obtained by the segmentation can be expressed as $\omega=\{x \mid \phi^{*}(x)\geq 0\}$ and $\Omega\setminus\omega=\{x \mid \phi^{*}(x)<0\}$, respectively.
More specifically, an image graying step precedes the image segmentation step: the image to be stitched of the previous frame is converted to grayscale. The image domain is denoted $\Omega$, and the gray value of each pixel point $x$ in the grayed image is denoted $u(x)$.
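The patent's segmentation minimizes a variational level-set functional; as a much simpler stand-in that also separates the two regions purely by their gray-value statistics, Otsu's threshold picks the split maximizing the between-class gray variance. The plain-NumPy sketch below is illustrative only and is not the patent's minimization; the function name, the bright-blade assumption and the toy image are all assumptions:

```python
import numpy as np

def otsu_split(gray):
    """Split an 8-bit grayscale image into (target, background) masks by
    Otsu's threshold: the cut that maximizes between-class gray variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                       # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256)) # cumulative gray sums
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], total - cum[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t - 1] / w0
        m1 = (cum_mean[255] - cum_mean[t - 1]) / w1
        var = w0 * w1 * (m0 - m1) ** 2          # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    target = gray >= best_t   # assumption: the blade is the brighter region
    return target, ~target

# Toy demo: dark background (10) with a bright 2x4 "blade" block (200).
img = np.full((6, 6), 10, dtype=np.uint8)
img[2:4, 1:5] = 200
target, background = otsu_split(img)
```

Real blade images would of course need the boundary-regularized segmentation the patent describes; this sketch only shows the gray-value separation idea.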
Further, after the step S01 of performing homogenization on the pixels of the image background region, a feature point extraction step is included:
S02: calculate the gray-value change of pixels within the target region of a moving window in each direction, determine from this change whether a corner point exists in the window, and thereby extract the feature points. This specifically comprises the following steps:
S021: for the image u, accumulate the gray-value variation of the pixels within the target-region part of the moving window W in each direction, represented by the matrix:

$$H=\sum_{(x,y)\in W}\begin{pmatrix}u_x^{2} & u_x u_y\\ u_x u_y & u_y^{2}\end{pmatrix}$$

In the above formula, $u_x$ and $u_y$ respectively denote the gradient components of the pixel point.

S022: calculate the value $F(x,y)=\det(H)/\operatorname{trace}(H)^{2}$, where $\det$ is the determinant of the matrix and $\operatorname{trace}$ is the trace of the matrix.
S023: for the pixel point (x, y), when the following relation is satisfied, the pixel point is judged as a feature point:

$$F(x,y)>T_{1}$$

In particular, when the response value of a pixel point is greater than the threshold $T_1$, the pixel point is taken as a corner point, completing the extraction of feature points; the feature points in the image to be stitched of the previous frame are shown in fig. 5, and reference numeral 2 in fig. 5 denotes the boundary curve between the background region and the target region. It should be further explained that pixel points near corner points show large changes in gradient direction or gradient magnitude and are therefore feature points with stable properties.
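Steps S021 to S023 amount to a Harris-style corner detector with response $F=\det(H)/\operatorname{trace}(H)^2$. A minimal NumPy sketch, using central-difference gradients and a 3x3 summation window; the function name, window size and toy image are assumptions, and production code would typically weight the window and suppress non-maxima:

```python
import numpy as np

def corner_response(image):
    """Harris-style response F = det(H) / trace(H)^2, where H is the
    structure matrix summed over a 3x3 window (zero gradients at borders)."""
    u = image.astype(float)
    ux = np.zeros_like(u)
    uy = np.zeros_like(u)
    ux[:, 1:-1] = (u[:, 2:] - u[:, :-2]) / 2.0   # central difference in x
    uy[1:-1, :] = (u[2:, :] - u[:-2, :]) / 2.0   # central difference in y

    def box3(m):  # 3x3 window sum with zero padding
        p = np.pad(m, 1)
        return sum(p[i:i + m.shape[0], j:j + m.shape[1]]
                   for i in range(3) for j in range(3))

    A, B, C = box3(ux * ux), box3(ux * uy), box3(uy * uy)
    det = A * C - B * B
    trace = A + C
    return np.where(trace > 0, det / np.maximum(trace, 1e-12) ** 2, 0.0)

# Toy demo: a bright quadrant whose corner sits at pixel (8, 8).
img = np.zeros((20, 20), dtype=np.uint8)
img[8:, 8:] = 255
F = corner_response(img)
# Feature points are then the pixels with F > T1 for a chosen threshold T1.
```

On this toy image the response peaks at the corner (both gradient directions present, so $\det H>0$), while pure edge pixels give $\det H=0$ and hence $F=0$, matching the stability remark above.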
Further, after the extraction of the feature points is implemented in step S02, the method further includes:
S024: match the feature points of the images to be stitched according to the feature points of the right region of the previous-frame image and the feature points of the left region of the next-frame image. As an option, only the feature points of the right one-third of the previous-frame image and the left one-third of the next-frame image are retained for matching, which reduces the computation of feature matching, increases the stitching speed and improves stitching efficiency.
Further, the feature point extraction step S02 is followed by a feature point matching step. It should be further explained that, before feature matching, the temporally consecutive image to be stitched, that is, the image to be stitched of the next frame that is to be joined with that of the previous frame, is also processed through steps S00 to S02. The segmentation result of the background and target region of the next-frame image is shown in fig. 6, the next-frame image after background homogenization is shown in fig. 7, and reference numeral 3 in fig. 7 indicates the positions of the feature points. On this basis, feature point matching is performed on the images to be stitched of the previous frame and the next frame, comprising the following steps:
S03: compare the gray levels of the feature points of the right region of the previous-frame image with those of the left region of the next-frame image; if the gray difference is smaller than a threshold, the feature points are considered matched. Specifically, the Euclidean distances of all feature points in the two images to be stitched are calculated, and the two feature points with the minimum Euclidean distance are regarded as the same point to complete matching. This specifically comprises the following steps:
S031: perform appropriate Gaussian blur processing on the image;
S032: sample a region of k x k pixels centered on each feature point;
S033: down-sample the sampled region to 8 x 8 to generate a 64-dimensional vector, and normalize the vector so that each feature point is represented by a 64-dimensional vector;
S034: calculate the Euclidean distances between all feature points in the two images to be stitched; for each point of the image to be stitched of the previous frame, sort the distances from small to large, and if the ratio of the first two distances is less than a certain threshold $T_2$, regard the two points with the minimum distance as matching points, thereby achieving feature point matching between the previous-frame and next-frame images to be stitched. The matching result is shown in fig. 8, and reference numeral 4 in fig. 8 denotes the feature point matching line segments.
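Assuming the 64-dimensional normalized descriptors of steps S031 to S033 have already been built, the Euclidean-distance matching with the nearest/second-nearest ratio test of step S034 can be sketched as follows. The function name is hypothetical, and the ratio value 0.8 merely stands in for the unspecified threshold $T_2$:

```python
import numpy as np

def match_by_ratio(desc1, desc2, ratio=0.8):
    """Match descriptors by Euclidean distance with a ratio test: a point of
    the previous frame matches its nearest neighbour in the next frame only
    if (nearest distance / second-nearest distance) < ratio.

    desc1, desc2: arrays of shape (n1, 64) and (n2, 64).
    Returns a list of (index_in_desc1, index_in_desc2) pairs.
    """
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dist)
        if len(order) >= 2 and dist[order[0]] < ratio * dist[order[1]]:
            matches.append((i, int(order[0])))
    return matches

# Toy demo: two next-frame descriptors are near-copies of previous-frame ones.
rng = np.random.default_rng(0)
desc2 = rng.normal(size=(5, 64))
desc2 /= np.linalg.norm(desc2, axis=1, keepdims=True)
desc1 = desc2[[2, 4]] + 0.001
pairs = match_by_ratio(desc1, desc2)
```

The ratio test discards ambiguous points whose two best candidates are nearly equidistant, which is what suppresses the false matches the patent is concerned with.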
Further, after the step of matching the feature points in step S03, the method further includes an image stitching step:
S041: determine the transformation matrix of the images to be stitched from the matched feature points of the previous-frame and next-frame images. Specifically, 4 matched feature points are randomly extracted from the target regions of the two images to be stitched, and the transformation matrix H is calculated.
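Computing H from four point correspondences is the classic direct linear transform. The patent does not specify the solver, so the SVD-based sketch below is an assumption; robust pipelines repeat this four-point computation inside RANSAC to reject outlier matches:

```python
import numpy as np

def homography_from_4(src, dst):
    """Direct linear transform: 3x3 homography H with dst ~ H @ src
    (homogeneous coordinates), from four correspondences in general position.

    src, dst: arrays of shape (4, 2).
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the nullspace of A: last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Toy demo: a pure translation by (3, 5).
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
dst = src + np.array([3.0, 5.0])
H = homography_from_4(src, dst)
p = H @ np.array([0.5, 0.5, 1.0])
p = p[:2] / p[2]   # maps (0.5, 0.5) to (3.5, 5.5)
```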
S042: transform the image to be stitched of the previous frame or of the next frame according to the transformation matrix and perform image fusion, thereby stitching the two images. Specifically, the stitching result of the previous-frame and next-frame images is shown in fig. 9. As an option, the first image (the image to be stitched of the previous frame) is transformed with the matrix H and fused with the second image (the image to be stitched of the next frame), where the pixel values of the overlapping part are assigned by linear weighting:

$$u(x)=k_{1}u_{1}(x)+k_{2}u_{2}(x),\qquad x\in\Omega_{1}\cap\Omega_{2}$$

In the above formula, $\Omega_1$ and $\Omega_2$ are the domains of definition of the two images after transformation, $u_1$ and $u_2$ are the corresponding image gray values, and $k_1$ and $k_2$ are weights satisfying $k_1+k_2=1$, $0<k_1<1$, $0<k_2<1$.
The method adopts image segmentation to separate the background region from the target region and then homogenizes the background region, so that its gray level stays consistent and it contains no feature points; interference of the background region with the blade feature points of the target region is thus avoided. Meanwhile, no blur preprocessing is used, preventing loss of target-region feature points, which effectively reduces the probability of feature-point mismatching, guarantees the sufficiency of the number of feature points, greatly improves the success rate, speed and precision of image stitching, and achieves the aim of obtaining a global high-definition image of the blade. In addition, only the feature points of the right part of the previous-frame image and the left part of the next-frame image are retained and matched, reducing the mutual influence of feature points in the target area. In summary, according to the characteristics of fan blade images, the method proposes the idea of background homogenization and considers only partial feature points of the two images to be stitched, which effectively reduces the probability of feature-point mismatching, increases the stitching speed to a certain extent, and greatly improves the success rate and precision of image stitching.
Example 2
The present embodiment provides a storage medium, having the same inventive concept as embodiment 1, and having stored thereon computer instructions, which when executed, perform the steps of one of the background homogeneous processing image stitching methods described in embodiment 1.
Based on such understanding, the technical solution of the present embodiment or parts of the technical solution may be essentially implemented in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Embodiment 3
This embodiment also provides a terminal based on the same inventive concept as embodiment 1. The terminal includes a memory and a processor, where the memory stores computer instructions executable on the processor, and the processor, when executing the computer instructions, performs the steps of the image stitching method for background homogeneous processing described in embodiment 1. The processor may be a single-core or multi-core central processing unit, an application-specific integrated circuit, or one or more integrated circuits configured to implement the present invention.
Each functional unit in the embodiments provided by the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
The above detailed description further explains the invention and should not be construed as limiting its scope; it will be apparent to those skilled in the art that various modifications and substitutions can be made without departing from the spirit of the invention.
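To illustrate the Euclidean-distance feature matching described in the embodiments above, the following sketch (not part of the patent text) pairs each feature point from the right region of the front frame with its nearest-distance counterpart in the left region of the rear frame. The 2-D point descriptors and all names are simplified assumptions.

```python
import math

def match_features(front_right, rear_left):
    """For each descriptor from the front frame's right region, pick the
    rear-frame left-region descriptor at minimum Euclidean distance.
    Returns a list of (front_index, rear_index) pairs."""
    matches = []
    for i, a in enumerate(front_right):
        j_best = min(range(len(rear_left)),
                     key=lambda j: math.dist(a, rear_left[j]))
        matches.append((i, j_best))
    return matches

# Toy descriptors: each point in one frame has a close counterpart in the other.
front = [(0.0, 1.0), (5.0, 5.0)]
rear = [(5.1, 4.9), (0.2, 1.1)]
pairs = match_features(front, rear)
```

In practice the matched pairs would then feed the estimation of the transformation matrix used to warp one frame onto the other before fusion, as recited in claim 8.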

Claims (10)

1. An image stitching method for background homogeneous processing, characterized by comprising the following steps:
acquiring a first gray value of a background region of an image to be stitched;
and assigning the first gray value to all pixel points of the background region, so as to homogenize the pixels of the image background region and thereby enable image stitching.
2. The image stitching method for background homogeneous processing according to claim 1, characterized in that: the first gray value is the mean gray value of all pixel points in the background region.
3. The image stitching method for background homogeneous processing according to claim 1, characterized in that: before the step of homogenizing the pixels of the image background region, the method further comprises:
segmenting the image based on image gray value features, target boundary features, texture features, or morphological features to obtain the background region and the target region of the image.
4. The image stitching method for background homogeneous processing according to claim 1, characterized in that: after the step of homogenizing the pixels of the image background region, the method further comprises a feature point extraction step:
moving a window over the target region, calculating the gray value changes of the pixels within the window in each direction, and determining from these gray value changes whether a corner point exists in the window, so as to extract the feature points.
5. The image stitching method for background homogeneous processing according to claim 4, characterized in that: after the feature points are extracted, the method further comprises:
matching the feature points of the images to be stitched according to the feature points in the right region of the front-frame image to be stitched and the feature points in the left region of the rear-frame image to be stitched.
6. The image stitching method for background homogeneous processing according to claim 5, characterized in that: the feature point matching comprises:
comparing the gray levels of the feature points in the right region of the front-frame image to be stitched with those of the feature points in the left region of the rear-frame image to be stitched, and determining that the feature points match if the gray level difference is smaller than a threshold.
7. The image stitching method for background homogeneous processing according to claim 6, characterized in that: the feature point matching step specifically comprises:
calculating the Euclidean distances between all feature points of the two images to be stitched, wherein the two feature points corresponding to the minimum Euclidean distance are the matched feature points.
8. The image stitching method for background homogeneous processing according to claim 6, characterized in that: after the feature point matching step, the method further comprises an image stitching step:
determining a transformation matrix for the images to be stitched according to the matched feature points in the front-frame image to be stitched and the rear-frame image to be stitched;
and transforming the front-frame image to be stitched or the rear-frame image to be stitched according to the transformation matrix and performing image fusion, thereby stitching the front-frame image to be stitched and the rear-frame image to be stitched.
9. A storage medium having computer instructions stored thereon, characterized in that: the computer instructions, when executed, perform the steps of the image stitching method for background homogeneous processing according to any one of claims 1 to 8.
10. A terminal comprising a memory and a processor, the memory storing computer instructions executable on the processor, characterized in that: the processor, when executing the computer instructions, performs the steps of the image stitching method for background homogeneous processing according to any one of claims 1 to 8.
CN202010948902.3A 2020-09-10 2020-09-10 Image stitching method for background homogeneous processing, storage medium and terminal Active CN112037134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010948902.3A CN112037134B (en) 2020-09-10 2020-09-10 Image stitching method for background homogeneous processing, storage medium and terminal


Publications (2)

Publication Number Publication Date
CN112037134A true CN112037134A (en) 2020-12-04
CN112037134B CN112037134B (en) 2023-04-21

Family

ID=73584806

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010948902.3A Active CN112037134B (en) 2020-09-10 2020-09-10 Image stitching method for background homogeneous processing, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN112037134B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724176A (en) * 2021-08-23 2021-11-30 广州市城市规划勘测设计研究院 Multi-camera motion capture seamless connection method, device, terminal and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102654902A (en) * 2012-01-16 2012-09-05 江南大学 Contour vector feature-based embedded real-time image matching method
CN106023077A (en) * 2016-05-18 2016-10-12 深圳市神州龙资讯服务有限公司 Dynamic analysis and splicing method for images
CN107665486A (en) * 2017-09-30 2018-02-06 深圳绰曦互动科技有限公司 A kind of method for automatically split-jointing, device and terminal device applied to radioscopic image
CN107770618A (en) * 2017-11-02 2018-03-06 腾讯科技(深圳)有限公司 A kind of image processing method, device and storage medium
CN108573470A (en) * 2017-03-08 2018-09-25 北京大学 Image split-joint method and device
CN108876755A (en) * 2018-06-28 2018-11-23 大连海事大学 A kind of construction method of the color background of improved gray level image
CN110390640A (en) * 2019-07-29 2019-10-29 齐鲁工业大学 Graph cut image split-joint method, system, equipment and medium based on template
CN111551929A (en) * 2020-05-07 2020-08-18 中国电子科技集团公司第十四研究所 Background suppression method based on radar image statistical characteristics




Similar Documents

Publication Publication Date Title
Wang et al. Pretraining is all you need for image-to-image translation
Pan et al. Learning dual convolutional neural networks for low-level vision
CN108520503B (en) Face defect image restoration method based on self-encoder and generation countermeasure network
Wang et al. Cycle-snspgan: Towards real-world image dehazing via cycle spectral normalized soft likelihood estimation patch gan
Rhemann et al. High resolution matting via interactive trimap segmentation
CN111598796B (en) Image processing method and device, electronic equipment and storage medium
CN110827397B (en) Texture fusion method for real-time three-dimensional reconstruction of RGB-D camera
CN113221925B (en) Target detection method and device based on multi-scale image
EP2869265A1 (en) Method and apparatus for alpha matting
JP2013536960A (en) System and method for synthesizing portrait sketches from photographs
Gu et al. Blur removal via blurred-noisy image pair
CN114862861B (en) Lung lobe segmentation method and device based on few-sample learning
Daisy et al. A smarter exemplar-based inpainting algorithm using local and global heuristics for more geometric coherence
Swami et al. Candy: Conditional adversarial networks based end-to-end system for single image haze removal
CN112037134A (en) Image splicing method for background homogeneous processing, storage medium and terminal
KR102466061B1 (en) Apparatus for denoising using hierarchical generative adversarial network and method thereof
CN113421210B (en) Surface point Yun Chong construction method based on binocular stereoscopic vision
CN111461139B (en) Multi-target visual saliency layered detection method in complex scene
Yang et al. Hierarchical joint bilateral filtering for depth post-processing
Xiao et al. Single-image dehazing algorithm based on convolutional neural networks
Zhou et al. Image Dehazing Algorithm Based on Particle Swarm Optimization for Sky Region Segmentation
Lin et al. Text image super-resolution by image matting and text label supervision
Chen et al. Robust video content alignment and compensation for clear vision through the rain
CN110910310A (en) Face image reconstruction method based on identity information
Gan Low complexity image/video super resolution using edge and nonlocal self-similarity constraint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant