CN108460724B - Adaptive image fusion method and system based on Mahalanobis distance discrimination - Google Patents

Adaptive image fusion method and system based on Mahalanobis distance discrimination

Info

Publication number
CN108460724B
Authority
CN
China
Prior art keywords
image
pixel
img1
point
img2
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810114537.9A
Other languages
Chinese (zh)
Other versions
CN108460724A (en)
Inventor
舒军
李鑫武
涂杏
沈开斌
蒋明威
杨露
吴柯
舒心怡
潘健
王淑青
陈张言
徐成鸿
李志愧
刘伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hubei University of Technology
Original Assignee
Hubei University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei University of Technology filed Critical Hubei University of Technology
Priority to CN201810114537.9A priority Critical patent/CN108460724B/en
Publication of CN108460724A publication Critical patent/CN108460724A/en
Application granted granted Critical
Publication of CN108460724B publication Critical patent/CN108460724B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038: Image mosaicing, e.g. composing plane images from plane sub-images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an adaptive image fusion method and system based on Mahalanobis distance discrimination. During fusion the target image is divided into three parts: the leftmost part is taken entirely from img1, the right part is taken entirely from the image obtained by transforming img2, and the pixels of the overlapping region are chosen automatically by evaluating and selecting among the overlapping-region pixels of each camera, so that the fused image is more realistic, details are better preserved, and the system detection error caused by detail loss is greatly reduced. The Mahalanobis distance discriminant analysis method is introduced into the image fusion technology; it effectively overcomes noise interference among the candidate pixels at each coordinate of the overlapping region and automatically selects suitable sets of camera imaging points, so that the image formed by the evaluated pixel points is smoother overall, the real scene is restored with high fidelity, and the accuracy of image fusion is greatly improved.

Description

Adaptive image fusion method and system based on Mahalanobis distance discrimination
Technical Field
The invention belongs to the field of image processing, and particularly relates to a self-adaptive image fusion method and system based on Mahalanobis distance discrimination.
Background
Machine vision has become a hot topic in industrial research. Introducing vision on the industrial production floor improves tasks such as workpiece identification, inspection, packaging and transport. However, when acquiring visual information of workpieces, some production tasks require a field of view so large that a single camera cannot meet the actual requirements, so a multi-camera imaging system with a parallel-optical-axis structure (as shown in Fig. 1) is usually adopted, and the required image data are finally obtained with an image stitching algorithm.
The core of an image stitching algorithm is image registration and image fusion. Image registration has matured after years of research, but in the image fusion stage the complex industrial environment (illumination, temperature, etc.) and assembly errors of the imaging system cause the imaging results of the cameras to differ, so traditional synthesis methods perform poorly on such data, and small errors in detail can lead to major errors in tasks such as workpiece inspection. Optimally selecting pixels in the overlapping region of a multi-camera imaging system is therefore critical.
Existing image fusion methods fall mainly into two categories. The simple method transforms the right image img2 in Fig. 2 into a new image through a transformation matrix H and then directly adds the left image img1, which leaves a very obvious stitching seam. The other method also transforms img2 through H, but during fusion divides the target image into three parts: the leftmost part is taken entirely from img1, the right part is taken entirely from the transformed img2, and the middle overlapping part is a weighted average of the two images; this can lose detail in the overlapping region and may still show seams (a minimal sketch of this weighted-average blending follows). Therefore, neither of the two methods is suitable for workpiece inspection.
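For concreteness, the sketch below illustrates the second traditional approach (three-part division with weighted averaging) on pre-registered grayscale images. It is an illustrative reconstruction, not code from the patent; the array and variable names (img1_f, img2_warped, x0, x1) are assumptions.

```python
import numpy as np

def weighted_average_fusion(img1_f, img2_warped, x0, x1):
    """Traditional three-part fusion with a weighted-average overlap (sketch).

    img1_f and img2_warped are float grayscale arrays already placed in the
    frame of the synthesized image; columns [x0, x1) form the overlap region.
    """
    out = np.zeros_like(img1_f)
    out[:, :x0] = img1_f[:, :x0]          # left part: entirely from img1
    out[:, x1:] = img2_warped[:, x1:]     # right part: entirely from the transformed img2
    w = np.linspace(1.0, 0.0, x1 - x0)    # weights fall from img1 toward img2
    out[:, x0:x1] = w * img1_f[:, x0:x1] + (1.0 - w) * img2_warped[:, x0:x1]
    return out
```

Averaging in the overlap is exactly where detail can be washed out, which motivates the per-pixel selection proposed below.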
Disclosure of Invention
According to the invention, the overlapping-region parts of the cameras are evaluated and optimally selected, and suitable sets of lens imaging points are chosen automatically, so that the fused image is more realistic, details are better preserved, and the system detection error caused by detail loss is greatly reduced.
In order to achieve the above object, the technical solution provided by the present invention is an adaptive image fusion method based on Mahalanobis distance discrimination, comprising the following steps:
step 1, determining the left part and the right part of a composite image: for the overlapping region of the two images img1 and img2 obtained after image registration, a transformation matrix H is calculated according to the matched same feature points to construct the mapping relation of the two images in the overlapping region, and img2 is transformed into a new image through the transformation matrix H; frame coordinates of the composite image are established based on the coordinate position of img1, the pixel information Kl of img1 excluding the overlapping area is defined as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and Kr and Kl are mapped one by one into the frame coordinates of the composite image;
step 2, determining pixels of the overlapping area of the composite image: defining the pixel information set of img1 in the overlapping region as C1 and the pixel information set of the transformed img2 in the overlapping region as C2, the pixel information set of the overlapping area of the whole image being Km = {C1, C2}, so that any point in the overlapping-area coordinates contains two candidate pixel points, namely
Ki = {C1i, C2i}, i = 1, 2, 3 ... n,
where n represents the number of pixel points to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i denotes the pixel information of the transformed img2 at this coordinate position;
step 3, obtaining the pixel mean of C1 and C2 over the overlapping region,
μ = (1/2n) Σ (C1i + C2i), i = 1, 2, 3 ... n,
and calculating the pixel mean variance of the image overlapping region:
σ² = (1/2n) Σ [(C1i - μ)² + (C2i - μ)²];
step 4, respectively obtaining, for C1i and C2i within Ki, the Mahalanobis distance to the pixel mean and variance of the overlapping region, i.e. the similarity differences:
d1i = sqrt((C1i - μ)² / σ²)
d2i = sqrt((C2i - μ)² / σ²)
i = 1, 2, 3 ... n;
step 5, comparing the magnitudes of the similarity differences d1i and d2i within Ki:
(1) if d1i < d2i, then within the overlapping coordinate point Ki, C1i is superior to C2i, i.e. the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, then within the overlapping coordinate point Ki, C2i is superior to C1i, i.e. the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, then at the overlapping coordinate point Ki either of the two pixel points is selected at random for image fusion;
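For illustration (hypothetical numbers, using the scalar form of the distances above): if the overlap-region statistics are μ = 120 and σ = 15, and at a point Ki the candidates are C1i = 126 and C2i = 165, then d1i = |126 - 120| / 15 = 0.4 and d2i = |165 - 120| / 15 = 3.0; since d1i < d2i, the pixel from img1 is kept at this point.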
step 6, integrating all the selected pixel points into the set Km* = {K1*, K2*, ..., Kn*} and mapping them one by one into the corresponding overlapping-area coordinate system, which is determined as the overlapping area of the composite image.
Further, the image registration in step 1 is realized by a SIFT algorithm.
Further, the transformation matrix H in step 1 is calculated as follows:
Let T be any point in the overlapping region, imaged as T1 in img1 and as T2 in img2; from the imaging mapping relations H1·T = T1 and H2·T = T2, we obtain T1 = (H1·H2⁻¹)·T2; the H matrix represents the mapping relation of the same feature points of img1 and img2 and is derived by the following formula: H = H1 × H2⁻¹.
The invention also provides a self-adaptive image fusion system based on Mahalanobis distance discrimination, which comprises the following modules:
the left and right part acquisition module of the composite image is used for obtaining the overlapping area of the two images img1 and img2 after image registration, calculating a transformation matrix H according to the matched same feature points, constructing the mapping relation of the two images in the overlapping area, and transforming img2 into a new image through the transformation matrix H; establishing frame coordinates of the composite image based on the coordinate position of img1, defining the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and mapping Kr and Kl one by one into the frame coordinates of the composite image;
a composite image overlap region pixel acquisition module for defining the pixel information set of img1 in the overlapping region as C1 and the pixel information set of the transformed img2 in the overlapping region as C2, the pixel information set of the overlapping area of the whole image being Km = {C1, C2}, so that any point in the overlapping-area coordinates contains two candidate pixel points, namely
Ki = {C1i, C2i}, i = 1, 2, 3 ... n,
where n represents the number of pixel points to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i denotes the pixel information of the transformed img2 at this coordinate position;
an overlap region pixel mean variance calculation module for calculating the pixel mean of C1 and C2,
μ = (1/2n) Σ (C1i + C2i), i = 1, 2, 3 ... n,
and the pixel mean variance of the image overlapping region:
σ² = (1/2n) Σ [(C1i - μ)² + (C2i - μ)²];
a similarity difference calculation module for respectively calculating, for C1i and C2i within Ki, the Mahalanobis distance to the pixel mean and variance of the overlapping region, i.e. the similarity differences:
d1i = sqrt((C1i - μ)² / σ²)
d2i = sqrt((C2i - μ)² / σ²)
i = 1, 2, 3 ... n;
a similarity difference comparison module for comparing the magnitudes of the similarity differences d1i and d2i within Ki:
(1) if d1i < d2i, then within the overlapping coordinate point Ki, C1i is superior to C2i, i.e. the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, then within the overlapping coordinate point Ki, C2i is superior to C1i, i.e. the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, then at the overlapping coordinate point Ki either of the two pixel points is selected at random for image fusion;
a composite image overlap region determining module for integrating all the selected pixel points into the set Km* = {K1*, K2*, ..., Kn*} and mapping them one by one into the corresponding overlapping-area coordinate system, which is determined as the overlapping area of the composite image.
Further, image registration in the acquisition module of the left and right parts of the composite image is realized by the SIFT algorithm.
Further, the calculation method of the transformation matrix H in the left and right part acquisition modules of the composite image is as follows:
Let T be any point in the overlapping region, imaged as T1 in img1 and as T2 in img2; from the imaging mapping relations H1·T = T1 and H2·T = T2, we obtain T1 = (H1·H2⁻¹)·T2; the H matrix represents the mapping relation of the same feature points of img1 and img2 and is derived by the following formula: H = H1 × H2⁻¹.
Compared with the prior art, the invention has the following advantages and beneficial effects: the Mahalanobis distance discriminant analysis method is introduced into the image fusion technology; it effectively overcomes noise interference among the candidate pixels at the coordinates of the overlapping region and automatically selects suitable sets of camera imaging points, so that the image formed by the evaluated pixel points is smoother overall, the real scene is restored with high fidelity, and the accuracy of image fusion is greatly improved.
Drawings
Fig. 1 is a multi-camera imaging system with a parallel optical axis configuration.
Fig. 2 is a schematic view of an imaging region in image fusion.
Fig. 3 is a schematic diagram of a pixel point corresponding to a region coordinate in the embodiment of the present invention.
FIG. 4 is a diagram illustrating a distribution of pixel information of an image synthesized according to an embodiment of the present invention.
FIG. 5 is a flow chart of an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further explained with reference to the drawings and the embodiments.
Step 1, determining the left part and the right part of a composite image: after image registration is completed by the SIFT algorithm [1], the overlapping area of the two images img1 and img2 (the black part in Fig. 2) is obtained; a transformation matrix H is calculated according to the matched same feature points to construct the mapping relation of the two images in the overlapping area, and img2 is transformed into a new image through H. Frame coordinates of the composite image are established based on the coordinate position of img1; the pixel information Kl of img1 excluding the overlapping area is defined as the left part of the composite image, the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and Kr and Kl are mapped one by one into the frame coordinates of the composite image. The image information of the middle (overlapping) area must be selected optimally from img1 and img2.
The calculation mode of the transformation matrix H is as follows: let T be any point in the overlapping region, imaged as T1 in img1 and as T2 in img2; from the imaging mapping relations H1·T = T1 and H2·T = T2, we obtain T1 = (H1·H2⁻¹)·T2; the H matrix represents the mapping relation of the same feature points of img1 and img2 and is derived by the following formula: H = H1 × H2⁻¹; the H matrix is a 3 × 3 matrix.
[1] Wenweast, Zhang. Research on panoramic image stitching technology based on SIFT algorithm [J]. Computer Systems & Applications, 2017, 26(07): 227-231.
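A minimal sketch of this SIFT-based registration and homography estimation, assuming OpenCV (cv2.SIFT_create requires OpenCV 4.4 or later) and grayscale inputs; the function name and canvas size are illustrative choices, not prescribed by the patent.

```python
import cv2
import numpy as np

def estimate_h_and_warp(img1, img2):
    """Estimate the 3x3 homography H mapping img2 into img1's frame via SIFT,
    then warp img2 onto a wider canvas (a sketch of step 1)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors and keep pairs passing Lowe's ratio test.
    matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # robust to mismatches

    h, w = img1.shape[:2]
    img2_warped = cv2.warpPerspective(img2, H, (2 * w, h))  # canvas width is arbitrary
    return H, img2_warped
```

The overlapping area is then the set of coordinates where both img1 and the warped img2 contribute valid pixel information.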
Step 2, determining pixels of the overlapping area of the composite image: the pixel information set of img1 in the overlapping region is defined as C1 and the pixel information set of the transformed img2 in the overlapping region as C2, so the pixel information set of the overlapping area of the whole image is Km = {C1, C2}; any point in the overlapping-area coordinates contains two candidate pixel points, namely
Ki = {C1i, C2i}, i = 1, 2, 3 ... n,
where n represents the number of pixel points to be selected in the overlapping area. The coordinate position of the overlapping area is known, i.e. the coordinate information of the pixel points to be filled in the overlapping area is known, and img1 and img2 each have corresponding pixel information at every position to be filled. Assuming a point to be filled is i, C1i represents the pixel information of img1 at this coordinate position and, similarly, C2i represents the pixel information of the transformed img2 at this coordinate position, as shown in Fig. 3.
step 3, the pixel mean of C1 and C2 over the overlapping region is obtained as
μ = (1/2n) Σ (C1i + C2i), i = 1, 2, 3 ... n,
and the pixel mean variance of the image overlapping region is calculated:
σ² = (1/2n) Σ [(C1i - μ)² + (C2i - μ)²].
step 4, for C1i and C2i within each Ki, the Mahalanobis distance to the pixel mean and variance of the overlapping region (i.e. the similarity difference) is obtained respectively:
d1i = sqrt((C1i - μ)² / σ²)
d2i = sqrt((C2i - μ)² / σ²)
i = 1, 2, 3 ... n.
step 5, the magnitudes of the similarity differences d1i and d2i within Ki are compared:
(1) if d1i < d2i, then within the overlapping coordinate point Ki, C1i is superior to C2i, i.e. the pixel point of img1 is automatically selected during image fusion.
(2) if d1i > d2i, then within the overlapping coordinate point Ki, C2i is superior to C1i, i.e. the pixel point of the transformed img2 is automatically selected during image fusion.
(3) if d1i = d2i, then at the overlapping coordinate point Ki either of the two pixel points is selected at random for image fusion.
Step 6, all the selected pixel points are integrated into the set Km* = {K1*, K2*, ..., Kn*} and mapped one by one into the corresponding overlapping-area coordinate system, which is determined as the overlapping area of the composite image. The pixel points in Kr, Km* and Kl are mapped one by one into the coordinate frame of the image to be synthesized.
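The following NumPy sketch implements steps 2 to 6 as reconstructed above for scalar (grayscale) pixel values, where the Mahalanobis distance reduces to |x - μ| / σ; for multi-channel pixels the full covariance form would be needed. Names such as select_overlap_pixels are illustrative and not from the patent.

```python
import numpy as np

def select_overlap_pixels(c1, c2, rng=None):
    """Per-pixel selection over the overlap region (sketch of steps 2-6).

    c1: overlap-region pixels taken from img1, flattened to shape (n,).
    c2: pixels at the same overlap coordinates taken from the H-transformed
        img2, shape (n,).  Returns the fused overlap pixels Km*.
    """
    if rng is None:
        rng = np.random.default_rng()
    c1 = np.asarray(c1, dtype=np.float64)
    c2 = np.asarray(c2, dtype=np.float64)

    both = np.concatenate([c1, c2])
    mu = both.mean()                    # overlap-region pixel mean (step 3)
    sigma = both.std() + 1e-12          # overlap-region spread (step 3)

    d1 = np.abs(c1 - mu) / sigma        # similarity difference of C1i (step 4)
    d2 = np.abs(c2 - mu) / sigma        # similarity difference of C2i (step 4)

    fused = np.where(d1 < d2, c1, c2)   # steps 5(1)/5(2): keep the more consistent pixel
    ties = d1 == d2                     # step 5(3): break exact ties at random
    coin = rng.random(int(ties.sum())) < 0.5
    fused[ties] = np.where(coin, c1[ties], c2[ties])
    return fused                        # Km*, written back to the overlap coordinates (step 6)
```

In the full pipeline, Kl, Km* and Kr are then written one by one into the frame coordinates of the composite image.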
The invention also provides a self-adaptive image fusion system based on Mahalanobis distance discrimination, which comprises the following modules:
the left and right part acquisition module of the composite image is used for obtaining the overlapping area of the two images img1 and img2 after image registration, calculating a transformation matrix H according to the matched same feature points, constructing the mapping relation of the two images in the overlapping area, and transforming img2 into a new image through the transformation matrix H; establishing frame coordinates of the composite image based on the coordinate position of img1, defining the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and mapping Kr and Kl one by one into the frame coordinates of the composite image;
a composite image overlap region pixel acquisition module for defining the pixel information set of img1 in the overlapping region as C1 and the pixel information set of the transformed img2 in the overlapping region as C2, the pixel information set of the overlapping area of the whole image being Km = {C1, C2}, so that any point in the overlapping-area coordinates contains two candidate pixel points, namely
Ki = {C1i, C2i}, i = 1, 2, 3 ... n,
where n represents the number of pixel points to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i denotes the pixel information of the transformed img2 at this coordinate position;
an overlap region pixel mean variance calculation module for calculating the pixel mean of C1 and C2,
μ = (1/2n) Σ (C1i + C2i), i = 1, 2, 3 ... n,
and the pixel mean variance of the image overlapping region:
σ² = (1/2n) Σ [(C1i - μ)² + (C2i - μ)²];
a similarity difference calculation module for respectively calculating, for C1i and C2i within Ki, the Mahalanobis distance to the pixel mean and variance of the overlapping region, i.e. the similarity differences:
d1i = sqrt((C1i - μ)² / σ²)
d2i = sqrt((C2i - μ)² / σ²)
i = 1, 2, 3 ... n;
a similarity difference comparison module for comparing the magnitudes of the similarity differences d1i and d2i within Ki:
(1) if d1i < d2i, then within the overlapping coordinate point Ki, C1i is superior to C2i, i.e. the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, then within the overlapping coordinate point Ki, C2i is superior to C1i, i.e. the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, then at the overlapping coordinate point Ki either of the two pixel points is selected at random for image fusion;
a composite image overlap region determining module for integrating all the selected pixel points into the set Km* = {K1*, K2*, ..., Kn*} and mapping them one by one into the corresponding overlapping-area coordinate system, which is determined as the overlapping area of the composite image.
The image registration in the acquisition module of the left and right parts of the composite image is realized by the SIFT algorithm; the calculation method of the transformation matrix H in the acquisition module of the left and right parts of the composite image is as follows:
Let T be any point in the overlapping region, imaged as T1 in img1 and as T2 in img2; from the imaging mapping relations H1·T = T1 and H2·T = T2, we obtain T1 = (H1·H2⁻¹)·T2; the H matrix represents the mapping relation of the same feature points of img1 and img2 and is derived by the following formula: H = H1 × H2⁻¹; the H matrix is a 3 × 3 matrix.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (6)

1. An adaptive image fusion method based on Mahalanobis distance discrimination, characterized by comprising the following steps:
step 1, determining the left part and the right part of a composite image: for the overlapping region of the two images img1 and img2 obtained after image registration, a transformation matrix H is calculated according to the matched same feature points to construct the mapping relation of the two images in the overlapping region, and img2 is transformed into a new image through the transformation matrix H; frame coordinates of the composite image are established based on the coordinate position of img1, the pixel information Kl of img1 excluding the overlapping area is defined as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and Kr and Kl are mapped one by one into the frame coordinates of the composite image;
step 2, determining pixels of the overlapping area of the composite image: defining the pixel information set of img1 in the overlapping region as C1 and the pixel information set of the transformed img2 in the overlapping region as C2, the pixel information set of the overlapping area of the whole image being Km = {C1, C2}, so that any point in the overlapping-area coordinates contains two candidate pixel points, namely
Ki = {C1i, C2i}, i = 1, 2, 3 ... n,
where n represents the number of pixel points to be selected in the overlapping area; for a point i to be filled, C1i denotes the pixel information of img1 at this coordinate position and C2i denotes the pixel information of the transformed img2 at this coordinate position;
step 3, obtaining the pixel mean of C1 and C2 over the overlapping region,
μ = (1/2n) Σ (C1i + C2i), i = 1, 2, 3 ... n,
and calculating the pixel mean variance of the image overlapping region:
σ² = (1/2n) Σ [(C1i - μ)² + (C2i - μ)²];
step 4, respectively obtaining, for C1i and C2i within Ki, the Mahalanobis distance to the pixel mean and variance of the overlapping region, i.e. the similarity differences:
d1i = sqrt((C1i - μ)² / σ²)
d2i = sqrt((C2i - μ)² / σ²);
step 5, comparing the magnitudes of the similarity differences d1i and d2i within Ki:
(1) if d1i < d2i, then within the overlapping coordinate point Ki, C1i is superior to C2i, i.e. the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, then within the overlapping coordinate point Ki, C2i is superior to C1i, i.e. the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, then at the overlapping coordinate point Ki either of the two pixel points is selected at random for image fusion;
step 6, integrating all the selected pixel points into the set Km* = {K1*, K2*, ..., Kn*} and mapping them one by one into the corresponding overlapping-area coordinate system, which is determined as the overlapping area of the composite image.
2. The adaptive image fusion method based on Mahalanobis distance discrimination as claimed in claim 1, wherein: the image registration in step 1 is realized by the SIFT algorithm.
3. The adaptive image fusion method based on Mahalanobis distance discrimination as claimed in claim 1, wherein: the transformation matrix H in step 1 is calculated as follows:
Let T be any point in the overlapping region, imaged as T1 in img1 and as T2 in img2; from the imaging mapping relations H1·T = T1 and H2·T = T2, we obtain T1 = (H1·H2⁻¹)·T2; the H matrix represents the mapping relation of the same feature points of img1 and img2 and is derived by the following formula: H = H1 × H2⁻¹.
4. An adaptive image fusion system based on Mahalanobis distance discrimination, characterized by comprising the following modules:
the left and right part acquisition module of the composite image is used for obtaining the overlapping area of the two images img1 and img2 after image registration, calculating a transformation matrix H according to the matched same feature points, constructing the mapping relation of the two images in the overlapping area, and transforming img2 into a new image through the transformation matrix H; establishing frame coordinates of the composite image based on the coordinate position of img1, defining the pixel information Kl of img1 excluding the overlapping area as the left part of the composite image and the pixel information Kr of the transformed img2 excluding the overlapping area as the right part, and mapping Kr and Kl one by one into the frame coordinates of the composite image;
composite image overlap region pixel acquisitionA fetching module for defining a set of pixel information of img1 in the overlap region as C1The pixel information set of the overlapped region after img2 transformation is C2Set of pixel information K for the overlapping area of the entire imagem={C1,C2Any point in the image overlapping area coordinates contains two pixel points, namely
Figure FDA0003095563980000021
Wherein n represents the number of pixel points needing to be selected in the overlapping area, and a certain point needing to be filled is set as i, C1iIndicating the pixel information, C, in img1 corresponding to this point coordinate position2iPixel information representing the coordinate position of this point in img 2;
an overlap region pixel mean variance calculation module for calculating the pixel mean of C1 and C2,
μ = (1/2n) Σ (C1i + C2i), i = 1, 2, 3 ... n,
and the pixel mean variance of the image overlapping region:
σ² = (1/2n) Σ [(C1i - μ)² + (C2i - μ)²];
a similarity difference calculation module for respectively calculating, for C1i and C2i within Ki, the Mahalanobis distance to the pixel mean and variance of the overlapping region, i.e. the similarity differences:
d1i = sqrt((C1i - μ)² / σ²)
d2i = sqrt((C2i - μ)² / σ²);
a similarity difference comparison module for comparing the magnitudes of the similarity differences d1i and d2i within Ki:
(1) if d1i < d2i, then within the overlapping coordinate point Ki, C1i is superior to C2i, i.e. the pixel point of img1 is automatically selected during image fusion;
(2) if d1i > d2i, then within the overlapping coordinate point Ki, C2i is superior to C1i, i.e. the pixel point of the transformed img2 is automatically selected during image fusion;
(3) if d1i = d2i, then at the overlapping coordinate point Ki either of the two pixel points is selected at random for image fusion;
a composite image overlap region determining module for integrating all the selected pixel points into the set Km* = {K1*, K2*, ..., Kn*} and mapping them one by one into the corresponding overlapping-area coordinate system, which is determined as the overlapping area of the composite image.
5. The Mahalanobis distance discrimination-based adaptive image fusion system of claim 4, wherein: image registration in the acquisition module of the left and right parts of the composite image is realized by the SIFT algorithm.
6. The Mahalanobis distance discrimination-based adaptive image fusion system of claim 4, wherein: the calculation method of the transformation matrix H in the acquisition module of the left and right parts of the composite image is as follows:
Let T be any point in the overlapping region, imaged as T1 in img1 and as T2 in img2; from the imaging mapping relations H1·T = T1 and H2·T = T2, we obtain T1 = (H1·H2⁻¹)·T2; the H matrix represents the mapping relation of the same feature points of img1 and img2 and is derived by the following formula: H = H1 × H2⁻¹.
CN201810114537.9A 2018-02-05 2018-02-05 Adaptive image fusion method and system based on Mahalanobis distance discrimination Active CN108460724B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810114537.9A CN108460724B (en) 2018-02-05 2018-02-05 Adaptive image fusion method and system based on Mahalanobis distance discrimination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810114537.9A CN108460724B (en) 2018-02-05 2018-02-05 Adaptive image fusion method and system based on Mahalanobis distance discrimination

Publications (2)

Publication Number Publication Date
CN108460724A CN108460724A (en) 2018-08-28
CN108460724B true CN108460724B (en) 2021-10-22

Family

ID=63239714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810114537.9A Active CN108460724B (en) 2018-02-05 2018-02-05 Adaptive image fusion method and system based on Mahalanobis distance discrimination

Country Status (1)

Country Link
CN (1) CN108460724B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544498A (en) * 2018-11-29 2019-03-29 燕山大学 A kind of image adaptive fusion method
CN112801871B (en) * 2021-01-29 2024-04-05 中汽创智科技有限公司 Image self-adaptive fusion method based on Chebyshev distance discrimination
CN115953332B (en) * 2023-03-15 2023-08-18 四川新视创伟超高清科技有限公司 Dynamic image fusion brightness adjustment method, system and electronic equipment

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004295691A (en) * 2003-03-27 2004-10-21 Pentax Corp Method, program, and device for image quality determination
US20090010507A1 (en) * 2007-07-02 2009-01-08 Zheng Jason Geng System and method for generating a 3d model of anatomical structure using a plurality of 2d images
CN101951487B (en) * 2010-08-19 2012-06-27 深圳大学 Panoramic image fusion method, system and image processing equipment
CN101984463A (en) * 2010-11-02 2011-03-09 中兴通讯股份有限公司 Method and device for synthesizing panoramic image
US20140267684A1 (en) * 2013-03-15 2014-09-18 Chemlmage Corporation System and method for detecting contamination in food using hyperspectral imaging
CN103226708B (en) * 2013-04-07 2016-06-29 华南理工大学 A kind of multi-model fusion video hand division method based on Kinect
CN103294024B (en) * 2013-04-09 2015-07-08 宁波杜亚机电技术有限公司 Intelligent home system control method
CN103279923B (en) * 2013-06-14 2015-12-23 西安电子科技大学 Based on topography's method for amalgamation processing of overlapping region
CN107330922A (en) * 2017-07-04 2017-11-07 西北工业大学 Video moving object detection method of taking photo by plane based on movable information and provincial characteristics
CN107369148B (en) * 2017-09-20 2019-09-10 湖北工业大学 Based on the multi-focus image fusing method for improving SML and Steerable filter

Also Published As

Publication number Publication date
CN108460724A (en) 2018-08-28

Similar Documents

Publication Publication Date Title
Zhu et al. Very large-scale global sfm by distributed motion averaging
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN111047510B (en) Large-field-angle image real-time splicing method based on calibration
CN102313536B (en) Method for barrier perception based on airborne binocular vision
TWI555379B (en) An image calibrating, composing and depth rebuilding method of a panoramic fish-eye camera and a system thereof
CN111553939B (en) Image registration algorithm of multi-view camera
Liao et al. Model-free distortion rectification framework bridged by distortion distribution map
US11488322B2 (en) System and method for training a model in a plurality of non-perspective cameras and determining 3D pose of an object at runtime with the same
CN108460724B (en) Adaptive image fusion method and system based on Mahalanobis distance discrimination
WO2021139176A1 (en) Pedestrian trajectory tracking method and apparatus based on binocular camera calibration, computer device, and storage medium
CN103607554A (en) Fully-automatic face seamless synthesis-based video synthesis method
CN107154014A (en) A kind of real-time color and depth Panorama Mosaic method
CN111028155A (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN112767546B (en) Binocular image-based visual map generation method for mobile robot
US11843865B2 (en) Method and device for generating vehicle panoramic surround view image
CN109544498A (en) A kind of image adaptive fusion method
GB2567245A (en) Methods and apparatuses for depth rectification processing
Fang et al. Self-supervised camera self-calibration from video
CN113793266A (en) Multi-view machine vision image splicing method, system and storage medium
CN106971381B (en) A kind of wide angle camera visual field line of demarcation generation method with the overlapping ken
Zhang et al. Building a stereo and wide-view hybrid RGB/FIR imaging system for autonomous vehicle
CN115839719A (en) Lightweight SLAM (Simultaneous localization and mapping) segmented mapping method
Yang et al. VR panorama mosaic algorithm based on particle swarm optimization and mutual information
Kitt et al. Trinocular optical flow estimation for intelligent vehicle applications
CN109089100B (en) Method for synthesizing binocular stereo video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant