WO2022074554A1 - Method of generating an aligned monochrome image and related computer program

Method of generating an aligned monochrome image and related computer program

Info

Publication number
WO2022074554A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
rectified
monochromatic
images
generating
Prior art date
Application number
PCT/IB2021/059120
Other languages
French (fr)
Inventor
Mauro BURGO
Alberto Luigi COLOGNI
Glauco Bigini
Matteo Corno
Luca FRANCESCHETTI
Sergio Matteo Savaresi
Original Assignee
E-Novia S.P.A.
Politecnico Di Milano
Priority date
Filing date
Publication date
Application filed by E-Novia S.P.A., Politecnico Di Milano
Publication of WO2022074554A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10036 Multispectral image; Hyperspectral image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging


Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

This disclosure illustrates an image processing algorithm for superimposing and aligning images which, through the creation of appropriate pixel descriptors, finds the unique correspondences for each acquired image, regardless of the color of the images. This method can be implemented in software by means of an appropriate program which, when executed by a microprocessor unit, performs the operations defined in the attached claim 1, thus obtaining a device capable of instantly acquiring two or more images of the same target subject at different wavelengths of the visible spectrum, using two or more cameras and two or more optical sensors, and of realigning (superimposing) these images by finding the unique matches pixel by pixel for any ratio between the target distance and the distance between the cameras, which is not known a priori.

Description

METHOD OF GENERATING AN ALIGNED MONOCHROME IMAGE AND
RELATED COMPUTER PROGRAM
TECHNICAL FIELD
The present disclosure generally relates to applications involving image acquisition and, more particularly, to a method of aligning multi-spectral images of the same scene.
BACKGROUND
The main approaches for acquiring images of the same target at multiple wavelengths simultaneously are:
• using a single camera on which two or more optical filters are applied in sequence, acquiring one image per filter; no alignment problem arises, because the correspondence of the pixels is intrinsic to their coordinates in the image plane;
• using two or more cameras that acquire images at a suitably large distance from the target, so that the ratio between the target distance and the distance between the cameras is large, in order to minimize the parallax caused by the two different points of view.
In both cases the problem of roto-translating the image planes, and hence of superimposing the images, is avoided, but the application scope of the system is limited.
The use of a single image acquisition device, however, has an important limitation: the two images cannot be acquired at the same instant, because the time between the two snapshots is needed to switch the optical filters. This restricts the application to subjects whose position remains unchanged during the whole acquisition time.
For this reason, multi-spectral multi-camera devices (Multi-Spectral Cameras, MSC) are preferred: they solve the problem of acquiring images of the same target at different wavelengths of the light spectrum at the same instant, so that the information contained in the acquired images can be processed.
For correct operation and processing, these devices must be able to align and superimpose the images of the same subject acquired by the different cameras.
The realignment, or superposition, of the images consists in particular in the roto-translation of the image planes so that the coordinates of corresponding pixels coincide.
However, this procedure is particularly difficult when the ratio between the target distance and the distance between the cameras is low, or varies within the same image (targets at different depths).
In fact, image alignment algorithms fail when trying to align objects located at different observation depths: their alignment varies with the distance from the camera because of the parallax effect.
The multi-spectral multi-lens cameras currently on the market align the images simply by superimposing them, without exploiting any information describing the relative position of the cameras. For this reason, their operation is guaranteed only at long distances (e.g. aerial shots from drones). When images containing objects at close distances are acquired, the alignment fails, creating a "ghosting" effect between the various spectral bands captured by the lenses.
The algorithm developed by Jhan et al. aims to solve the problem of aligning multi-spectral images acquired by MSC devices. This algorithm uses the information on lens distortion and roto-translation between cameras obtained from the calibration. The result is better than the initial alignment: neighboring objects do not show "ghosting" effects, as they are aligned correctly. However, the constraint of capturing images that represent objects at the same distance from the camera persists, and objects at different distances within the same image still reproduce the same alignment error (Jhan et al., The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2/W6, 2017).
The same limitation exists in the pedestrian recognition algorithm developed by Krotosky et al. The algorithm aligns images acquired by a color camera and a thermal camera, estimating intrinsic/extrinsic parameters, lens distortion and the roto-translation between the cameras. The background is then removed, isolating objects at the same distance and aligning them individually using disparity indices estimated for each single portion of the image. The solution therefore does not solve the alignment of objects at different distances in the same image (Krotosky et al., IEEE Conference on Intelligent Transportation Systems, Proceedings, ITSC, 109-114, 2006). The algorithm of Kise et al. aims to create a multi-spectral panoramic image from images captured in stereo-vision, by combining multiple consecutive images captured with the vehicle in motion. However, the result is not satisfactory, since the scene portrayed in the aligned images is almost planar, with a very low three-dimensional component, and the method imposes a fixed distance from the camera (Kise et al., Computers and Electronics in Agriculture, 60, 67-75, 2008).
The main limitations of the above approaches for multi-spectral cameras available in the state of the art are mainly related to the target-camera and camera-camera distances; indeed, the use of multiple cameras at a suitably large distance limits the system to applications where images of a target must be acquired at a great distance. Approaches with targets at close distances (a low ratio between the target distance and the distance between the cameras), on the other hand, are limited to applications in which the distance to the target is known and fixed.
SUMMARY
An objective of this disclosure is to at least partially overcome the limitations of the approaches present in the state of the art, mentioned above.
One purpose of the present disclosure is to provide an image processing algorithm for the superimposition and alignment of images which, through the creation of suitable pixel descriptors, finds the unique correspondences for each acquired image, regardless of the color of the images.
This method can be implemented in software by means of an appropriate program which, when executed by a microprocessor unit, performs the operations defined in the attached claim 1, thus obtaining a device capable of instantly acquiring two or more images of the same target subject at different wavelengths of the visible spectrum, using two or more cameras and two or more optical sensors, and of realigning (superimposing) these images by finding the unique matches pixel by pixel for any ratio between the target distance and the distance between the cameras, which is not known a priori.
BRIEF DESCRIPTION OF THE DRAWINGS
Other advantages will become evident from the following detailed description of its preferred embodiments, presented by way of non-limiting example, with reference to the attached figures, in which: Figure 1 shows a block diagram representative of the algorithm object of this disclosure and the related steps;
Figure 2 shows the two images, left and right, captured by two image capture devices and rectified according to the relative passage of the algorithm object of this disclosure;
Figure 3 shows the two images of Figure 2 following a transformation into black and white and subsequent equalization of the respective intensity histograms;
Figure 4 shows a disparity map calculated on the images of Figure 3;
Figure 5 shows the pixel alignment process, described by the algorithm object of this disclosure, between the right image and the disparity map of Figure 3;
Figure 6 shows the final aligned image.
DESCRIPTION OF EXAMPLE EMBODIMENTS
The acquisition of images is implemented through a plurality of image capture devices, such as cameras, tuned to different filter wavelengths.
According to a preferred embodiment of the present invention, the image capture devices are two cameras.
According to one aspect, the algorithm of this disclosure can ideally be divided into two parts: the first is a calibration step of the instruments, carried out at the beginning, preferably only once; the second is the actual alignment of the stereoscopic images.
The calibration step stems from the calibration method, available in the prior art, proposed by Zhang et al., which requires that at least one image capture device observe a planar pattern with (at least two) different orientations. In particular, the above method involves the following operations:
- Defining a calibration object, or calibration target, by printing a graphic pattern on a flat surface. In particular, the calibration object has a checkerboard pattern and is therefore also referred to as a checkerboard;
- Capturing some images, at least two, of the calibration object with different orientations, moving the cameras or the calibration object;
- Detecting the characteristic points, or feature points, of the pattern of said calibration object in said taken images;
- Estimating intrinsic and extrinsic parameters of said images;
- Estimating the coefficients of a radial distortion of said images, due for example to the distortion of the lenses of the capturing devices, preferably using a linear least squares method;
- Estimating roto-translation between the optical centers of the cameras;
- Refining all parameters using a minimization function.
Further details on the calibration procedure can be found in Zhang et al., A Flexible New Technique for Camera Calibration, Microsoft Research, One Microsoft Way, 1998.
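By way of illustration only, the calibration sequence above can be sketched with standard computer-vision tooling. The Python/OpenCV fragment below is a minimal sketch under stated assumptions, not the disclosed implementation: the checkerboard size, square size and the image file layout (left_paths, right_paths) are hypothetical, while intrinsics, radial distortion and the roto-translation (R, T) between the optical centres are obtained with OpenCV's calibrateCamera and stereoCalibrate.

```python
# Illustrative calibration sketch (not the disclosed implementation).
# Assumed: 9x6 inner-corner checkerboard, 25 mm squares, and synchronized
# calibration images stored as calib/left_*.png and calib/right_*.png.
import glob

import cv2
import numpy as np

PATTERN = (9, 6)        # inner corners per row/column (assumption)
SQUARE_SIZE = 0.025     # square side in metres (assumption)

left_paths = sorted(glob.glob("calib/left_*.png"))    # assumed file layout
right_paths = sorted(glob.glob("calib/right_*.png"))

# 3D corner coordinates in the checkerboard's own reference frame
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

def detect_pairs(left_paths, right_paths):
    obj_pts, pts_l, pts_r, size = [], [], [], None
    for pl, pr in zip(left_paths, right_paths):
        gl = cv2.imread(pl, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(pr, cv2.IMREAD_GRAYSCALE)
        size = gl.shape[::-1]
        ok_l, cl = cv2.findChessboardCorners(gl, PATTERN)
        ok_r, cr = cv2.findChessboardCorners(gr, PATTERN)
        if ok_l and ok_r:               # keep only views seen by both cameras
            obj_pts.append(objp)
            pts_l.append(cl)
            pts_r.append(cr)
    return obj_pts, pts_l, pts_r, size

obj_pts, pts_l, pts_r, size = detect_pairs(left_paths, right_paths)

# Intrinsics and radial distortion of each camera (refined internally)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)

# Roto-translation (R, T) between the optical centres of the two cameras
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_l, pts_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```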
Intrinsic and extrinsic parameters of the cameras, estimated in the above calibration process, in particular the lens distortion (in each single camera) and the roto-translation between the optical centers of each camera with respect to the other cameras, are used in the second part of the algorithm, which includes the steps listed below and illustrated in Figure 1.
A1 A first image, or left image, of a scene is captured with the first camera, which is tuned to a first filter wavelength, preferably in the visible light range.
A2 A second image, or right image, of said scene is captured with the second camera, simultaneously with the capture of the first image; the second camera is tuned to a second filter wavelength different from said first filter wavelength, preferably in the near infrared range.
B The information obtained from the calibration process is used to rectify said first image, left, and said second image, right, as a function of lens distortion parameters and of roto-translation parameters between the optical centers of the two cameras, generating a first rectified image and a second rectified image, respectively. The rectification returns the images corrected for lens distortion and projected onto a common image plane, as shown in Figure 2. Said first and second rectified images are subsequently transformed into monochrome images. If monochrome images are acquired by the cameras in steps A1 and A2, then the conversion into monochrome images is clearly not necessary.
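As a hedged illustration of step B, the fragment below rectifies a stereo pair onto a common image plane with OpenCV, reusing the hypothetical calibration outputs (K1, d1, K2, d2, R, T, size) from the previous sketch; the file names left.png and right.png are assumptions and are not part of the disclosure.

```python
# Illustrative rectification sketch for step B (assumed variables
# K1, d1, K2, d2, R, T, size come from the calibration sketch above).
import cv2

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

left_img = cv2.imread("left.png")      # assumed capture from the first camera
right_img = cv2.imread("right.png")    # assumed capture from the second camera

# Undistort and reproject both views onto a common image plane
rect_left = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
rect_right = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)

# If the sensors deliver multi-channel data, convert to monochrome;
# already-monochrome captures skip this conversion (as noted in the text).
if rect_left.ndim == 3:
    rect_left = cv2.cvtColor(rect_left, cv2.COLOR_BGR2GRAY)
if rect_right.ndim == 3:
    rect_right = cv2.cvtColor(rect_right, cv2.COLOR_BGR2GRAY)
```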
C The intensity histograms of the first and second monochrome images, obtained at the end of step B, are equalized in order to improve the visibility of the details captured in the image and to compensate for possible differences in the exposure parameters of the two cameras, where an image intensity histogram means a graphic representation of the number of pixels in an image as a function of their intensity. The histograms of the images are generally made up of containers, or bins, each of which represents a certain range of intensity values. The number of bins into which the entire intensity range is divided is usually on the order of the square root of the number of pixels. The image histogram is calculated by examining all the pixels in the image and assigning each of them to a bin based on its intensity. The final value of each bin is the number of pixels assigned to it and is represented both by the height of a bar and by a color scale that colors each bar; the taller bars have colors closer to red. The equalization of the histograms in this algorithm consists in defining an image transformation that modifies the intensity of each pixel so that the processed histogram is as flat as possible, i.e. so that the image has no under- or over-exposed areas. This makes it possible to use the entire available intensity scale and improves the contrast of the image. Equalization may be implemented with known algorithms, such as the approach described in Digital Image Processing by R.C. Gonzalez and R.E. Woods, Addison Wesley. According to an embodiment of the present invention, the equalization of the histograms is carried out in a numerical computing environment, preferably a Matlab computing environment, with the "histeq" function, using the image as input and without specifying other parameters. Following this step, a first equalized monochromatic image and a second equalized monochromatic image are generated, corresponding to said first and said second rectified monochromatic images obtained at the end of step B, respectively, as shown in Figure 3. After these transformations, corresponding pixels lie on the same line passing through the images, and the images appear as if captured by two cameras with optical axes parallel to each other.
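A minimal sketch of the equalization in step C, assuming 8-bit monochrome inputs: the disclosure uses MATLAB's histeq, while the fragment below illustrates the same idea, an intensity mapping through the normalized cumulative histogram, in Python/NumPy (OpenCV's equalizeHist would give a comparable result).

```python
# Illustrative equalization for step C: map intensities through the
# normalized cumulative histogram so the output histogram is as flat as
# possible (MATLAB's histeq or cv2.equalizeHist play the same role).
import numpy as np

def equalize(img):
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * 255.0).astype(np.uint8)        # intensity transformation
    return lut[img]                                     # apply per pixel

eq_left = equalize(rect_left)     # rect_left / rect_right from the step B sketch
eq_right = equalize(rect_right)
```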
D Using the obtained images, a pixel disparity image is generated, shown in Figure 4, by running a semi-global matching (SGM) algorithm on said second equalized monochromatic image, taking as reference said first equalized monochromatic image. The SGM algorithm used is similar to that described by Hirschmuller et al., in which a pixel-by-pixel correlation is sought through the intensity values. The output of said SGM algorithm is therefore a map that relates each pixel of the first image to a pixel of the second image (Hirschmuller et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, 30, 328-341, 2008). According to an aspect of the present invention, the SGM algorithm is preferably calibrated to obtain a number of correlations between pixels greater than 70%. This value derives from experimental results: without this calibration, the pixels for which no disparity, and therefore no correlation between the first and second image, can be computed would exceed 80% of the image pixels, and an alignment covering only 20% of the image would not be useful or significant for the purposes of the application. With the suggested calibration, instead, more than 70% of the pixels in the images can be aligned.
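For step D, OpenCV's StereoSGBM matcher, which is based on Hirschmuller's semi-global matching, can serve as an illustrative stand-in for the SGM algorithm referred to above; the parameter values in this sketch are assumptions, not the calibration actually used, and the final check only illustrates the idea of tuning the matcher until well over 70% of the pixels obtain a match.

```python
# Illustrative SGM disparity computation for step D using OpenCV's StereoSGBM;
# all parameter values are assumptions for the sketch.
import cv2
import numpy as np

block = 5
sgm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,            # search range, must be a multiple of 16
    blockSize=block,
    P1=8 * block * block,          # smoothness penalty for small disparity changes
    P2=32 * block * block,         # smoothness penalty for large disparity changes
    uniquenessRatio=5,
    speckleWindowSize=100,
    speckleRange=2,
)

# OpenCV returns fixed-point disparities scaled by 16
disp = sgm.compute(eq_left, eq_right).astype(np.float32) / 16.0

valid = disp > 0                   # pixels for which a correspondence was found
coverage = valid.mean()
print(f"matched pixels: {coverage:.0%}")   # retune parameters if well below 70%
```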
E The pixels of an auxiliary image are calculated, corresponding to the pixels of said second rectified monochromatic image translated by a distance given by values of said disparity image read at the coordinates of said pixels of the second rectified monochromatic image, as shown in Figure 5. Each pixel of the second image is then translated by its disparity.
F Finally, the pixels of said aligned monochromatic image, shown in Figure 6, are generated as a weighted average of corresponding pixels of the first rectified monochromatic image and of the auxiliary image.
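Steps E and F can be sketched as follows, again as an assumption-laden illustration rather than the disclosed implementation: each pixel of the auxiliary image is fetched from the second rectified image at a column shifted by the local disparity, and the aligned image is a weighted average of the first rectified image and that auxiliary image (the equal 0.5/0.5 weights and the disparity sign convention are assumptions).

```python
# Illustrative sketch of steps E-F: build the auxiliary image by translating
# each pixel of the second rectified image by its disparity, then blend it
# with the first rectified image. Weights and disparity sign are assumptions.
import numpy as np

h, w = rect_right.shape
aux = np.zeros_like(rect_right)

ys, xs = np.nonzero(valid)                            # pixels with a disparity value
src_x = (xs - np.round(disp[ys, xs])).astype(int)     # shift along the epipolar line
inside = (src_x >= 0) & (src_x < w)
aux[ys[inside], xs[inside]] = rect_right[ys[inside], src_x[inside]]

# Aligned monochrome image as a weighted average of the first rectified
# image and the auxiliary image
aligned = (0.5 * rect_left.astype(np.float32) +
           0.5 * aux.astype(np.float32)).astype(np.uint8)
```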
The method and procedure described above make it possible to overcome the limitations of the prior-art approaches of the aforementioned multi-spectral cameras, in particular:
- Thanks to the use of two or more cameras, multiple images at different wavelengths can be acquired at the same instant, allowing application even in contexts where the target is in motion;
- The image superimposition algorithm makes it possible to acquire images of the target at any distance, computing each time the correspondences between the pixels and the roto-translation needed to superimpose the two images, regardless of the color of the subject and of the ratio between the target distance and the distance between the cameras.
The present invention has so far been described with reference to preferred embodiments. It is understood that there may be other embodiments which refer to the same inventive concept defined by the scope of the following claims.

Claims

1. A method of generating an aligned monochrome image, comprising the following operations: capturing a first image of a scene with a first image capture device tuned to a first filter wavelength; capturing a second image of said scene with a second image capture device tuned to a second filter wavelength different from said first filter wavelength, wherein said second image is captured simultaneously with said first image; rectifying said first image and said second image in function of distortion parameters of lenses and in function of roto-translation parameters between optical centers of said first device and said second image capture device, and generating a corresponding first rectified monochromatic image and a corresponding second rectified image, respectively; equalizing intensity histograms of said first rectified monochromatic image and of said second rectified monochromatic image by generating a corresponding first equalized monochromatic image and a second equalized monochromatic image, respectively; generating a pixel disparity image of said second equalized monochromatic image with respect to said first equalized monochromatic image, by executing a semi- global matching algorithm on said second equalized monochromatic image taking as reference said first equalized monochromatic image; calculating pixels of an auxiliary image, corresponding to pixels of said second rectified monochromatic image translated by a distance given by values of said disparity image read at coordinates of said pixels of the second rectified monochromatic image; generating pixels of said aligned monochromatic image as a weighted average of corresponding pixels of the first rectified monochromatic image and of the auxiliary image.
2. The method according to claim 1, wherein said first image and said second image are monochrome.
3. The method according to claim 1, comprising the steps of generating a first rectified image and a second rectified image corresponding to said first image and said second image by said step of rectifying said first image and said second image as a function of distortion parameters of lenses and as a function of roto-translation parameters between optical centers of said first image capture device and of said second image capture device, then converting the first rectified image and the second rectified image into monochromatic images generating said corresponding first rectified monochromatic image and said corresponding second rectified image, respectively.
4. The method according to one of the preceding claims, comprising a preliminary calibration operation of said first and second capture devices according to claim 1, in which said capture devices can be calibrated individually or simultaneously, comprising the following steps:
- Defining a calibration object having a graphic pattern on a flat surface;
- Capturing at least two test images of the calibration object with different orientations, moving the capture devices or the calibration object;
- Detecting points characterizing the graphic pattern of said calibration object in said test images;
- Estimating intrinsic and extrinsic parameters of said test images;
- Estimating coefficients of a radial distortion of said test images, preferably using a linear least squares method;
- Estimating a roto-translation between optical centers of said first and second capture devices;
- Refining said intrinsic and extrinsic parameters using a minimization function.
5. The method according to claim 4, wherein said calibration object carries a checkerboard pattern.
6. The method according to one of the preceding claims, wherein one of said first and second filter wavelengths is in the visible light range, and the other of said first and second filter wavelengths is in the near infrared range.
7. A computer program, comprising software code installable in an internal memory of a microprocessor unit, configured to perform the operations of the method according to one of claims 1 to 6 when said software code is executed by the microprocessor unit.
PCT/IB2021/059120 2020-10-06 2021-10-05 Method of generating an aligned monochrome image and related computer program WO2022074554A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IT102020000023494 2020-10-06
IT102020000023494A IT202000023494A1 (en) 2020-10-06 2020-10-06 METHOD OF GENERATION OF AN ALIGNED MONOCHROME IMAGE AND RELATIVE COMPUTER PROGRAM

Publications (1)

Publication Number Publication Date
WO2022074554A1

Family

ID=73793734

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/059120 WO2022074554A1 (en) 2020-10-06 2021-10-05 Method of generating an aligned monochrome image and related computer program

Country Status (2)

Country Link
IT (1) IT202000023494A1 (en)
WO (1) WO2022074554A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150373316A1 (en) * 2014-06-23 2015-12-24 Ricoh Co., Ltd. Disparity Estimation for Multiview Imaging Systems
US20160196653A1 (en) * 2014-12-31 2016-07-07 Flir Systems, Inc. Systems and methods for dynamic registration of multimodal images
WO2020001034A1 (en) * 2018-06-30 2020-01-02 华为技术有限公司 Image processing method and device

Also Published As

Publication number Publication date
IT202000023494A1 (en) 2022-04-06

Similar Documents

Publication Publication Date Title
US11546576B2 (en) Systems and methods for dynamic calibration of array cameras
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CA3157197A1 (en) Systems and methods for surface normals sensing with polarization
US20140055632A1 (en) Feature based high resolution motion estimation from low resolution images captured using an array source
CN108629756B (en) Kinectv2 depth image invalid point repairing method
US11838490B2 (en) Multimodal imaging sensor calibration method for accurate image fusion
CN110322485B (en) Rapid image registration method of heterogeneous multi-camera imaging system
CN111080709A (en) Multispectral stereo camera self-calibration algorithm based on track feature registration
CN114494462A (en) Binocular camera ranging method based on Yolov5 and improved tracking algorithm
KR20200132065A (en) System for Measuring Position of Subject
CN109584312A (en) Camera calibration method, device, electronic equipment and computer readable storage medium
CN115222785A (en) Infrared and visible light image registration method based on binocular calibration
Heather et al. Multimodal image registration with applications to image fusion
CN106846385B (en) Multi-sensing remote sensing image matching method, device and system based on unmanned aerial vehicle
Wang et al. A Robust Multispectral Point Cloud Generation Method Based on 3D Reconstruction from Multispectral Images
CN111145254B (en) Door valve blank positioning method based on binocular vision
WO2023240963A1 (en) Multispectral multi-sensor synergistic processing method and apparatus, and storage medium
WO2022074554A1 (en) Method of generating an aligned monochrome image and related computer program
Ding et al. 3D LiDAR and color camera data fusion
Zhang et al. Guided feature matching for multi-epoch historical image blocks pose estimation
Karaca et al. Ground-based panoramic stereo hyperspectral imaging system with multiband stereo matching
Shahbazi et al. Seamless co-registration of images from multi-sensor multispectral cameras
KR102439922B1 (en) Simultaneous real-time acquisition system of RGB image/joint position label data pairs using heterogeneous cameras
US20230316577A1 (en) Cloud-based training and camera correction
Broggi et al. Handling rolling shutter effects on semi-global matching in automotive scenarios

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21799098

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21799098

Country of ref document: EP

Kind code of ref document: A1