CN108053368B - Cross-scale resolution light field image super-resolution and depth estimation method and device


Info

Publication number
CN108053368B
CN108053368B (application CN201711362812.0A)
Authority
CN
China
Prior art keywords
resolution
light field
image
super
disparity map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711362812.0A
Other languages
Chinese (zh)
Other versions
CN108053368A (en)
Inventor
刘烨斌 (Liu Yebin)
赵漫丹 (Zhao Mandan)
戴琼海 (Dai Qionghai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201711362812.0A
Publication of CN108053368A
Application granted
Publication of CN108053368B
Current legal status: Active
Anticipated expiration

Classifications

    • G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution (G Physics > G06 Computing > G06T Image data processing or generation, in general > G06T3/40 Scaling)
    • G06T5/70: Denoising; smoothing (G06T5/00 Image enhancement or restoration)
    • G06T7/50: Depth or shape recovery (G06T7/00 Image analysis)
    • G06T2207/10024: Color image (G06T2207/10 Image acquisition modality)
    • G06T2207/10052: Images from lightfield camera (G06T2207/10 Image acquisition modality)
    • G06T2207/20228: Disparity calculation for image-based rendering (G06T2207/20 Special algorithmic details)

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a cross-scale-resolution light field image super-resolution and depth estimation method and device, wherein the method comprises the following steps: upsampling the low-resolution light field views by a single-image super-resolution (SISR) method; downsampling the high-resolution image at the central view, upsampling it with the same SISR method, and obtaining an error map; acquiring disparity maps, performing global optimization on them using the inherent properties of the light field, and performing a hole filling operation at the same time, so that regions where disparity could not be estimated are re-estimated; propagating the error map with the global disparity maps so that the views recover high-resolution information, realizing super-resolution reconstruction of the light field; and performing depth estimation of the scene from the super-resolved light field images to obtain a depth estimation result. The method propagates the error map with the global disparity maps so that each view recovers high-resolution information, obtains scene depth from the high-resolution light field, is robust, and computes quickly.

Description

Cross-scale resolution light field image super-resolution and depth estimation method and device
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a device for super-resolution and depth estimation of a light field image with cross-scale resolution.
Background
Light field imaging is a completely new means of recording the three-dimensional information of a scene. Unlike traditional imaging, a light field records not only the intensity of light at a given position but also the distribution of rays arriving at that position from a range of angles, turning two-dimensional planar imaging into four-dimensional light field imaging with two spatial dimensions and two angular dimensions.
However, because of sensor limitations, the spatial and angular resolutions of a light field constrain each other. Commercial light field cameras, for example, which are built around microlens arrays, are easy to carry but have a spatial resolution too low for many requirements; the alternative, a multi-camera array, is expensive and bulky, and this trade-off remains to be solved.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the present invention is to provide a cross-scale-resolution light field image super-resolution and depth estimation method that is robust and computationally fast.
Another objective of the present invention is to provide a cross-scale-resolution light field image super-resolution and depth estimation device.
In order to achieve the above object, an embodiment of the present invention provides a cross-scale resolution light field image super-resolution and depth estimation method, including the following steps: upsampling the light field images at the low-resolution views by a single-image super-resolution (SISR) method; downsampling the high-resolution image at the central view, upsampling it with the same SISR method, and obtaining an error map; acquiring disparity maps between the high-resolution image and the upsampled images in the light field, performing global optimization on the disparity maps using the inherent properties of the light field, and performing a hole filling operation at the same time, so that regions where disparity could not be estimated are re-estimated; propagating the error map with the global disparity maps so that the views recover high-resolution information, realizing super-resolution reconstruction of the light field; and performing depth estimation of the scene from the super-resolved light field images to obtain a depth estimation result.
The cross-scale-resolution light field image super-resolution and depth estimation method can reconstruct high-resolution light field imaging while greatly reducing equipment cost, thereby obtaining a light field of high spatial resolution and acquiring scene depth information from it. The method is robust and computationally fast, and by adding a global optimization process it avoids the inaccurate estimates that occlusion causes in parts of the light field, so the light field disparity maps are computed accurately.
In addition, the light field image super-resolution and depth estimation method based on the cross-scale resolution according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the method further includes: inferring the high-frequency details missed by the SISR method from the high-resolution image at the central view.
Further, in an embodiment of the present invention, the acquiring a disparity map of the high-resolution image and the upsampled image in the light field and performing global optimization on the disparity map by using the inherent property of the light field further includes: obtaining a disparity map of the light field center position and any position by a cross model estimation method; and obtaining the disparity map through a global optimization process so as to avoid estimation errors of partial regions caused by occlusion factors in the light field.
Further, in an embodiment of the present invention, the performing a hole filling operation further includes:
The holes are filled using the neighborhood information around them, with the filling formula:
f(i,j) = ( Σ_(k,l) ω(i,j,k,l) · f(k,l) ) / ( Σ_(k,l) ω(i,j,k,l) ),
where f is the neighborhood pixel value, ω is the weight, (i, j) are the coordinates of the current hole position to be filled, and (k, l) are the coordinates of the window neighborhood position.
Further, in an embodiment of the present invention, the performing depth estimation of a scene according to the super-resolved light field image further includes: and estimating the depth of the scene by using the depth information of the reconstructed high-resolution light field image.
In order to achieve the above object, another embodiment of the present invention provides a cross-scale resolution light field image super-resolution and depth estimation apparatus, including: a sampling module for upsampling the light field images at the low-resolution views by a single-image super-resolution (SISR) method; an acquisition module for downsampling the high-resolution image at the central view, upsampling it with the same SISR method, and acquiring an error map; an optimization module for acquiring disparity maps between the high-resolution image and the upsampled images in the light field, performing global optimization on the disparity maps using the inherent properties of the light field, and performing a hole filling operation at the same time, so that regions where disparity could not be estimated are re-estimated; a reconstruction module for propagating the error map with the global disparity maps so that the views recover high-resolution information, realizing super-resolution reconstruction of the light field; and an estimation module for performing depth estimation of the scene from the super-resolved light field images to obtain a depth estimation result.
The cross-scale-resolution light field image super-resolution and depth estimation device can reconstruct high-resolution light field imaging while greatly reducing equipment cost, thereby obtaining a light field of high spatial resolution and acquiring scene depth information from it. The device is robust and computationally fast, and by adding a global optimization process it avoids the inaccurate estimates that occlusion causes in parts of the light field, so the light field disparity maps are computed accurately.
In addition, the light field image super-resolution and depth estimation device with cross-scale resolution according to the above embodiment of the present invention may further have the following additional technical features:
Further, in an embodiment of the present invention, the apparatus further infers the high-frequency details missed by the SISR method from the high-resolution image at the central view.
Further, in an embodiment of the present invention, the optimization module further includes: the acquisition unit is used for acquiring a disparity map of the central position and any position of the light field by a cross model estimation method; and the optimization unit is used for obtaining the disparity map through a global optimization process so as to avoid estimation errors of partial regions caused by shielding factors in the light field.
Further, in an embodiment of the present invention, the performing a hole filling operation further includes:
The holes are filled using the neighborhood information around them, with the filling formula:
f(i,j) = ( Σ_(k,l) ω(i,j,k,l) · f(k,l) ) / ( Σ_(k,l) ω(i,j,k,l) ),
where f is the neighborhood pixel value, ω is the weight, (i, j) are the coordinates of the current hole position to be filled, and (k, l) are the coordinates of the window neighborhood position.
Further, in an embodiment of the present invention, the performing depth estimation of a scene according to the super-resolved light field image further includes: and estimating the depth of the scene by using the depth information of the reconstructed high-resolution light field image.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of a cross-scale resolution light field image super-resolution and depth estimation method according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a cross-scale resolution light field image super-resolution and depth estimation method according to one embodiment of the present invention;
FIG. 3 is a diagram illustrating a light field disparity map according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a cross-scale resolution light field image super-resolution and depth estimation method according to one embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a cross-scale resolution light field image super-resolution and depth estimation device according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
The cross-scale-resolution light field image super-resolution and depth estimation method proposed according to the embodiment of the present invention will be described below with reference to the accompanying drawings, followed by the cross-scale-resolution light field image super-resolution and depth estimation apparatus proposed according to the embodiment of the present invention.
FIG. 1 is a flow chart of a cross-scale resolution light field image super-resolution and depth estimation method according to an embodiment of the present invention.
As shown in fig. 1, the cross-scale resolution light field image super-resolution and depth estimation method includes the following steps:
in step S101, the light field image at a low resolution view angle is up-sampled by a single-image super resolution SISR method.
It is understood that, in conjunction with fig. 1 and fig. 2, the embodiment of the present invention may use a single-image super-resolution (SISR) method to upsample each low-resolution view of the light field:
L_up = f_SISR(L_LR),
where L_LR is a low-resolution light field view, f_SISR denotes the single-image super-resolution operator, and L_up is the upsampled view.
Specifically, the embodiment of the present invention first upsamples the low-resolution images in the light field using the SISR method. Since a single-image super-resolution method can only handle low super-resolution factors, when the factor is too large (around four times), much of the image detail information is lost by the SISR method.
In step S102, the high-resolution image at the central view is downsampled, then upsampled by the SISR method, and an error map is acquired.
Specifically, given the input light field of the embodiment of the present invention, the high-resolution image at the center of the light field can provide global high-frequency detail information, which is exactly the part lost in the SISR process and needs to be recovered by simulating that process. The high-resolution image R_HR is first downsampled to the same size as the low-resolution images and then upsampled using the same SISR method; the difference between the result and R_HR can be regarded as the high-frequency detail information lost by the SISR method, expressed as:
D_R = R_HR - f_SISR(R_HR↓),
where D_R is the error map.
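The error map D_R = R_HR − f_SISR(R_HR↓) can be sketched as follows; the block-average downsampler and the nearest-neighbour "SISR" are illustrative stand-ins for the patent's operators, so with these stand-ins a constant image yields a zero error map and fine detail yields a nonzero one.

```python
import numpy as np

def downsample(img, factor):
    # Block-average downsampling (stand-in for the patent's downsampler).
    h, w = img.shape
    img = img[:h - h % factor, :w - w % factor]
    return img.reshape(img.shape[0] // factor, factor,
                       img.shape[1] // factor, factor).mean(axis=(1, 3))

def sisr_upsample(img, factor):
    # Stand-in for the learned SISR operator f_SISR.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def error_map(r_hr, factor):
    # D_R = R_HR - f_SISR(R_HR downsampled): the high-frequency detail
    # that the SISR method cannot recover on its own.
    return r_hr - sisr_upsample(downsample(r_hr, factor), factor)
```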
In step S103, disparity maps between the high-resolution image and the upsampled images in the light field are obtained, global optimization is performed on the disparity maps using the inherent properties of the light field, and a hole filling operation is performed at the same time, so that regions where disparity could not be estimated are re-estimated.
Further, in an embodiment of the present invention, acquiring a disparity map of a high-resolution image and an up-sampled image in a light field, and performing global optimization on the disparity map by using the inherent property of the light field, further includes: obtaining a disparity map of the light field center position and any position by a cross model estimation method; and obtaining a disparity map through a global optimization process so as to avoid estimation errors of partial regions caused by occlusion factors in a light field.
Further, in an embodiment of the present invention, the hole filling operation further includes:
The holes are filled using the neighborhood information around them, with the filling formula:
f(i,j) = ( Σ_(k,l) ω(i,j,k,l) · f(k,l) ) / ( Σ_(k,l) ω(i,j,k,l) ),
where f is the neighborhood pixel value, ω is the weight, (i, j) are the coordinates of the current hole position to be filled, and (k, l) are the coordinates of the window neighborhood position.
It is understood that the error map obtained in step S102 can be propagated to each low-resolution view of the light field by means of the disparity maps. To keep the super-resolved light field views consistent, a global optimization strategy is added. As shown in fig. 3 and fig. 4, a "cross" model estimation is adopted: in the figures, red (1) marks the high-resolution image in the light field, and disparity maps are computed for the horizontal blue (2) images. Because of occlusion, the disparity estimates in some regions are inaccurate, but these regions are distributed symmetrically about the high-definition red image, so the disparity maps of the occluded regions at each view are supplemented using this central symmetry: occlusion is computed at each view with two optical-flow passes, and the occluded regions obtained from the optical flow are then expanded with a dilation algorithm. Taking the red (1) high-resolution image as the center, the whole light field is divided into upper, lower, left, and right parts, and the dilation kernels selected for these parts are [0,1,0; 0,1,0; 0,0,0], [0,0,0; 0,1,0; 0,1,0], [0,0,0; 1,1,0; 0,0,0], and [0,0,0; 0,1,1; 0,0,0], respectively. In this way, the disparity information of the occluded region in each part of the light field, centered on the high-resolution image, is filled in from the disparity map of the region on the opposite side. In the process of propagating the error map with the disparity map, some holes cannot be filled; these holes are supplemented using the neighborhood information around them:
f(i,j) = ( Σ_(k,l) ω(i,j,k,l) · f(k,l) ) / ( Σ_(k,l) ω(i,j,k,l) ),
where f is the neighborhood pixel value and ω is a weight constrained jointly by spatial position and pixel value.
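A hedged sketch of the hole-filling formula: the weights combine spatial distance and pixel-value similarity (the two constraints named in the text), in bilateral-filter style. NaN marks holes, and the sigma values, the window radius, and the use of a separate guide image are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def fill_hole(disp, guide, i, j, radius=2, sigma_s=1.0, sigma_r=10.0):
    # Fill the disparity hole at (i, j) with a weighted average of valid
    # neighbours: weight = spatial Gaussian * pixel-similarity Gaussian.
    h, w = disp.shape
    num = den = 0.0
    for k in range(max(0, i - radius), min(h, i + radius + 1)):
        for l in range(max(0, j - radius), min(w, j + radius + 1)):
            if np.isnan(disp[k, l]):
                continue  # skip other holes
            w_s = np.exp(-((i - k) ** 2 + (j - l) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((guide[i, j] - guide[k, l]) ** 2) / (2 * sigma_r ** 2))
            num += w_s * w_r * disp[k, l]
            den += w_s * w_r
    return num / den if den > 0 else 0.0
```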
After the disparity maps at the blue positions in fig. 4 are obtained, a global optimization strategy is applied to remove outliers in the disparity maps: the pixel-wise RMSE (root mean square error) is computed, the two largest outliers are discarded, and the remaining estimates are averaged.
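The outlier-removal step can be sketched as follows, assuming the multiple disparity estimates for a view are stacked along the first axis. Discarding exactly the two largest deviations follows the text; using squared deviation from the per-pixel mean as the error measure is an assumption.

```python
import numpy as np

def fuse_disparities(stack):
    # Fuse N disparity estimates per pixel (N > 2): discard the two
    # estimates with the largest squared deviation from the per-pixel
    # mean, then average the rest.
    stack = np.asarray(stack, dtype=float)      # shape (N, H, W)
    err = (stack - stack.mean(axis=0)) ** 2     # per-estimate deviation
    keep = np.argsort(err, axis=0)[:-2]         # indices of the N-2 best
    return np.take_along_axis(stack, keep, axis=0).mean(axis=0)
```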
In step S104, the error map is transferred by using the global disparity map, so that the image recovers high resolution information to realize super-resolution reconstruction of the light field.
It is understood that, in the embodiment of the present invention, the optimized disparity maps can be used to propagate the error map D_R obtained in step S102 to every view of the light field, yielding the missing high-frequency detail information at each view, namely:
D_S = f_warping(D_R),
where D_S is the high-frequency detail information at that view.
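A minimal sketch of the warping step D_S = f_warping(D_R), assuming backward warping with per-pixel disparities (disp_x, disp_y) and nearest-neighbour sampling; a real implementation would interpolate and handle occlusion explicitly.

```python
import numpy as np

def warp_error_map(d_r, disp_x, disp_y):
    # Backward-warp the central-view error map D_R to another view:
    # each target pixel samples D_R at (y + disp_y, x + disp_x) with
    # nearest-neighbour rounding; out-of-range samples stay 0 (holes).
    h, w = d_r.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.rint(ys + disp_y).astype(int)
    src_x = np.rint(xs + disp_x).astype(int)
    valid = (src_y >= 0) & (src_y < h) & (src_x >= 0) & (src_x < w)
    d_s = np.zeros_like(d_r)
    d_s[valid] = d_r[src_y[valid], src_x[valid]]
    return d_s
```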
In step S105, depth estimation of the scene is performed from the super-resolution light field image, and a depth estimation result is obtained.
Further, in an embodiment of the present invention, performing depth estimation of the scene from the super-resolved light field images further includes: estimating the depth of the scene using the depth information of the reconstructed high-resolution light field images.
Further, in an embodiment of the present invention, the method further includes: inferring the high-frequency details missed by the SISR method from the high-resolution image at the central view.
It can be understood that the embodiment of the invention adds the image f_SISR(L_LR) obtained by the SISR method to the high-frequency detail information D_S to obtain the final desired output. That is,
L_SR = f_SISR(L_LR) + D_S,
where L_LR is the low-resolution view and L_SR is its super-resolved result.
and depth estimation of the scene is performed using the high resolution light field image.
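The final composition and the depth read-out can be sketched as follows. The addition L_SR = f_SISR(L_LR) + D_S is taken from the text; the pinhole relation depth = focal · baseline / disparity is the standard conversion, and the parameter names are illustrative, not taken from the patent.

```python
import numpy as np

def super_resolve_view(sisr_up, d_s):
    # L_SR = f_SISR(L_LR) + D_S: add the propagated high-frequency
    # detail back onto the SISR-upsampled view.
    return sisr_up + d_s

def depth_from_disparity(disp, focal, baseline, eps=1e-6):
    # Standard pinhole relation depth = focal * baseline / disparity;
    # eps guards against division by zero disparity.
    return focal * baseline / np.maximum(np.abs(disp), eps)
```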
According to the cross-scale-resolution light field image super-resolution and depth estimation method provided by the embodiment of the invention, high-resolution light field imaging can be reconstructed while greatly reducing equipment cost, thereby obtaining a light field of high spatial resolution and acquiring scene depth information from it. The method is robust and computationally fast, and achieves a good spatial super-resolution effect on synthetic light fields as well as light fields captured by commercial light field cameras and light field microscopy cameras.
Next, a light field image super-resolution and depth estimation apparatus of cross-scale resolution proposed according to an embodiment of the present invention is described with reference to the drawings.
FIG. 5 is a schematic structural diagram of a cross-scale resolution light field image super-resolution and depth estimation device according to an embodiment of the present invention.
As shown in fig. 5, the cross-scale resolution light field image super-resolution and depth estimation apparatus 10 includes: a sampling module 100, an acquisition module 200, an optimization module 300, a reconstruction module 400, and an estimation module 500.
The sampling module 100 is configured to upsample the light field images at the low-resolution views by a single-image super-resolution (SISR) method. The acquisition module 200 is configured to downsample the high-resolution image at the central view, upsample it with the same SISR method, and acquire an error map. The optimization module 300 is configured to acquire disparity maps between the high-resolution image and the upsampled images in the light field, perform global optimization on the disparity maps using the inherent properties of the light field, and perform a hole filling operation at the same time, so that regions where disparity could not be estimated are re-estimated. The reconstruction module 400 is configured to propagate the error map with the global disparity maps so that the views recover high-resolution information, realizing super-resolution reconstruction of the light field. The estimation module 500 is configured to perform depth estimation of the scene from the super-resolved light field images and obtain a depth estimation result. The device 10 of the embodiment of the invention can reconstruct high-resolution light field imaging while greatly reducing equipment cost, obtain scene depth information from the high-resolution light field, and is robust and computationally fast.
Further, in an embodiment of the present invention, the apparatus further infers the high-frequency details missed by the SISR method from the high-resolution image at the central view.
Further, in an embodiment of the present invention, the optimization module further includes: the acquisition unit is used for acquiring a disparity map of the central position and any position of the light field by a cross model estimation method; and the optimization unit is used for obtaining the disparity map through a global optimization process so as to avoid estimation errors of partial regions caused by shielding factors in the light field.
Further, in an embodiment of the present invention, the hole filling operation further includes:
The holes are filled using the neighborhood information around them, with the filling formula:
f(i,j) = ( Σ_(k,l) ω(i,j,k,l) · f(k,l) ) / ( Σ_(k,l) ω(i,j,k,l) ),
where f is the neighborhood pixel value, ω is the weight, (i, j) are the coordinates of the current hole position to be filled, and (k, l) are the coordinates of the window neighborhood position.
Further, in an embodiment of the present invention, the depth estimation of the scene according to the super-resolved light field image further includes: and estimating the depth of the scene by using the depth information of the reconstructed high-resolution light field image.
It should be noted that the above explanation of the embodiment of the method for super-resolution of light field image and depth estimation across scale resolutions is also applicable to the device for super-resolution of light field image and depth estimation across scale resolutions in this embodiment, and is not repeated here.
According to the cross-scale resolution light field image super-resolution and depth estimation device provided by the embodiment of the invention, high-resolution light field imaging can be reconstructed while greatly reducing equipment cost, thereby obtaining a light field of high spatial resolution and acquiring scene depth information from it. The device is robust and computationally fast, and achieves a good spatial super-resolution effect on synthetic light fields as well as light fields captured by commercial light field cameras and light field microscopy cameras.
In the description of the present invention, it is to be understood that the terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," "axial," "radial," "circumferential," and the like are used in the orientations and positional relationships indicated in the drawings for convenience in describing the invention and to simplify the description, and are not intended to indicate or imply that the referenced device or element must have a particular orientation, be constructed and operated in a particular orientation, and are not to be considered limiting of the invention.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the present invention, unless otherwise expressly stated or limited, a first feature "on" or "over" a second feature may mean that the two features are in direct contact or in indirect contact through an intermediate medium. Also, a first feature "on," "over," or "above" a second feature may be directly or obliquely above the second feature, or may simply mean that the first feature is at a higher level than the second feature. A first feature "under," "below," or "beneath" a second feature may be directly or obliquely below the second feature, or may simply mean that the first feature is at a lower level than the second feature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (2)

1. A cross-scale-resolution light field image super-resolution and depth estimation method, characterized by comprising the following steps:
up-sampling the light field images at the low-resolution viewing angles by a single-image super-resolution (SISR) method;
down-sampling the high-resolution image at the central viewing angle, up-sampling it again by the SISR method, and obtaining an error map;
acquiring disparity maps between the high-resolution image and the up-sampled images in the light field, performing global optimization on the disparity maps by using the inherent properties of the light field, and simultaneously performing a hole-filling operation, so that regions for which the disparity estimation fails are re-estimated;
propagating the error map by using the global disparity maps, so as to restore the high-resolution information of the images and realize super-resolution reconstruction of the light field; and
performing depth estimation of the scene according to the super-resolved light field images to obtain a depth estimation result;
wherein acquiring the disparity maps of the high-resolution image and the up-sampled images in the light field and performing global optimization on the disparity maps by using the inherent properties of the light field further comprises: obtaining disparity maps between the light field center position and any other position by a cross-model estimation method; and obtaining the disparity maps through a global optimization process, so as to avoid estimation errors in partial regions caused by occlusion in the light field;
the hole-filling operation further comprises: filling each hole by using the neighborhood information around the hole, according to the formula:

$$f(i,j)=\frac{\sum_{(k,l)}\omega(k,l)\,f(k,l)}{\sum_{(k,l)}\omega(k,l)},$$

wherein f is the neighborhood pixel value, ω is the weight, i is the abscissa of the current hole position to be filled, j is the ordinate of the current hole position to be filled, k is the abscissa of a window neighborhood position, and l is the ordinate of a window neighborhood position;
propagating the error map by using the global disparity map to restore the high-resolution information of the images, so as to realize super-resolution reconstruction of the light field, comprises the following step: propagating the error map to each viewing angle of the light field by using the optimized disparity maps, to obtain the missing high-frequency detail information at each viewing angle, namely:

$$D_S = f_{warping}(D_R),$$

wherein $D_S$ is the high-frequency detail information at that viewing angle, and $D_R$ is the error map;
performing the depth estimation of the scene according to the super-resolved light field images further comprises: estimating the depth of the scene by using the depth information of the reconstructed high-resolution light field images;
inferring the high-frequency detail part lost by the SISR method by using the high-resolution image located at the central viewing angle;
adding the image $I_{SISR}$ obtained by the SISR method to the high-frequency detail information $D_S$ to obtain the final desired output:

$$I = I_{SISR} + D_S;$$

and performing the depth estimation of the scene by using the high-resolution light field images.
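The hole-filling step in claim 1 amounts to a normalized weighted average of the valid pixels in a window around each hole. A minimal sketch of that operation, assuming Gaussian distance weights ω and a square window (neither the weight function nor the window shape is fixed by the claim):

```python
import numpy as np

def fill_holes(disparity, hole_mask, radius=2, sigma=1.0):
    """Fill masked disparity holes with a Gaussian-weighted average of
    the valid (non-hole) pixels in a (2*radius+1)^2 window around each hole."""
    filled = disparity.copy()
    h, w = disparity.shape
    for i, j in zip(*np.nonzero(hole_mask)):
        num, den = 0.0, 0.0
        for k in range(max(0, i - radius), min(h, i + radius + 1)):
            for l in range(max(0, j - radius), min(w, j + radius + 1)):
                if hole_mask[k, l]:
                    continue  # other holes carry no information
                # weight omega(k, l) decays with distance from the hole (i, j)
                omega = np.exp(-((k - i) ** 2 + (l - j) ** 2) / (2 * sigma ** 2))
                num += omega * disparity[k, l]
                den += omega
        if den > 0:
            filled[i, j] = num / den  # normalized weighted average
    return filled
```

Because the weights are normalized, a hole surrounded by a constant disparity is filled with exactly that constant, which is the sanity check one would expect of an interpolation-style filler.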
2. A cross-scale-resolution light field image super-resolution and depth estimation device, characterized by comprising:
a sampling module, configured to up-sample the light field images at the low-resolution viewing angles by a single-image super-resolution (SISR) method;
an acquisition module, configured to down-sample the high-resolution image at the central viewing angle, up-sample it again by the SISR method, and obtain an error map;
an optimization module, configured to acquire disparity maps between the high-resolution image and the up-sampled images in the light field, perform global optimization on the disparity maps by using the inherent properties of the light field, and simultaneously perform a hole-filling operation, so that regions for which the disparity estimation fails are re-estimated;
a reconstruction module, configured to propagate the error map by using the global disparity maps, so as to restore the high-resolution information of the images and realize super-resolution reconstruction of the light field; and
an estimation module, configured to perform depth estimation of the scene according to the super-resolved light field images to obtain a depth estimation result;
wherein the optimization module is further configured to: obtain disparity maps between the light field center position and any other position by a cross-model estimation method; and obtain the disparity maps through a global optimization process, so as to avoid estimation errors in partial regions caused by occlusion in the light field;
the hole-filling operation further comprises: filling each hole by using the neighborhood information around the hole, according to the formula:

$$f(i,j)=\frac{\sum_{(k,l)}\omega(k,l)\,f(k,l)}{\sum_{(k,l)}\omega(k,l)},$$

wherein f is the neighborhood pixel value, ω is the weight, i is the abscissa of the current hole position to be filled, j is the ordinate of the current hole position to be filled, k is the abscissa of a window neighborhood position, and l is the ordinate of a window neighborhood position;
propagating the error map by using the global disparity map to restore the high-resolution information of the images, so as to realize super-resolution reconstruction of the light field, comprises the following step: propagating the error map to each viewing angle of the light field by using the optimized disparity maps, to obtain the missing high-frequency detail information at each viewing angle, namely:

$$D_S = f_{warping}(D_R),$$

wherein $D_S$ is the high-frequency detail information at that viewing angle, and $D_R$ is the error map;
performing the depth estimation of the scene according to the super-resolved light field images further comprises: estimating the depth of the scene by using the depth information of the reconstructed high-resolution light field images;
inferring the high-frequency detail part lost by the SISR method by using the high-resolution image located at the central viewing angle;
adding the image $I_{SISR}$ obtained by the SISR method to the high-frequency detail information $D_S$ to obtain the final desired output:

$$I = I_{SISR} + D_S;$$

and performing the depth estimation of the scene by using the high-resolution light field images.
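Putting the claimed steps together: the error map $D_R$ is the high-frequency detail that the SISR method loses on the central high-resolution view, and it is warped to each side view and added to that view's SISR result. A toy end-to-end sketch follows, using box-filter downsampling, pixel-repeat upsampling as a stand-in for a learned SISR model, and an integer horizontal shift as a stand-in for $f_{warping}$; all three operators are illustrative assumptions, since the claims do not fix them:

```python
import numpy as np

def box_downsample(img, s):
    """Average each s*s block (the claimed down-sampling of the central view)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def sisr_upsample(img, s):
    """Stand-in for a learned SISR model: plain pixel repetition."""
    return np.repeat(np.repeat(img, s, axis=0), s, axis=1)

def warp(err, disparity_int):
    """Toy f_warping: a uniform integer horizontal shift of the error map."""
    return np.roll(err, disparity_int, axis=1)

def super_resolve_view(low_view, center_hr, disparity_int, s=2):
    up = sisr_upsample(low_view, s)                                   # I_SISR for the side view
    err = center_hr - sisr_upsample(box_downsample(center_hr, s), s)  # error map D_R
    d_s = warp(err, disparity_int)                                    # D_S = f_warping(D_R)
    return up + d_s                                                   # I = I_SISR + D_S
```

With these particular stand-in operators, a side view that is an exact downsample of the central view and has zero disparity is reconstructed exactly, because the added error map cancels the upsampler's loss term by term.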
CN201711362812.0A 2017-12-18 2017-12-18 Cross-scale resolution light field image super-resolution and depth estimation method and device Active CN108053368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711362812.0A CN108053368B (en) 2017-12-18 2017-12-18 Cross-scale resolution light field image super-resolution and depth estimation method and device


Publications (2)

Publication Number Publication Date
CN108053368A CN108053368A (en) 2018-05-18
CN108053368B true CN108053368B (en) 2020-11-03

Family

ID=62133422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711362812.0A Active CN108053368B (en) 2017-12-18 2017-12-18 Cross-scale resolution light field image super-resolution and depth estimation method and device

Country Status (1)

Country Link
CN (1) CN108053368B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111507900A (en) * 2020-04-09 2020-08-07 上海云从汇临人工智能科技有限公司 Image processing method, system, machine readable medium and equipment
CN113808185B (en) * 2021-11-19 2022-03-25 北京的卢深视科技有限公司 Image depth recovery method, electronic device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104662589A (en) * 2012-08-21 2015-05-27 派力肯影像公司 Systems and methods for parallax detection and correction in images captured using array cameras
CN104079914B (en) * 2014-07-02 2016-02-10 山东大学 Based on the multi-view image ultra-resolution method of depth information
CN105335929A (en) * 2015-09-15 2016-02-17 清华大学深圳研究生院 Depth map super-resolution method
CN106204489A (en) * 2016-07-12 2016-12-07 四川大学 Single image super resolution ratio reconstruction method in conjunction with degree of depth study with gradient conversion

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102002165B1 (en) * 2011-09-28 2019-07-25 포토내이션 리미티드 Systems and methods for encoding and decoding light field image files


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on adaptive light field three-dimensional reconstruction algorithm based on array images; Ding Weili et al.; Chinese Journal of Scientific Instrument; 2016-09-15; Vol. 37, No. 9; pp. 2156-2165 *


Similar Documents

Publication Publication Date Title
JP7159057B2 (en) Free-viewpoint video generation method and free-viewpoint video generation system
CN108074218B (en) Image super-resolution method and device based on light field acquisition device
CN109447919B (en) Light field super-resolution reconstruction method combining multi-view angle and semantic texture features
JP5739584B2 (en) 3D image synthesizing apparatus and method for visualizing vehicle periphery
CN107909640B (en) Face relighting method and device based on deep learning
CN105243637B (en) One kind carrying out full-view image joining method based on three-dimensional laser point cloud
EP3035285B1 (en) Method and apparatus for generating an adapted slice image from a focal stack
CN103925912B (en) Interior visual field optical segmentation type large CCD images geometry joining method
US11831931B2 (en) Systems and methods for generating high-resolution video or animated surface meshes from low-resolution images
CN108053368B (en) Cross-scale resolution light field image super-resolution and depth estimation method and device
CN110462678A (en) Image processing apparatus and image processing method
CN105096252A (en) Manufacturing method of band-shaped omni-directional street scenery image map
CN110738731A (en) 3D reconstruction method and system for binocular vision
CN103065318A (en) Curved surface projection method and device of multi-camera panorama system
CN115086550B (en) Meta imaging system
US20240073523A1 (en) Systems and methods for generating depth information from low-resolution images
CN113554744A (en) Rapid scanning three-dimensional imaging method and device for large-volume scattering sample
CN104574338A (en) Remote sensing image super-resolution reconstruction method based on multi-angle linear array CCD sensors
JP2006119843A (en) Image forming method, and apparatus thereof
CN113436130B (en) Intelligent sensing system and device for unstructured light field
CN103398701A (en) Satellite-borne non-colinear TDI (time delay integral) CCD (charge coupled device) image splicing method based on object space projection plane
CN117291808A (en) Light field image super-resolution processing method based on stream prior and polar bias compensation
CN108537804A (en) A kind of interesting target extracting method of parallelly compressed perception imaging system
CN107146281B (en) Lunar surface high-resolution DEM extraction method
JP2013069012A (en) Multi-lens imaging apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant