CN107392852B - Super-resolution reconstruction method, device and equipment for depth image and storage medium



Publication number: CN107392852B
Authority: China (CN)
Prior art keywords: low, resolution, map, image, frequency
Legal status: Active
Application number: CN201710557157.8A
Other languages: Chinese (zh)
Other versions: CN107392852A (en)
Inventors: 王旭, 温炜杰, 江健民
Current Assignee: Shenzhen University
Original Assignee: Shenzhen University
Application filed by Shenzhen University
Priority to CN201710557157.8A
Publication of CN107392852A
Application granted
Publication of CN107392852B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4053: Scaling based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction


Abstract

The invention is applicable to the field of computer technology and provides a super-resolution reconstruction method, device, equipment and storage medium for depth images. The method comprises the following steps: preprocessing an input low-resolution depth map and a corresponding high-resolution color map to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map; performing feature extraction and full convolution operations on the two high-frequency information images to generate high-frequency full convolution feature maps; enlarging the high-frequency full convolution feature map of the low-resolution depth map and performing full convolution feature fusion with the high-frequency full convolution feature map of the high-resolution color map; reconstructing the resulting fused image; and enlarging the low-frequency information image of the low-resolution depth map to the size of the reconstructed high-frequency image and superposing the two to obtain the high-resolution depth image. The super-resolution reconstruction efficiency of the depth image is thereby improved.

Description

Super-resolution reconstruction method, device and equipment for depth image and storage medium
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a super-resolution reconstruction method, a super-resolution reconstruction device, super-resolution reconstruction equipment and a storage medium for a depth image.
Background
A depth image records the distance from objects in a three-dimensional scene to the camera imaging plane. It can be used for virtual viewpoint rendering, enabling functions such as free-viewpoint viewing and providing an immersive experience. However, due to the limitations of current hardware technology, the resolution of depth images captured by depth cameras is low, while in a 3D scene a depth image matching the resolution of the color image is often needed, for example when producing multi-view 3D video or glasses-free 3D television. Therefore, in order to reconstruct the depth information of objects in a three-dimensional scene more accurately and restore a more realistic scene, an efficient and feasible image super-resolution method is needed to perform super-resolution reconstruction of the acquired low-resolution depth image.
Currently, related work on image super-resolution can be broadly divided into four categories. The first is the classical filtering approach, in which an up-sampling filter is designed to filter the low-resolution depth map. Yang et al. propose using a joint bilateral filter that assigns corresponding weights to the smooth region of each depth map patch by comparing the color similarity of the central pixel with its surrounding neighborhood. However, classical filtering methods are too rigid: once a filtering mode is chosen, the corresponding filter structure is fixed, so the method cannot adapt well to unanticipated conditions and scenes. The second approach treats depth image super-resolution as a function optimization problem; for example, Diebel et al. represent the problem with a Markov random field. Such optimization-based methods must explicitly formulate the optimization problem, so the final super-resolution effect depends strongly on the chosen objective function; moreover, this line of work is already mature and difficult to improve significantly. The third approach is dictionary learning, which finds the relationship between low-resolution and high-resolution depth blocks through sparse coding and expresses high-dimensional features with a small amount of data. Yang et al. propose solving for the relation coefficients between high and low resolution to obtain the high-resolution image, but sparse-coding-based super-resolution not only takes a long time to reconstruct and cannot run in real time, it also requires extra regularization and is relatively complex to implement.
The fourth approach is based on convolutional neural networks and mainly uses prior knowledge of the mapping between low-resolution and high-resolution image blocks to perform super-resolution reconstruction. Unlike the third, dictionary-learning approach, it does not need to learn a dictionary explicitly. For example, Osendorfer et al. realize image super-resolution through convolutional sparse coding, a generalization of the convolutional dictionary; Dong et al. propose a simple end-to-end super-resolution convolutional neural network (SRCNN) to reconstruct images with good results; Christian et al. use a generative adversarial network (SRGAN) to account for the subjective perception of human vision in the restored image; and Mehdi et al. apply a deep residual network (ResNet) to image super-resolution. However, these convolutional-neural-network-based methods essentially target color images, and networks for depth image super-resolution are rare; only the color-image-guided super-resolution convolutional neural network (MSG-Net) of Hui et al. addresses depth maps. In summary, existing depth image super-resolution reconstruction methods cannot improve the reconstruction effect and the reconstruction rate at the same time, so their overall reconstruction efficiency is not high.
Disclosure of Invention
The invention aims to provide a super-resolution reconstruction method, device, equipment and storage medium for depth images, so as to solve the problem that the prior art can hardly improve the reconstruction rate of a depth image while ensuring its reconstruction effect, which leaves the super-resolution reconstruction efficiency of depth images low.
In one aspect, the present invention provides a super-resolution reconstruction method for a depth image, including the following steps:
preprocessing an input low-resolution depth map and a corresponding high-resolution color map to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map;
respectively extracting features of the high-frequency information image of the low-resolution depth map and the high-frequency information image of the high-resolution color map, and respectively performing preset full convolution operation on the extracted feature maps to generate a high-frequency full convolution feature map of the low-resolution depth map and a high-frequency full convolution feature map of the high-resolution color map;
amplifying the high-frequency full convolution feature map of the low-resolution depth map to obtain an amplified high-frequency full convolution feature map of the low-resolution depth map;
performing full convolution feature fusion on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and reconstructing a fused image obtained by feature fusion to obtain a reconstructed high-frequency image;
amplifying the low-frequency information image of the low-resolution depth map according to the size of the reconstructed high-frequency image, superposing the reconstructed high-frequency image and the amplified low-frequency information image of the low-resolution depth map to obtain a high-resolution depth image corresponding to the low-resolution depth map, and outputting the high-resolution depth image.
In another aspect, the present invention provides a super-resolution reconstruction apparatus for a depth image, the apparatus including:
the image preprocessing unit is used for preprocessing an input low-resolution depth map and a corresponding high-resolution color map so as to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map;
the characteristic map generating unit is used for respectively extracting characteristics of the high-frequency information image of the low-resolution depth map and the high-frequency information image of the high-resolution color map, respectively performing preset full convolution operation on the extracted characteristic maps, and generating a high-frequency full convolution characteristic map of the low-resolution depth map and a high-frequency full convolution characteristic map of the high-resolution color map;
the feature map amplifying unit is used for amplifying the high-frequency full convolution feature map of the low-resolution depth map to obtain an amplified high-frequency full convolution feature map of the low-resolution depth map;
the high-frequency reconstruction unit is used for performing full convolution feature fusion on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and reconstructing a fused image obtained by feature fusion to obtain a reconstructed high-frequency image; and
and the result output unit is used for amplifying the low-frequency information image of the low-resolution depth map according to the size of the reconstructed high-frequency image, superposing the reconstructed high-frequency image and the amplified low-frequency information image of the low-resolution depth map to obtain a high-resolution depth image corresponding to the low-resolution depth map, and outputting the high-resolution depth image.
In another aspect, the present invention also provides a computing device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the super-resolution reconstruction method for depth images described above when executing the computer program.
In another aspect, the present invention also provides a computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for super-resolution reconstruction of depth images as described above.
In the present invention, when a super-resolution reconstruction request for a depth image is received, the input low-resolution depth map and the corresponding high-resolution color map are preprocessed to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map. Feature extraction and full convolution operations are performed on the two high-frequency information images respectively to generate high-frequency full convolution feature maps. The high-frequency full convolution feature map of the low-resolution depth map is enlarged and then fused, by full convolution feature fusion, with the high-frequency full convolution feature map of the high-resolution color map, and the resulting fused image is reconstructed. The low-frequency information image of the low-resolution depth map is enlarged to the size of the reconstructed high-frequency image and superposed with it to obtain and output the high-resolution depth image. The super-resolution reconstruction efficiency of the depth image is thereby improved.
Drawings
Fig. 1 is a flowchart illustrating an implementation of a super-resolution reconstruction method for a depth image according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a super-resolution reconstruction apparatus for depth images according to a second embodiment of the present invention;
FIG. 3 is a schematic diagram of a preferred structure of a super-resolution depth image reconstruction apparatus according to a second embodiment of the present invention; and
fig. 4 is a schematic structural diagram of a computing device according to a third embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of specific implementations of the present invention is provided in conjunction with specific embodiments:
the first embodiment is as follows:
fig. 1 shows an implementation flow of a super-resolution reconstruction method for depth images according to an embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown, which are detailed as follows:
in step S101, the input low-resolution depth map and the corresponding high-resolution color map are preprocessed to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map.
The embodiment of the invention is applicable to a 3D image processing system, in which a super-resolution depth image is obtained by super-resolution reconstruction of a low-resolution depth image, so that 3D images can be processed efficiently. In the embodiment of the invention, the input low-resolution depth map is the map to be reconstructed, and the input high-resolution color map is a high-resolution color map of the same scene, used to guide the super-resolution of the low-resolution depth map during reconstruction. When a super-resolution reconstruction request for a depth image is received, a preset filtering algorithm (for example, two-dimensional Gaussian filtering) is first used to filter the two input images, the low-resolution depth map and the corresponding high-resolution color map, to extract their low-frequency information images. The low-frequency information images are then removed from the two original images respectively, leaving the high-frequency information images of the low-resolution depth map and the corresponding high-resolution color map.
Preferably, when preprocessing the acquired low-resolution depth map and the corresponding high-resolution color map, the preset filtering formula h(F) = F − W_G ∗ F is first used to filter the low-resolution depth map and the corresponding high-resolution color map, obtaining the low-frequency information image of each; the low-frequency information image is then removed from each image to obtain the high-frequency information images of the low-resolution depth map and the corresponding high-resolution color map, thereby improving the efficiency of obtaining the low-frequency and high-frequency information images of the input images. Here F is the low-resolution depth map or the corresponding high-resolution color map, W_G is a two-dimensional Gaussian convolution kernel, and ∗ denotes the convolution operation.
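The frequency decomposition h(F) = F − W_G ∗ F described above can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the kernel size and σ of the two-dimensional Gaussian are assumptions chosen for the example.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian convolution kernel W_G."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def conv2d_same(img, kernel):
    """Naive same-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel[::-1, ::-1])
    return out

def split_frequencies(F, kernel):
    """Low-frequency part W_G * F and high-frequency part h(F) = F - W_G * F."""
    low = conv2d_same(F, kernel)
    high = F - low
    return low, high

depth = np.random.rand(16, 16)          # stand-in for a low-resolution depth map
low, high = split_frequencies(depth, gaussian_kernel())
```

Note that the decomposition is exact by construction: adding the low-frequency and high-frequency images recovers the input, which is what makes the final superposition step of the method possible.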
In step S102, feature extraction is performed on the high-frequency information image of the low-resolution depth map and the high-frequency information image of the high-resolution color map, and preset full convolution operations are performed on the extracted feature maps, so as to generate a high-frequency full convolution feature map of the low-resolution depth map and a high-frequency full convolution feature map of the high-resolution color map.
In the embodiment of the invention, after the two high-frequency information images corresponding to the two input images are obtained, only the two high-frequency information images are subjected to feature extraction to obtain the corresponding feature maps, and then the obtained feature maps are respectively subjected to the preset full convolution operation to generate the high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the corresponding high-resolution color map, so that the complexity of calculation is reduced, the calculation rate is increased, and the reconstruction efficiency of the depth image is improved.
Preferably, after acquiring the two high-frequency information images corresponding to the two input images, the formula

F_i^D = σ_i(W_i^D ∗ F_{i−1}^D + B_i^D)

is first used to perform convolution operations on the high-frequency information image of the low-resolution depth map to obtain the feature map of the high-frequency information image of the low-resolution depth map, where F_{i−1}^D represents the output of the previous convolution layer, the input of the first convolution layer is the high-frequency information image h(D_L) obtained by preprocessing the low-resolution depth map D_L, W_i^D and B_i^D respectively represent the convolution kernel and the bias of the current convolution layer, and σ_i(·) is the activation function. The formula

F_i^Y = σ_i(W_i^Y ∗ F_{i−1}^Y + B_i^Y)

is then used to perform convolution operations on the high-frequency information image of the high-resolution color map to obtain the feature map of the high-frequency information image of the high-resolution color map, where i denotes the current convolution layer, F_{i−1}^Y represents the output of the previous convolution layer, the input of the first convolution layer is the high-frequency information image h(Y_H) obtained by preprocessing the high-resolution color map Y_H, and W_i^Y and B_i^Y respectively represent the convolution kernel and the bias of the current convolution layer. Finally, the formula F_i = σ(W_i ∗ F_{i−1} + B_i) is used to perform full convolution operations on the feature map of the high-frequency information image of the low-resolution depth map and on that of the high-resolution color map respectively, where W_i and B_i respectively represent the convolution kernel and the bias of the current convolution layer i, generating the high-frequency full convolution feature map of the low-resolution depth map and that of the corresponding high-resolution color map, thereby improving the computation rate.
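A single layer of the form F_i = σ(W_i ∗ F_{i−1} + B_i) can be sketched as below. This is a single-channel NumPy illustration only; ReLU is assumed for the activation σ, and the kernel initialization is illustrative rather than the patent's trained weights.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv_layer(F_prev, W, B, activation=relu):
    """One convolution layer: F_i = sigma(W_i * F_{i-1} + B_i), same-size padding."""
    kh, kw = W.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(F_prev, ((ph, ph), (pw, pw)))
    out = np.zeros_like(F_prev, dtype=float)
    for r in range(F_prev.shape[0]):
        for c in range(F_prev.shape[1]):
            # Flip the kernel for true convolution, then add the bias.
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * W[::-1, ::-1]) + B
    return activation(out)

h_D = np.random.rand(8, 8)              # high-frequency image h(D_L) of the depth map
W1 = np.random.randn(3, 3) * 0.1        # convolution kernel of the first layer (illustrative)
F1 = conv_layer(h_D, W1, B=0.0)         # feature map output by the first layer
```

Stacking several such calls, one per layer i, yields the high-frequency full convolution feature map of each branch.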
In step S103, the high-frequency full-convolution feature map of the low-resolution depth map is enlarged to obtain an enlarged high-frequency full-convolution feature map of the low-resolution depth map.
In the embodiment of the invention, the high-frequency full convolution characteristic diagram of the low-resolution depth map can be amplified by adopting a multi-scale up-sampling mode, so that the amplification effect of the high-frequency full convolution characteristic diagram is improved.
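The enlargement step can be illustrated with a simple integer-factor up-sampler. Nearest-neighbor repetition is used here purely as a stand-in, since the patent's multi-scale up-sampling is not fully specified at this point in the text.

```python
import numpy as np

def upsample_nearest(feature_map, scale=2):
    """Enlarge a 2-D feature map by repeating each value scale x scale times."""
    return np.repeat(np.repeat(feature_map, scale, axis=0), scale, axis=1)

F_D = np.arange(16, dtype=float).reshape(4, 4)   # a 4x4 high-frequency feature map
F_D_up = upsample_nearest(F_D, scale=2)          # enlarged to 8x8
```

After this step the depth-branch feature map has the same spatial size as the color-branch feature map, which is what allows the fusion in the next step.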
In step S104, full convolution feature fusion is performed on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and a fused image obtained by feature fusion is reconstructed to obtain a reconstructed high-frequency image.
In the embodiment of the invention, the amplified high-frequency full convolution characteristic map of the low-resolution depth image and the high-frequency full convolution characteristic map of the corresponding high-resolution color image are superposed and subjected to full convolution operation, so that full convolution characteristic fusion is realized, excessive guidance and distortion of the depth image are avoided, the characteristic fusion of the two input images is realized, and the super-resolution reconstruction effect of the depth image is further improved. And after the full convolution features are fused, reconstructing the high-frequency image obtained by fusing the full convolution features, thereby obtaining the reconstructed high-frequency image.
Preferably, when performing full convolution feature fusion on the enlarged high-frequency full convolution feature map of the low-resolution depth map and the corresponding high-frequency full convolution feature map of the high-resolution color map, the formula F_k = σ(W_k ∗ (F_Y, F_D) + B_k) is first used to fuse the enlarged high-frequency full convolution feature map F_D of the low-resolution depth map with the high-frequency full convolution feature map F_Y of the high-resolution color map, obtaining a fused image; feature fusion of the two input images is thus performed under the guidance of the color map, improving the image fusion effect. Preferably, after the fused image is obtained, the image blocks of the overlapping parts produced during fusion are averaged, and the resulting image is taken as the reconstructed high-frequency image, improving its quality. Here W_k is the convolution kernel of convolutional layer k and B_k is its bias.
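The fusion formula F_k = σ(W_k ∗ (F_Y, F_D) + B_k), a convolution applied over the stacked depth and color feature maps, can be sketched as follows. The kernel values and the ReLU activation are assumptions for illustration; a real network would use many trained multi-channel kernels.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def fuse(F_Y, F_D, W_k, B_k):
    """Stack the two feature maps channel-wise and convolve with a
    two-channel kernel: F_k = sigma(W_k * (F_Y, F_D) + B_k)."""
    stacked = np.stack([F_Y, F_D])                 # shape (2, H, W)
    _, kh, kw = W_k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(stacked, ((0, 0), (ph, ph), (pw, pw)))
    H, W = F_Y.shape
    out = np.zeros((H, W))
    for r in range(H):
        for c in range(W):
            # Sum over both channels and the flipped spatial kernel, plus bias.
            out[r, c] = np.sum(padded[:, r:r + kh, c:c + kw] * W_k[:, ::-1, ::-1]) + B_k
    return relu(out)

F_Y = np.random.rand(8, 8)                # color-guidance feature map
F_D = np.random.rand(8, 8)                # enlarged depth feature map
W_k = np.random.randn(2, 3, 3) * 0.1      # one two-channel 3x3 kernel (illustrative)
F_k = fuse(F_Y, F_D, W_k, B_k=0.0)
```

Because the kernel spans both channels, the color features weight the depth features at every position, which is the sense in which the color map guides the fusion.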
In step S105, the low-frequency information image of the low-resolution depth map is enlarged according to the size of the reconstructed high-frequency image, the reconstructed high-frequency image and the enlarged low-frequency information image of the low-resolution depth map are superimposed to obtain a high-resolution depth image corresponding to the low-resolution depth map, and the high-resolution depth image is output.
In the embodiment of the invention, after the reconstructed high-frequency image is obtained, the low-frequency information image of the low-resolution depth map is first enlarged to the same size as the reconstructed high-frequency image, and the enlarged low-frequency information image and the reconstructed high-frequency image are then added to finally obtain the high-resolution depth image guided by the high-resolution color map. Specifically, bicubic interpolation can be used to enlarge the low-frequency information image of the low-resolution depth map, improving the enlargement quality.
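The final superposition of step S105 can be sketched as below. Nearest-neighbor enlargement is used here only as a stand-in for the bicubic interpolation the patent describes; the point of the sketch is the resize-then-add structure.

```python
import numpy as np

def enlarge(img, scale):
    """Nearest-neighbor enlargement; the patent uses bicubic interpolation."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def superpose(low_freq_lr, high_freq_rec):
    """Enlarge the low-frequency image of the low-resolution depth map to the
    size of the reconstructed high-frequency image, then add the two."""
    scale = high_freq_rec.shape[0] // low_freq_lr.shape[0]
    low_up = enlarge(low_freq_lr, scale)
    return low_up + high_freq_rec

low_freq = np.full((4, 4), 0.5)          # smooth low-frequency content of the depth map
high_rec = np.random.rand(8, 8) * 0.1    # reconstructed high-frequency detail (2x size)
depth_hr = superpose(low_freq, high_rec) # output high-resolution depth image
```

This mirrors the preprocessing decomposition: the low-frequency content bypasses the network entirely and only the high-frequency detail is reconstructed, which is one reason the method is fast.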
In the embodiment of the invention, the full convolution operation is introduced in the super-resolution reconstruction process of the depth image, so that the complexity of calculation is reduced, the calculation rate is improved, and the super-resolution reconstruction efficiency of the depth image is improved.
Example two:
fig. 2 shows a structure of a super-resolution depth image reconstruction apparatus according to a second embodiment of the present invention, and for convenience of description, only the parts related to the second embodiment of the present invention are shown, including:
the image preprocessing unit 21 is configured to preprocess the input low-resolution depth map and the corresponding high-resolution color map to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map.
The feature map generating unit 22 is configured to perform feature extraction on the high-frequency information image of the low-resolution depth map and the high-frequency information image of the high-resolution color map, perform preset full convolution operation on the extracted feature maps, and generate a high-frequency full convolution feature map of the low-resolution depth map and a high-frequency full convolution feature map of the high-resolution color map.
The feature map amplifying unit 23 is configured to amplify the high-frequency full convolution feature map of the low-resolution depth map to obtain an amplified high-frequency full convolution feature map of the low-resolution depth map.
And the high-frequency reconstruction unit 24 is configured to perform full convolution feature fusion on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and reconstruct a fused image obtained by feature fusion to obtain a reconstructed high-frequency image.
And the result output unit 25 is configured to amplify the low-frequency information image of the low-resolution depth map according to the size of the reconstructed high-frequency image, superimpose the reconstructed high-frequency image and the amplified low-frequency information image of the low-resolution depth map to obtain a high-resolution depth image corresponding to the low-resolution depth map, and output the high-resolution depth image.
Preferably, as shown in fig. 3, the image preprocessing unit 21 includes:
a low-frequency image obtaining unit 311, configured to use the preset filtering formula h(F) = F − W_G ∗ F to filter the low-resolution depth map and the high-resolution color map, obtaining their respective low-frequency information images, where F represents the low-resolution depth map or the corresponding high-resolution color map, W_G is a two-dimensional Gaussian convolution kernel, and ∗ denotes the convolution operation; and
a high-frequency image obtaining unit 312, configured to remove the low-frequency information images of the low-resolution depth map and the high-resolution color map, and obtain high-frequency information images of the low-resolution depth map and the high-resolution color map, respectively;
preferably, the feature map generation unit 22 includes:
a first convolution unit 321, configured to use the formula

F_i^D = σ_i(W_i^D ∗ F_{i−1}^D + B_i^D)

to perform convolution operations on the high-frequency information image of the low-resolution depth map to obtain the feature map of the high-frequency information image of the low-resolution depth map, where F_{i−1}^D represents the output of the previous convolution layer, W_i^D and B_i^D respectively represent the convolution kernel and the bias of the current convolution layer, and σ_i(·) is the activation function;
a second convolution unit 322, configured to use the formula

F_i^Y = σ_i(W_i^Y ∗ F_{i−1}^Y + B_i^Y)

to perform convolution operations on the high-frequency information image of the high-resolution color map to obtain the feature map of the high-frequency information image of the high-resolution color map, where i denotes the current convolution layer, F_{i−1}^Y represents the output of the previous convolution layer, and W_i^Y and B_i^Y respectively represent the convolution kernel and the bias of the current convolution layer; and
a full convolution unit 323, configured to use the formula F_i = σ(W_i ∗ F_{i−1} + B_i) to perform full convolution operations on the feature map of the high-frequency information image of the low-resolution depth map and the feature map of the high-frequency information image of the high-resolution color map respectively, generating the high-frequency full convolution feature map of the low-resolution depth map and that of the corresponding high-resolution color map, where W_i and B_i respectively represent the convolution kernel and the bias of the current convolution layer i, and F_{i−1} is the output of the previous convolution layer;
preferably, the high frequency reconstruction unit 24 comprises:
a feature fusion unit 341, configured to use the formula F_k = σ(W_k ∗ (F_Y, F_D) + B_k) to perform full convolution feature fusion on the enlarged high-frequency full convolution feature map F_D of the low-resolution depth map and the high-frequency full convolution feature map F_Y of the high-resolution color map, obtaining a fused image, where W_k is the convolution kernel of convolutional layer k and B_k is its bias; and
the output subunit 342 is configured to perform average processing on image blocks of an overlapping portion of the fused image in the fusion process, and set an obtained image as a reconstructed high-frequency image.
In the embodiment of the present invention, the image preprocessing unit 21 first preprocesses the input low-resolution depth map and the corresponding high-resolution color map to obtain the high-frequency and low-frequency information images of the low-resolution depth map and the high-frequency information image of the high-resolution color map. The feature map generating unit 22 then performs feature extraction and full convolution on the two high-frequency information images to generate the high-frequency full convolution feature maps, and the feature map amplifying unit 23 amplifies the high-frequency full convolution feature map of the low-resolution depth map. The high-frequency reconstruction unit 24 performs full convolution feature fusion on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and reconstructs the resulting fused image. Finally, the result output unit 25 amplifies the low-frequency information image of the low-resolution depth map to the size of the reconstructed high-frequency image and superimposes the two to obtain the high-resolution depth image, thereby improving the efficiency of super-resolution reconstruction of depth images.
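The preprocessing step above (splitting each input into low-frequency and high-frequency information images) can be sketched as a Gaussian low-pass/high-pass split. This is a minimal numpy illustration in which the 5×5 kernel size, σ = 1.0, and zero padding are assumptions, not values taken from the patent:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Normalized two-dimensional Gaussian convolution kernel W_G."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def split_frequencies(img, kernel):
    """Low-pass by Gaussian convolution ('same' size, zero padding);
    the high-frequency image is the residual img - low."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    low = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            low[y, x] = np.sum(padded[y:y + kh, x:x + kw] * kernel)
    return low, img - low

depth = np.arange(36, dtype=float).reshape(6, 6)
low, high = split_frequencies(depth, gaussian_kernel())
# The two bands reconstruct the input exactly, mirroring the final
# superposition of the reconstructed high-frequency image with the
# amplified low-frequency image.
print(np.allclose(low + high, depth))   # True
```

The same split would be applied to both the low-resolution depth map and the high-resolution color map before feature extraction.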
In the embodiment of the present invention, each unit of the super-resolution depth image reconstruction apparatus may be implemented by a corresponding hardware or software unit, and each unit may be an independent software or hardware unit, or may be integrated into a software or hardware unit, which is not limited herein. For the specific implementation of each unit, reference may be made to the description of the first embodiment, which is not repeated herein.
Example three:
fig. 4 shows a structure of a computing device provided in a third embodiment of the present invention, and for convenience of explanation, only a part related to the third embodiment of the present invention is shown.
Computing device 4 of an embodiment of the present invention includes a processor 40, a memory 41, and a computer program 42 stored in memory 41 and executable on processor 40. The processor 40, when executing the computer program 42, implements the steps in the above-described super-resolution reconstruction method embodiment of depth images, such as the steps S101 to S105 shown in fig. 1. Alternatively, the processor 40, when executing the computer program 42, implements the functions of the units in the above-described apparatus embodiments, such as the functions of the units 21 to 25 shown in fig. 2 and the units 21 to 25 shown in fig. 3.
In the embodiment of the present invention, when the processor 40 executes the computer program 42 to implement the steps of the above-described embodiments of the super-resolution reconstruction method for depth images, the processor preprocesses the input low-resolution depth map and the corresponding high-resolution color map to obtain the high-frequency and low-frequency information images of the low-resolution depth map and the high-frequency information image of the high-resolution color map. It then performs feature extraction and a full convolution operation on the two high-frequency information images to generate the high-frequency full convolution feature maps, amplifies the high-frequency full convolution feature map of the low-resolution depth map, fuses it with the high-frequency full convolution feature map of the high-resolution color map, and reconstructs the resulting fused image. Finally, it amplifies the low-frequency information image of the low-resolution depth map to the size of the reconstructed high-frequency image and superimposes the two to obtain the high-resolution depth image, thereby improving the efficiency of super-resolution reconstruction of depth images. For the steps implemented by the processor 40 in the computing device 4 when executing the computer program 42, reference may be made to the description of the method in the first embodiment, which is not repeated here.
Example four:
in an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program, which when executed by a processor implements the steps in the above-described super-resolution reconstruction method embodiment of depth images, for example, steps S101 to S105 shown in fig. 1. Alternatively, the computer program may be executed by a processor to implement the functions of the units in the above-described apparatus embodiments, such as the functions of the units 21 to 25 shown in fig. 2 and the units 21 to 25 shown in fig. 3.
In an embodiment of the invention, an input low-resolution depth map and a corresponding high-resolution color map are preprocessed to obtain the high-frequency and low-frequency information images of the low-resolution depth map and the high-frequency information image of the high-resolution color map. Feature extraction and a full convolution operation are performed on the two high-frequency information images to generate the high-frequency full convolution feature maps. The high-frequency full convolution feature map of the low-resolution depth map is amplified and then fused with the high-frequency full convolution feature map of the high-resolution color map, and the resulting fused image is reconstructed. The low-frequency information image of the low-resolution depth map is amplified to the size of the reconstructed high-frequency image and superimposed with it to obtain the high-resolution depth image, thereby improving the efficiency of super-resolution reconstruction of depth images. For the super-resolution reconstruction method for depth images implemented by the computer program when executed by the processor, reference may be made to the description of the steps in the foregoing method embodiments, which is not repeated here.
The computer-readable storage medium of the embodiments of the present invention may include any entity or device capable of carrying computer program code, or a recording medium such as a ROM/RAM, a magnetic disk, an optical disk, or a flash memory.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A super-resolution reconstruction method of a depth image, the method comprising the steps of:
preprocessing an input low-resolution depth map and a corresponding high-resolution color map to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map;
respectively extracting features of the high-frequency information image of the low-resolution depth map and the high-frequency information image of the high-resolution color map, and respectively performing preset full convolution operation on the extracted feature maps to generate a high-frequency full convolution feature map of the low-resolution depth map and a high-frequency full convolution feature map of the high-resolution color map;
amplifying the high-frequency full convolution feature map of the low-resolution depth map to obtain an amplified high-frequency full convolution feature map of the low-resolution depth map;
performing full convolution feature fusion on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and reconstructing a fused image obtained by feature fusion to obtain a reconstructed high-frequency image;
amplifying the low-frequency information image of the low-resolution depth map according to the size of the reconstructed high-frequency image, superposing the reconstructed high-frequency image and the amplified low-frequency information image of the low-resolution depth map to obtain a high-resolution depth image corresponding to the low-resolution depth map, and outputting the high-resolution depth image.
2. The method of claim 1 wherein the step of preprocessing the input low resolution depth map and corresponding high resolution color map comprises:
using a preset filter formula h(F) = F - W_G * F to perform filtering preprocessing on the low-resolution depth map and the high-resolution color map to respectively obtain low-frequency information images of the low-resolution depth map and the high-resolution color map, wherein F represents the low-resolution depth map or the high-resolution color map, W_G represents a two-dimensional Gaussian convolution kernel, and * represents a convolution operation;
and removing the low-frequency information image of the low-resolution depth map and the low-frequency information image of the high-resolution color map to respectively obtain the high-frequency information images of the low-resolution depth map and the high-resolution color map.
3. The method as claimed in claim 1, wherein the step of performing feature extraction on the high frequency information image of the low resolution depth map and the high frequency information image of the high resolution color map, respectively, and performing a preset full convolution operation on the extracted feature maps, respectively, comprises:
using the formula F_i^D = σ_i(W_i^D * F_{i-1}^D + B_i^D) to perform a convolution operation on the high-frequency information image of the low-resolution depth map to obtain a feature map of the high-frequency information image of the low-resolution depth map, wherein F_{i-1}^D represents the output of the previous convolutional layer, W_i^D and B_i^D respectively represent the convolution kernel and the bias of the convolutional layer, and σ_i(·) is the activation function;
using the formula F_i^Y = σ_i(W_i^Y * F_{i-1}^Y + B_i^Y) to perform a convolution operation on the high-frequency information image of the high-resolution color map to obtain a feature map of the high-frequency information image of the high-resolution color map, wherein i represents the current convolutional layer, F_{i-1}^Y represents the output of the previous convolutional layer, and W_i^Y and B_i^Y respectively represent the convolution kernel and the bias of the current convolutional layer;
using the formula F_i = σ(W_i * F_{i-1} + B_i) to respectively perform the full convolution operation on the feature map of the high-frequency information image of the low-resolution depth map and the feature map of the high-frequency information image of the high-resolution color map, to generate a high-frequency full convolution feature map of the low-resolution depth map and a high-frequency full convolution feature map of the high-resolution color map, wherein W_i and B_i respectively represent the convolution kernel and the bias of the current convolutional layer i, and F_{i-1} is the output of the previous convolutional layer.
4. The method of claim 1, wherein the step of performing full convolution feature fusion on the amplified high frequency full convolution feature map of the low resolution depth map and the high frequency full convolution feature map of the high resolution color map and reconstructing a fused image obtained by the feature fusion to obtain a reconstructed high frequency image comprises:
using the formula F_k = σ(W_k * (F^Y, F^D) + B_k) to perform full convolution feature fusion on the amplified high-frequency full convolution feature map F^D of the low-resolution depth map and the high-frequency full convolution feature map F^Y of the high-resolution color map to obtain the fused image, wherein W_k is the convolution kernel of convolutional layer k, and B_k is the bias of the convolutional layer k;
and averaging the image blocks in the overlapping regions of the fused image produced during fusion, and taking the resulting image as the reconstructed high-frequency image.
5. A super-resolution reconstruction apparatus for a depth image, the apparatus comprising:
the image preprocessing unit is used for preprocessing an input low-resolution depth map and a corresponding high-resolution color map so as to obtain a high-frequency information image and a low-frequency information image of the low-resolution depth map and a high-frequency information image of the high-resolution color map;
the characteristic map generating unit is used for respectively extracting characteristics of the high-frequency information image of the low-resolution depth map and the high-frequency information image of the high-resolution color map, respectively performing preset full convolution operation on the extracted characteristic maps, and generating a high-frequency full convolution characteristic map of the low-resolution depth map and a high-frequency full convolution characteristic map of the high-resolution color map;
the feature map amplifying unit is used for amplifying the high-frequency full convolution feature map of the low-resolution depth map to obtain an amplified high-frequency full convolution feature map of the low-resolution depth map;
the high-frequency reconstruction unit is used for performing full convolution feature fusion on the amplified high-frequency full convolution feature map of the low-resolution depth map and the high-frequency full convolution feature map of the high-resolution color map, and reconstructing a fused image obtained by feature fusion to obtain a reconstructed high-frequency image; and
and the result output unit is used for amplifying the low-frequency information image of the low-resolution depth map according to the size of the reconstructed high-frequency image, superposing the reconstructed high-frequency image and the amplified low-frequency information image of the low-resolution depth map to obtain a high-resolution depth image corresponding to the low-resolution depth map, and outputting the high-resolution depth image.
6. The apparatus of claim 5, wherein the image pre-processing unit comprises:
a low-frequency image acquisition unit for using a preset filter formula h(F) = F - W_G * F to perform filtering preprocessing on the low-resolution depth map and the high-resolution color map to respectively obtain low-frequency information images of the low-resolution depth map and the high-resolution color map, wherein F represents the low-resolution depth map or the high-resolution color map, W_G represents a two-dimensional Gaussian convolution kernel, and * represents a convolution operation; and
a high-frequency image acquisition unit for removing the low-frequency information images of the low-resolution depth map and the high-resolution color map to respectively obtain the high-frequency information images of the low-resolution depth map and the high-resolution color map.
7. The apparatus of claim 5, wherein the feature map generation unit comprises:
a first convolution unit for using the formula F_i^D = σ_i(W_i^D * F_{i-1}^D + B_i^D) to perform a convolution operation on the high-frequency information image of the low-resolution depth map to obtain a feature map of the high-frequency information image of the low-resolution depth map, wherein F_{i-1}^D represents the output of the previous convolutional layer, W_i^D and B_i^D respectively represent the convolution kernel and the bias of the convolutional layer, and σ_i(·) is the activation function;
a second convolution unit for using the formula F_i^Y = σ_i(W_i^Y * F_{i-1}^Y + B_i^Y) to perform a convolution operation on the high-frequency information image of the high-resolution color map to obtain a feature map of the high-frequency information image of the high-resolution color map, wherein i represents the current convolutional layer, F_{i-1}^Y represents the output of the previous convolutional layer, and W_i^Y and B_i^Y respectively represent the convolution kernel and the bias of the current convolutional layer; and
a full convolution unit for using the formula F_i = σ(W_i * F_{i-1} + B_i) to respectively perform the full convolution operation on the feature map of the high-frequency information image of the low-resolution depth map and the feature map of the high-frequency information image of the high-resolution color map, to generate a high-frequency full convolution feature map of the low-resolution depth map and a high-frequency full convolution feature map of the high-resolution color map, wherein W_i and B_i respectively represent the convolution kernel and the bias of the current convolutional layer i, and F_{i-1} is the output of the previous convolutional layer.
8. The apparatus of claim 5, wherein the high frequency reconstruction unit comprises:
a feature fusion unit for using the formula F_k = σ(W_k * (F^Y, F^D) + B_k) to perform full convolution feature fusion on the amplified high-frequency full convolution feature map F^D of the low-resolution depth map and the high-frequency full convolution feature map F^Y of the high-resolution color map to obtain the fused image, wherein W_k is the convolution kernel of convolutional layer k, and B_k is the bias of the convolutional layer k; and
an output subunit for averaging the image blocks in the overlapping regions of the fused image produced during fusion, and taking the resulting image as the reconstructed high-frequency image.
9. A computing device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 4 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
CN201710557157.8A 2017-07-10 2017-07-10 Super-resolution reconstruction method, device and equipment for depth image and storage medium Active CN107392852B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710557157.8A CN107392852B (en) 2017-07-10 2017-07-10 Super-resolution reconstruction method, device and equipment for depth image and storage medium


Publications (2)

Publication Number Publication Date
CN107392852A CN107392852A (en) 2017-11-24
CN107392852B true CN107392852B (en) 2020-07-07

Family

ID=60333913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710557157.8A Active CN107392852B (en) 2017-07-10 2017-07-10 Super-resolution reconstruction method, device and equipment for depth image and storage medium

Country Status (1)

Country Link
CN (1) CN107392852B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428212A (en) * 2018-01-30 2018-08-21 中山大学 A kind of image magnification method based on double laplacian pyramid convolutional neural networks
CN108765343B (en) * 2018-05-29 2021-07-20 Oppo(重庆)智能科技有限公司 Image processing method, device, terminal and computer readable storage medium
CN108846817B (en) * 2018-06-22 2021-01-12 Oppo(重庆)智能科技有限公司 Image processing method and device and mobile terminal
CN109118430B (en) * 2018-08-24 2023-05-09 深圳市商汤科技有限公司 Super-resolution image reconstruction method and device, electronic equipment and storage medium
CN109389556B (en) * 2018-09-21 2023-03-21 五邑大学 Multi-scale cavity convolutional neural network super-resolution reconstruction method and device
CN109509149A (en) * 2018-10-15 2019-03-22 天津大学 A kind of super resolution ratio reconstruction method based on binary channels convolutional network Fusion Features
CN109767386A (en) * 2018-12-22 2019-05-17 昆明理工大学 A kind of rapid image super resolution ratio reconstruction method based on deep learning
CN110189264B (en) * 2019-05-05 2021-04-23 Tcl华星光电技术有限公司 Image processing method
CN110223230A (en) * 2019-05-30 2019-09-10 华南理工大学 A kind of more front end depth image super-resolution systems and its data processing method
CN110852947B (en) * 2019-10-30 2021-07-20 浙江大学 Infrared image super-resolution method based on edge sharpening
CN112214773B (en) * 2020-09-22 2022-07-05 支付宝(杭州)信息技术有限公司 Image processing method and device based on privacy protection and electronic equipment
CN113012046B (en) * 2021-03-22 2022-12-16 华南理工大学 Image super-resolution reconstruction method based on dynamic packet convolution

Citations (2)

Publication number Priority date Publication date Assignee Title
CN104079914A (en) * 2014-07-02 2014-10-01 山东大学 Multi-view-point image super-resolution method based on deep information
CN105335929A (en) * 2015-09-15 2016-02-17 清华大学深圳研究生院 Depth map super-resolution method

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN103810685B (en) * 2014-02-25 2016-05-25 清华大学深圳研究生院 A kind of super-resolution processing method of depth map

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN104079914A (en) * 2014-07-02 2014-10-01 山东大学 Multi-view-point image super-resolution method based on deep information
CN105335929A (en) * 2015-09-15 2016-02-17 清华大学深圳研究生院 Depth map super-resolution method

Non-Patent Citations (2)

Title
Wei Hu et al., "Depth Map Super-Resolution Using Synthesized View Matching for Depth-Image-Based Rendering," IEEE Xplore, 16 Aug. 2012, pp. 605-610 *
Pan Zongxu et al., "Codebook mapping super-resolution algorithm based on dictionary learning and structural self-similarity," Journal of Computer-Aided Design & Computer Graphics, Vol. 27, No. 6, June 2015, pp. 1032-1038 *

Also Published As

Publication number Publication date
CN107392852A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN107392852B (en) Super-resolution reconstruction method, device and equipment for depth image and storage medium
Yang et al. Deep edge guided recurrent residual learning for image super-resolution
CN107403415B (en) Compressed depth map quality enhancement method and device based on full convolution neural network
CN108520503B (en) Face defect image restoration method based on self-encoder and generation countermeasure network
CN108550118B (en) Motion blur image blur processing method, device, equipment and storage medium
Xie et al. Joint super resolution and denoising from a single depth image
CN112750082B (en) Human face super-resolution method and system based on fusion attention mechanism
CN110544205B (en) Image super-resolution reconstruction method based on visible light and infrared cross input
CN111028150B (en) Rapid space-time residual attention video super-resolution reconstruction method
Sun et al. Lightweight image super-resolution via weighted multi-scale residual network
CN111667410B (en) Image resolution improving method and device and electronic equipment
CN107301662B (en) Compression recovery method, device and equipment for depth image and storage medium
Shen et al. Convolutional neural pyramid for image processing
KR101028628B1 (en) Image texture filtering method, storage medium of storing program for executing the same and apparatus performing the same
CN112163994B (en) Multi-scale medical image fusion method based on convolutional neural network
KR101829287B1 (en) Nonsubsampled Contourlet Transform Based Infrared Image Super-Resolution
CN111932480A (en) Deblurred video recovery method and device, terminal equipment and storage medium
Zuo et al. Research on image super-resolution algorithm based on mixed deep convolutional networks
Zhao et al. Single depth image super-resolution with multiple residual dictionary learning and refinement
Oh et al. Fpanet: Frequency-based video demoireing using frame-level post alignment
CN110599403A (en) Image super-resolution reconstruction method with good high-frequency visual effect
CN116703719A (en) Face super-resolution reconstruction device and method based on face 3D priori information
Zheng et al. Joint residual pyramid for joint image super-resolution
Liu et al. Compressed face hallucination
CN115311152A (en) Image processing method, image processing apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant