CN113256539B - Depth image de-aliasing method, device, equipment and computer storage medium - Google Patents


Publication number
CN113256539B
Authority
CN
China
Prior art keywords
aliasing
depth image
depth
pixels
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110743522.0A
Other languages
Chinese (zh)
Other versions
CN113256539A (en)
Inventor
莫苏苏
吴昊
王抒昂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Silicon Integrated Co Ltd
Original Assignee
Wuhan Silicon Integrated Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Wuhan Silicon Integrated Co Ltd filed Critical Wuhan Silicon Integrated Co Ltd
Priority to CN202110743522.0A priority Critical patent/CN113256539B/en
Publication of CN113256539A publication Critical patent/CN113256539A/en
Application granted granted Critical
Publication of CN113256539B publication Critical patent/CN113256539B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery


Abstract

The embodiment of the application discloses a depth image de-aliasing method, apparatus, device, and computer storage medium, the method comprising: acquiring a depth image and an amplitude image corresponding to the depth image; determining whether aliasing exists in the depth image based on the variation relation between the depth image and the amplitude image; determining aliased pixels of the depth image in the case that aliasing exists in the depth image; and performing de-aliasing processing on the aliased pixels of the depth image to obtain a de-aliased target depth image. In this way, after the aliased pixels of the depth image are determined according to the variation relation between the depth image and the amplitude image, the determined aliased pixels are de-aliased, so that the TOF depth ranging range can be enlarged without being limited by the maximum measured distance or the number of modulation frequencies of the light signal; in addition, the measurement accuracy and the visual effect of the depth image are improved.

Description

Depth image de-aliasing method, device, equipment and computer storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method, an apparatus, a device, and a computer storage medium for depth image de-aliasing.
Background
Time of Flight (TOF), as the name implies, refers to measuring the time taken for an object, particle, or wave to travel a certain distance in a fixed medium. In general, a sensor emits a modulated light signal, which is reflected after encountering the photographed subject; the sensor then calculates the time difference or phase difference between the emission and reception of the light signal and converts it into the distance of the subject, thereby generating depth information.
In the related art, a technique for determining a distance between a depth camera and a photographed object may be based on a round trip flight time of light, for example, modulating a light signal and determining the distance based on a phase difference between transmitted and received light signals. However, due to the periodicity of the modulated light signal, aliasing can occur for certain time-of-flight techniques, resulting in a limited TOF depth ranging range.
Disclosure of Invention
The application provides a depth image de-aliasing method, a depth image de-aliasing device, depth image de-aliasing equipment and a computer storage medium, which can enlarge the range of TOF depth ranging and enable the range not to be limited by the maximum measuring distance and the number of modulation frequencies of optical signals; in addition, the accuracy of the measurement can be improved.
In order to achieve the purpose, the technical scheme of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a method for de-aliasing a depth image, where the method includes:
acquiring a depth image and an amplitude image corresponding to the depth image;
determining whether aliasing exists in the depth image based on a variation relation between the depth image and the amplitude image;
determining aliased pixels of the depth image in the presence of aliasing in the depth image;
and performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image.
In a second aspect, an embodiment of the present application provides an apparatus for de-aliasing a depth image, where the apparatus includes an obtaining unit, a determining unit, and a de-aliasing unit; wherein,
the acquiring unit is configured to acquire a depth image and an amplitude image corresponding to the depth image;
a determination unit configured to determine whether aliasing exists in the depth image based on a variation relationship between the depth image and the amplitude image; and determining aliased pixels of the depth image in the presence of aliasing in the depth image;
and the de-aliasing unit is configured to perform de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a memory and a processor; wherein,
a memory for storing a computer program capable of running on the processor;
a processor for performing the method according to the first aspect when running the computer program.
In a fourth aspect, the present application provides a computer storage medium storing a computer program, which when executed implements the method according to the first aspect.
According to the depth image de-aliasing method, apparatus, device, and computer storage medium described above, a depth image and an amplitude image corresponding to the depth image are acquired; whether aliasing exists in the depth image is determined based on the variation relation between the depth image and the amplitude image; aliased pixels of the depth image are determined in the case that aliasing exists; and de-aliasing processing is performed on the aliased pixels of the depth image to obtain a de-aliased target depth image. In this way, after the aliased pixels of the depth image are determined according to the variation relation between the depth image and the amplitude image, the determined aliased pixels are de-aliased, so that the TOF depth ranging range can be enlarged without being limited by the maximum measured distance or the number of modulation frequencies of the light signal; in addition, the measurement accuracy and the visual effect of the depth image are improved.
Drawings
Fig. 1 is a schematic flowchart of a depth image de-aliasing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another depth image de-aliasing method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of amplitude-versus-distance curves before and after depth de-aliasing correction according to an embodiment of the present application;
fig. 4A is a schematic structural diagram of an original depth image according to an embodiment of the present disclosure;
FIG. 4B is a schematic structural diagram of a de-aliased depth image according to an embodiment of the present application;
fig. 5 is a schematic structural diagram illustrating a component structure of an apparatus for de-aliasing a depth image according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
So that the manner in which the features and elements of the present embodiments can be understood in detail, a more particular description of the embodiments, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments", which describe a subset of all possible embodiments; it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments and may be combined with each other without conflict. It should also be noted that the terms "first/second/third" in the embodiments of the present application are only used to distinguish similar objects and do not represent a specific ordering; it should be understood that "first/second/third" may be interchanged in a specific order or sequence where permissible, so that the embodiments of the present application described herein can be implemented in an order other than that shown or described herein.
The depth camera is a technology that has emerged in recent years. Compared with a traditional camera, a depth camera adds a depth measurement function and can sense the surrounding environment and its changes more conveniently and accurately. The information obtained by the depth camera may be referred to as "depth information". Depth information may be input into a computing system to enable a wide variety of applications, such as applications in military, entertainment, sports, and medical scenarios.
To determine depth information, the depth camera may project a light signal onto a subject in the camera field of view. The light signal is reflected off the subject and back to the camera where it is processed to determine depth information. A technique for determining the distance between a camera and a subject being photographed may be based on the round-trip Time of Flight (TOF) of an optical signal, e.g. modulating the optical signal and determining the distance based on the phase difference between the transmitted and received optical signals.
TOF is one of the main implementation paths of three-dimensional imaging and can be divided into direct time of flight (dTOF) and indirect time of flight (iTOF) according to the measurement mode. dTOF obtains depth information by directly measuring the time difference between the transmission and reception of light pulses; iTOF indirectly measures the flight time of light from the phase difference between the transmitted and received light signals, and thereby obtains depth information.
In the embodiment of the present application, the iTOF adopts a phase-type ranging method, with the ranging formula:

$$d = \frac{c \cdot \Delta\varphi}{4\pi f} \qquad (1)$$

where $d$ denotes the measured distance in meters (m); $\Delta\varphi$ denotes the phase difference, in radians (rad), between the transmitted and received signal waveforms; $c$ denotes the speed of light in meters per second (m/s); and $f$ denotes the frequency of the modulated signal in hertz (Hz).
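For illustration, equation (1) can be evaluated directly. This is a minimal sketch; the 20 MHz modulation frequency below is an assumed example, not a value from the patent:

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def itof_distance(phase_diff_rad: float, mod_freq_hz: float) -> float:
    """Phase-type iTOF ranging, equation (1): d = c * dphi / (4 * pi * f)."""
    return C * phase_diff_rad / (4 * math.pi * mod_freq_hz)

# At an assumed 20 MHz modulation, a phase difference of pi corresponds to
# half the aliasing period, i.e. c / (4 * f).
d = itof_distance(math.pi, 20e6)
```

A phase difference of $2\pi$ plugged into the same formula yields the maximum measured distance of equation (2) below.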
From equation (1), the maximum measured distance of the iTOF (i.e., the aliasing period) is:

$$d_{max} = \frac{c}{2f} \qquad (2)$$

where $d_{max}$ denotes the maximum measured distance in meters (m). For a single-frequency iTOF, $f$ is its modulation frequency; for a dual-/multi-frequency iTOF, $f$ is the greatest common divisor of the two or more modulation frequencies. It should be noted that a common divisor of several integers is a divisor shared by all of them, and the greatest common divisor is the largest of these. Illustratively, 4 is the greatest common divisor of 12 and 16, and 3 is the greatest common divisor of 12, 15, and 18.
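A minimal sketch of equation (2), with assumed example frequencies (20 MHz single-frequency, and 80/100 MHz dual-frequency whose GCD is 20 MHz, so both share the same aliasing period):

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def max_distance(*mod_freqs_hz: float) -> float:
    """Aliasing period per equation (2): d_max = c / (2 * f), where f is the
    modulation frequency (single-frequency case) or the greatest common
    divisor of the modulation frequencies (dual-/multi-frequency case)."""
    f = math.gcd(*(int(freq) for freq in mod_freqs_hz))  # needs Python 3.9+
    return C / (2 * f)

single = max_distance(20e6)        # single-frequency iTOF
dual = max_distance(80e6, 100e6)   # GCD = 20 MHz, same aliasing period
```

This illustrates why adding frequencies extends the range only up to $c/(2\,\mathrm{gcd})$, the theoretical limit the next paragraph refers to.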
However, in the related art, depth aliasing occurs for some time-of-flight techniques, resulting in a limited TOF depth ranging range. At present, although the depth measurement range of TOF can be extended by increasing the number of modulation frequencies, for example from single frequency to dual frequency, or from dual frequency to triple frequency or even more, the range always remains subject to the limit of the theoretically calculated maximum measured distance $d_{max}$.
Based on this, the embodiment of the present application provides a depth image de-aliasing method, whose basic idea is: acquiring a depth image and an amplitude image corresponding to the depth image; determining whether aliasing exists in the depth image based on the variation relation between the depth image and the amplitude image; determining aliased pixels of the depth image in the case that aliasing exists; and performing de-aliasing processing on the aliased pixels of the depth image to obtain a de-aliased target depth image. In this way, after the aliased pixels of the depth image are determined according to the variation relation between the depth image and the amplitude image, the determined aliased pixels are de-aliased, so that the TOF depth ranging range can be enlarged without being limited by the maximum measured distance or the number of modulation frequencies of the light signal; in addition, the measurement accuracy and the visual effect of the depth image are improved.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
In an embodiment of the present application, refer to fig. 1, which shows a flowchart of a method for de-aliasing a depth image according to an embodiment of the present application. As shown in fig. 1, the method may include:
s101: and acquiring the depth image and the amplitude image corresponding to the depth image.
It should be noted that the method of the embodiment of the present application may be applied to an apparatus for de-aliasing a depth image, or an electronic device integrated with the apparatus. The electronic device may be a device having a photographing function, such as a smart phone, a tablet computer, a notebook computer, a palm computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a Digital camera, a video camera, and the like, without any limitation.
It should also be noted that the depth image may be generated by a depth camera. The depth image may include depth information for different locations, each location having one or more neighborhood locations. In addition, the depth image may include several pixels, each having one or more neighborhood pixels in the depth image. That is, locations in the depth image correspond to pixels: each pixel represents a depth value, i.e., the distance of the photographed object from the camera plane. Thus, in the depth image, the depth information of these neighborhood pixels constitutes the neighborhood depth information of the pixel.
In some embodiments, the depth image may be determined from four phase maps, together with its corresponding amplitude image. While the depth image is determined, the corresponding intensity image may also be determined. Here, the amplitude image may include several pixels, each representing an amplitude value; the intensity image may also include several pixels, each representing an intensity value.
In practical application, light signals returned from photographed subjects at two different distances may have the same measured phase difference relative to the emitted light signal, where the measured phase difference takes values in the range $[0, 2\pi)$. When the distance is large, the actual phase difference exceeds $2\pi$; in that case the actual phase difference equals the measured phase difference plus an integer multiple of $2\pi$.
In the present embodiment, aliasing may occur for techniques that use phase differences for indirect time-of-flight measurement. Specifically, aliasing occurs because light signals returned from subjects at two different distances may have the same measured phase difference with respect to the emitted light signal. In other words, the actual phase difference of the light signal reflected from some photographed subjects may be greater than $2\pi$, but a phase difference greater than $2\pi$ cannot be distinguished from one less than $2\pi$.
Are distinguished. In one specific example, when the depth range of the actual detection scene exceeds the range
Figure 239241DEST_PATH_IMAGE013
When the depth values of the iTOF measurements are aliased. I.e. for measuring depth values
Figure 324877DEST_PATH_IMAGE014
The corresponding actual depth values may be:
Figure 707448DEST_PATH_IMAGE015
Figure 107205DEST_PATH_IMAGE016
(3)
the technical solution of the embodiments of the present application is to provide a method for removing deep aliasing, that is, a method for removing deep aliasing by using a linear interpolation method
Figure 937627DEST_PATH_IMAGE014
Estimating
Figure 928586DEST_PATH_IMAGE017
Therefore, according to the technical scheme of the embodiment of the application, the aliasing can be eliminated and the TOF depth ranging range can be further expanded on the basis of the theoretical maximum measurement distance limit by combining the scene understanding and the image processing method.
Thus, based on scene understanding, the embodiment of the present application observes that, when the scene content changes continuously, aliasing may exist in the depth image while the amplitude image (or the intensity image) is not affected by the aliasing; the depth image is de-aliased according to this characteristic, so the TOF depth measurement range can be extended without being limited by the maximum measured distance. It should be noted that the amplitude image described in the embodiment of the present application may refer to an amplitude image or may be replaced with an intensity image; unless otherwise specified, the following description takes the amplitude image as an example.
S102: and determining whether aliasing exists in the depth image or not based on the variation relation between the depth image and the amplitude image.
It should be noted that, when aliasing exists in the depth image, the depth information of the depth image jumps while the amplitude image is not affected by the aliasing (i.e., the amplitude image still varies continuously); therefore, whether aliasing exists in the depth image can be determined from the variation relation between the depth information and the amplitude information. Specifically, in some embodiments, the method may further comprise: determining that aliasing exists in the depth image when the variation relation indicates that the amplitude information in the amplitude image varies continuously while the depth information in the depth image exhibits a jump.
In this way, after the depth image and the amplitude image are acquired, the aliasing judgment can be performed on the depth image according to the variation relation of the depth image and the amplitude image. In a possible implementation manner, for S102, the determining whether aliasing exists in the depth image based on the variation relationship between the depth image and the amplitude image may include:
establishing a change curve of depth and amplitude based on the depth image and the amplitude image;
determining a distance aliasing threshold according to the change curve of the depth and the amplitude;
and if the depth image has the pixel point to be detected which is smaller than the distance aliasing threshold, determining that the depth image has aliasing.
That is to say, for aliasing judgment on the depth image, a depth-amplitude variation curve (which reflects the variation relation between depth and amplitude) can be computed by exploiting the depth continuity of the scene in the depth image, the continuity of the amplitude image with depth variation, and the fact that there is generally no object at zero depth; a distance aliasing threshold at which aliasing occurs can then be queried from this curve.
It should be noted that the distance aliasing threshold may be set to different values for different scenes. Illustratively, the distance aliasing threshold may be set to 200 millimeters (mm), but the value may be specifically set according to the actual situation, and is not limited in any way here.
It should be further noted that, if there are pixels to be detected in the depth image that are smaller than the distance aliasing threshold, these pixels to be detected may be preliminarily determined as candidate aliasing pixels in the depth image, that is, it means that the depth image has aliasing.
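The threshold-based screening above can be sketched as a simple boolean mask. This is a minimal illustration using the 200 mm example threshold mentioned above, expressed in meters; the function name is assumed:

```python
import numpy as np

def threshold_candidates(depth_m, dist_threshold_m=0.2):
    """Preliminarily flag pixels whose measured depth is below the distance
    aliasing threshold (0.2 m = 200 mm, the example value from the text):
    a subject just beyond the aliasing period wraps to a small measured depth."""
    return depth_m < dist_threshold_m
```

The resulting mask marks candidate aliased pixels only; they still need the confirmation steps described in S103.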
In another possible implementation manner, for S102, the determining whether aliasing exists in the depth image based on the variation relationship between the depth image and the amplitude image may include:
determining neighborhood depth information of a pixel point to be detected in the depth image and neighborhood amplitude information of the pixel point to be detected in the amplitude image;
and if the neighborhood amplitude information of the pixel point to be detected continuously changes and the neighborhood depth information of the pixel point to be detected changes in a jumping way, determining that the depth image has aliasing.
That is to say, the depth image is subjected to aliasing judgment, neighborhood information (including neighborhood depth information and neighborhood amplitude information) of a pixel point to be detected in the depth image and the amplitude image can be utilized, and if it is found out that the neighborhood amplitude information of a certain pixel point is continuous but the neighborhood depth information jumps, the pixel point can be determined as a candidate aliasing pixel in the depth image, that is, the depth image is aliased.
In this embodiment of the present application, the neighborhood depth information may refer to the depth information of neighborhood pixels adjacent to the pixel point to be detected in the depth image; the neighborhood amplitude information may refer to the amplitude information of neighborhood pixels adjacent to the pixel point to be detected in the amplitude image. Since the amplitude information is not affected by aliasing, the neighborhood amplitude information of an aliased pixel is still continuous, whereas its neighborhood depth information jumps across the aliasing period; therefore, if a pixel point is found whose neighborhood amplitude information is continuous but whose neighborhood depth information jumps, it can be determined that aliasing exists in the depth image. Here, in general, one aliasing period is the maximum measured distance, i.e., $d_{max}$ as described in the foregoing embodiments.
It should be noted that, for the judgment of whether aliasing exists in the depth image, a distance aliasing threshold value for aliasing occurrence may be searched according to a change curve of depth and amplitude, and then the judgment is performed according to the distance aliasing threshold value; or searching for a pixel point with continuous neighborhood amplitude information but hopping neighborhood depth information and the like according to neighborhood information of the pixel point to be detected, or even other judgment modes, which is not specifically limited in the embodiment of the present application.
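The neighborhood-based criterion can be sketched as follows. This is an illustrative implementation, not the patent's: the 3x3 window follows the examples given later in the text, while `amp_tol` and the half-period depth-jump threshold are assumed heuristics:

```python
import numpy as np

def candidate_aliased_mask(depth, amplitude, d_max, amp_tol=0.1, depth_jump=None):
    """Flag pixels whose 3x3 neighborhood amplitude varies continuously
    (relative spread <= amp_tol) while the neighborhood depth jumps by a
    large fraction of the aliasing period d_max (assumed: half a period)."""
    if depth_jump is None:
        depth_jump = 0.5 * d_max
    mask = np.zeros(depth.shape, dtype=bool)
    h, w = depth.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            amp_nb = amplitude[y-1:y+2, x-1:x+2]
            dep_nb = depth[y-1:y+2, x-1:x+2]
            amp_continuous = (amp_nb.max() - amp_nb.min()) <= amp_tol * max(amp_nb.mean(), 1e-9)
            depth_jumps = (dep_nb.max() - dep_nb.min()) >= depth_jump
            mask[y, x] = amp_continuous and depth_jumps
    return mask
```

Pixels flagged here are candidate aliased pixels; the confirmation logic of S103 decides whether they are truly aliased.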
S103: in the case of aliasing in the depth image, aliased pixels of the depth image are determined.
It should be noted that, if aliasing exists in the depth image, aliasing pixels need to be confirmed, so as to avoid phenomena such as missing judgment and erroneous judgment. In some embodiments, the method may further comprise: in the case of aliasing in the depth image, candidate aliased pixels in the depth image are determined.
Accordingly, for S103, the determining aliased pixels of the depth image may include: determining aliasing pixels of the depth image according to the candidate aliasing pixels in the depth image.
In the embodiment of the present application, the candidate aliased pixels may include: the pixel points to be detected in the depth image that are smaller than the distance aliasing threshold, and/or the pixel points to be detected in the depth image whose neighborhood amplitude information is continuous but whose neighborhood depth information jumps.
In a possible implementation manner, taking a single frame as an example, the determining aliasing pixels of the depth image according to the candidate aliasing pixels may include:
determining first parameters of the candidate aliased pixels; wherein the first parameters comprise: the number of pixels and the pixel connected area;
determining aliased pixels of the depth image from the candidate aliased pixels according to the first parameters.
That is, this embodiment can confirm aliased pixels in a single-frame image by the number of aliased pixels and their connected area. In general, if the first parameters of a candidate aliased pixel are greater than preset parameter thresholds, for example the number of pixels is greater than a preset number and the pixel connected area is greater than a preset area, the candidate aliased pixel can be confirmed as an aliased pixel of the depth image.
In the embodiment of the present application, the setting of the preset parameter threshold (e.g., the preset number, the preset area, etc.) may adopt different values for different scenes, and here, mainly provides a judgment basis, and the preset parameter threshold is only used as a reference. Illustratively, in a specific example, the policy adopted by the embodiment of the present application is: if 8 pixel points except for the pixel point to be detected in the 3 x 3 neighborhood of the pixel point to be detected are non-aliasing points, and the depth information of the pixel point to be detected is greatly different from other pixel points, the pixel point to be detected can be confirmed to be an isolated aliasing point. In another specific example, the policy adopted by the embodiment of the present application is: if more than 4 pixel points in the 3 × 3 neighborhood of the pixel point to be detected are aliasing points, and the amplitude information and the depth information of the pixel point to be detected and the aliasing points are similar, it can be determined that the pixel point to be detected is also an aliasing point.
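The two 3x3 confirmation strategies above can be sketched in one helper. The thresholds `depth_tol` and `amp_tol` are assumed tuning parameters, not values from the patent:

```python
import numpy as np

def classify_3x3(depth, amp, candidate, y, x, depth_tol, amp_tol):
    """Apply the two 3x3 strategies described in the text to pixel (y, x).
    Returns 'isolated' (strategy 1), 'aliased' (strategy 2), or 'undecided'."""
    nb = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
          if not (dy == 0 and dx == 0)]
    aliased_nb = [(j, i) for j, i in nb if candidate[j, i]]
    # Strategy 1: all 8 neighbors are non-aliasing points and the pixel's
    # depth differs greatly from them -> isolated aliasing point.
    if not aliased_nb and all(abs(depth[y, x] - depth[j, i]) > depth_tol for j, i in nb):
        return "isolated"
    # Strategy 2: more than 4 neighbors are aliasing points with similar
    # depth and amplitude information -> this pixel is also an aliasing point.
    close = [(j, i) for j, i in aliased_nb
             if abs(depth[y, x] - depth[j, i]) < depth_tol
             and abs(amp[y, x] - amp[j, i]) < amp_tol]
    if len(close) > 4:
        return "aliased"
    return "undecided"
```

In practice such a helper would be swept over every interior pixel of the candidate mask produced in S102.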
In another possible implementation, taking a multi-frame as an example, the method may further include:
acquiring a plurality of continuous frames containing the depth image, and determining candidate aliasing pixels of each frame in the plurality of frames;
and performing delay judgment according to the candidate aliasing pixels of each frame in the plurality of frames, and determining the aliasing pixels of the depth image.
Further, in some embodiments, the determining the aliasing pixels of the depth image according to the delay judgment on the candidate aliasing pixels of each of the several frames may include:
determining the number of frames of the pixel points to be detected belonging to candidate aliasing pixels in the plurality of frames based on the pixel points to be detected in the depth image;
and if the number of the frames is greater than the preset number, determining the pixel points to be detected as aliasing pixels of the depth image.
That is, this embodiment can perform delay judgment by using the aliasing judgment results of several consecutive frames, so as to avoid missed or erroneous judgments in individual frames. Specifically, after determining the number of frames in which the pixel point to be detected belongs to the candidate aliased pixels across the consecutive frames, if that number is greater than a preset number, the pixel point to be detected may be determined as an aliased pixel of the depth image. For example, given 20 consecutive frames of images, if a pixel is judged as a candidate aliased pixel in 18 of them, it may be confirmed as an aliased pixel; the remaining 2 frames may simply be missed or erroneous judgments.
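The delay judgment can be sketched by counting candidate hits per pixel across consecutive frames; `min_frames` is an assumed tuning parameter (here 15 of 20, in the spirit of the 18-of-20 example above):

```python
import numpy as np

def confirm_aliased(candidate_masks, min_frames):
    """Delay judgment across consecutive frames: a pixel is confirmed as an
    aliased pixel when it appears as a candidate in more than min_frames
    of the given per-frame boolean masks."""
    counts = np.sum(np.stack(candidate_masks, axis=0), axis=0)
    return counts > min_frames
```

The per-frame masks would come from the single-frame candidate detection of S102/S103.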
It should be noted that, for the aliasing confirmation of the depth image, the confirmation may be performed by using the number of pixels of the aliased pixels in the single frame image and the connected area, or the delay judgment may be performed by using the aliasing judgment result of several consecutive frames, or even other confirmation manners, and the embodiment of the present application is not particularly limited.
S104: and performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image.
It should be noted that if an aliased pixel in the depth image is identified, then the depth image may be subjected to a de-aliasing process. In a possible implementation manner, for S104, performing a de-aliasing process on aliasing pixels of the depth image to obtain a de-aliased target depth image may include:
acquiring depth information of the aliasing pixels according to the depth image;
and carrying out periodic extension on the depth information of the aliasing pixels to obtain the target depth image.
Further, in some embodiments, periodically extending the depth information of the aliased pixels may include: adding N times a preset measurement distance to the depth information of the aliased pixels, where N is a preset constant greater than zero.
Here, the preset measurement distance may generally refer to the maximum measurement distance d_max described in the foregoing embodiments. That is, the depth information of the aliased pixels queried in the depth image is periodically extended (e.g., measured depth value + N × d_max).
In the embodiment of the application, N may be determined comprehensively by combining the variation curve of depth and amplitude with the depth information and amplitude information of unaliased pixels in the neighborhood. In a specific example, N has a value of 1, but this is not particularly limited.
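The periodic extension can be sketched in a few lines; the 2250 mm value for d_max is taken from the figure example later in this document and is otherwise an assumption:

```python
import numpy as np

D_MAX = 2250.0  # assumed maximum measurement distance in mm (illustrative)

def unwrap_aliased_depth(depth, aliased_mask, n=1, d_max=D_MAX):
    # Periodic extension: add n * d_max to every confirmed aliased pixel;
    # n is a preset constant greater than zero (n = 1 in the simplest case).
    out = depth.astype(np.float64).copy()
    out[aliased_mask] += n * d_max
    return out

depth = np.array([[100.0, 1500.0]])   # one aliased pixel, one clean pixel
mask = np.array([[True, False]])
corrected = unwrap_aliased_depth(depth, mask)
```

Only the masked pixel is shifted by one aliasing period; unaliased pixels keep their measured values.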
In another possible implementation manner, for S104, performing a de-aliasing process on aliasing pixels of the depth image to obtain a de-aliased target depth image may include:
acquiring neighborhood depth information and neighborhood amplitude information of the aliasing pixels;
and performing at least one time of iterative de-aliasing processing on the aliasing pixel according to the neighborhood depth information and the neighborhood amplitude information to obtain the target depth image.
That is to say, several iterations of de-aliasing processing may be performed on the depth image using neighborhood pixels, specifically neighborhood information (including neighborhood depth information and neighborhood amplitude information), so that aliased pixels that would otherwise be missed due to noise are not overlooked.
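A minimal sketch of such an iterative pass follows; the jump and amplitude thresholds, the use of a 3 × 3 median, and the fixed iteration count are all illustrative assumptions:

```python
import numpy as np

def iterative_dealias(depth, amplitude, d_max,
                      n_iters=3, depth_jump=1000.0, amp_tol=0.2):
    # Repeatedly unwrap pixels whose amplitude agrees with the local
    # neighbourhood but whose depth lies roughly one aliasing period
    # (d_max) below it; stops early once a pass makes no change.
    out = depth.astype(np.float64).copy()
    h, w = out.shape
    for _ in range(n_iters):
        changed = False
        for y in range(h):
            for x in range(w):
                ys = slice(max(y - 1, 0), min(y + 2, h))
                xs = slice(max(x - 1, 0), min(x + 2, w))
                med_d = np.median(out[ys, xs])
                med_a = np.median(amplitude[ys, xs])
                amp_ok = abs(amplitude[y, x] - med_a) < amp_tol * max(med_a, 1e-9)
                if amp_ok and med_d - out[y, x] > depth_jump:
                    out[y, x] += d_max
                    changed = True
        if not changed:
            break
    return out

depth = np.full((3, 3), 2400.0)
depth[1, 1] = 150.0                 # aliased pixel missed by earlier passes
amplitude = np.full((3, 3), 0.5)    # amplitude is continuous across the patch
result = iterative_dealias(depth, amplitude, d_max=2250.0)
```

The central pixel, whose amplitude matches its neighbourhood but whose depth jumps a full period below it, is unwrapped to agree with its neighbours.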
In another possible implementation manner, for S104, performing a de-aliasing process on the aliased pixels of the depth image to obtain a de-aliased target depth image may include:
inputting the depth image into a de-aliasing network model;
outputting the target depth image through the antialiasing network model.
Further, in some embodiments, the method may further comprise:
obtaining at least one set of training samples; the training samples comprise depth images with aliasing, depth images without aliasing and amplitude images, the depth images with aliasing and the amplitude images in the training samples are used as input of a neural network model, and the depth images without aliasing in the training samples are used as expected output of the neural network model;
and training the neural network model through the at least one group of training samples, and determining the trained neural network model as the antialiasing network model.
That is to say, the embodiment of the present application may also adopt a de-aliasing method based on deep learning. Specifically, a depth image with aliasing and an amplitude image are used as input, an aliasing-free depth image (which may be collected aliasing-free data or artificially de-aliased data) is used as the expected output, a de-aliasing network model is trained, and the depth image is then de-aliased using this model.
In the embodiment of the application, deep learning is a branch of machine learning, and machine learning is a necessary path toward artificial intelligence. The concept of deep learning originates from research on artificial neural networks; a multi-layer perceptron containing multiple hidden layers is a deep learning structure. Deep learning forms more abstract high-level representations of attribute categories or features by combining low-level features, so as to discover distributed feature representations of data. The embodiment of the present application takes a Convolutional Neural Network (CNN) as an example, which is a feedforward neural network (Feedforward Neural Network) containing convolution calculations and having a deep structure, and is one of the representative algorithms of Deep Learning. The neural network model herein may be a convolutional neural network structure.
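As a rough sketch of how such training samples might be assembled (the 2-channel layout and the [0, 1] normalisation are assumptions of ours; the application does not prescribe a particular preprocessing):

```python
import numpy as np

def make_training_sample(aliased_depth, amplitude, clean_depth, d_max):
    # Stack the aliased depth and the amplitude as a 2-channel network
    # input, and use the aliasing-free depth as the expected output.
    x = np.stack([aliased_depth / (2.0 * d_max),
                  amplitude / max(amplitude.max(), 1e-9)], axis=0)
    y = clean_depth / (2.0 * d_max)
    return x.astype(np.float32), y.astype(np.float32)

aliased = np.random.rand(4, 4) * 2250.0
amp = np.random.rand(4, 4)
clean = aliased + 2250.0            # pretend every pixel was aliased once
x, y = make_training_sample(aliased, amp, clean, d_max=2250.0)
```

A CNN trained on such (x, y) pairs would then act as the de-aliasing network model described above.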
In addition, the amplitude image described in the embodiment of the present application may be replaced with an intensity image. In some embodiments, the method for de-aliasing a depth image according to an embodiment of the present application may further include:
acquiring a depth image and an intensity image corresponding to the depth image;
determining whether aliasing exists in the depth image or not based on a variation relation between the depth image and the intensity image;
determining aliasing pixels of the depth image under the condition that the depth image has aliasing;
and performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image.
Further, in some embodiments, the method may further comprise: when the variation relation indicates that the intensity information in the intensity image continuously varies and the depth information in the depth image has a jump variation, it is determined that aliasing exists in the depth image.
That is to say, the embodiment of the application can realize the de-aliasing of the depth image according to the variation relation between the depth image and the amplitude image, and also can realize the de-aliasing of the depth image according to the variation relation between the depth image and the intensity image. That is, the embodiments of the present application can utilize the characteristic that the depth image has aliasing and the amplitude image (and/or the intensity image) is not affected by the aliasing (the scene content still keeps continuously changing). Thus, the depth measurement range of the single frequency/multi-frequency TOF using the phase ranging can be expanded by the methods of scene understanding and image processing.
The embodiment of the application provides a depth image de-aliasing method, which comprises the steps of obtaining a depth image and an amplitude image corresponding to the depth image; determining whether aliasing exists in the depth image or not based on a variation relation between the depth image and the amplitude image; determining aliasing pixels of the depth image under the condition that the depth image has aliasing; and performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image. In this way, after the aliasing pixels of the depth image are determined according to the variation relation between the depth image and the amplitude image, the determined aliasing pixels are subjected to aliasing removal processing, so that the TOF depth ranging range can be enlarged and is not limited by the maximum measuring distance and the number of modulation frequencies of optical signals; in addition, the accuracy of measurement can be improved, and the visual effect of the depth image is improved.
In another embodiment of the present application, refer to fig. 2, which shows a flowchart of another depth image de-aliasing method provided by the embodiment of the present application. As shown in fig. 2, the method may include:
S201: judge whether aliasing exists in the depth image.
It should be noted that, in the embodiment of the present application, the depth image may be subjected to de-aliasing by a method of scene understanding and image processing, so that the TOF depth ranging range can be increased without being limited by the maximum measurement distance. Here, the depth image can be de-aliased by taking advantage of the property that the depth image has aliasing while the amplitude image (and/or intensity image) is not affected by the aliasing (the scene content remains continuously changing).
For S201, the step of aliasing determination of the depth image may include, but is not limited to:
S201-1: calculate the variation curve of depth versus amplitude (or intensity) using the characteristics of the depth image and the amplitude image, and search for the distance aliasing threshold at which aliasing occurs.
S201-2: using the neighborhood information of the pixels to be detected in the depth image, search for pixels whose neighborhood amplitude (or intensity) is continuous but whose depth jumps across an aliasing period.
It should be noted that, for S201-1, the characteristics of the depth image and the amplitude image specifically refer to the continuity of the scene depth in the depth image, the continuity with which the amplitude image (or intensity image) varies with depth, and the fact that there is usually no object at zero depth. On this basis, if there are pixels to be detected in the depth image whose values are smaller than the distance aliasing threshold, these pixels may be preliminarily determined as candidate aliasing pixels, which means that the depth image has aliasing.
It should be further noted that, for S201-2, if the neighborhood amplitude information (or neighborhood intensity information) of a certain pixel point is continuous but its neighborhood depth information jumps, it may be determined that the depth image has aliasing. In general, one aliasing period is the maximum measurement distance, i.e., d_max.
S202: confirmation of aliased pixels in the depth image.
For S202, aliasing confirmation of the depth image, this step may include, but is not limited to:
S202-1: confirmation using the number of pixels and the connected area of aliased pixels in a single frame.
S202-2: and performing delay judgment by using aliasing judgment results of a plurality of continuous frames.
It should be noted that, for S202-1, if the number of pixels of the candidate aliasing pixel is greater than the preset number, the pixel connected area of the candidate aliasing pixel is greater than the preset area, and the like, the aliasing pixel of the depth image can be determined. For S202-2, the delay determination may be performed by using the aliasing determination results of several consecutive frames, so as to avoid missing determination, erroneous determination, and the like of some frames. Specifically, after determining the number of frames in which the pixel point to be detected belongs to the candidate aliasing pixels in the consecutive frames, if the number of frames is greater than a preset number, the pixel point to be detected may be determined as the aliasing pixel of the depth image.
It should also be noted that different values may be adopted for different scenes when setting the thresholds on the number of pixels and the connected area; the basis for the determination is mainly provided here, and the thresholds given serve only as a reference. The strategies that may currently be adopted are: (1) if the 8 points in the 3 × 3 neighborhood of a certain pixel, excluding the pixel itself, are all non-aliasing points, and the depth value of the pixel differs greatly from theirs, the pixel is considered to be an isolated aliasing point; (2) if more than 4 points in the 3 × 3 neighborhood of a certain pixel are aliasing points, and the amplitude and depth of this pixel are close to those of the aliasing points, the pixel is also determined to be an aliasing point.
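The two 3 × 3 neighborhood rules can be sketched as follows; the depth and amplitude tolerances are illustrative, since the application states that thresholds are scene-dependent:

```python
import numpy as np

def classify_by_neighbourhood(alias_mask, depth, amplitude,
                              depth_tol=500.0, amp_tol=0.1):
    # Rule (1): a pixel whose 8 neighbours are all non-aliasing but whose
    # depth differs strongly from theirs is an isolated aliasing point.
    # Rule (2): a pixel with more than 4 aliasing neighbours, and depth and
    # amplitude close to those neighbours, is also marked as aliasing.
    sel = np.ones((3, 3), dtype=bool)
    sel[1, 1] = False                       # exclude the centre pixel
    h, w = depth.shape
    confirmed = alias_mask.copy()
    isolated = np.zeros_like(alias_mask)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            m = alias_mask[y-1:y+2, x-1:x+2][sel]
            d = depth[y-1:y+2, x-1:x+2][sel]
            a = amplitude[y-1:y+2, x-1:x+2][sel]
            if m.sum() == 0 and abs(depth[y, x] - d.mean()) > depth_tol:
                isolated[y, x] = True       # rule (1)
            elif (m.sum() > 4
                  and abs(depth[y, x] - d[m].mean()) < depth_tol
                  and abs(amplitude[y, x] - a[m].mean()) < amp_tol):
                confirmed[y, x] = True      # rule (2)
    return confirmed, isolated

depth = np.full((7, 7), 2000.0)
depth[1, 1] = 100.0                         # lone outlier in a clean area
amplitude = np.full((7, 7), 0.5)
mask = np.zeros((7, 7), dtype=bool)
for p in [(3, 3), (3, 4), (3, 5), (4, 3), (4, 5)]:
    mask[p] = True                          # 5 flagged neighbours of (4, 4)
confirmed, isolated = classify_by_neighbourhood(mask, depth, amplitude)
```

The outlier at (1, 1) is reported under rule (1), and the pixel at (4, 4), surrounded by 5 similar aliasing points, is newly confirmed under rule (2).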
S203: and (5) performing de-aliasing processing on the depth image.
For S203, the depth image de-aliasing process may include, but is not limited to:
S203-1: periodically extend the depth values of the found aliased pixels.
S203-2: perform several iterations of de-aliasing processing on the depth image using neighborhood pixels.
S203-3: depth de-aliasing based on deep learning.
It should be noted that, for S203-1, the depth information of the aliasing pixels queried in the depth image is periodically extended, specifically: measured depth value + N × d_max, where d_max is the maximum measurement distance, and N may be determined comprehensively by combining the depth values and amplitude values of the non-aliasing pixels in the neighborhood. In a specific example, N has a value of 1, but this is not particularly limited.
It should be further noted that, for S203-2, a depth image may be subjected to a plurality of iterative de-aliasing processes by using neighborhood pixels, specifically, neighborhood information (including neighborhood depth information and neighborhood amplitude information), so as to avoid that some of the aliased pixels are missed as aliased pixels due to noise.
It should be further noted that, for S203-3, a depth de-aliasing method based on deep learning may be adopted, specifically: the aliased depth image and the amplitude image (or the intensity image) are used as input, the aliasing-free depth image (which may be collected aliasing-free data or artificially de-aliased data) is used as the expected output, a de-aliasing network model is obtained through training, and the model is then used to perform de-aliasing processing on the depth image.
Illustratively, taking fig. 3 as an example, it shows the amplitude-versus-distance curve before and after single-frequency depth de-aliasing correction provided by the embodiment of the present application. As shown in fig. 3, the horizontal axis represents distance (i.e., measured depth value) and the vertical axis represents amplitude; black circles represent the information before correction, grey circles represent the information after correction, and the vertical line represents the distance aliasing threshold. In fig. 3, the distance aliasing threshold may be set at 990 mm and the maximum measurement distance d_max at 2250 mm. In general, the larger the distance value, the smaller the amplitude value; therefore, if the distance value is small while the amplitude value is also small, it can be concluded that the depth image is aliased. Specifically, pixels with distance values between 0 mm and 990 mm are aliased pixels; pixels with distance values between 990 mm and 2250 mm are aliasing-free, so the pixels before and after correction coincide; after aliasing correction, the distance values of the aliased pixels lie between 2250 mm and 3240 mm, i.e., equal to (measured depth value + d_max).
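The fig. 3 numbers can be reproduced with a minimal sketch (the 990 mm threshold and 2250 mm period are taken directly from the figure; the function name is ours):

```python
D_ALIAS_THRESH = 990.0   # distance aliasing threshold from fig. 3 (mm)
D_MAX = 2250.0           # maximum measurement distance from fig. 3 (mm)

def correct_single_frequency(measured_mm):
    # Pixels measured below the aliasing threshold are unwrapped by one
    # full period; pixels in the 990-2250 mm band are left untouched.
    if measured_mm < D_ALIAS_THRESH:
        return measured_mm + D_MAX
    return measured_mm

corrected = [correct_single_frequency(d) for d in (0.0, 500.0, 989.0, 1500.0)]
```

Aliased inputs in 0-990 mm map to 2250-3240 mm, matching the corrected range stated above, while the 1500 mm pixel is unchanged.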
Fig. 4A shows a schematic diagram of an original depth image according to an embodiment of the present application, and fig. 4B shows a schematic diagram of a de-aliased depth image according to an embodiment of the present application. Theoretically, the closer the distance, the darker the color. Fig. 4A is the original depth image: the background wall is farthest from the camera, but because it exceeds the maximum range, depth aliasing occurs and the background appears pure black. Fig. 4B is the depth image after single-frequency depth de-aliasing. Comparing fig. 4A and fig. 4B, since the background wall is farthest from the camera its color should be lightest, and it can be seen from fig. 4B that after the de-aliasing process the background wall is indeed the lightest.
In short, the specific implementations of the foregoing steps are described in detail through the foregoing embodiments, from which it can be seen that the embodiments of the present application provide a method that can improve TOF product performance (e.g., measurement accuracy, depth map quality, etc.). Measurement accuracy in particular is a key indicator of TOF products. Through the technical scheme of the embodiment of the application, the TOF ranging range can be further expanded beyond the theoretical maximum distance limit of the current system, the measurement accuracy improved, and the depth map/point cloud map effect enhanced. Specifically, it is not necessary to increase the number of frequencies of the modulated light (even single-frequency TOF is applicable), nor to decrease the output frame rate of the depth map/point cloud map.
In another embodiment of the present application, based on the same inventive concept of the foregoing embodiment, refer to fig. 5, which shows a schematic structural diagram of an apparatus 50 for de-aliasing a depth image according to an embodiment of the present application. As shown in fig. 5, the depth image de-aliasing apparatus 50 may include: an acquisition unit 501, a determination unit 502 and an aliasing removal unit 503; wherein,
an obtaining unit 501, configured to obtain a depth image and a magnitude image corresponding to the depth image;
a determining unit 502 configured to determine whether aliasing exists in the depth image based on a variation relationship between the depth image and the magnitude image; and determining aliased pixels of the depth image in the presence of aliasing in the depth image;
the antialiasing unit 503 is configured to perform antialiasing processing on the aliased pixels of the depth image, so as to obtain a target depth image after the antialiasing processing.
In some embodiments, the determining unit 502 is further configured to establish a depth-to-amplitude variation curve based on the depth image and the amplitude image; determining a distance aliasing threshold according to the change curve of the depth and the amplitude; and if the depth image has the pixel point to be detected which is smaller than the distance aliasing threshold, determining that the depth image has aliasing.
In some embodiments, the determining unit 502 is further configured to determine neighborhood depth information of a pixel point to be detected in the depth image, and neighborhood amplitude information of the pixel point to be detected in the amplitude image; and if the neighborhood amplitude information of the pixel point to be detected continuously changes and the neighborhood depth information of the pixel point to be detected changes in a jumping way, determining that the depth image has aliasing.
In some embodiments, the determining unit 502 is further configured to determine, in a case where aliasing exists in the depth image, an aliased pixel of the depth image from a candidate aliased pixel in the depth image; wherein the candidate aliased pixel comprises: and the pixel points to be detected in the depth image, which are smaller than a distance aliasing threshold value, and/or the pixel points to be detected, which are continuous in neighborhood amplitude information and jump in neighborhood depth information, in the depth image.
In some embodiments, the determining unit 502 is further configured to determine a first parameter of the candidate aliased pixel; wherein the first parameter comprises: the number of pixels and the pixel communication area; and determining aliased pixels of the depth image from the candidate aliased pixels according to the first parameters.
In some embodiments, the obtaining unit 501 is further configured to obtain several consecutive frames including the depth image, and determine candidate aliased pixels of each of the several frames;
the determining unit 502 is further configured to perform delay judgment according to the candidate aliasing pixels of each of the plurality of frames, and determine aliasing pixels of the depth image.
In some embodiments, the determining unit 502 is further configured to determine, based on a pixel point to be detected in the depth image, a number of frames of the pixel point to be detected belonging to candidate aliasing pixels in the plurality of frames; and if the number of the frames is larger than the preset number, determining the pixel points to be detected as the aliasing pixels of the depth image.
In some embodiments, the obtaining unit 501 is further configured to obtain depth information of the aliased pixels according to the depth image;
the antialiasing unit 503 is further configured to perform periodic extension on the depth information of the aliased pixel to obtain the target depth image.
In some embodiments, the antialiasing unit 503 is further configured to add the depth information of the aliased pixel to N times the preset measurement distance; wherein, N is a preset constant value larger than zero.
In some embodiments, the obtaining unit 501 is further configured to obtain neighborhood depth information and neighborhood amplitude information of the aliased pixel;
the antialiasing unit 503 is further configured to perform iterative antialiasing processing on the aliased pixel at least once according to the neighborhood depth information and the neighborhood amplitude information, so as to obtain the target depth image.
In some embodiments, the antialiasing unit 503 is further configured to input the depth image into an antialiasing network model; and outputting the target depth image through the antialiasing network model.
In some embodiments, referring to fig. 5, the apparatus 50 for de-aliasing a depth image may further comprise a training unit 504;
an obtaining unit 501, further configured to obtain at least one set of training samples; the training samples comprise depth images with aliasing, depth images without aliasing and amplitude images, the depth images with aliasing and the amplitude images in the training samples are used as input of a neural network model, and the depth images without aliasing in the training samples are used as expected output of the neural network model;
a training unit 504 configured to train the neural network model through the at least one set of training samples, and determine the trained neural network model as the antialiasing network model.
It is understood that in this embodiment, a "unit" may be a part of a circuit, a part of a processor, a part of a program or software, etc., and may also be a module, or may also be non-modular. Moreover, each component in the embodiment may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on the understanding that the technical solution of the present embodiment essentially or a part contributing to the prior art, or all or part of the technical solution may be embodied in the form of a software product stored in a storage medium, and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) or a processor (processor) to execute all or part of the steps of the method of the present embodiment. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiments provide a computer storage medium storing a computer program which, when executed by at least one processor, performs the steps of the method of any of the preceding embodiments.
In a further embodiment of the present application, referring to fig. 6, a schematic diagram of a specific hardware structure of an electronic device 60 provided by an embodiment of the present application is shown, based on the composition of the above-mentioned depth image de-aliasing apparatus 50 and the computer storage medium. As shown in fig. 6, the electronic device 60 may include: a communication interface 601, a memory 602, and a processor 603, with the various components coupled together by a bus system 604. It is understood that the bus system 604 is used to enable communication among these components; in addition to a data bus, it includes a power bus, a control bus, and a status signal bus. For clarity of illustration, however, the various buses are all labeled as bus system 604 in fig. 6. Wherein,
a communication interface 601, configured to receive and transmit signals during information transmission and reception with other external network elements;
a memory 602 for storing a computer program capable of running on the processor 603;
a processor 603 for, when running the computer program, performing:
acquiring a depth image and an amplitude image corresponding to the depth image;
determining whether aliasing exists in the depth image based on a variation relation between the depth image and the amplitude image;
determining aliased pixels of the depth image in the presence of aliasing in the depth image;
and performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image.
It will be appreciated that the memory 602 in this embodiment can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 602 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 603 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 603. The processor 603 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EEPROM, or registers. The storage medium is located in the memory 602, and the processor 603 reads the information in the memory 602 and performs the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units configured to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, in some embodiments, the processor 603 is further configured to perform the steps of the method of any of the preceding embodiments when running the computer program.
Optionally, in some embodiments, refer to fig. 7, which shows a schematic structural diagram of another electronic device 60 provided in the embodiments of the present application. As shown in fig. 7, the electronic device 60 may include at least the depth image de-aliasing apparatus 50 of any of the previous embodiments.
In the embodiment of the present application, for the electronic device 60, after determining aliasing pixels of the depth image according to the corresponding relationship between the depth image and the amplitude image, by performing de-aliasing processing on the determined aliasing pixels, not only can the TOF depth ranging range be increased, but also the TOF depth ranging range is not limited by the maximum measurement distance and the number of modulation frequencies of the optical signal; in addition, the accuracy of measurement can be improved, and the visual effect of the depth image is improved.
It should be noted that, in the present application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
The methods disclosed in the several method embodiments provided in the present application may be combined arbitrarily without conflict to obtain new method embodiments.
Features disclosed in several of the product embodiments provided in the present application may be combined in any combination to yield new product embodiments without conflict.
The features disclosed in the several method or apparatus embodiments provided in the present application may be combined arbitrarily, without conflict, to arrive at new method embodiments or apparatus embodiments.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. A method of de-aliasing a depth image, the method comprising:
acquiring a depth image and an amplitude image corresponding to the depth image;
determining whether aliasing exists in the depth image based on a variation relation between the depth image and the amplitude image;
determining aliased pixels of the depth image in the presence of aliasing in the depth image;
performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image;
wherein the method further comprises: determining that aliasing exists in the depth image when the variation relation indicates that the amplitude information in the amplitude image varies continuously while the depth information in the depth image exhibits a jump change;
performing de-aliasing processing on the aliasing pixels of the depth image to obtain a de-aliased target depth image, including:
acquiring depth information of the aliasing pixels according to the depth image;
adding N times a preset measurement distance to the depth information of the aliasing pixels to obtain the target depth image; wherein N is a preset constant greater than zero.
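As an illustration of the de-aliasing step in claim 1, the following Python sketch adds N times a preset (unambiguous) measurement distance to the pixels identified as aliased. The function and parameter names are illustrative, not taken from the patent, and assume the aliased pixels are given as a boolean mask.

```python
import numpy as np

def dealias_depth(depth, aliased_mask, d_max, n=1):
    """Add n times the preset measurement distance d_max to aliased pixels (sketch)."""
    out = depth.copy()
    out[aliased_mask] += n * d_max  # non-aliased pixels are left unchanged
    return out
```

In a time-of-flight setting, d_max would correspond to the unambiguous range implied by the modulation frequency, and n to the wrap count.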
2. The method of claim 1, wherein the determining whether aliasing exists in the depth image based on the variation relation between the depth image and the amplitude image comprises:
establishing a change curve of depth and amplitude based on the depth image and the amplitude image;
determining a distance aliasing threshold according to the change curve of the depth and the amplitude;
and if the depth image contains a pixel point to be detected whose depth is smaller than the distance aliasing threshold, determining that aliasing exists in the depth image.
3. The method of claim 1, wherein the determining whether aliasing exists in the depth image based on the variation relation between the depth image and the amplitude image comprises:
determining neighborhood depth information of a pixel point to be detected in the depth image and neighborhood amplitude information of the pixel point to be detected in the amplitude image;
and if the neighborhood amplitude information of the pixel point to be detected varies continuously while the neighborhood depth information of the pixel point to be detected exhibits a jump change, determining that aliasing exists in the depth image.
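The detection criterion of claim 3 can be sketched as follows: a pixel is a candidate for aliasing when its neighborhood amplitude differences stay small (continuous variation) while its neighborhood depth differences are large (a jump). This Python sketch uses horizontal neighbor differences only; the thresholds and names are illustrative assumptions, not values from the patent.

```python
import numpy as np

def candidate_aliased_pixels(depth, amplitude, depth_jump_thr, amp_smooth_thr):
    """Flag pixels whose neighborhood amplitude varies smoothly while depth jumps (sketch)."""
    # absolute horizontal neighbor differences, padded so the shape is preserved
    d_depth = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    d_amp = np.abs(np.diff(amplitude, axis=1, prepend=amplitude[:, :1]))
    # amplitude continuous AND depth jumping -> likely aliasing, not a real edge
    return (d_amp < amp_smooth_thr) & (d_depth > depth_jump_thr)
```

A full implementation would also examine vertical and diagonal neighbors; at a genuine object edge both amplitude and depth jump together, so such pixels are not flagged.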
4. The method of claim 2 or 3, wherein the determining aliased pixels of the depth image comprises:
determining aliasing pixels of the depth image according to candidate aliasing pixels in the depth image under the condition that the aliasing exists in the depth image;
wherein the candidate aliased pixels comprise: pixel points to be detected in the depth image whose depth is smaller than a distance aliasing threshold, and/or pixel points to be detected in the depth image whose neighborhood amplitude information varies continuously while their neighborhood depth information exhibits a jump change.
5. The method of claim 4, wherein determining aliased pixels of the depth image from the candidate aliased pixels comprises:
determining first parameters of the candidate aliased pixels; wherein the first parameters comprise: the number of pixels and the pixel connected area;
determining aliased pixels of the depth image from the candidate aliased pixels according to the first parameters.
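Claim 5 filters candidate aliased pixels by pixel count and connected area, which can be read as discarding small isolated components (likely noise) and keeping larger connected regions. Below is a minimal pure-Python sketch using 4-connected breadth-first labeling; the function name and the `min_size` parameter are illustrative assumptions.

```python
from collections import deque

def filter_small_components(mask, min_size):
    """Keep only connected regions of candidate aliased pixels with >= min_size pixels (sketch)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # breadth-first search over the 4-connected component
                comp, q = [], deque([(i, j)])
                seen[i][j] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) >= min_size:  # keep only sufficiently large regions
                    for y, x in comp:
                        out[y][x] = True
    return out
```

In practice a library routine such as connected-component labeling from an image-processing package would replace the hand-rolled search.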
6. The method of claim 4, further comprising:
acquiring a plurality of continuous frames containing the depth image, and determining candidate aliasing pixels of each frame in the plurality of frames;
and performing delay judgment according to the candidate aliasing pixels of each frame in the plurality of frames, and determining the aliasing pixels of the depth image.
7. The method according to claim 6, wherein determining the aliasing pixels of the depth image by performing the delay judgment according to the candidate aliasing pixels of each of the plurality of frames comprises:
for a pixel point to be detected in the depth image, determining the number of frames, among the plurality of frames, in which the pixel point belongs to the candidate aliased pixels;
and if the number of the frames is greater than the preset number, determining the pixel points to be detected as aliasing pixels of the depth image.
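The delay judgment of claims 6 and 7 amounts to temporal confirmation: a pixel is only declared aliased if it appears among the candidate aliased pixels in more than a preset number of recent frames. A minimal sketch, with illustrative names and pure-Python masks:

```python
def confirm_aliased_pixels(candidate_masks, min_frames):
    """Confirm a pixel as aliased only if flagged in more than min_frames
    of the given per-frame candidate masks (sketch)."""
    h, w = len(candidate_masks[0]), len(candidate_masks[0][0])
    counts = [[0] * w for _ in range(h)]
    for m in candidate_masks:
        for i in range(h):
            for j in range(w):
                counts[i][j] += 1 if m[i][j] else 0
    # a pixel must be flagged in strictly more than min_frames frames
    return [[counts[i][j] > min_frames for j in range(w)] for i in range(h)]
```

This suppresses single-frame false positives from noise at the cost of a few frames of detection latency.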
8. The method according to claim 1, wherein the performing the de-aliasing process on the aliasing pixels of the depth image to obtain the de-aliased target depth image comprises:
acquiring neighborhood depth information and neighborhood amplitude information of the aliasing pixels;
and performing at least one time of iterative de-aliasing processing on the aliasing pixel according to the neighborhood depth information and the neighborhood amplitude information to obtain the target depth image.
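One plausible reading of the iterative neighborhood-based de-aliasing in claim 8 is: for each aliased pixel, choose the multiple of the measurement distance whose result best agrees with the already-resolved neighboring depths, iterating so that pixels resolved in one pass can anchor their neighbors in the next. The sketch below is an assumption about how such an iteration could look, not the patent's specific procedure; all names are illustrative.

```python
import numpy as np

def iterative_dealias(depth, aliased_mask, d_max, max_n=2, iters=3):
    """Iteratively pick, for each aliased pixel, the multiple of d_max that best
    matches the median depth of its resolved 4-neighbors (sketch)."""
    out = depth.copy()
    unresolved = aliased_mask.copy()
    h, w = depth.shape
    for _ in range(iters):
        for i, j in zip(*np.nonzero(unresolved)):
            # depths of 4-neighbors that are already resolved
            nbrs = [out[y, x]
                    for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= y < h and 0 <= x < w and not unresolved[y, x]]
            if not nbrs:
                continue  # no anchor yet; retry on a later iteration
            ref = float(np.median(nbrs))
            # choose the wrap count n in 0..max_n minimizing disagreement
            n = min(range(max_n + 1),
                    key=lambda k: abs(depth[i, j] + k * d_max - ref))
            out[i, j] = depth[i, j] + n * d_max
            unresolved[i, j] = False
    return out
```

Neighborhood amplitude information could additionally weight the neighbors, e.g. trusting high-amplitude (high-confidence) pixels more when forming the reference depth.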
9. The method according to claim 1, wherein the performing the de-aliasing process on the aliasing pixels of the depth image to obtain the de-aliased target depth image comprises:
inputting the depth image into a de-aliasing network model;
outputting the target depth image through the antialiasing network model.
10. The method of claim 9, further comprising:
obtaining at least one set of training samples; wherein each training sample comprises a depth image with aliasing, a depth image without aliasing, and an amplitude image; the depth image with aliasing and the amplitude image serve as the input of a neural network model, and the depth image without aliasing serves as the expected output of the neural network model;
and training the neural network model through the at least one group of training samples, and determining the trained neural network model as the antialiasing network model.
11. An apparatus for antialiasing a depth image, the apparatus comprising an acquisition unit, a determination unit, and an antialiasing unit; wherein,
the acquiring unit is configured to acquire a depth image and a magnitude image corresponding to the depth image;
the determining unit is configured to determine whether aliasing exists in the depth image based on a variation relation between the depth image and the amplitude image; and determining aliased pixels of the depth image in the presence of aliasing in the depth image;
the de-aliasing unit is configured to perform de-aliasing processing on the aliasing pixels of the depth image to obtain a target depth image after de-aliasing processing;
wherein the determining unit is further configured to determine that aliasing exists in the depth image when the variation relation indicates that the amplitude information in the amplitude image varies continuously while the depth information in the depth image exhibits a jump change;
the de-aliasing unit is further configured to acquire depth information of the aliasing pixels according to the depth image, and add N times a preset measurement distance to the depth information of the aliasing pixels to obtain the target depth image; wherein N is a preset constant greater than zero.
12. An electronic device, comprising a memory and a processor; wherein,
the memory for storing a computer program operable on the processor;
the processor, when running the computer program, is configured to perform the method of any of claims 1 to 10.
13. A computer storage medium, characterized in that it stores a computer program which, when executed, implements the method of any one of claims 1 to 10.
CN202110743522.0A 2021-07-01 2021-07-01 Depth image de-aliasing method, device, equipment and computer storage medium Active CN113256539B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110743522.0A CN113256539B (en) 2021-07-01 2021-07-01 Depth image de-aliasing method, device, equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN113256539A CN113256539A (en) 2021-08-13
CN113256539B (en) 2021-11-02

Family

ID=77190408

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110743522.0A Active CN113256539B (en) 2021-07-01 2021-07-01 Depth image de-aliasing method, device, equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN113256539B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881894B (en) * 2022-07-07 2022-09-30 武汉市聚芯微电子有限责任公司 Pixel repairing method, device, equipment and computer storage medium
CN114998158B (en) * 2022-08-03 2022-10-25 武汉市聚芯微电子有限责任公司 Image processing method, terminal device and storage medium
WO2024090040A1 (en) * 2022-10-28 2024-05-02 ソニーグループ株式会社 Information processing device, information processing method, and program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8514269B2 (en) * 2010-03-26 2013-08-20 Microsoft Corporation De-aliasing depth images
US9602807B2 (en) * 2012-12-19 2017-03-21 Microsoft Technology Licensing, Llc Single frequency time of flight de-aliasing
CN110263625A (en) * 2019-05-08 2019-09-20 吉林大学 Envelope extraction algorithm for signal pulse amplitude sorting
US10929956B2 (en) * 2019-07-02 2021-02-23 Microsoft Technology Licensing, Llc Machine-learned depth dealiasing
CN110378946B (en) * 2019-07-11 2021-10-01 Oppo广东移动通信有限公司 Depth map processing method and device and electronic equipment
CN112073646B (en) * 2020-09-14 2021-08-06 哈工大机器人(合肥)国际创新研究院 Method and system for TOF camera long and short exposure fusion

Also Published As

Publication number Publication date
CN113256539A (en) 2021-08-13

Similar Documents

Publication Publication Date Title
CN113256539B (en) Depth image de-aliasing method, device, equipment and computer storage medium
US9903941B2 (en) Time of flight camera device and method of driving the same
US10571571B2 (en) Method and apparatus for controlling time of flight confidence map based depth noise and depth coverage range
CN110378944B (en) Depth map processing method and device and electronic equipment
CN115032608B (en) Ranging sensor data optimization method and application thereof
WO2020119467A1 (en) High-precision dense depth image generation method and device
CN112106111A (en) Calibration method, calibration equipment, movable platform and storage medium
CN110874852A (en) Method for determining depth image, image processor and storage medium
CN110400342B (en) Parameter adjusting method and device of depth sensor and electronic equipment
US20200264262A1 (en) Position estimating apparatus, position estimating method, and terminal apparatus
US20200096638A1 (en) Method and apparatus for acquiring depth image, and electronic device
CN111080784A (en) Ground three-dimensional reconstruction method and device based on ground image texture
CN111784730B (en) Object tracking method and device, electronic equipment and storage medium
CN106374224B (en) Electromagnetic-wave imaging system and antenna array signals bearing calibration
US11841466B2 (en) Systems and methods for detecting an electromagnetic signal in a constant interference environment
CN113515132B (en) Robot path planning method, robot, and computer-readable storage medium
CN109031333B (en) Distance measuring method and device, storage medium, and electronic device
KR101790864B1 (en) Method for removing interference according to multi-path in frequency modulation lidar sensor system and apparatus thereof
KR20100033634A (en) Distance estimation apparatus and method
CN108254738A (en) Obstacle-avoidance warning method, device and storage medium
CN112950709B (en) Pose prediction method, pose prediction device and robot
CN110390689B (en) Depth map processing method and device and electronic equipment
CN117480408A (en) Object ranging method and device
CN113052886A (en) Method for acquiring depth information of double TOF cameras by adopting binocular principle
CN113340433A (en) Temperature measuring method, temperature measuring device, storage medium, and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant