CN114066779A - Depth map filtering method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114066779A
CN114066779A (application CN202210038515.5A)
Authority
CN
China
Prior art keywords
pixel
projection
map
depth
normal vector
Prior art date
Legal status
Granted
Application number
CN202210038515.5A
Other languages
Chinese (zh)
Other versions
CN114066779B (en)
Inventor
郑灵杰
国学理
杨洋
朱月
徐永奎
Current Assignee
Hangzhou Lanxin Technology Co ltd
Original Assignee
Hangzhou Lanxin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Lanxin Technology Co., Ltd.
Priority to CN202210038515.5A
Publication of CN114066779A
Application granted
Publication of CN114066779B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/70: Denoising; Smoothing
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/593: Depth or shape recovery from stereo images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a depth map filtering method and apparatus, an electronic device, and a storage medium. For a depth map acquired by a depth camera, the method calculates the three-dimensional coordinates of each point from the intrinsic parameters of the camera; calculates a normal vector map from the three-dimensional coordinates; generates a mask map of the same size as the depth map; calculates a normal vector approximate-mean map from the normal vector map and the mask map, the approximate-mean map being formed by the approximate mean normal vector of each pixel in the depth map; calculates, for each pixel, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, obtaining the projection scalars of the pixel; calculates a depth adjustment value for each pixel from the projection scalars using the L1Smooth rule; and accumulates the depth adjustment value onto the corresponding pixel of the depth map. The L1Smooth rule increases the tolerance of the algorithm to outliers, so the algorithm preserves edge sharpness well.

Description

Depth map filtering method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a depth map filtering method and apparatus, an electronic device, and a storage medium.
Background
In obstacle detection and point cloud matching, obtaining a smooth depth map through filtering can greatly reduce the difficulty of the subsequent processing algorithms and make them easier to complete.
In the process of implementing the invention, the inventor finds that at least the following problems exist in the prior art:
Filters such as mean filtering and Gaussian filtering blur the boundaries of the depth map and lose small details, which degrades subsequent algorithm processing. Methods such as BM3D and Robust PCA achieve good smoothing results, but their processing speed is too slow for real-time scenes.
Therefore, a smoothing filter that combines a good smoothing effect with a small computational cost is important for real-time depth information processing.
Disclosure of Invention
An embodiment of the present application provides a depth map filtering method and apparatus, an electronic device, and a storage medium, which effectively address the problem of insufficiently smooth depth maps and point clouds while meeting real-time requirements.
According to a first aspect of embodiments of the present application, there is provided a depth map filtering method, including:
calculating, for a depth map acquired by a depth camera, the three-dimensional coordinates of each point according to the intrinsic parameters of the depth camera;
calculating a normal vector map according to the three-dimensional coordinates;
generating a mask map of the same size as the depth map;
calculating a normal vector approximate-mean map according to the normal vector map and the mask map, wherein the normal vector approximate-mean map is formed by the approximate mean normal vector of each pixel in the depth map;
calculating, for each pixel in the depth map, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, to obtain the projection scalars of the pixel;
calculating a depth adjustment value for each pixel according to the projection scalars, using the L1Smooth rule;
and accumulating the depth adjustment value onto the corresponding pixel of the depth map to complete the filtering.
Further, calculating the normal vector map according to the three-dimensional coordinates comprises:
subtracting the three-dimensional coordinates of the two points to the left and right of each point and normalizing the result to obtain a first unit vector;
subtracting the three-dimensional coordinates of the two points above and below each point and normalizing the result to obtain a second unit vector;
and normalizing the cross product of the first unit vector and the second unit vector of each point to obtain the normal vector map.
Further, generating a mask map of the same size as the depth map comprises:
generating a mask map of the same size;
and setting the pixels of the mask map corresponding to invalid depth values of the depth map to 0, and the pixels corresponding to valid depth values to 1.
Further, calculating the normal vector approximate-mean map according to the normal vector map and the mask map comprises:
calculating an integral image of the normal vector map and of the mask map, respectively, to obtain a normal vector integral image and a mask integral image;
and dividing the normal vector integral image by the mask integral image to obtain the normal vector approximate-mean map.
Further, calculating, for each pixel in the depth map, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, to obtain the projection scalars of the pixel, comprises:
subtracting the three-dimensional coordinates of the pixel from the three-dimensional coordinates of each point in the pixel's neighborhood, and taking the dot product with the approximate mean normal vector of the pixel, to obtain the projection scalars of the pixel.
Further, calculating a depth adjustment value for each pixel according to the projection scalars using the L1Smooth rule comprises:
examining the projection values in the projection scalars of each pixel; if the projection values are all positive or all negative, taking the mean of all projection values as the depth adjustment value of the pixel; if both positive and negative projection values are present, applying the L1Smooth rule by clamping the projection values that exceed a preset threshold to the threshold, keeping the values that do not exceed the threshold unchanged, and taking the mean as the depth adjustment value of the pixel.
According to a second aspect of embodiments of the present application, there is provided a depth map filtering apparatus, including:
a first calculation module, configured to calculate, for a depth map acquired by a depth camera, the three-dimensional coordinates of each point according to the intrinsic parameters of the depth camera;
a second calculation module, configured to calculate a normal vector map according to the three-dimensional coordinates;
a generating module, configured to generate a mask map of the same size as the depth map;
a third calculation module, configured to calculate a normal vector approximate-mean map according to the normal vector map and the mask map, wherein the normal vector approximate-mean map is formed by the approximate mean normal vector of each pixel in the depth map;
a fourth calculation module, configured to calculate, for each pixel in the depth map, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, to obtain the projection scalars of the pixel;
a fifth calculation module, configured to calculate a depth adjustment value for each pixel according to the projection scalars, using the L1Smooth rule;
and an accumulation module, configured to accumulate the depth adjustment value onto the corresponding pixel of the depth map to complete the filtering.
According to a third aspect of embodiments of the present application, there is provided an electronic apparatus, including:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described in the first aspect.
According to a fourth aspect of embodiments herein, there is provided a computer readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
according to the embodiment, the method is carried out on the ordered depth map, so that extra calculation for solving neighborhood points in normal vector calculation is avoided, and the efficiency of the algorithm is improved; by generating a mask image with the same size for the depth image and calculating the approximate mean value image of the normal vector according to the normal vector image and the mask image, the problem of too low speed of a filtering algorithm is solved, and the technical effect of real-time calculation is achieved. Calculating the projection of each point in each pixel neighborhood in the depth map in the approximate mean value normal vector direction to obtain a projection scalar of each pixel, calculating the depth adjustment value of each pixel by utilizing the rule of L1Smooth according to the projection scalar, increasing the tolerance of the algorithm to abnormal points, ensuring that the algorithm has good edge sharpness, overcoming the problem that the filtering algorithm can cause the boundary to be fuzzy, and achieving the technical effects that the point cloud in the curved surface area is Smooth and the edge is well preserved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flowchart illustrating a depth map filtering method according to an exemplary embodiment.
Fig. 2 is a depth map acquired by a depth camera, according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating step S12 according to an exemplary embodiment.
Fig. 4 is a normal vector map according to an exemplary embodiment.
Fig. 5 is a normal vector integral image according to an exemplary embodiment.
Fig. 6 is a normal vector approximate-mean map according to an exemplary embodiment.
Fig. 7 is a filtered image according to an exemplary embodiment.
Fig. 8 is a schematic structural diagram of a depth map filtering apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Fig. 1 is a flowchart illustrating a depth map filtering method according to an exemplary embodiment. As shown in fig. 1, the method may include the following steps:
step S11, for a depth map acquired by a depth camera, calculating the three-dimensional coordinates of each point according to the intrinsic parameters of the depth camera;
step S12, calculating a normal vector map according to the three-dimensional coordinates;
step S13, generating a mask map of the same size as the depth map;
step S14, calculating a normal vector approximate-mean map according to the normal vector map and the mask map, wherein the normal vector approximate-mean map is formed by the approximate mean normal vector of each pixel in the depth map;
step S15, calculating, for each pixel in the depth map, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, to obtain the projection scalars of the pixel;
step S16, calculating a depth adjustment value for each pixel according to the projection scalars, using the L1Smooth rule;
and step S17, accumulating the depth adjustment value onto the corresponding pixel of the depth map to complete the filtering.
According to the above embodiment, the method operates on an ordered depth map, which avoids the extra computation of searching for neighborhood points during normal vector calculation and improves the efficiency of the algorithm. By generating a mask map of the same size as the depth map and calculating the normal vector approximate-mean map from the normal vector map and the mask map, the excessive running time of filtering is overcome and real-time computation is achieved. By calculating, for each pixel, the projection of each neighborhood point onto the direction of the approximate mean normal vector to obtain the projection scalars, and calculating the depth adjustment value of each pixel from the projection scalars using the L1Smooth rule, the tolerance of the algorithm to outliers is increased, so the algorithm preserves edge sharpness well, overcomes the boundary blurring caused by conventional filtering, and achieves smooth point clouds in curved-surface areas while edges are well preserved.
In the specific implementation of step S11, for the depth map acquired by the depth camera, the three-dimensional coordinates of each point are calculated according to the intrinsic parameters of the depth camera.
Specifically, fig. 2 shows a depth map acquired by a depth camera; the world coordinates of each pixel of the acquired depth map, i.e., the three-dimensional coordinates of each point, are calculated from the extrinsic and intrinsic parameters of the camera.
In the specific implementation of step S12, a normal vector map is calculated according to the three-dimensional coordinates. Referring to fig. 3, this step may include the following sub-steps:
step S121, subtracting the three-dimensional coordinates of the two points to the left and right of each point and normalizing the result to obtain a first unit vector;
specifically, the three-dimensional coordinates of the left pixel are subtracted from the three-dimensional coordinates of the right pixel to obtain a set of three-dimensional vectors representing the horizontal direction, and each vector is divided by its modulus to obtain the normalized first unit vector.
Step S122, subtracting the three-dimensional coordinates of the two points above and below each point and normalizing the result to obtain a second unit vector;
specifically, the three-dimensional coordinates of the upper pixel are subtracted from the three-dimensional coordinates of the lower pixel to obtain a set of three-dimensional vectors representing the vertical direction, and each vector is divided by its modulus to obtain the normalized second unit vector.
Step S123, normalizing the cross product of the first unit vector and the second unit vector of each point to obtain the normal vector map.
Specifically, the cross product of the first unit vector and the second unit vector yields a three-dimensional vector whose direction is perpendicular to both unit vectors and whose length is the sine of the angle between them. Since that length is not needed here, the vector is divided by its modulus to obtain the normal vector.
Fig. 4 shows the normal vector map calculated in step S12. Besides the vector cross product over a pixel's neighborhood, the normal vector can also be calculated with a PCA algorithm. The normal vector obtained by PCA costs more computation but gives a more reliable result; the cross product method is computationally cheap, but its result is more affected by outliers.
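Steps S121-S123 can be sketched as follows. One simplification relative to the text, flagged here as an assumption: the horizontal and vertical difference vectors are not individually normalized before the cross product, which yields the same direction after the final normalization; border pixels are left as zero vectors for brevity.

```python
import numpy as np

def normal_map(pts):
    """Per-pixel normals from an ordered H x W x 3 point map, via the cross
    product of horizontal and vertical central differences (steps S121-S123)."""
    n = np.zeros_like(pts, dtype=float)
    dx = pts[1:-1, 2:] - pts[1:-1, :-2]   # right neighbour minus left neighbour
    dy = pts[2:, 1:-1] - pts[:-2, 1:-1]   # lower neighbour minus upper neighbour
    c = np.cross(dx, dy)                  # perpendicular to both differences
    norm = np.linalg.norm(c, axis=2, keepdims=True)
    n[1:-1, 1:-1] = c / np.where(norm == 0, 1.0, norm)  # avoid divide-by-zero
    return n
```

On an ordered point map the neighbors are found by array indexing alone, which is exactly the "no extra neighborhood search" advantage the description claims for ordered depth maps.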
In the specific implementation of step S13, a mask map of the same size as the depth map is generated.
Specifically, a mask map of the same size is generated; the pixels of the mask map corresponding to invalid depth values of the depth map are set to 0, and the pixels corresponding to valid depth values are set to 1.
In the specific implementation of step S14, the normal vector approximate-mean map is calculated according to the normal vector map and the mask map. This step may include the following sub-steps:
step S141, calculating an integral image of the normal vector map and of the mask map, respectively, to obtain a normal vector integral image and a mask integral image;
specifically, an integral image is calculated for each of the normal vector map and the mask map, yielding a mask integral image and a normal vector integral image (the normal vector integral image is shown in fig. 5). By summing over a specified window with the integral image, the number of points with valid depth values in the window can be obtained. Counting valid points via the integral image has constant complexity that does not grow with the window width, which keeps the algorithm fast while maintaining good performance. The valid points could also be counted by traversing the window, but that has complexity quadratic in the window width R, so enlarging the window to improve performance would break real-time operation.
The normal vector integral image is divided by the mask integral image to obtain the normal vector approximate-mean map (see fig. 6). The size of the integration window can be specified when computing the integral image: the larger the window, the smoother the normal vector mean. Typically a larger window is chosen in the first iterations and a smaller one afterwards, which speeds up convergence and effectively reduces the total number of iterations.
Step S142, dividing the normal vector integral image by the mask integral image to obtain the normal vector approximate-mean map.
Specifically, each pixel position of the mask integral image is traversed; if the value there is not 0, the normal vector integral at the same pixel position is taken and divided by the mask integral, giving the mean normal vector of all valid pixels in the integration window. Computing the mean normal vector via integral images quickly yields the mean over any range; compared with directly applying the cross product over a large neighborhood, the result is more accurate, and compared with the PCA method it is faster.
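Steps S141-S142 can be sketched as below. The helper names are hypothetical, and clipping the window at the image borders is an assumption the patent does not spell out; the key property is that each windowed sum costs four integral-image lookups regardless of the radius r.

```python
import numpy as np

def box_sum(a, r):
    """Sum of `a` over a (2r+1)x(2r+1) window clipped at the borders, via an
    integral image with a zero row/column prepended (constant cost per pixel)."""
    h, w = a.shape[:2]
    ii = np.zeros((h + 1, w + 1) + a.shape[2:], dtype=float)
    ii[1:, 1:] = np.cumsum(np.cumsum(a, axis=0), axis=1)
    y0 = np.clip(np.arange(h) - r, 0, h)
    y1 = np.clip(np.arange(h) + r + 1, 0, h)
    x0 = np.clip(np.arange(w) - r, 0, w)
    x1 = np.clip(np.arange(w) + r + 1, 0, w)
    return (ii[y1[:, None], x1[None, :]] - ii[y0[:, None], x1[None, :]]
            - ii[y1[:, None], x0[None, :]] + ii[y0[:, None], x0[None, :]])

def approx_mean_normals(normals, mask, r=2):
    """Windowed mean of normals over valid pixels only: the normal-map sums
    are divided by the valid-pixel counts from the mask (steps S141-S142)."""
    s = box_sum(normals * mask[..., None], r)     # sum of valid normals
    cnt = box_sum(mask, r)                        # number of valid pixels
    return s / np.maximum(cnt, 1.0)[..., None]    # guard empty windows
```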
In the specific implementation of step S15, the projection of each point in the neighborhood of each pixel in the depth map onto the direction of the approximate mean normal vector is calculated to obtain the projection scalars of the pixel.
Specifically, the three-dimensional coordinates of the pixel are subtracted from the three-dimensional coordinates of each point in the pixel's neighborhood, and the dot product with the approximate mean normal vector of the pixel is taken, yielding a set of projection scalars for the pixel.
In the specific implementation of step S16, the depth adjustment value of each pixel is calculated according to the projection scalars using the L1Smooth rule.
Specifically, the projection values in the projection scalars of each pixel are examined. If they are all positive or all negative, their mean is used directly as the adjustment value, because in that case it can be concluded that no outlier is present. In a real scene, an isolated noise point may sit off a plane; if the L1Smooth method were applied directly, many iterations would be needed to pull the noise point back to the plane. With the all-positive-or-all-negative test, the adjustment value may exceed the adjustment threshold, so an isolated noise point can be smoothed in a single iteration.
If both positive and negative values appear, the projection values exceeding the preset threshold are clamped to the threshold, the values that do not exceed it keep their original value, and the mean is taken as the depth adjustment value of the pixel.
This step-limited depth adjustment is in effect an emulation of the L1Smooth function: data within the threshold range influence the result linearly, while the part beyond it contributes a constant. The approach limits the influence of outliers on the result, the advantage of L1, while retaining the fast convergence of L2.
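The rule of step S16 can be sketched for one pixel as below; the threshold value `t = 0.01` is an illustrative assumption, not taken from the patent.

```python
import numpy as np

def depth_adjustment(proj, t=0.01):
    """Depth correction for one pixel from its projection scalars (step S16).
    If all projections share a sign, no outlier is assumed and the raw mean is
    used, so an isolated noise point is smoothed in one iteration; otherwise
    values beyond the threshold t are clamped to +/- t (the constant part of
    the L1Smooth rule) before averaging."""
    proj = np.asarray(proj, dtype=float)
    if np.all(proj >= 0) or np.all(proj <= 0):
        return float(proj.mean())            # unclamped: same-sign case
    return float(np.clip(proj, -t, t).mean())  # clamped: mixed-sign case
```

The clamp limits how much any single outlying neighbor can shift the pixel, mirroring the linear-inside, constant-outside shape of the smooth-L1 loss.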
Steps S11-S15 are repeated a certain number of times in order to obtain a smoother filtering result.
Fig. 7 shows the image after the final filtering. Compared with fig. 2, the curved surfaces in the depth map are smoother thanks to the L1Smooth adjustment rule, while the object edges are preserved. The hole areas in the depth map are also smaller: holes are treated as isolated noise points, and their three-dimensional coordinates are estimated from the nearby surfaces.
Corresponding to the foregoing embodiments of the depth map filtering method, the present application also provides embodiments of a depth map filtering apparatus.
Fig. 8 is a block diagram illustrating a depth map filtering apparatus according to an exemplary embodiment. Referring to fig. 8, the apparatus includes:
a first calculation module 21, configured to calculate, for a depth map acquired by a depth camera, the three-dimensional coordinates of each point according to the intrinsic parameters of the depth camera;
a second calculation module 22, configured to calculate a normal vector map according to the three-dimensional coordinates;
a generating module 23, configured to generate a mask map of the same size as the depth map;
a third calculation module 24, configured to calculate a normal vector approximate-mean map according to the normal vector map and the mask map, wherein the normal vector approximate-mean map is formed by the approximate mean normal vector of each pixel in the depth map;
a fourth calculation module 25, configured to calculate, for each pixel in the depth map, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, to obtain the projection scalars of the pixel;
a fifth calculation module 26, configured to calculate a depth adjustment value for each pixel according to the projection scalars, using the L1Smooth rule;
and an accumulation module 27, configured to accumulate the depth adjustment value onto the corresponding pixel of the depth map to complete the filtering.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Correspondingly, the present application also provides an electronic device, comprising: one or more processors; and a memory for storing one or more programs; wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the depth map filtering method described above.
Accordingly, the present application also provides a computer readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the depth map filtering method as described above.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A depth map filtering method, comprising:
calculating, for a depth map acquired by a depth camera, the three-dimensional coordinates of each point according to the intrinsic parameters of the depth camera;
calculating a normal vector map according to the three-dimensional coordinates;
generating a mask map of the same size as the depth map;
calculating a normal vector approximate-mean map according to the normal vector map and the mask map, wherein the normal vector approximate-mean map is formed by the approximate mean normal vector of each pixel in the depth map;
calculating, for each pixel in the depth map, the projection of each point in the pixel's neighborhood onto the direction of the approximate mean normal vector, to obtain the projection scalars of the pixel;
calculating a depth adjustment value for each pixel according to the projection scalars, using the L1Smooth rule;
and accumulating the depth adjustment value onto the corresponding pixel of the depth map to complete the filtering.
2. The method of claim 1, wherein calculating a normal vector map from the three-dimensional coordinates comprises:
subtracting the three-dimensional coordinates of the two points to the left and right of each point and normalizing the difference to obtain a first unit vector;
subtracting the three-dimensional coordinates of the two points above and below each point and normalizing the difference to obtain a second unit vector; and
normalizing the cross product of the first and second unit vectors at each point to obtain the normal vector map.
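A minimal NumPy sketch of claim 2, assuming central differences over the immediate left/right and up/down neighbors and leaving a one-pixel border at zero (the patent does not specify border handling):

```python
import numpy as np

def normal_vector_map(points):
    """points: (h, w, 3) 3D coordinates. The left-right and up-down
    differences give two tangent vectors; their normalized cross
    product is the per-pixel normal."""
    normals = np.zeros_like(points)
    du = points[1:-1, 2:] - points[1:-1, :-2]   # first (horizontal) vector
    dv = points[2:, 1:-1] - points[:-2, 1:-1]   # second (vertical) vector
    du = du / np.linalg.norm(du, axis=-1, keepdims=True)
    dv = dv / np.linalg.norm(dv, axis=-1, keepdims=True)
    cross = np.cross(du, dv)
    normals[1:-1, 1:-1] = cross / np.linalg.norm(cross, axis=-1, keepdims=True)
    return normals
```

For a fronto-parallel plane the recovered normal points along the optical axis, as expected.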
3. The method of claim 1, wherein generating a mask map of the same size as the depth map comprises:
generating a mask map of the same size as the depth map; and
setting, in the mask map, the pixels whose corresponding depth values are invalid to 0 and the pixels whose depth values are valid to 1.
4. The method of claim 1, wherein calculating the normal vector approximate-mean map from the normal vector map and the mask map comprises:
computing an integral image for each of the normal vector map and the mask map to obtain a normal vector integral image and a mask integral image; and
dividing the normal vector integral image by the mask integral image to obtain the normal vector approximate-mean map.
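Claims 3 and 4 amount to a mask-weighted box filter computed with integral images. A sketch under the assumption of a square (2r+1)×(2r+1) window clamped at the image border; the window shape and radius are not specified in the claims:

```python
import numpy as np

def window_sum(img, r):
    """Sum of img over a (2r+1)^2 window around each pixel, via an
    integral image (four lookups per pixel), clamped at the borders."""
    h, w = img.shape[:2]
    integ = np.zeros((h + 1, w + 1) + img.shape[2:])
    integ[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    y0, y1 = np.clip(np.arange(h) - r, 0, h), np.clip(np.arange(h) + r + 1, 0, h)
    x0, x1 = np.clip(np.arange(w) - r, 0, w), np.clip(np.arange(w) + r + 1, 0, w)
    return (integ[y1][:, x1] - integ[y0][:, x1]
            - integ[y1][:, x0] + integ[y0][:, x0])

def approx_mean_normals(normals, mask, r=1):
    """Divide the windowed normal sums by the windowed valid-pixel
    counts to get the approximate-mean normal map."""
    s = window_sum(normals * mask[..., None], r)
    c = window_sum(mask.astype(float), r)[..., None]
    return s / np.maximum(c, 1.0)  # guard against all-invalid windows
```

Dividing the sum by the valid-pixel count, rather than the fixed window area, is what makes the mean robust to holes in the depth map.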
5. The method of claim 1, wherein calculating the projection of each point in the neighborhood of each pixel in the depth map onto the direction of the approximate mean normal vector to obtain the projection scalars of each pixel comprises:
subtracting the three-dimensional coordinates of the pixel from the three-dimensional coordinates of each point in the pixel's neighborhood, and taking the dot product of each difference with the pixel's approximate mean normal vector to obtain the projection scalars of the pixel.
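The projection step of claim 5, sketched for a single pixel; the neighborhood radius `r` is an assumed parameter:

```python
import numpy as np

def projection_scalars(points, mean_normals, y, x, r=1):
    """Offsets of the neighbors of pixel (y, x) from its own 3D point,
    projected onto the pixel's approximate mean normal vector."""
    nbhd = points[max(y - r, 0):y + r + 1, max(x - r, 0):x + r + 1]
    return (nbhd - points[y, x]) @ mean_normals[y, x]
```

For a perfectly planar neighborhood every projection is zero, so noise-free regions receive no adjustment.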
6. The method of claim 1, wherein calculating a depth adjustment value for each pixel from the projection scalars using the L1 Smooth rule comprises:
examining the projection values of each pixel: if they are all positive or all negative, taking the mean of all the projection values as the depth adjustment value of the pixel; if both positive and negative values are present, following the L1 Smooth rule by clamping the projection values that exceed a preset threshold to the threshold, keeping the remaining values unchanged, and taking the mean as the depth adjustment value of the pixel.
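The clamp-then-average rule of claim 6 fits in a few lines. The sign test and the threshold clamp follow the claim wording; treating zeros as compatible with either sign is an assumption:

```python
import numpy as np

def depth_adjustment(proj, threshold):
    """If all projections share a sign, return their plain mean;
    otherwise clamp magnitudes at the threshold (the L1 Smooth idea)
    before averaging, so outliers cannot dominate the adjustment."""
    p = np.asarray(proj, dtype=float)
    if np.all(p >= 0) or np.all(p <= 0):
        return p.mean()
    return np.clip(p, -threshold, threshold).mean()
```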
7. A depth map filtering apparatus, comprising:
a first calculation module, configured to calculate, for a depth map acquired by a depth camera, the three-dimensional coordinates of each point according to the intrinsic parameters of the depth camera;
a second calculation module, configured to calculate a normal vector map from the three-dimensional coordinates;
a generating module, configured to generate a mask map of the same size as the depth map;
a third calculation module, configured to calculate a normal vector approximate-mean map from the normal vector map and the mask map, wherein the normal vector approximate-mean map consists of the approximate mean normal vector of each pixel in the depth map;
a fourth calculation module, configured to calculate the projection of each point in the neighborhood of each pixel in the depth map onto the direction of that pixel's approximate mean normal vector, obtaining the projection scalars of each pixel;
a fifth calculation module, configured to calculate a depth adjustment value for each pixel from the projection scalars using the L1 Smooth rule; and
an accumulation module, configured to add the depth adjustment value to the corresponding pixel of the depth map to complete the filtering.
8. The apparatus of claim 7, wherein calculating a depth adjustment value for each pixel from the projection scalars using the L1 Smooth rule comprises:
examining the projection values of each pixel: if they are all positive or all negative, taking the mean of all the projection values as the depth adjustment value of the pixel; if both positive and negative values are present, following the L1 Smooth rule by clamping the projection values that exceed a preset threshold to the threshold, keeping the remaining values unchanged, and taking the mean as the depth adjustment value of the pixel.
9. An electronic device, comprising:
one or more processors;
a memory configured to store one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
10. A computer-readable storage medium having computer instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of any one of claims 1-6.
CN202210038515.5A 2022-01-13 2022-01-13 Depth map filtering method and device, electronic equipment and storage medium Active CN114066779B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210038515.5A CN114066779B (en) 2022-01-13 2022-01-13 Depth map filtering method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN114066779A true CN114066779A (en) 2022-02-18
CN114066779B CN114066779B (en) 2022-05-06

Family

ID=80231060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210038515.5A Active CN114066779B (en) 2022-01-13 2022-01-13 Depth map filtering method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114066779B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114881878A (en) * 2022-05-12 2022-08-09 厦门微图软件科技有限公司 Depth image enhancement method, device, equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150071526A1 (en) * 2012-05-17 2015-03-12 The Regents Of The University Of California Sampling-based multi-lateral filter method for depth map enhancement and codec
CN104778701A (en) * 2015-04-15 2015-07-15 浙江大学 Local image describing method based on RGB-D sensor
CN110223252A (en) * 2019-06-10 2019-09-10 成都理工大学 A kind of depth image reparation algorithm based on compound adaptive region growth criterion
CN110223383A (en) * 2019-06-17 2019-09-10 重庆大学 A kind of plant three-dimensional reconstruction method and system based on depth map repairing
CN110378945A (en) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment
CN110490829A (en) * 2019-08-26 2019-11-22 北京华捷艾米科技有限公司 A kind of filtering method and system of depth image
CN110675346A (en) * 2019-09-26 2020-01-10 武汉科技大学 Image acquisition and depth map enhancement method and device suitable for Kinect
CN112488910A (en) * 2020-11-16 2021-03-12 广州视源电子科技股份有限公司 Point cloud optimization method, device and equipment
CN112991193A (en) * 2020-11-16 2021-06-18 武汉科技大学 Depth image restoration method, device and computer-readable storage medium
CN113808063A (en) * 2021-09-24 2021-12-17 土豆数据科技集团有限公司 Depth map optimization method and device for large-scale scene reconstruction and storage medium
CN113837952A (en) * 2020-06-24 2021-12-24 影石创新科技股份有限公司 Three-dimensional point cloud noise reduction method and device based on normal vector, computer readable storage medium and electronic equipment


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KOSTADIN DABOV et al.: "Image denoising by sparse 3D transform-domain collaborative filtering", 《IEEE TRANSACTIONS ON IMAGE PROCESSING》 *
SEEMA KUMARI et al.: "Patch Similarity in Transform Domain for Intensity/Range Image Denoising with Edge Preservation", 《COMPUTER VISION, PATTERN RECOGNITION, IMAGE PROCESSING, AND GRAPHICS》 *
LIU JIZHONG et al.: "Depth image restoration method based on pixel filtering and median filtering", 《JOURNAL OF OPTOELECTRONICS·LASER》 *
LI TENGFEI et al.: "Real-time depth image restoration algorithm based on window filtering and mean filtering", 《SOFTWARE ENGINEERING》 *
ZHAO SHUFANG et al.: "Improved guided filtering algorithm for Kinect depth images", 《MICROCONTROLLERS & EMBEDDED SYSTEMS》 *


Also Published As

Publication number Publication date
CN114066779B (en) 2022-05-06

Similar Documents

Publication Publication Date Title
US9754377B2 (en) Multi-resolution depth estimation using modified census transform for advanced driver assistance systems
CN110363047B (en) Face recognition method and device, electronic equipment and storage medium
US9947077B2 (en) Video object tracking in traffic monitoring
US8385630B2 (en) System and method of processing stereo images
EP3132418B1 (en) Non local image denoising
WO2018098891A1 (en) Stereo matching method and system
US20050265604A1 (en) Image processing apparatus and method thereof
US20120114175A1 (en) Object pose recognition apparatus and object pose recognition method using the same
CN109493367B (en) Method and equipment for tracking target object
US20120082370A1 (en) Matching device, matching method and matching program
CN108225319B (en) Monocular vision rapid relative pose estimation system and method based on target characteristics
KR20170091496A (en) Method and apparatus for processing binocular image
EP3293700A1 (en) 3d reconstruction for vehicle
CN114066779B (en) Depth map filtering method and device, electronic equipment and storage medium
CN115797300A (en) Edge detection method and device based on adaptive gradient threshold canny operator
CN111311651A (en) Point cloud registration method and device
CN110705330A (en) Lane line detection method, lane line detection apparatus, and computer-readable storage medium
CN110706254B (en) Target tracking template self-adaptive updating method
US11461597B2 (en) Object likelihood estimation device, method, and program
CN116703979A (en) Target tracking method, device, terminal and storage medium
Brockers Cooperative stereo matching with color-based adaptive local support
Badenas et al. Motion and intensity-based segmentation and its application to traffic monitoring
CN107392936B (en) Target tracking method based on meanshift
Maier et al. Distortion compensation for movement detection based on dense optical flow
CN112308798B (en) Image processing method and device, storage medium and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant