CN117058022A - Depth image denoising method and device, computer equipment and storage medium

Depth image denoising method and device, computer equipment and storage medium

Info

Publication number: CN117058022A
Authority: CN (China)
Prior art keywords: interest, region, target, plane, distance
Legal status: Pending
Application number: CN202310960485.8A
Other languages: Chinese (zh)
Inventors: 何苗, 刘枢, 吕江波, 沈小勇, 肖寒, 柴子豪
Current Assignee: Shenzhen Smartmore Technology Co Ltd
Original Assignee: Shenzhen Smartmore Technology Co Ltd
Application filed by Shenzhen Smartmore Technology Co Ltd
Priority to CN202310960485.8A
Publication of CN117058022A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a depth image denoising method and apparatus, a computer device, and a storage medium. The method comprises the following steps: acquiring a plurality of regions of interest in a depth image to be processed, and performing plane fitting on the plurality of regions of interest to obtain a reference fitting plane; determining the vertical distance from each pixel point in each region of interest to the reference fitting plane; for each region of interest, determining a distance interval corresponding to the region of interest based on the vertical distances corresponding to the pixel points in the region of interest, and determining the pixel points whose vertical distances lie within the distance interval as the pixel points to be fitted for that region of interest; performing plane fitting on the pixel points to be fitted of the plurality of regions of interest to obtain a target fitting plane; and denoising the depth image to be processed based on the target fitting plane and the distance intervals corresponding to the regions of interest to obtain a target image. By adopting the method, the denoising accuracy of the depth image can be improved.

Description

Depth image denoising method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a depth image denoising method, a depth image denoising device, a computer device, and a storage medium.
Background
With the development of computer technology, the demand for high-precision measurement keeps growing. High-precision measurement often involves a large number of plane-fitting tasks such as flatness and offset measurement, and before plane fitting is carried out, the depth image needs to be denoised in order to improve the accuracy of the fitted plane.
In the conventional technology, the depth image is denoised using statistics computed over neighboring areas, and the target pixel points obtained after denoising are then plane-fitted to obtain a target plane. Because the target pixel points still contain a large number of noise points, the accuracy of the target plane is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a depth image denoising method, apparatus, computer device, computer readable storage medium, and computer program product capable of improving the denoising accuracy of depth images.
In a first aspect, the present application provides a depth image denoising method, including:
acquiring a plurality of regions of interest in a depth image to be processed, and performing plane fitting on the plurality of regions of interest to obtain a reference fitting plane;
determining the vertical distance between each pixel point in each region of interest and a reference fitting plane respectively;
for each region of interest, determining a distance interval corresponding to the region of interest based on the vertical distance corresponding to each pixel point in the region of interest, and determining the pixel point with the vertical distance in the distance interval as the pixel point to be fitted corresponding to the region of interest;
performing plane fitting on the pixel points to be fitted corresponding to the plurality of regions of interest to obtain a target fitting plane;
and denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image.
In a second aspect, the present application further provides a depth image denoising apparatus, including:
the acquisition module is used for acquiring a plurality of interested areas in the depth image to be processed, and carrying out plane fitting on the plurality of interested areas to obtain a reference fitting plane;
the computing module is used for respectively determining the vertical distance between each pixel point in each region of interest and the reference fitting plane;
the determining module is used for determining a distance interval corresponding to the region of interest based on the vertical distance corresponding to each pixel point in the region of interest, and determining the pixel point with the vertical distance in the distance interval as the pixel point to be fitted corresponding to the region of interest;
the fitting module is used for carrying out plane fitting on the pixel points to be fitted corresponding to the multiple regions of interest to obtain a target fitting plane;
and the denoising module is used for denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image.
In a third aspect, the present application also provides a computer device, where the computer device includes a memory and a processor, where the memory stores a computer program, and the processor implements the steps of the depth image denoising method when executing the computer program.
In a fourth aspect, the present application also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the depth image denoising method described above.
In a fifth aspect, the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the depth image denoising method described above.
According to the depth image denoising method and apparatus, the computer device, and the storage medium, plane fitting is performed on a plurality of regions of interest in the depth image to be processed to obtain a reference fitting plane; because the regions of interest contain a large number of pixel points belonging to the target plane together with noise points, the reference fitting plane is close to the target plane. The vertical distance from each pixel point in a region of interest to the reference fitting plane is calculated, the distance interval corresponding to the region of interest is determined from those vertical distances, and the pixel points whose vertical distances lie within the distance interval are determined as the pixel points to be fitted for that region of interest. It can be understood that the vertical distances of the pixel points on the target plane and of a small number of noise points fall within the distance interval, while the vertical distances of most noise points fall outside it, so denoising each region of interest with its distance interval filters out most of the noise points and the resulting pixel points to be fitted contain only a few noise points. Plane fitting is then performed on the pixel points to be fitted of the plurality of regions of interest, and the obtained target fitting plane is closer to the target plane than the reference fitting plane. Finally, the depth image to be processed is denoised based on the target fitting plane and the distance intervals corresponding to the regions of interest, so that the denoising accuracy of the depth image is improved.
Drawings
Fig. 1 is an application environment diagram of a depth image denoising method according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a depth image denoising method according to an embodiment of the present application;
fig. 3 is a schematic flow chart of a distance interval determining step according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating a target compression parameter determining step according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating a target image acquisition step according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a region of interest denoising procedure according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a depth image and a target image according to an embodiment of the present application;
FIG. 8 is a flowchart illustrating a target fitting plane determining step according to an embodiment of the present application;
FIG. 9 is a schematic diagram of another depth image and a target image according to an embodiment of the present application;
fig. 10 is a block diagram of a depth image denoising apparatus according to an embodiment of the present application;
FIG. 11 is a diagram illustrating an internal architecture of a computer device according to an embodiment of the present application;
fig. 12 is an internal structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The depth image denoising method provided by the embodiment of the application can be applied to the application environment shown in fig. 1, in which the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process; it may be integrated on the server 104 or located on a cloud or other network server. The terminal and the server can each be used independently to execute the depth image denoising method provided in the embodiment of the application, or they can cooperate to execute it. The terminal 102 may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet of Things device, or portable wearable device; the Internet of Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle device, or the like, and the portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster composed of multiple servers.
In some embodiments, as shown in fig. 2, a depth image denoising method is provided, and this embodiment is described by taking application of the method to a computer device as an example, and includes steps 202 to 210.
Step 202, obtaining a plurality of regions of interest in a depth image to be processed, and performing plane fitting on the plurality of regions of interest to obtain a reference fitting plane.
The depth image is a two-dimensional image in which the pixel value of each pixel point is the distance from the object to the shooting device. A region of interest (ROI, Region of Interest) is a region of concern in an image; a region of interest contains a large number of pixel points lying in a certain plane and a small number of pixel points lying in other planes. The certain plane is the plane of concern, i.e., the plane that the plane fitting is expected to recover, and the small number of pixel points from other planes are noise points. The regions of interest may be selected automatically by a preset method or manually by an operator, and their number, size, and shape may be set according to actual requirements; for example, a region of interest may be rectangular, circular, polygonal, or of another shape. The plurality of regions of interest are a plurality of image blocks corresponding to a certain plane in the depth image. It can be understood that, when an image of a certain plane needs to be obtained from the depth image, a plurality of image blocks are acquired in and around the image of that plane, each image block being one region of interest, and processing the plurality of regions of interest yields the basic data for denoising the depth image to be processed. Plane fitting is the process of finding an optimal plane model from a set of data points to approximate the plane in which the data points lie; it may use the least squares method or principal component analysis, which is not limited here. The reference fitting plane may be represented by a plane function, e.g., Ax + By + Cz + D = 0, where A, B, and C are the components of the normal vector of the reference fitting plane and D is a constant.
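As a non-authoritative illustration of the plane fitting referred to here, the following Python sketch fits a least-squares plane to the pixel points of the regions of interest by principal component analysis; the use of numpy, the function name fit_plane, and the (N, 3) point layout are assumptions made for this example rather than details taken from the application.

```python
import numpy as np

def fit_plane(points: np.ndarray):
    """Least-squares plane fit through 3D points of shape (N, 3).

    Returns (normal, d) so that the plane is normal . p + d = 0 with |normal| = 1,
    i.e. the coefficients (A, B, C) and D of Ax + By + Cz + D = 0.
    """
    centroid = points.mean(axis=0)
    # Principal component analysis: the normal is the direction of least variance.
    _, _, vh = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vh[-1]                   # unit normal (A, B, C)
    d = -float(normal @ centroid)     # plane constant D
    return normal, d

# Hypothetical usage: roi_points is a list of (N_i, 3) arrays, one per region of
# interest, each row holding the (x, y, z) pixel point information of one pixel.
# normal, d = fit_plane(np.vstack(roi_points))   # reference fitting plane
```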
Step 204, determining a vertical distance from each pixel point in each region of interest to the reference fitting plane.
The pixel point may be represented by pixel point coordinates, pixel point information, or the like. The vertical distance is the perpendicular distance from the pixel point to the reference fitting plane and may be positive, negative, or zero: a positive vertical distance indicates that the pixel point is above the reference fitting plane, a negative vertical distance indicates that it is below the reference fitting plane, and a vertical distance equal to zero indicates that it lies on the reference fitting plane.
Illustratively, the computer device calculates a vertical distance from the pixel point to the reference fitting plane based on the pixel point information of the pixel point and a plane function of the reference fitting plane.
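A minimal sketch of that distance computation, continuing the assumptions of the previous example (numpy, pixel points stored as (x, y, z) rows, plane given as Ax + By + Cz + D = 0):

```python
import numpy as np

def signed_distances(points: np.ndarray, normal: np.ndarray, d: float) -> np.ndarray:
    """Signed perpendicular distance from each (x, y, z) point to the plane
    Ax + By + Cz + D = 0: positive above the plane, negative below, zero on it."""
    return (points @ normal + d) / np.linalg.norm(normal)
```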
Step 206, for each region of interest, determining a distance interval corresponding to the region of interest based on the vertical distances corresponding to the respective pixel points in the region of interest, and determining the pixel points with the vertical distances in the distance interval as the pixel points to be fitted corresponding to the region of interest.
The distance interval is a value range of the vertical distance, consisting of a minimum distance threshold and a maximum distance threshold, and is used for removing noise points in a region of interest. The distance interval is related to the degree of dispersion of the vertical distances of the pixel points in the region of interest. For example, a region of interest may correspond to a distance interval of [-0.5, 0.6].
For each region of interest, the computer device determines a distance interval corresponding to the region of interest based on the degree of dispersion of the vertical distances corresponding to the pixels in the region of interest.
And step 208, performing plane fitting on the pixel points to be fitted corresponding to the multiple regions of interest to obtain a target fitting plane.
The target fitting plane is obtained by performing plane fitting by using pixel points to be fitted in a plurality of interested areas.
In some embodiments, the computer device performs plane fitting on the pixel points to be fitted corresponding to the multiple regions of interest to obtain an initial fitting plane, and repeatedly performs steps 204-208 with the initial fitting plane as the updated reference fitting plane until a loop stop condition is met, thereby obtaining the target fitting plane. The loop stop condition may be that the number of iterations reaches a preset count.
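The loop structure of that embodiment could look like the following sketch; fit_plane and signed_distances are the helpers assumed in the earlier examples, interval_fn stands for the per-ROI distance-interval computation detailed under steps 302-308 below, and max_iterations plays the role of the preset loop count. All names are assumptions for illustration.

```python
import numpy as np

def refine_plane(roi_points, normal, d, interval_fn, max_iterations=3):
    """Repeat steps 204-208: re-select pixel points to be fitted with the current
    reference plane, refit, and use the result as the updated reference plane."""
    for _ in range(max_iterations):               # loop stop condition: preset iteration count
        to_fit = []
        for pts in roi_points:                    # one (N_i, 3) array per region of interest
            dist = signed_distances(pts, normal, d)
            low, high = interval_fn(dist)         # distance interval for this ROI (steps 302-308)
            to_fit.append(pts[(dist >= low) & (dist <= high)])   # pixel points to be fitted
        normal, d = fit_plane(np.vstack(to_fit))  # refit; becomes the updated reference plane
    return normal, d                              # target fitting plane
```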
Step 210, denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image.
The denoising process refers to a process of removing noise points in the depth image to be processed, and may also be understood as a process of selecting target pixels from the depth image to be processed, where the noise points refer to pixels in the depth image to be processed other than the target pixels.
The computer device calculates the target vertical distance between each pixel point in the depth image to be processed and the target fitting plane, determines a target distance interval corresponding to the depth image to be processed based on the distance interval corresponding to each region of interest, determines the pixel point with the target vertical distance in the target distance interval as the target pixel point corresponding to the depth image to be processed, and obtains the target image based on the target pixel point.
According to the above depth image denoising method, plane fitting is performed on a plurality of regions of interest in the depth image to be processed to obtain a reference fitting plane; because the regions of interest contain a large number of pixel points belonging to the target plane together with noise points, the reference fitting plane is close to the target plane. The vertical distance from each pixel point in a region of interest to the reference fitting plane is calculated, the distance interval corresponding to the region of interest is determined from those vertical distances, and the pixel points whose vertical distances lie within the distance interval are determined as the pixel points to be fitted for that region of interest. The vertical distances of the pixel points on the target plane and of a small number of noise points lie within the distance interval, while the vertical distances of most noise points lie outside it, so denoising each region of interest with its distance interval filters out most of the noise points and the resulting pixel points to be fitted contain only a few noise points. Plane fitting is then performed on the pixel points to be fitted of the plurality of regions of interest, and the obtained target fitting plane is closer to the target plane than the reference fitting plane. Finally, the depth image to be processed is denoised based on the target fitting plane and the distance intervals corresponding to the regions of interest, which improves the denoising accuracy of the depth image.
In some embodiments, as shown in fig. 3, determining a distance interval corresponding to the region of interest based on the vertical distance corresponding to each pixel point in the region of interest includes:
step 302, obtaining a target compression parameter corresponding to the region of interest, and a first proportion parameter and a second proportion parameter.
The target compression parameter is the compression coefficient corresponding to the region of interest; it is inversely related to the degree of dispersion of the vertical distances of the pixel points in the region of interest, and different regions of interest of the depth image to be processed may have different target compression parameters. The first proportion parameter is a percentage coefficient related to the number of pixel points in the region of interest that lie below the reference fitting plane, and the second proportion parameter is a percentage coefficient related to the number of pixel points in the region of interest that lie above the reference fitting plane. The first proportion parameters corresponding to different regions of interest of the depth image to be processed may be the same, as may the second proportion parameters, and both may be preset parameters.
The computer device obtains the first scale parameter and the second scale parameter, determines a degree of dispersion corresponding to the region of interest based on a vertical distance corresponding to the pixel point in the region of interest, and determines a target compression parameter corresponding to the region of interest based on the degree of dispersion.
Step 304, counting the pixel points corresponding to the vertical distances smaller than zero based on the vertical distances corresponding to the pixel points in the region of interest to obtain a first number; and counting the pixel points corresponding to the vertical distance larger than zero to obtain a second number.
Step 306, fusing the target compression parameter, the first proportion parameter and the first quantity to obtain a first arrangement sequence; and fusing the target compression parameter, the second proportion parameter and the second number to obtain a second arrangement sequence.
The first arrangement sequence is the rank used to determine the minimum distance threshold; it can be understood as the number of pixel points with a vertical distance smaller than zero that are retained. For example, if the first number for a region of interest is 100, the first proportion parameter is 0.8, and the target compression parameter is 0.5, the first arrangement sequence is 40, that is, 40 pixel points with vertical distances smaller than zero are retained in the region of interest, and the minimum distance threshold corresponding to the region of interest can be determined from this rank. Similarly, the second arrangement sequence is the rank used to determine the maximum distance threshold, and can be understood as the number of pixel points with a vertical distance greater than zero that are retained.
Illustratively, the computer device performs a multiplication operation on the target compression parameter, the first proportion parameter and the first number to obtain a first result, performs a rounding operation on the first result to obtain a first arrangement sequence, and then performs a multiplication operation on the target compression parameter, the second proportion parameter and the second number to obtain a second result, and performs a rounding operation on the second result to obtain a second arrangement sequence.
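A sketch of this counting-and-fusing step, under the same assumptions as the earlier examples; the text only specifies a rounding operation, so the use of math.floor here is one possible choice, and the parameter names compress, low_ratio, and high_ratio are illustrative.

```python
import math
import numpy as np

def arrangement_orders(dist: np.ndarray, compress: float,
                       low_ratio: float, high_ratio: float):
    """Count the pixel points below/above the reference plane and fuse the counts
    with the target compression and proportion parameters by multiplying and rounding."""
    first_number = int((dist < 0).sum())      # pixel points below the reference fitting plane
    second_number = int((dist > 0).sum())     # pixel points above the reference fitting plane
    first_order = math.floor(compress * low_ratio * first_number)
    second_order = math.floor(compress * high_ratio * second_number)
    return first_order, second_order

# Example from the text: first_number = 100, low_ratio = 0.8, compress = 0.5
# gives first_order = 40, i.e. 40 below-plane pixel points are retained.
```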
Step 308, determining a distance interval corresponding to the region of interest based on the first arrangement order, the second arrangement order and the vertical distance corresponding to each pixel point in the region of interest.
The computer device determines a minimum distance threshold based on the first ranking and the vertical distance less than zero in the region of interest, determines a maximum distance threshold based on the second ranking and the vertical distance greater than zero in the region of interest, and composes the minimum distance threshold and the maximum distance threshold into a distance interval corresponding to the region of interest.
In this embodiment, the target compression parameter is related to the degree of dispersion of the vertical distances of the pixel points in the region of interest: a higher degree of dispersion indicates that the vertical distances of the pixel points in the region of interest are more inconsistent, so fewer pixel points in the region of interest need to be retained; a lower degree of dispersion indicates that the vertical distances are more consistent, so more pixel points need to be retained. Determining the distance interval corresponding to the region of interest from the first arrangement sequence, the second arrangement sequence, and the vertical distances of the pixel points in the region of interest improves the accuracy of the distance interval.
In some embodiments, as shown in fig. 4, obtaining the target compression parameter corresponding to the region of interest includes:
step 402, a sequence of compression parameters is obtained.
The compression parameter sequence is a sequence composed of a plurality of compression parameters. The number of compression parameters in the compression parameter sequence may be the same as the number of regions of interest, and the compression parameter sequence may be preset by an operator.
Step 404, for each region of interest, calculating the degree of dispersion of the vertical distances corresponding to the pixel points in the region of interest to obtain the standard deviation corresponding to the region of interest.
The degree of dispersion describes how spread out the vertical distances corresponding to the pixel points are: the smaller the degree of dispersion, the smaller the differences between the vertical distances; the larger the degree of dispersion, the larger the differences. The degree of dispersion can be expressed by parameters such as the variance, the standard deviation, or the range.
Step 406, sorting the multiple regions of interest based on the standard deviation corresponding to each region of interest to obtain a region arrangement sequence; the region arrangement sequence is the reverse of the arrangement sequence of the compression parameter sequence.
The regions of interest in the region arrangement sequence are ordered according to the size of the standard deviation corresponding to the regions of interest.
Specifically, the computer device obtains the ordering direction of the compression parameter sequence; if the compression parameter sequence is arranged from small to large, the standard deviations corresponding to the regions of interest are arranged from large to small to obtain the region arrangement sequence, and if the compression parameter sequence is arranged from large to small, the standard deviations are arranged from small to large to obtain the region arrangement sequence.
Step 408, determining a target arrangement sequence corresponding to the region of interest based on the region arrangement sequence; and determining the compression parameters corresponding to the target arrangement sequence in the compression parameter sequence as target compression parameters corresponding to the region of interest.
The target arrangement sequence refers to the arrangement sequence of the region of interest in the region arrangement sequence.
In this embodiment, the target compression parameter corresponding to a region of interest is determined from the compression parameter sequence and the position of the region of interest in the region arrangement sequence. The larger the target arrangement order corresponding to a region of interest, the more inconsistent the vertical distances of its pixel points and the fewer pixel points need to be retained, so the smaller its target compression parameter and the smaller its distance interval; a smaller distance interval removes more noise points from the region of interest, which improves the accuracy of denoising the region of interest.
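One way to realize this assignment is sketched below; it assumes the compression parameter sequence adapt_ratios is given in ascending order, so that the region of interest with the largest standard deviation receives the smallest compression parameter. The function name and the numpy-based ranking are assumptions of this example.

```python
import numpy as np

def assign_compression_parameters(stds, adapt_ratios):
    """Rank the ROIs by the standard deviation of their vertical distances in the
    order opposite to the compression parameter sequence and pick each ROI's
    compression parameter by its position in that ranking."""
    order = np.argsort(stds)[::-1]                 # region arrangement: largest std first
    target = np.empty(len(stds), dtype=float)
    target[order] = np.asarray(adapt_ratios, dtype=float)
    return target                                  # target[i] is ROI i's target compression parameter

# stds = [0.9, 0.2, 0.5] with adapt_ratios = [0.3, 0.5, 0.7] gives
# ROI 0 -> 0.3, ROI 2 -> 0.5, ROI 1 -> 0.7.
```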
In some embodiments, determining a distance interval corresponding to the region of interest based on the first arrangement order, the second arrangement order, and the vertical distance corresponding to each pixel point in the region of interest includes:
for each region of interest, sorting the vertical distances smaller than zero from large to small, and determining the vertical distance whose rank equals the first arrangement sequence as the minimum distance threshold corresponding to the region of interest;
sorting the vertical distances greater than zero from small to large, and determining the vertical distance whose rank equals the second arrangement sequence as the maximum distance threshold corresponding to the region of interest;
and obtaining a distance interval corresponding to the region of interest based on the minimum distance threshold and the maximum distance threshold.
The minimum distance threshold is the minimum vertical distance among the retained pixel points, and the maximum distance threshold is the maximum vertical distance among the retained pixel points.
In this embodiment, according to the first arrangement sequence and the second arrangement sequence, a distance interval corresponding to the region of interest is determined, and basic data is provided for denoising the region of interest subsequently.
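The threshold selection just described can be sketched as follows, reusing arrangement_orders from the previous example; treating the arrangement orders as 1-based ranks and falling back to 0.0 when a side has no pixel points are assumptions of this sketch.

```python
import numpy as np

def roi_distance_interval(dist: np.ndarray, first_order: int, second_order: int):
    """Minimum/maximum distance thresholds for one region of interest: negative
    distances sorted from large to small give the minimum threshold at the first
    arrangement order, positive distances sorted from small to large give the
    maximum threshold at the second arrangement order."""
    neg = np.sort(dist[dist < 0])[::-1]    # descending: closest to the plane first
    pos = np.sort(dist[dist > 0])          # ascending: closest to the plane first
    min_threshold = float(neg[min(first_order, len(neg)) - 1]) if first_order > 0 and len(neg) else 0.0
    max_threshold = float(pos[min(second_order, len(pos)) - 1]) if second_order > 0 and len(pos) else 0.0
    return min_threshold, max_threshold

# e.g. roi_distance_interval(dist, *arrangement_orders(dist, 0.5, 0.8, 0.8))
# might yield an interval such as (-0.5, 0.6).
```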
In some embodiments, as shown in fig. 5, denoising a depth image to be processed based on a target fitting plane and distance intervals corresponding to each region of interest to obtain a target image, including:
Step 502, respectively calculating the target vertical distance from each pixel point in the depth image to be processed to the target fitting plane.
The target vertical distance refers to the distance from the pixel point in the depth image to be processed to the target fitting plane.
Step 504, determining the maximum value of the minimum distance thresholds in the distance intervals as a target minimum distance threshold; and determining the minimum value of the maximum distance thresholds in the distance intervals as a target maximum distance threshold.
The target minimum distance threshold refers to a minimum vertical distance for retaining a pixel point in the depth image to be processed. The target maximum distance threshold refers to the maximum vertical distance that retains the pixel points in the depth image to be processed.
The computer device may obtain a minimum distance threshold and a maximum distance threshold from distance intervals corresponding to the respective regions of interest, determine a maximum value of the minimum distance thresholds as a target minimum distance threshold, and determine a minimum value of the maximum distance thresholds as a target maximum distance threshold.
Step 506, determining a target distance interval corresponding to the depth image to be processed based on the target minimum distance threshold and the target maximum distance threshold.
The computer device uses an interval composed of a target minimum distance threshold and a target maximum distance threshold as a target distance interval corresponding to the depth image to be processed.
Step 508, obtaining a target image based on the pixel points whose target vertical distances are in the target distance interval.
The computer device respectively compares the target vertical distance corresponding to each pixel point in the depth image to be processed with the target distance interval, determines the pixel point corresponding to the target vertical distance as a target pixel point if the target vertical distance is within the target distance interval, discards the pixel point corresponding to the target vertical distance if the target vertical distance is not within the target distance interval, and forms the target pixel point into a corresponding target image.
In this embodiment, the target distance interval is used to remove the pixel points of the non-target image in the depth image to be processed, so as to obtain the pixel points located in the target distance interval, that is, the pixel points located on the target fitting plane and near the target fitting plane, and the target image is generated according to the pixel points located in the target distance interval, so that the noise point in the target image is reduced, and the denoising precision of the depth image is improved.
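A compact sketch of steps 502-508 under the same assumptions as the earlier examples (pixel points as (x, y, z) rows, plane as Ax + By + Cz + D = 0, intervals as a list of per-ROI (minimum, maximum) threshold pairs); the function name is illustrative.

```python
import numpy as np

def denoise_depth_points(points: np.ndarray, normal: np.ndarray, d: float,
                         intervals) -> np.ndarray:
    """Keep the pixel points whose target vertical distance lies in the target
    distance interval, built from the largest per-ROI minimum threshold and the
    smallest per-ROI maximum threshold."""
    target_min = max(lo for lo, _ in intervals)
    target_max = min(hi for _, hi in intervals)
    dist = (points @ normal + d) / np.linalg.norm(normal)   # target vertical distances
    keep = (dist >= target_min) & (dist <= target_max)
    return points[keep]                                      # target pixel points
```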
In some embodiments, performing a plane fit on the plurality of regions of interest to obtain a reference fit plane includes:
acquiring pixel point information corresponding to each pixel point in a plurality of interest areas;
and carrying out plane fitting on the pixel points in the multiple regions of interest based on the pixel point information corresponding to the pixel points in the multiple regions of interest to obtain a reference fitting plane.
The pixel point information is the data representing a pixel point. It may be represented by three-dimensional coordinates in which two dimensions give the position of the pixel point and the third gives its height; for example, in the pixel point information (x, y, z), x and y are the position of the pixel point in the image and z is the height corresponding to the pixel point.
In this embodiment, the reference fitting plane is obtained by performing plane fitting on the pixel point information of the pixel points in the multiple regions of interest. Because the regions of interest contain a large number of pixel points of the target plane and some noise points, the reference fitting plane is close to the target plane, providing basic data for the subsequent denoising of the regions of interest.
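For illustration, one plausible way to assemble this (x, y, z) pixel point information from a rectangular region of interest of a depth image is sketched below; the slice-based ROI description is an assumption of the example.

```python
import numpy as np

def roi_to_points(depth: np.ndarray, roi: tuple) -> np.ndarray:
    """Turn a rectangular ROI of a depth image into (x, y, z) pixel point
    information: (x, y) is the pixel position and z the depth value there."""
    ys, xs = np.mgrid[roi]                 # pixel coordinates inside the ROI
    zs = depth[roi]
    return np.column_stack([xs.ravel(), ys.ravel(), zs.ravel().astype(float)])

# e.g. roi_to_points(depth_image, (slice(100, 150), slice(200, 260)))
```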
In some embodiments, as shown in fig. 6, performing a plane fit on the plurality of regions of interest, obtaining the reference fit plane further includes:
step 602, a preset depth threshold and a preset gradient threshold are obtained.
The preset depth threshold value refers to a preset depth. The preset gradient threshold value refers to a preset gradient.
Step 604, for each region of interest, determining a first mask corresponding to the region of interest based on a preset depth threshold and the depth of each pixel point in the region of interest.
Where a mask refers to a binary image or matrix of the same size as the region of interest, for specifying a particular region or pixel to be processed or manipulated.
For each region of interest, the computer device compares the depth corresponding to each pixel in turn with a preset depth threshold, and if the depth of the pixel is equal to the preset depth threshold, determines the value corresponding to the pixel as a first identifier; if the depth of the pixel point is not equal to the preset depth threshold value, determining that the value corresponding to the pixel point is a second mark; and obtaining a first mask corresponding to the region of interest based on the value corresponding to each pixel point. Wherein the first identifier may be 1 and the second identifier may be 0.
Step 606, calculating the gradient of each pixel point in the region of interest, and determining the second mask corresponding to the region of interest based on the preset gradient threshold and the gradient of each pixel point in the region of interest.
The gradient refers to a depth change rate of the pixel point, and can be understood as the depth change intensity and direction between the pixel point and the adjacent pixel point.
For each region of interest, the computer device calculates a gradient of each pixel point by using a preset method, sequentially compares the gradient of each pixel point with a preset gradient threshold value, and determines a value corresponding to the pixel point as a second identifier if the gradient of the pixel point is greater than the preset gradient threshold value; if the gradient of the pixel point is smaller than or equal to a preset gradient threshold value, determining the value corresponding to the pixel point as a first mark, and obtaining a second mask corresponding to the region of interest based on the value corresponding to each pixel point.
And 608, denoising the region of interest based on the first mask and the second mask to obtain a target region of interest.
Illustratively, the computer device performs an AND operation on the first mask, the second mask, and the region of interest to obtain the target region of interest.
Step 610, performing plane fitting on the pixel points in the multiple regions of interest based on the pixel point information corresponding to each pixel point in the multiple regions of interest to obtain a reference fitting plane, including: performing plane fitting on the pixel points in the multiple target regions of interest based on the pixel point information corresponding to the pixel points in the multiple target regions of interest to obtain the reference fitting plane.
In this embodiment, denoising the region of interest with the first mask and the second mask to obtain the target region of interest can be understood as removing the pixel points in the region of interest that do not satisfy the preset depth threshold and the preset gradient threshold, which reduces the noise in the region of interest; the reference fitting plane obtained by plane fitting the pixel points in the multiple target regions of interest is therefore closer to the target plane.
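A sketch of the two masks and their combination; np.gradient is only one possible way to estimate the gradient referred to in step 606, and the function name is illustrative.

```python
import numpy as np

def roi_masks(depth_roi: np.ndarray, depth_threshold: float, gradient_threshold: float):
    """First mask: pixels whose depth equals the preset depth threshold.
    Second mask: pixels whose gradient magnitude does not exceed the preset
    gradient threshold. Their AND selects the pixels kept for the target ROI."""
    first_mask = depth_roi == depth_threshold
    gy, gx = np.gradient(depth_roi.astype(float))   # depth change rate along rows/columns
    second_mask = np.hypot(gx, gy) <= gradient_threshold
    return first_mask & second_mask

# kept = depth_roi[roi_masks(depth_roi, depth_threshold, gradient_threshold)]
```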
In one exemplary embodiment, a first depth image denoising method is used for a depth image with scattered noise points and comprises the following steps:
the method comprises the steps that computer equipment obtains a depth image to be processed, a preset depth threshold value and a preset gradient threshold value, depth corresponding to each pixel point in the depth image to be processed is compared with the preset depth threshold value in sequence, and if the depth of the pixel point is equal to the preset depth threshold value, a value corresponding to the pixel point is determined to be 1; if the depth of the pixel point is not equal to the preset depth threshold value, determining that the value corresponding to the pixel point is 0; and obtaining a first mask corresponding to the region of interest based on the value corresponding to each pixel point.
The computer equipment calculates the gradient of each pixel point by using a preset method, sequentially compares the gradient of each pixel point with a preset gradient threshold value, and determines the value corresponding to the pixel point as 0 if the gradient of the pixel point is larger than the preset gradient threshold value; if the gradient of the pixel point is smaller than or equal to a preset gradient threshold value, determining the value corresponding to the pixel point as 1; and obtaining a second mask corresponding to the region of interest based on the value corresponding to each pixel point.
And the computer equipment performs AND operation on the first mask, the second mask and the region of interest to obtain a target image. For example, as shown in fig. 7, (a) is a depth image to be processed, 702, 704, 706 and 708 in the depth image to be processed represent plane areas with different depths, 704 is a target image, and (b) is a plane area formed by pixels with depth being a depth threshold in (a), it can be seen that noise points with depth being the depth threshold exist in the depth image to be processed, that is, the pixels except for 704 in (b) are all noise points, the noise points are scattered, and the first depth image denoising method is adopted to perform denoising processing on (a) to obtain a target image (c).
A second depth image denoising method is used for a depth image with concentrated noise points. First, the target fitting plane corresponding to the depth image to be processed is determined according to the flowchart shown in fig. 8, comprising the following steps:
The computer device obtains the depth image to be processed and n regions of interest {ROI_1, …, ROI_n} in the depth image to be processed, obtains a compression parameter sequence {adapt_ratio_1, …, adapt_ratio_n}, a first proportion parameter low_ratio, and a second proportion parameter high_ratio, places every pixel point of all the regions of interest into a target set valid_points, and performs plane fitting on the target set valid_points to obtain a reference fitting plane. For each pixel point in each region of interest, the computer device then calculates the vertical distance from the pixel point to the reference fitting plane.
For each region of interest, the computer device calculates the standard deviation of the vertical distances of the pixel points in the region of interest, yielding a set {d_1, …, d_n} of n standard deviations. If the compression parameter sequence is arranged from large to small, the computer device sorts the standard deviations of the regions of interest in ascending order to obtain the region arrangement sequence; based on the region arrangement sequence, the target arrangement order corresponding to each region of interest is determined, and the compression parameter at the target arrangement order in the compression parameter sequence is determined as the target compression parameter corresponding to the region of interest. The target set valid_points is then emptied.
For each region of interest, based on the vertical distances corresponding to the pixel points in the region of interest, the pixel points whose vertical distances are smaller than zero are counted to obtain a first number, and the pixel points whose vertical distances are greater than zero are counted to obtain a second number. The computer device multiplies the target compression parameter, the first proportion parameter, and the first number corresponding to the region of interest to obtain the first arrangement order, and multiplies the target compression parameter, the second proportion parameter, and the second number to obtain the second arrangement order. The vertical distances smaller than zero are sorted from large to small, and the vertical distance at the first arrangement order is determined as the minimum distance threshold corresponding to the region of interest; the vertical distances greater than zero are sorted from small to large, and the vertical distance at the second arrangement order is determined as the maximum distance threshold corresponding to the region of interest; the distance interval corresponding to the region of interest is obtained from the minimum distance threshold and the maximum distance threshold.
For each region of interest, the computer device compares the vertical distance corresponding to each pixel point in the region of interest with the distance interval; if the vertical distance lies within the distance interval, the pixel point is determined as a pixel point to be fitted for the region of interest and is put into the target set valid_points. After all regions of interest have been traversed, plane fitting is performed on the target set valid_points to obtain the target fitting plane.
After the target fitting plane is obtained, the computer device obtains the minimum distance threshold and the maximum distance threshold from the distance interval corresponding to each region of interest, determines the maximum of the minimum distance thresholds as the target minimum distance threshold, determines the minimum of the maximum distance thresholds as the target maximum distance threshold, and takes the interval formed by the target minimum distance threshold and the target maximum distance threshold as the target distance interval corresponding to the depth image to be processed. The target vertical distance from each pixel point in the depth image to be processed to the target fitting plane is then calculated, the pixel points whose target vertical distances lie within the target distance interval are determined as the target pixel points of the depth image to be processed, and the corresponding target image is obtained from the target pixel points. For example, as shown in fig. 9, (a) is a depth image to be processed whose noise points are numerous and concentrated; denoising (a) with the second depth image denoising method yields the target image (b).
In the second depth image denoising method, plane fitting is performed on a plurality of regions of interest in the depth image to be processed to obtain a reference fitting plane; because the regions of interest contain a large number of pixel points belonging to the target plane together with noise points, the reference fitting plane is close to the target plane. The vertical distance from each pixel point in a region of interest to the reference fitting plane is calculated, the distance interval corresponding to the region of interest is determined from those vertical distances, and the pixel points whose vertical distances lie within the distance interval are determined as the pixel points to be fitted for that region of interest. The vertical distances of the pixel points on the target plane and of a small number of noise points lie within the distance interval, while the vertical distances of most noise points lie outside it, so denoising each region of interest with its distance interval filters out most of the noise points and the resulting pixel points to be fitted contain only a few noise points. Plane fitting is then performed on the pixel points to be fitted of the plurality of regions of interest, and the obtained target fitting plane is closer to the target plane than the reference fitting plane. Finally, the depth image to be processed is denoised based on the target fitting plane and the distance intervals corresponding to the regions of interest, which improves the denoising accuracy of the depth image.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the above flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with at least part of the other steps or of the sub-steps or stages of other steps.
Based on the same inventive concept, the embodiment of the application also provides a depth image denoising device for realizing the above related depth image denoising method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitation in the embodiment of the depth image denoising device or devices provided below may be referred to the limitation of the depth image denoising method hereinabove, and will not be repeated here.
In some embodiments, as shown in fig. 10, there is provided a depth image denoising apparatus, including:
an obtaining module 1002, configured to obtain a plurality of regions of interest in a depth image to be processed, and perform plane fitting on the plurality of regions of interest to obtain a reference fitting plane;
a calculating module 1004, configured to determine a vertical distance between each pixel point in each region of interest and a reference fitting plane;
a determining module 1006, configured to determine, for each region of interest, a distance interval corresponding to the region of interest based on vertical distances corresponding to respective pixel points in the region of interest, and determine pixel points with vertical distances located in the distance interval as pixel points to be fitted corresponding to the region of interest;
the fitting module 1008 is configured to perform plane fitting on pixel points to be fitted corresponding to the multiple regions of interest, so as to obtain a target fitting plane;
and the denoising module 1010 is used for denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image.
In some embodiments, in determining a distance interval corresponding to the region of interest based on the vertical distance corresponding to each pixel point in the region of interest, the determining module 1006 is specifically configured to:
obtaining a target compression parameter corresponding to the region of interest, and a first proportion parameter and a second proportion parameter;
based on the vertical distances corresponding to all the pixel points in the region of interest, counting the pixel points corresponding to the vertical distances smaller than zero to obtain a first number; counting the pixel points corresponding to the vertical distance larger than zero to obtain a second number;
fusing the target compression parameter, the first proportion parameter and the first quantity to obtain a first arrangement sequence; fusing the target compression parameter, the second proportion parameter and the second number to obtain a second arrangement sequence;
and determining a distance interval corresponding to the region of interest based on the first arrangement sequence, the second arrangement sequence and the vertical distance corresponding to each pixel point in the region of interest.
In some embodiments, in acquiring the target compression parameter corresponding to the region of interest, the determining module 1006 is specifically configured to:
acquiring a compression parameter sequence;
calculating the dispersion degree of the vertical distances corresponding to a plurality of pixel points in the region of interest according to each region of interest to obtain the standard deviation corresponding to the region of interest;
sequencing a plurality of regions of interest based on standard deviations corresponding to the regions of interest to obtain a region arrangement sequence; the arrangement sequence of the regions is opposite to the arrangement sequence of the compression parameter sequences;
determining a target arrangement sequence corresponding to the region of interest based on the region arrangement sequence; and determining the compression parameters corresponding to the target arrangement sequence in the compression parameter sequence as target compression parameters corresponding to the region of interest.
In some embodiments, in determining a distance interval corresponding to the region of interest based on the first arrangement order, the second arrangement order, and the vertical distance corresponding to each pixel point in the region of interest, the determining module 1006 is specifically configured to:
for each region of interest, sorting the vertical distances smaller than zero from large to small, and determining the vertical distance whose rank equals the first arrangement sequence as the minimum distance threshold corresponding to the region of interest;
sorting the vertical distances greater than zero from small to large, and determining the vertical distance whose rank equals the second arrangement sequence as the maximum distance threshold corresponding to the region of interest;
and obtaining a distance interval corresponding to the region of interest based on the minimum distance threshold and the maximum distance threshold.
In some embodiments, in denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest, the denoising module 1010 is specifically configured to:
respectively calculating the target vertical distance from each pixel point in the depth image to be processed to the target fitting plane;
determining the maximum value of the minimum distance thresholds in the distance intervals as a target minimum distance threshold; determining the minimum value of the maximum distance threshold values in the distance intervals as a target maximum distance threshold value;
determining a target distance interval corresponding to the depth image to be processed based on the target minimum distance threshold and the target maximum distance threshold;
and obtaining a target image based on the pixel points of which the target vertical distance is in the target distance interval.
In some embodiments, in performing a plane fit on a plurality of regions of interest to obtain a reference fit plane, the obtaining module 1002 is specifically configured to:
acquiring pixel point information corresponding to each pixel point in a plurality of interest areas;
and carrying out plane fitting on the pixel points in the multiple regions of interest based on the pixel point information corresponding to the pixel points in the multiple regions of interest to obtain a reference fitting plane.
In some embodiments, in performing a plane fit on a plurality of regions of interest to obtain a reference fit plane, the obtaining module 1002 is specifically configured to:
acquiring a preset depth threshold and a preset gradient threshold;
determining a first mask corresponding to each region of interest based on a preset depth threshold and the depth of each pixel point in the region of interest;
calculating the gradient of each pixel point in the region of interest, and determining a second mask corresponding to the region of interest based on a preset gradient threshold value and the gradient of each pixel point in the region of interest;
denoising the region of interest based on the first mask and the second mask to obtain a target region of interest;
performing plane fitting on the pixel points in the multiple regions of interest based on the pixel point information corresponding to each pixel point in the multiple regions of interest to obtain a reference fitting plane, wherein the method comprises the following steps: and carrying out plane fitting on the pixel points in the multiple target interested areas based on the pixel point information corresponding to the pixel points in the multiple target interested areas to obtain a reference fitting plane.
Each of the above modules in the depth image denoising apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded, in the form of hardware, in or independent of a processor of the computer device, or may be stored, in the form of software, in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In some embodiments, a computer device is provided. The computer device may be a terminal, and its internal structure may be as shown in FIG. 11. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used for wired or wireless communication with an external terminal, where the wireless communication may be implemented through Wi-Fi, a mobile cellular network, near field communication (NFC), or other technologies. The computer program, when executed by the processor, implements the steps of the depth image denoising method described above. The display unit of the computer device is used to form a visible image and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touch pad provided on the housing of the computer device, or an external keyboard, touch pad, mouse, or the like.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, a computer device is also provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the steps of the above method embodiments when executing the computer program.
In some embodiments, a computer-readable storage medium 1200 is provided, on which a computer program 1202 is stored; its internal structure may be as shown in FIG. 12. The computer program 1202, when executed by a processor, implements the steps of the above method embodiments.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including but not limited to user equipment information, user personal information, and the like) and the data (including but not limited to data used for analysis, stored data, and displayed data) involved in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the above-described methods may be implemented by a computer program instructing relevant hardware. The computer program may be stored on a non-transitory computer-readable storage medium and, when executed, may include the processes of the above method embodiments. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. The volatile memory may include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processing unit, a digital signal processor, a programmable logic device, or a data processing logic device based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction among these combinations, they should be regarded as falling within the scope of this specification.
The foregoing embodiments represent only a few implementations of the present application, and while they are described in detail, they are not to be construed as limiting the scope of the application. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A depth image denoising method, comprising:
acquiring a plurality of regions of interest in a depth image to be processed, and performing plane fitting on the plurality of regions of interest to obtain a reference fitting plane;
determining the vertical distance between each pixel point in each region of interest and the reference fitting plane respectively;
for each region of interest, determining a distance interval corresponding to the region of interest based on the vertical distances corresponding to all pixel points in the region of interest, and determining the pixel points whose vertical distances fall within the distance interval as pixel points to be fitted corresponding to the region of interest;
performing plane fitting on the pixel points to be fitted corresponding to the plurality of regions of interest to obtain a target fitting plane;
and denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image.
2. The method of claim 1, wherein the determining a distance interval corresponding to the region of interest based on the vertical distance corresponding to each pixel point in the region of interest comprises:
acquiring a target compression parameter corresponding to the region of interest, a first proportion parameter, and a second proportion parameter;
based on the vertical distances corresponding to all pixel points in the region of interest, counting the pixel points whose vertical distances are less than zero to obtain a first number, and counting the pixel points whose vertical distances are greater than zero to obtain a second number;
fusing the target compression parameter, the first proportion parameter, and the first number to obtain a first arrangement order; fusing the target compression parameter, the second proportion parameter, and the second number to obtain a second arrangement order;
and determining the distance interval corresponding to the region of interest based on the first arrangement order, the second arrangement order, and the vertical distance corresponding to each pixel point in the region of interest.
3. The method according to claim 2, wherein the obtaining the target compression parameter corresponding to the region of interest includes:
acquiring a compression parameter sequence;
calculating, for each region of interest, the degree of dispersion of the vertical distances corresponding to the pixel points in the region of interest to obtain a standard deviation corresponding to the region of interest;
sorting the plurality of regions of interest based on the standard deviations corresponding to the regions of interest to obtain a region arrangement order, wherein the region arrangement order is opposite to the arrangement order of the compression parameter sequence;
and determining a target arrangement order corresponding to the region of interest based on the region arrangement order, and determining the compression parameter corresponding to the target arrangement order in the compression parameter sequence as the target compression parameter corresponding to the region of interest.
4. The method of claim 2, wherein the determining a distance interval corresponding to the region of interest based on the first arrangement order, the second arrangement order, and a vertical distance corresponding to each pixel point in the region of interest comprises:
sorting, for each region of interest, the vertical distances less than zero in descending order, and determining the vertical distance whose rank equals the first arrangement order as a minimum distance threshold corresponding to the region of interest;
sorting the vertical distances greater than zero in ascending order, and determining the vertical distance whose rank equals the second arrangement order as a maximum distance threshold corresponding to the region of interest;
and obtaining the distance interval corresponding to the region of interest based on the minimum distance threshold and the maximum distance threshold.
5. The method according to claim 1, wherein the denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image includes:
respectively calculating the target vertical distance from each pixel point in the depth image to be processed to the target fitting plane;
determining the maximum value of the minimum distance thresholds in the distance intervals as a target minimum distance threshold; determining the minimum value of the maximum distance thresholds in the distance intervals as a target maximum distance threshold;
determining a target distance interval corresponding to the depth image to be processed based on the target minimum distance threshold and the target maximum distance threshold;
and obtaining the target image based on the pixel points whose target vertical distances fall within the target distance interval.
6. The method of claim 1, wherein the performing plane fitting on the plurality of regions of interest to obtain a reference fitting plane comprises:
acquiring pixel point information corresponding to each pixel point in the plurality of regions of interest;
and performing plane fitting on the pixel points in the plurality of regions of interest based on the pixel point information corresponding to each pixel point in the plurality of regions of interest, to obtain the reference fitting plane.
7. The method of claim 6, wherein the method further comprises:
acquiring a preset depth threshold and a preset gradient threshold;
determining a first mask corresponding to each region of interest based on the preset depth threshold and the depth of each pixel point in the region of interest;
calculating the gradient of each pixel point in the region of interest, and determining a second mask corresponding to the region of interest based on the preset gradient threshold and the gradient of each pixel point in the region of interest;
denoising the region of interest based on the first mask and the second mask to obtain a target region of interest;
wherein the performing plane fitting on the pixel points in the plurality of regions of interest based on the pixel point information corresponding to each pixel point in the plurality of regions of interest to obtain a reference fitting plane comprises:
performing plane fitting on the pixel points in the plurality of target regions of interest based on the pixel point information corresponding to each pixel point in the plurality of target regions of interest, to obtain the reference fitting plane.
8. A depth image denoising apparatus, comprising:
the acquisition module is used for acquiring a plurality of regions of interest in the depth image to be processed, and performing plane fitting on the plurality of regions of interest to obtain a reference fitting plane;
the calculation module is used for respectively determining the vertical distance between each pixel point in each region of interest and the reference fitting plane;
the determining module is used for determining, for each region of interest, a distance interval corresponding to the region of interest based on the vertical distance corresponding to each pixel point in the region of interest, and determining the pixel points whose vertical distances fall within the distance interval as pixel points to be fitted corresponding to the region of interest;
the fitting module is used for performing plane fitting on the pixel points to be fitted corresponding to the plurality of regions of interest to obtain a target fitting plane;
and the denoising module is used for denoising the depth image to be processed based on the target fitting plane and the distance interval corresponding to each region of interest to obtain a target image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202310960485.8A 2023-08-01 2023-08-01 Depth image denoising method and device, computer equipment and storage medium Pending CN117058022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310960485.8A CN117058022A (en) 2023-08-01 2023-08-01 Depth image denoising method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310960485.8A CN117058022A (en) 2023-08-01 2023-08-01 Depth image denoising method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117058022A true CN117058022A (en) 2023-11-14

Family

ID=88656487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310960485.8A Pending CN117058022A (en) 2023-08-01 2023-08-01 Depth image denoising method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117058022A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117975429A (en) * 2024-03-29 2024-05-03 之江实验室 Method and device for determining physical properties of black hole jet flow and storage medium
CN117975429B (en) * 2024-03-29 2024-05-31 之江实验室 Method and device for determining physical properties of black hole jet flow and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination