CN112488910A - Point cloud optimization method, device and equipment - Google Patents


Publication number
CN112488910A
Authority
CN
China
Prior art keywords
filtering
point cloud
dimensional
pixel
depth map
Prior art date
Legal status
Granted
Application number
CN202011279945.3A
Other languages
Chinese (zh)
Other versions
CN112488910B (en)
Inventor
李玉成
Current Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202011279945.3A
Publication of CN112488910A
Application granted
Publication of CN112488910B
Current legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/06: Topological mapping of higher dimensional structures onto lower dimensional surfaces
    • G06T3/067: Reshaping or unfolding 3D tree structures onto 2D planes
    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a point cloud optimization method, device, and equipment. The point cloud optimization method comprises the following steps: acquiring a three-dimensional point cloud corresponding to a target object; projecting the three-dimensional point cloud onto the camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud, in which the original pixel value of each pixel point is the depth value of the corresponding point in the three-dimensional point cloud; performing a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filtering kernels to obtain a plurality of filtering pixel values for each pixel point, where the filtering boundary of each directional filtering kernel represents an image edge of a different form; and replacing the original pixel value of each pixel point with the filtering pixel value that differs least from it, thereby obtaining the optimized two-dimensional depth map. Compared with the prior art, presetting filtering kernels in multiple directions achieves smoothing of the point cloud while improving the accuracy of point cloud optimization.

Description

Point cloud optimization method, device and equipment
Technical Field
Embodiments of the present application relate to the technical field of optical inspection, and in particular to a point cloud optimization method, device, and equipment.
Background
Three-dimensional Automated Optical Inspection (3D AOI) systems generally use the structured-light imaging principle to acquire a high-precision three-dimensional point cloud of a target object. In actual measurement, however, owing to the limited resolution of the camera pixels, the resulting three-dimensional point cloud contains a large amount of noise and holes, so the point cloud needs to be optimized after it is acquired.
Conventional point cloud optimization methods smooth and denoise the three-dimensional point cloud with techniques such as moving least squares and statistical filtering. Although these methods can improve the quality of the three-dimensional point cloud, they involve a large amount of normal-vector computation; the process is time-consuming and can hardly meet the requirement of real-time inspection.
Disclosure of Invention
Embodiments of the present application provide a point cloud optimization method, apparatus, and device that can perform point cloud optimization efficiently, solving the problems of high computational overhead and poor real-time performance in point cloud optimization. The technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a point cloud optimization method, including:
acquiring a three-dimensional point cloud corresponding to a target object;
projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; wherein, the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
performing convolution filtering operation on the two-dimensional depth map respectively according to a plurality of preset directional filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms;
and replacing the original pixel value of each pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain the optimized two-dimensional depth map.
In a second aspect, an embodiment of the present application provides a point cloud optimization method for a circuit board, including:
acquiring a three-dimensional point cloud corresponding to a target circuit board;
projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; wherein, the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
performing convolution filtering operation on the two-dimensional depth map respectively according to a plurality of preset directional filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms;
replacing the original pixel value of each pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target circuit board.
In a third aspect, an embodiment of the present application provides a point cloud optimization apparatus, including:
the first point cloud obtaining unit is used for obtaining a three-dimensional point cloud corresponding to a target object;
the first projection unit is used for projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; wherein, the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the first filtering unit is used for respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms;
and the first optimization unit is used for replacing the original pixel value of each pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain an optimized two-dimensional depth map.
In a fourth aspect, an embodiment of the present application provides a point cloud optimizing apparatus for a circuit board, including:
the second point cloud obtaining unit is used for obtaining a three-dimensional point cloud corresponding to the target circuit board;
the second projection unit is used for projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; wherein, the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the second filtering unit is used for respectively carrying out convolution filtering operation on the two-dimensional depth map according to a plurality of preset direction filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms;
the second optimization unit is used for replacing the original pixel value of each pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and the third point cloud obtaining unit is used for converting the pixel values of the pixel points in the optimized two-dimensional depth map into the depth values of the corresponding pixel points in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target circuit board.
In a fifth aspect, an embodiment of the present application provides a point cloud optimizing apparatus, including: a processor, a memory and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the point cloud optimization method according to the first aspect or the steps of the point cloud optimization method according to the second aspect when executing the computer program.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the point cloud optimization method of the first aspect or the steps of the point cloud optimization method for a circuit board of the second aspect.
In the present application, the three-dimensional point cloud corresponding to the target object is projected onto the camera imaging plane, reducing the dimensionality of the point cloud data and yielding the corresponding two-dimensional depth map; this keeps subsequent point cloud optimization fast enough for real-time use. Then, according to the different forms of image edges, a plurality of directional filtering kernels are preset so that the filtering boundaries of the different kernels fully reflect those edge forms. Each preset directional filtering kernel is convolved with the two-dimensional depth map to obtain a plurality of filtering pixel values for each pixel point, and the original pixel value of each pixel point is replaced with the filtering pixel value that differs least from it, producing the optimized two-dimensional depth map. This avoids the complex parameter tuning of conventional filtering, improves the robustness of the method, smooths the point cloud without disturbing the forms of image edges, and improves the accuracy of point cloud optimization.
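The replace-with-nearest-filtered-value scheme summarized above can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function name, the simple binary 3x3 kernels, and the unchanged border handling are the author's assumptions, not the patent's exact design), written in Python with NumPy:

```python
import numpy as np

def optimize_depth_map(depth, kernels):
    """For every pixel, compute one filtered value per directional kernel
    (a masked average over the kernel's effective elements) and keep the
    filtered value closest to the original pixel value.  Minimal sketch;
    border pixels are left unchanged for brevity."""
    out = depth.copy()
    r = kernels[0].shape[0] // 2          # filter radius
    h, w = depth.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            area = depth[y - r:y + r + 1, x - r:x + r + 1]
            # one filtered value per directional kernel (masked average)
            candidates = [(area * k).sum() / np.count_nonzero(k) for k in kernels]
            # replace with the filtered value nearest the original value
            out[y, x] = min(candidates, key=lambda v: abs(v - depth[y, x]))
    return out

# A vertical step edge: the kernel whose effective area lies on the same
# side as the pixel wins, so the edge survives while flat-area noise
# would be averaged away.
depth = np.zeros((7, 7)); depth[:, 3:] = 10.0
k_left = np.ones((3, 3)); k_left[:, 2] = 0    # effective area left of centre
k_right = np.ones((3, 3)); k_right[:, 0] = 0  # effective area right of centre
out = optimize_depth_map(depth, [k_left, k_right])
```

Pixels on the high side of the step keep the value 10 and pixels on the low side keep 0, because for each pixel at least one kernel averages only same-side neighbours; this is the mechanism by which the method smooths without blurring edges.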
For a better understanding and implementation, the technical solutions of the present application are described in detail below with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic flow chart of a point cloud optimization method according to an embodiment of the present disclosure;
FIG. 2 is a schematic structural diagram of a directional filter kernel including vertical filter boundaries according to an embodiment of the present application;
FIG. 3 is a block diagram illustrating a structure of a directional filter kernel including horizontal filter boundaries according to an embodiment of the present application;
FIG. 4 is a block diagram illustrating a structure of a directional filter kernel including a tilted filter boundary according to an embodiment of the present application;
FIG. 5 is a block diagram illustrating a directional filter kernel including corner filter boundaries according to an embodiment of the present application;
fig. 6 is a schematic flowchart of S103 in the point cloud optimization method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of S104 in the point cloud optimization method according to an embodiment of the present application;
fig. 8 is a schematic diagram comparing optimization results of two-dimensional depth maps provided in an embodiment of the present application;
fig. 9 is a schematic flowchart of a point cloud optimization method according to another embodiment of the present disclosure;
FIG. 10 is a schematic flow chart illustrating a method for optimizing a point cloud of a circuit board according to an embodiment of the present disclosure;
FIG. 11 is a schematic diagram illustrating a comparison of point cloud optimization results for a circuit board according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a point cloud optimization apparatus according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of a point cloud optimization apparatus for a circuit board according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a point cloud optimization apparatus according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
Please refer to fig. 1, which is a schematic flow chart of a point cloud optimization method according to an embodiment of the present application, the method includes the following steps:
S101: Acquiring the three-dimensional point cloud corresponding to the target object.
In an alternative embodiment, an execution subject of the point cloud optimization method may be a three-dimensional laser scanner, a three-dimensional optical detector, or other devices capable of directly acquiring a three-dimensional point cloud, or may be a component in the above devices, such as a processor or a microprocessor therein; in another alternative embodiment, the executing subject of the point cloud optimization method may be other equipment which establishes data connection with equipment such as a three-dimensional laser scanner or a three-dimensional optical detector, and the other equipment indirectly acquires the three-dimensional point cloud through the equipment such as the three-dimensional laser scanner or the three-dimensional optical detector; in other alternative embodiments, the execution subject of the point cloud optimization method may also be an integrated device integrating a three-dimensional laser scanning function or a three-dimensional optical detection function, and may also be a component in the integrated device.
In the embodiment of the application, a device (hereinafter referred to as a point cloud optimization device) which establishes data connection with a three-dimensional optical detector is taken as an execution subject, and the point cloud optimization method is executed.
Specifically, the point cloud optimization equipment firstly establishes data connection with the three-dimensional optical detector, and acquires a three-dimensional point cloud corresponding to a target object from the three-dimensional optical detector.
The target object may be an object of any shape or shape, and in an alternative embodiment, the target object may be a Printed Circuit Board (PCB).
The three-dimensional point cloud corresponding to the target object is a collection of a large number of points representing the surface characteristics of the target object.
The process by which the point cloud optimization device acquires the three-dimensional point cloud corresponding to the target object through the three-dimensional optical detector is as follows: first, the three-dimensional optical detector projects structured light onto the target object; then, the light reflected from the surface of the target object forms an image in the camera, and the three-dimensional coordinates of each point on the surface of the target object are obtained by analyzing the phase value of each pixel point in that image; finally, the three-dimensional point cloud corresponding to the target object is obtained.
In the embodiment of the present application, when acquiring a three-dimensional point cloud corresponding to a target object by a three-dimensional optical detector, a camera imaging plane of the three-dimensional optical detector is parallel to a plane on which the target object is placed, so that three-dimensional coordinates of each point on a surface of the target object detected by the three-dimensional optical detector include two-dimensional coordinates of each point and a distance value (which may also be understood as a depth value of each point with respect to the camera imaging plane) between each point and the camera imaging plane.
S102: projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; and the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud.
The point cloud optimization device projects the three-dimensional point cloud corresponding to the target object onto the camera imaging plane, that is, onto a two-dimensional plane, to obtain the two-dimensional depth map corresponding to the three-dimensional point cloud, thereby reducing the amount of computation and meeting the requirement of real-time optimization.
In the two-dimensional depth map, the original pixel value of a pixel point is the depth value of the corresponding pixel point in the three-dimensional point cloud, and the two-dimensional coordinate of the pixel point is the two-dimensional coordinate of the corresponding pixel point in the three-dimensional point cloud.
For example, if the three-dimensional coordinates of a point in the three-dimensional point cloud are (x1, y1, z1), where z1 is the depth value of the point relative to the camera imaging plane, then after the three-dimensional point cloud is projected onto the camera imaging plane, the two-dimensional coordinates of the corresponding pixel point in the two-dimensional depth map are (x1, y1) and its pixel value is z1.
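This projection step can be sketched as follows; a hypothetical illustration assuming (as in the setup above, where the imaging plane is parallel to the object plane) that x and y are already integer pixel coordinates, with empty pixels left at 0 to represent holes:

```python
import numpy as np

def project_to_depth_map(points, width, height):
    """Project a 3-D point cloud onto the camera imaging plane: each
    point (x, y, z) becomes pixel (x, y) of a 2-D depth map whose
    original pixel value is the depth z.  Pixels with no corresponding
    point stay 0 (a hole).  Sketch under the stated assumptions."""
    depth = np.zeros((height, width), dtype=np.float64)
    for x, y, z in points:
        depth[int(y), int(x)] = z
    return depth

# the point (x1, y1, z1) = (2, 3, 7.5) maps to pixel (2, 3) with value 7.5
cloud = [(2, 3, 7.5), (0, 0, 1.0)]
dm = project_to_depth_map(cloud, width=4, height=4)
```

The inverse conversion used for the circuit-board variant simply reads each optimized pixel value back as the z coordinate of the point at (x, y).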
S103: performing convolution filtering operation on the two-dimensional depth map respectively according to a plurality of preset directional filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms.
Since the object edges have irregularities, their corresponding image edges will also have a variety of different morphologies. In the embodiment of the application, a plurality of directional filtering kernels are preset in the point cloud optimization equipment, filtering boundaries of the directional filtering kernels represent image edges in different forms, and different directional filtering kernels are used for performing convolution filtering operation on the two-dimensional depth map respectively, so that the technical problem of low accuracy of filtering results is solved.
In the field of image processing, each filter kernel comprises a plurality of filter elements whose values weight the pixel values of the covered pixel points. After an input image is convolved with a filter kernel, the filtered value of each pixel point in the output image is the average obtained by weighting the input pixel values within the kernel's coverage area by the corresponding filter-element values. The filtered value is therefore directly determined by the values of the filter elements, so these values can be set directionally according to the different forms of image edges, with the boundary across which the element values change serving as the kernel's filtering boundary.
In an optional embodiment, to better embody the boundary of the directional filter kernel, the kernel is divided into an effective filtering area and an ineffective filtering area. Specifically, before performing the convolution filtering operation on the two-dimensional depth map, the point cloud optimization device divides each directional filter kernel into an effective filtering area and an ineffective filtering area based on the different forms of image edges, generating a plurality of directional filter kernels whose filtering boundaries are the division boundaries between the two areas. The filter elements in the effective filtering area take the value 1, and those in the ineffective filtering area take the value 0.
It should be noted that, the setting of the values of the filter elements in the effective filter region and the values of the filter elements in the ineffective filter region in the directional filter kernel is not limited, and in other alternative embodiments, the values of the filter elements in the effective filter region and the values of the filter elements in the ineffective filter region in the directional filter kernel may be adaptively adjusted.
In another alternative embodiment, the image edge shapes are divided in more detail, specifically including a vertical shape, a horizontal shape, an inclined shape, and a corner shape. Furthermore, when the effective filtering area and the ineffective filtering area in the directional filtering kernel are divided based on different forms of the image edge to generate a plurality of directional filtering kernels, the method can be specifically divided into the following dividing modes:
(1) and dividing an effective filtering area and an ineffective filtering area in the directional filtering kernel in the vertical direction according to the vertical form of the image edge to generate the directional filtering kernel comprising a vertical filtering boundary.
Specifically, referring to fig. 2, fig. 2 is a schematic structural diagram of a directional filter kernel including a vertical filter boundary according to an embodiment of the present disclosure, where the directional filter kernel shown in fig. 2 respectively represents a right-side vertical form and a left-side vertical form of an image edge, a value of a filter element in an effective filter region is 1, a value of a filter element in an ineffective filter region is 0, and a partition boundary between the effective filter region and the ineffective filter region is the vertical filter boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 2 shows the effective filtering region and the ineffective filtering region with different gray values, and it can be seen that the boundary where the gray value changes obviously is the filtering boundary, which is also consistent with the vertical shape of the image edge.
(2) And dividing an effective filtering area and an ineffective filtering area in the directional filtering kernel in the horizontal direction according to the horizontal form of the image edge to generate the directional filtering kernel comprising a horizontal filtering boundary.
Specifically, please refer to fig. 3, where fig. 3 is a schematic structural diagram of a directional filter kernel including a horizontal filter boundary according to an embodiment of the present disclosure, where the directional filter kernel shown in fig. 3 respectively represents an upper horizontal form and a lower horizontal form of an image edge, a value of a filter element in an effective filter region is 1, a value of a filter element in an ineffective filter region is 0, and a partition boundary between the effective filter region and the ineffective filter region is the horizontal filter boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 3 also uses different gray values to display the effective filtering region and the ineffective filtering region, and it can be seen that the boundary where the gray value changes obviously is the filtering boundary, which is also consistent with the horizontal shape of the image edge.
(3) And dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in a tilting direction according to the tilting form of the image edge to generate the direction filtering kernel comprising a tilting filtering boundary.
Specifically, referring to fig. 4, fig. 4 is a schematic structural diagram of a directional filter kernel including a tilted filter boundary according to an embodiment of the present application, where the directional filter kernel shown in fig. 4 respectively represents a lower right tilted form, an upper left tilted form, a lower left tilted form, and an upper right tilted form of an image edge, a value of a filter element in an effective filter region is 1, a value of a filter element in an ineffective filter region is 0, and a partition boundary between the effective filter region and the ineffective filter region is the tilted filter boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 4 also uses different gray values to display the effective filtering region and the ineffective filtering region, and it can be seen that the boundary where the gray value changes obviously is the filtering boundary, which is also consistent with the oblique shape of the image edge.
(4) And dividing an effective filtering area and an ineffective filtering area in the directional filtering kernel in an angle changing direction according to the corner form of the image edge to generate the directional filtering kernel comprising a corner filtering boundary.
Specifically, referring to fig. 5, fig. 5 is a schematic structural diagram of a directional filter kernel including a corner filter boundary according to an embodiment of the present application, where the directional filter kernel shown in fig. 5 respectively represents a lower right corner form, a lower left corner form, an upper right corner form, and an upper left corner form of an image edge, a value of a filter element in an effective filter region is 1, a value of a filter element in an ineffective filter region is 0, and a partition boundary between the effective filter region and the ineffective filter region is the corner filter boundary.
In order to better observe the filtering boundary of the directional filtering kernel, fig. 5 also uses different gray values to display the effective filtering region and the ineffective filtering region, and it can be seen that the boundary where the gray value changes obviously is the filtering boundary, which is also consistent with the corner shape of the image edge.
The above 4 ways of dividing the effective filtering area and the ineffective filtering area in the directional filtering kernel to generate a plurality of directional filtering kernels fully consider the vertical form, the horizontal form, the inclined form and the corner form of the image edge, and can effectively improve the accuracy of the convolution filtering result.
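Binary kernels of this kind are easy to construct programmatically. The sketch below builds one example of each family described above (left/right vertical, upper/lower horizontal, one tilted, one corner); the exact region shapes of Figs. 2-5 are not reproduced here, so the specific masks are the author's illustrative assumptions:

```python
import numpy as np

def directional_kernels(radius=2):
    """Build binary directional filter kernels: filter elements in the
    effective filtering area are 1, those in the ineffective area are 0,
    and the division boundary between the two areas is the filtering
    boundary.  Illustrative sketch only."""
    n = 2 * radius + 1
    c = radius                       # index of the filter centre
    ones = np.ones((n, n))
    # vertical filtering boundaries (left-side / right-side vertical forms)
    left = ones.copy();  left[:, c + 1:] = 0
    right = ones.copy(); right[:, :c] = 0
    # horizontal filtering boundaries (upper / lower horizontal forms)
    upper = ones.copy(); upper[c + 1:, :] = 0
    lower = ones.copy(); lower[:c, :] = 0
    # one tilted filtering boundary (upper-left tilted form)
    tilted = np.triu(np.ones((n, n)))[:, ::-1]
    # one corner filtering boundary (upper-left corner form)
    corner = ones.copy(); corner[c + 1:, :] = 0; corner[:, c + 1:] = 0
    return [left, right, upper, lower, tilted, corner]

kernels = directional_kernels(radius=2)   # six 5x5 binary kernels
```

Every kernel keeps the filter centre inside its effective area, so the target pixel always contributes to its own filtered value.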
In the following, how to perform the convolution filtering operation on the two-dimensional depth map according to the plurality of preset directional filtering kernels will be specifically described, referring to fig. 6, step S103 includes S1031 to S1034, which are specifically as follows:
s1031: and obtaining a target pixel point corresponding to the filter center according to the corresponding target position of the filter center of the ith direction filter kernel in the two-dimensional depth map.
And the point cloud optimization equipment obtains a target pixel point corresponding to the filtering center according to the corresponding target position of the filtering center of the ith direction filtering kernel in the two-dimensional depth map.
The target position of the filtering center of the directional filtering kernel in the two-dimensional depth map can be used for indicating which pixel point in the current two-dimensional depth map carries out filtering operation, and the pixel point at the target position is the target pixel point and is also the pixel point to be filtered.
To facilitate understanding, please refer to fig. 2 to 5, wherein the star position is the filtering center of each direction filtering kernel.
S1032: and obtaining a target area covered by the directional filter kernel in the two-dimensional depth map according to the target position and the filter radius of the ith directional filter kernel.
The filter radius of the ith directional filter kernel does not indicate that the shape of the directional filter kernel is circular, and is only used for calculating a target area in the two-dimensional depth map which can be covered by the directional filter kernel.
Taking the directional filter kernels shown in fig. 2 to 5 as an example, the filter radius of the directional filter kernel is 2, that is, the size of the directional filter kernel is 5 × 5, so that the target region including 5 × 5 pixels can be covered.
S1033: and acquiring an ith filtering pixel value corresponding to the target pixel point based on the value of each filtering element in the ith direction filtering kernel, the original pixel value of the pixel point in the target region and the number of effective filtering elements in the ith direction filtering kernel.
The point cloud optimization equipment firstly carries out weighted accumulation operation based on the value of each filtering element in the ith direction filtering kernel and the original pixel value of the pixel point in the target area to obtain the weighted pixel value of the target pixel point, and then obtains the ith filtering pixel value corresponding to the target pixel point according to the weighted pixel value divided by the number of effective filtering elements in the ith direction filtering kernel.
Specifically, the point cloud optimization device obtains an ith filtering pixel value corresponding to the target pixel point according to the value of each filtering element in the ith directional filtering kernel, the original pixel value of the pixel point in the target region, the number of effective filtering elements in the ith directional filtering kernel and a preset filtering pixel value calculation formula.
Wherein, the preset filtering pixel value calculation formula is as follows:
Q_i(centerX, centerY) = (1 / N_i) · Σ_{k=−r}^{r} Σ_{l=−r}^{r} W_i(centerX + k, centerY + l) · P(centerX + k, centerY + l)

Q_i(centerX, centerY) represents the ith filtered pixel value corresponding to the target pixel point; (centerX, centerY) represents the target position corresponding to the filtering center of the directional filtering kernel in the two-dimensional depth map, that is, the position of the target pixel point in the two-dimensional depth map; (centerX + k, centerY + l) represents the position of each filtering element in the directional filtering kernel, that is, the position of a pixel point in the target region; W_i(centerX + k, centerY + l) represents the value of the filtering element at position (centerX + k, centerY + l) in the ith directional filtering kernel; P(centerX + k, centerY + l) represents the original pixel value of the pixel point at position (centerX + k, centerY + l) in the target region; r represents the filtering radius of the ith directional filtering kernel, with |k| ≤ r and |l| ≤ r, where |k| represents the horizontal distance between the filtering element and the filtering center and |l| represents the vertical distance between the filtering element and the filtering center; N_i represents the number of effective filtering elements in the ith directional filtering kernel; and 1 ≤ i ≤ n, where n represents the number of directional filtering kernels.
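The calculation of steps S1031 to S1033 can be sketched in Python as follows (function and variable names are illustrative; the first index is treated as the row): the weighted accumulation over the covered target region is divided by the number of effective filtering elements:

```python
def filtered_value(depth, kernel, center_x, center_y, radius=2):
    """Q_i at (center_x, center_y): weighted accumulation of the original
    pixel values in the target region, divided by the number of effective
    filtering elements (kernel values are 0 or 1)."""
    weighted = 0.0
    effective = 0
    for k in range(-radius, radius + 1):
        for l in range(-radius, radius + 1):
            w = kernel[k + radius][l + radius]
            weighted += w * depth[center_x + k][center_y + l]
            effective += w  # counts effective elements, since w is 0 or 1
    return weighted / effective

# Toy 7x7 depth map whose pixel value equals its row index; with an
# all-ones kernel, the result at row 3 is the mean of rows 1..5, i.e. 3.0.
depth = [[float(r)] * 7 for r in range(7)]
all_ones = [[1] * 5 for _ in range(5)]
q = filtered_value(depth, all_ones, 3, 3)
```

With a directional kernel in place of `all_ones`, only the effective region contributes, which is what keeps the filtering from smearing across an image edge.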
S1034: and moving the ith direction filtering kernel to repeatedly execute the steps until the ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
And the point cloud optimization equipment moves the ith direction filtering kernel to repeatedly execute the steps until the ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
Specifically, the filtering center of the directional filtering kernel is moved to the next target pixel point, and the ith filtering pixel value corresponding to that target pixel point is calculated, until the ith filtering pixel value corresponding to each pixel point is obtained.
Because there are multiple directional filter kernels, there are multiple final filtered pixel values corresponding to each pixel point.
S104: and replacing the original pixel value of each pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain the optimized two-dimensional depth map.
And the point cloud optimization equipment acquires the filtering pixel value with the minimum difference value between all the filtering pixel values of each pixel point and the original pixel value of the pixel point, and replaces the original pixel value of the pixel point with the filtering pixel value, so that the optimized two-dimensional depth map is obtained.
Specifically, the point cloud optimization device executes the following steps to obtain a filtering pixel value corresponding to each pixel point and having the minimum difference value with the original pixel value.
First step: initialize i = 1, diff_min = +∞, and Q_min(centerX, centerY) = +∞.

Second step: obtain the ith filtered pixel value Q_i(centerX, centerY) corresponding to the pixel point at position (centerX, centerY).

Third step: if |Q_i(centerX, centerY) − P(centerX, centerY)| < diff_min, let diff_min = |Q_i(centerX, centerY) − P(centerX, centerY)| and Q_min(centerX, centerY) = Q_i(centerX, centerY); in either case, let i = i + 1.

Fourth step: if i ≤ n, jump back to the second step; otherwise, output Q_min(centerX, centerY).

Fifth step: replace the original pixel value of the pixel point at position (centerX, centerY) with Q_min(centerX, centerY).
And the point cloud optimization equipment repeatedly executes the steps until a filtering pixel value with the minimum difference value between the original pixel value and the corresponding pixel point of each position is obtained, and completes the replacement operation to finally obtain the optimized two-dimensional depth map.
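The minimum-difference selection performed per pixel can be sketched as follows (illustrative Python, not the patent's implementation):

```python
def min_difference_value(original, filtered_values):
    """Among the n filtered pixel values Q_1..Q_n of one pixel point,
    return the one with the smallest absolute difference from the
    original pixel value P, together with that difference."""
    q_min, diff_min = None, float("inf")
    for q in filtered_values:
        diff = abs(q - original)
        if diff < diff_min:
            diff_min, q_min = diff, q
    return q_min, diff_min

q_min, diff = min_difference_value(10.0, [12.0, 9.5, 7.0])  # 9.5 is closest
```

Keeping the candidate closest to the original value is what preserves the edge form: the kernel whose effective region stays on the pixel's own side of the edge produces the smallest change.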
It should be noted that, in the second step above, the ith filtered pixel value Q_i(centerX, centerY) corresponding to the pixel point at position (centerX, centerY) may be calculated only when the second step is performed, or the calculation of Q_i(centerX, centerY) may be completed in another thread and directly obtained for use when the second step is executed. Both implementations are within the scope of the present application; relatively speaking, completing the calculation of Q_i(centerX, centerY) in another thread and obtaining it directly is the more efficient approach.
According to the method and the device, the three-dimensional point cloud corresponding to the target object is projected to the camera imaging plane, the point cloud data are subjected to dimensionality reduction, the two-dimensional depth map corresponding to the three-dimensional point cloud is obtained, and therefore the real-time performance of subsequent point cloud optimization can be guaranteed. Then, combining the difference of the image edge forms, presetting a plurality of directional filtering kernels to enable filtering boundaries of the different directional filtering kernels to fully embody the image edges of different forms, then respectively carrying out convolution filtering on the two-dimensional depth map through each preset directional filtering kernel to obtain a plurality of filtering pixel values corresponding to each pixel point, replacing the original pixel value of each pixel point with the filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain the optimized two-dimensional depth map, further avoiding the complex parameter adjustment in the conventional filtering process, improving the robustness of the method, realizing the smooth filtering processing of the point cloud under the condition of not influencing the image edge forms, and improving the accuracy of the point cloud optimization.
In an optional embodiment, referring to fig. 7, in order to smooth the point cloud and effectively remove the noise pixel, step S104 further includes S1041 to S1043:
s1041: and obtaining a filtering pixel value with the minimum difference value with the original pixel value of the pixel point.
The point cloud optimization equipment obtains a filtering pixel value with the minimum difference value with the original pixel value of the pixel point, but the filtering pixel value is not directly replaced.
S1042: and if the minimum difference is not larger than the preset invalid difference threshold value, replacing the original pixel value of the pixel point with a filtering pixel value with the minimum difference with the original pixel value of the pixel point to obtain the optimized two-dimensional depth map.
And the point cloud optimization equipment judges whether the minimum difference value is not greater than a preset invalid difference value threshold value, and if so, replaces the original pixel value of the pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point.
In the embodiment of the present application, the preset invalid difference threshold is a threshold selected according to the prior data, and may be specifically set according to an actual situation, which is not limited herein.
S1043: and if the minimum difference is larger than a preset invalid difference threshold value, replacing the original pixel value of the pixel point with a null value.
If the minimum difference is larger than the preset invalid difference threshold, this indicates that all of the filtering pixel values obtained for the pixel point differ greatly from its original pixel value, so the probability that the pixel point is a noise pixel point is high. The point cloud optimization device therefore replaces the original pixel value of the pixel point with a null value, indicating that the point has no depth value, so that the pixel point is left vacant when the optimized two-dimensional depth map is formed.
In this embodiment, after obtaining the filtering pixel value with the minimum difference from the original pixel value of a pixel point, the point cloud optimization device does not directly perform the replacement; instead, it judges whether the minimum difference is greater than the preset invalid difference threshold, and if so, the pixel point is judged to be a noise pixel point and removed. In this way, noise pixel points can be effectively removed while the point cloud is smoothed, further improving the subsequent point cloud optimization effect.
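Steps S1041 to S1043 can be sketched as follows; the threshold value 2.0 is an arbitrary placeholder, since the patent only states that the invalid difference threshold is selected according to prior data:

```python
INVALID_DIFF_THRESHOLD = 2.0  # placeholder; chosen from prior data in practice

def replace_pixel(original, filtered_values, threshold=INVALID_DIFF_THRESHOLD):
    """Return the closest filtered value if its difference from the
    original pixel value is within the threshold (S1042); otherwise
    return None, a null depth marking the pixel as noise (S1043)."""
    q_min, diff_min = None, float("inf")
    for q in filtered_values:
        diff = abs(q - original)
        if diff < diff_min:
            diff_min, q_min = diff, q
    return q_min if diff_min <= threshold else None

kept = replace_pixel(10.0, [9.5, 12.0])      # min diff 0.5 -> keep 9.5
dropped = replace_pixel(10.0, [20.0, 25.0])  # min diff 10  -> noise, None
```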
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a comparison of an optimization result of a two-dimensional depth map according to an embodiment of the present disclosure. In fig. 8, the left side is the original two-dimensional depth map, and the right side is the optimized two-dimensional depth map. According to fig. 8, in the optimized two-dimensional depth map, the point cloud is smoother, and the noise pixel points are effectively removed.
In another alternative embodiment, to implement the three-dimensional reconstruction of the target object, referring to fig. 9, after the step S104 is executed, the method further includes the step S105:
s105: and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target object.
And the point cloud optimization equipment converts the pixel values of the pixel points in the optimized two-dimensional depth map into the depth values of the corresponding pixel points in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target object.
The optimized three-dimensional point cloud can more accurately reflect the condition of the surface of an object, noise points can be effectively removed, and meanwhile, the two-dimensional data is processed in the whole operation process, so that the optimization instantaneity can be met.
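As an illustration of step S105, the following sketch back-projects the optimized two-dimensional depth map into a three-dimensional point cloud with a standard pinhole camera model; the intrinsics fx, fy, cx, cy and the use of None for null depth values are assumptions, since the patent does not fix a camera model:

```python
def depth_map_to_cloud(depth, fx, fy, cx, cy):
    """Back-project an optimized 2D depth map into a 3D point cloud:
    pixel (u, v) with depth d maps to ((u - cx) * d / fx,
    (v - cy) * d / fy, d). Null (None) pixels are skipped, so removed
    noise pixels stay absent from the cloud."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d is not None:
                points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points

cloud = depth_map_to_cloud([[2.0, None], [2.0, 4.0]],
                           fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```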
Because a PCB circuit board may develop certain defects during production, circuit board defects can be detected through the point cloud of the circuit board. However, when the point cloud of the circuit board is acquired, smooth solder or components on the board can cause specular reflection, which in turn generates a large amount of noise in the point cloud, and this noise severely affects the detection result. Therefore, in an optional embodiment of the present application, a point cloud optimization method for a circuit board is provided for optimizing the point cloud of the circuit board; please refer to fig. 10, which includes steps S201 to S205, specifically as follows:
s201: and acquiring a three-dimensional point cloud corresponding to the target circuit board.
The target circuit board may be any type of circuit board, and the type thereof is not limited herein.
It should be emphasized that, when the three-dimensional point cloud corresponding to the target circuit board is obtained, the imaging plane of the camera is parallel to the plane of the target circuit board.
S202: projecting the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; and the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud.
S203: performing convolution filtering operation on the two-dimensional depth map respectively according to a plurality of preset directional filtering kernels to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms.
S204: and replacing the original pixel value of each pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain an optimized two-dimensional depth map corresponding to the target circuit board.
S205: and converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target circuit board.
The main execution and explanation of steps S201 to S205 are already described in steps S102 to S105, and the difference is only that the target object is a circuit board, and the detailed description is not repeated. The point cloud optimization method for the circuit board can realize smooth processing of the point cloud, does not need to set complex parameters, and can process outlier noise points with large changes.
Referring to fig. 11, fig. 11 is a schematic diagram illustrating a comparison of point cloud optimization results of a circuit board according to an embodiment of the present disclosure. The upper two graphs in fig. 11 are an original circuit board point cloud graph and a bilateral-filtered circuit board point cloud graph, respectively, and the lower graph in fig. 11 is a point cloud graph of a circuit board obtained by applying the point cloud optimization method for the circuit board provided by the embodiment of the present application. As can be seen from fig. 11, when the point cloud optimization method of the circuit board provided in the embodiment of the present application performs point cloud optimization of the circuit board, noise points in the area 1 are removed, and the point cloud in the area 2 can better reflect the actual situation of the circuit board.
In an optional embodiment, after the step S205 is executed, the point cloud optimization device may further reconstruct a three-dimensional image of the target circuit board according to the optimized three-dimensional point cloud, and perform defect detection on the target circuit board.
The point cloud optimization equipment reconstructs the three-dimensional image of the target circuit board according to the optimized three-dimensional point cloud, so that the defect detection of the target circuit board is realized, the detection accuracy is improved, the detection speed is ensured, and the point cloud optimization equipment can meet the requirements of high-speed and high-precision defect detection on a circuit board production line.
Please refer to fig. 12, which is a schematic structural diagram of a point cloud optimization apparatus according to an embodiment of the present disclosure. The device can be realized by software, hardware or a combination of the two to be all or part of the point cloud optimization equipment. The device 12 includes a first point cloud obtaining unit 121, a first projecting unit 122, a first filtering unit 123, and a first optimizing unit 124:
a first point cloud obtaining unit 121, configured to obtain a three-dimensional point cloud corresponding to a target object;
the first projection unit 122 is configured to project the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; wherein, the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
the first filtering unit 123 is configured to perform convolution filtering operations on the two-dimensional depth map according to a plurality of preset directional filtering kernels, so as to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms;
the first optimizing unit 124 is configured to replace the original pixel value of each pixel point with a filtered pixel value having the smallest difference with the original pixel value of the pixel point, so as to obtain an optimized two-dimensional depth map.
According to the method and the device, the three-dimensional point cloud corresponding to the target object is projected to the camera imaging plane, the point cloud data are subjected to dimensionality reduction, the two-dimensional depth map corresponding to the three-dimensional point cloud is obtained, and therefore the real-time performance of subsequent point cloud optimization can be guaranteed. Then, combining the difference of the image edge forms, presetting a plurality of directional filtering kernels to enable filtering boundaries of the different directional filtering kernels to fully embody the image edges of different forms, then respectively carrying out convolution filtering on the two-dimensional depth map through each preset directional filtering kernel to obtain a plurality of filtering pixel values corresponding to each pixel point, replacing the original pixel value of each pixel point with the filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain the optimized two-dimensional depth map, further avoiding the complex parameter adjustment in the conventional filtering process, improving the robustness of the method, realizing the smooth filtering processing of the point cloud under the condition of not influencing the image edge forms, and improving the accuracy of the point cloud optimization.
Optionally, the apparatus 12 further comprises:
the filtering kernel generating unit is used for dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel based on different forms of the image edge to generate a plurality of direction filtering kernels; wherein, the dividing boundary of the effective filtering area and the ineffective filtering area is the filtering boundary.
Optionally, the filter kernel generating unit includes:
the first filtering kernel generating unit is used for dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in the vertical direction according to the vertical form of the image edge and generating the direction filtering kernel comprising a vertical filtering boundary;
a second filtering kernel generating unit, configured to divide an effective filtering area and an ineffective filtering area in the directional filtering kernel in a horizontal direction according to a horizontal form of the image edge, and generate a directional filtering kernel including a horizontal filtering boundary;
a third filtering kernel generating unit, configured to divide an effective filtering area and an ineffective filtering area in the directional filtering kernel in an oblique direction according to an oblique form of the image edge, and generate a directional filtering kernel including an oblique filtering boundary;
and the fourth filtering kernel generating unit is used for dividing an effective filtering area and an ineffective filtering area in the direction filtering kernel in an angle changing direction according to the corner form of the image edge and generating the direction filtering kernel comprising a corner filtering boundary.
Optionally, the first filtering unit 123 includes:
the first obtaining unit is used for obtaining a target pixel point corresponding to the filtering center according to the corresponding target position of the filtering center of the ith direction filtering kernel in the two-dimensional depth map;
the second obtaining unit is used for obtaining a target area covered by the directional filter kernel in the two-dimensional depth map according to the target position and the filter radius of the ith directional filter kernel;
a third obtaining unit, configured to obtain an ith filtering pixel value corresponding to the target pixel point based on values of filtering elements in the ith directional filtering kernel, original pixel values of pixel points in the target region, and the number of effective filtering elements in the ith directional filtering kernel;
and the fourth obtaining unit is used for moving the ith direction filtering kernel to repeatedly execute the steps until the ith filtering pixel value corresponding to each pixel point in the two-dimensional depth map is obtained.
Optionally, the first optimizing unit 124 includes:
a fifth obtaining unit, configured to obtain a filtered pixel value having a smallest difference with an original pixel value of the pixel point;
and the third optimization unit is used for replacing the original pixel value of the pixel point with a filtering pixel value with the minimum difference value with the original pixel value of the pixel point if the minimum difference value is not greater than a preset invalid difference value threshold value, so as to obtain an optimized two-dimensional depth map.
Optionally, the first optimization unit 124 further includes:
and the replacing unit is used for replacing the original pixel value of the pixel point with a null value if the minimum difference value is greater than a preset invalid difference value threshold value.
Optionally, the apparatus 12 further comprises:
and the conversion unit is used for converting the pixel values of the pixel points in the optimized two-dimensional depth map into the depth values of the corresponding pixel points in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target object.
Fig. 13 is a schematic structural diagram of a point cloud optimization apparatus for a circuit board according to an embodiment of the present disclosure. The device can be realized by software, hardware or a combination of the two to be all or part of the point cloud optimization equipment. The apparatus 13 includes a second point cloud obtaining unit 131, a second projecting unit 132, a second filtering unit 133, a second optimizing unit 134, and a third point cloud obtaining unit 135:
a second point cloud obtaining unit 131, configured to obtain a three-dimensional point cloud corresponding to the target circuit board;
the second projection unit 132 is configured to project the three-dimensional point cloud to a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud; wherein, the original pixel value of the pixel point in the two-dimensional depth map is the depth value of the corresponding pixel point in the three-dimensional point cloud;
a second filtering unit 133, configured to perform convolution filtering operations on the two-dimensional depth map according to a plurality of preset directional filtering kernels, to obtain a plurality of filtering pixel values corresponding to each pixel point in the two-dimensional depth map; wherein, the filtering boundary of each direction filtering kernel respectively represents the image edge with different forms;
the second optimization unit 134 is configured to replace the original pixel value of each pixel point with a filtered pixel value having the smallest difference with the original pixel value of the pixel point, so as to obtain an optimized two-dimensional depth map corresponding to the target circuit board;
and a third point cloud obtaining unit 135, configured to convert the pixel values of the pixels in the optimized two-dimensional depth map into depth values of corresponding pixels in the three-dimensional point cloud, so as to obtain an optimized three-dimensional point cloud corresponding to the target circuit board.
According to the method and the device, the three-dimensional point cloud corresponding to the circuit board is projected to the camera imaging plane, the point cloud data are subjected to dimensionality reduction, the two-dimensional depth map corresponding to the three-dimensional point cloud is obtained, and therefore the real-time performance of subsequent point cloud optimization can be guaranteed. Then, combining the difference of the image edge forms, presetting a plurality of directional filtering kernels to enable the filtering boundaries of the different directional filtering kernels to fully embody the image edges of different forms, then respectively carrying out convolution filtering on the two-dimensional depth map through each preset directional filtering kernel to obtain a plurality of filtering pixel values corresponding to each pixel point, replacing the original pixel value of each pixel point with the filtering pixel value with the minimum difference value with the original pixel value of the pixel point to obtain an optimized two-dimensional depth map, converting the pixel value of the pixel point in the optimized two-dimensional depth map into the depth value of the corresponding pixel point in the three-dimensional point cloud to obtain the optimized three-dimensional point cloud corresponding to the target circuit board, further avoiding the complex parameter adjustment in the conventional filtering process, improving the robustness of the method, and being capable of not influencing the image edge forms, the method and the device realize smooth filtering processing of the circuit board point cloud and improve accuracy of circuit board point cloud optimization.
Please refer to fig. 14, which is a schematic structural diagram of a point cloud optimizing apparatus according to an embodiment of the present disclosure. As shown in fig. 14, the point cloud optimizing device 14 may include: a processor 140, a memory 141, and a computer program 142 stored in the memory 141 and executable on the processor 140, such as: a point cloud optimization program or a point cloud optimization program of a circuit board; the processor 140 implements the steps of the above-mentioned method embodiments, such as the steps S101 to S104 shown in fig. 1, when executing the computer program 142. Alternatively, the processor 140 implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 121 to 124 shown in fig. 12 or the functions of the modules 131 to 135 shown in fig. 13, when executing the computer program 142.
The processor 140 may include one or more processing cores, among other things. The processor 140 connects various parts in the point cloud optimizing device 14 by using various interfaces and lines, and executes various functions and processes data of the point cloud optimizing device 14 by operating or executing instructions, programs, code sets or instruction sets stored in the memory 141 and calling data in the memory 141, and optionally, the processor 140 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), Programmable Logic Array (PLA). The processor 140 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. Wherein, the CPU mainly processes an operating system, a user interface, an application program and the like; the GPU is used for rendering and drawing contents required to be displayed by the touch display screen; the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 140, but may be implemented by a single chip.
The Memory 141 may include a Random Access Memory (RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 141 includes a non-transitory computer-readable medium. The memory 141 may be used to store instructions, programs, code sets, or instruction sets. The memory 141 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as touch instructions, etc.), instructions for implementing the above-described method embodiments, and the like; the storage data area may store data and the like referred to in the above respective method embodiments. Memory 141 may optionally be at least one memory device located remotely from the aforementioned processor 140.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor and being used to execute the method steps in the embodiments shown in fig. 1, fig. 6, fig. 7, and fig. 9 or fig. 10, and a specific execution process may refer to specific descriptions of the embodiments shown in fig. 1, fig. 6, fig. 7, and fig. 9 or fig. 10, which are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division into functional units and modules is illustrated. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to perform all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for ease of distinguishing them from each other and are not intended to limit the protection scope of the present application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis; for parts that are not described or illustrated in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only one kind of logical division, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical, mechanical or in another form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program: the computer program may be stored in a computer-readable storage medium, and when it is executed by a processor, the steps of the method embodiments are implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like.
The present invention is not limited to the above-described embodiments; any modifications and variations that do not depart from the spirit and scope of the present invention are intended to fall within the scope of the claims of the present invention and their equivalents.

Claims (12)

1. A point cloud optimization method, characterized by comprising the following steps:
acquiring a three-dimensional point cloud corresponding to a target object;
projecting the three-dimensional point cloud onto a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud, wherein the original pixel value of each pixel point in the two-dimensional depth map is the depth value of the corresponding point in the three-dimensional point cloud;
performing a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filtering kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map, wherein the filtering boundary of each directional filtering kernel represents an image edge of a different form; and
replacing the original pixel value of each pixel point with the filtered pixel value having the minimum difference from that original pixel value, to obtain an optimized two-dimensional depth map.
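The per-pixel selection step of claim 1 can be sketched in NumPy as follows; this is an illustrative sketch only, and the function name, array shapes, and toy data are assumptions, not part of the patent:

```python
import numpy as np

def select_min_difference(depth, filtered_stack):
    """Replace each original pixel value with the filtered pixel value
    whose difference from the original is smallest.

    depth          : (H, W) original two-dimensional depth map
    filtered_stack : (K, H, W) outputs of the K directional filtering kernels
    """
    diff = np.abs(filtered_stack - depth[None, :, :])   # (K, H, W) differences
    best = np.argmin(diff, axis=0)                      # winning kernel index per pixel
    return np.take_along_axis(filtered_stack, best[None, :, :], axis=0)[0]

# toy example: two candidate maps, one flat at 1.0 and one flat at 5.0,
# so each pixel simply keeps whichever candidate is closer to its original value
depth = np.array([[1.0, 5.0],
                  [1.0, 5.0]])
stack = np.stack([np.full_like(depth, 1.0), np.full_like(depth, 5.0)])
optimized = select_min_difference(depth, stack)
```

In this toy case the result equals the input, since each pixel's original value is exactly one of the candidates; with real filtered maps, each pixel snaps to the output of the kernel whose filtering boundary best matches the local edge.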
2. The point cloud optimization method according to claim 1, wherein, before performing the convolution filtering operation on the two-dimensional depth map according to the plurality of preset directional filtering kernels, the method comprises:
dividing each directional filtering kernel into an effective filtering area and an ineffective filtering area based on a different form of image edge, to generate the plurality of directional filtering kernels, wherein the dividing boundary between the effective filtering area and the ineffective filtering area is the filtering boundary.
3. The point cloud optimization method according to claim 2, wherein the image edge forms comprise a vertical form, a horizontal form, an oblique form and a corner form, and
dividing each directional filtering kernel into an effective filtering area and an ineffective filtering area based on a different form of image edge to generate the plurality of directional filtering kernels comprises:
dividing the directional filtering kernel into an effective filtering area and an ineffective filtering area along the vertical direction according to the vertical form of the image edge, to generate a directional filtering kernel comprising a vertical filtering boundary;
dividing the directional filtering kernel into an effective filtering area and an ineffective filtering area along the horizontal direction according to the horizontal form of the image edge, to generate a directional filtering kernel comprising a horizontal filtering boundary;
dividing the directional filtering kernel into an effective filtering area and an ineffective filtering area along an oblique direction according to the oblique form of the image edge, to generate a directional filtering kernel comprising an oblique filtering boundary; and
dividing the directional filtering kernel into an effective filtering area and an ineffective filtering area along a direction with a change of angle according to the corner form of the image edge, to generate a directional filtering kernel comprising a corner filtering boundary.
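As a hedged illustration of the kernel construction above, half-plane binary masks can play the role of the four directional filtering kernels: the 1-region is the effective filtering area, the 0-region the ineffective area, and their boundary acts as the filtering boundary. The exact kernel shapes in the patent may differ; `radius` is an assumed parameter:

```python
import numpy as np

def directional_kernels(radius=2):
    """Build one example kernel per image-edge form (vertical, horizontal,
    oblique, corner). 1 = effective filtering element, 0 = ineffective."""
    # grid of offsets from the filtering center, each in [-radius, radius]
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return {
        "vertical":   (x <= 0).astype(float),               # vertical edge form
        "horizontal": (y <= 0).astype(float),               # horizontal edge form
        "oblique":    (x + y <= 0).astype(float),           # oblique (45-degree) edge form
        "corner":     ((x <= 0) & (y <= 0)).astype(float),  # right-angle corner form
    }

kernels = directional_kernels(radius=2)   # four 5x5 masks
```

Each mask averages only over the half (or quadrant) of the window on one side of the hypothesized edge, which is what lets the filter smooth noise without blurring across the edge.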
4. The point cloud optimization method according to claim 1, wherein performing the convolution filtering operation on the two-dimensional depth map with each of the plurality of preset directional filtering kernels to obtain the plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map comprises:
obtaining, according to a target position in the two-dimensional depth map corresponding to the filtering center of the i-th directional filtering kernel, the target pixel point corresponding to the filtering center;
obtaining, according to the target position and the filtering radius of the i-th directional filtering kernel, the target area covered by the directional filtering kernel in the two-dimensional depth map;
obtaining the i-th filtered pixel value corresponding to the target pixel point based on the value of each filtering element in the i-th directional filtering kernel, the original pixel values of the pixel points in the target area, and the number of effective filtering elements in the i-th directional filtering kernel; and
moving the i-th directional filtering kernel and repeating the above steps until the i-th filtered pixel value corresponding to every pixel point in the two-dimensional depth map is obtained.
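The per-position computation in claim 4 amounts to a masked mean: the original pixel values in the covered target area are weighted by the kernel elements and divided by the number of effective elements. A hedged sketch for a single target position (function name and sample data are illustrative assumptions):

```python
import numpy as np

def filtered_value(depth, kernel, row, col):
    """i-th filtered pixel value at (row, col): sum of original pixel
    values under the effective elements, divided by the effective count."""
    r = kernel.shape[0] // 2                                   # filtering radius
    patch = depth[row - r:row + r + 1, col - r:col + r + 1]    # covered target area
    n_effective = kernel.sum()                                 # number of effective elements
    return float((patch * kernel).sum() / n_effective)

# step edge: columns 0-2 have depth 1.0, columns 3-4 have depth 5.0
depth = np.where(np.arange(5)[None, :] < 3, 1.0, 5.0) * np.ones((5, 1))
left_half = np.zeros((5, 5))
left_half[:, :3] = 1.0                  # kernel with a vertical filtering boundary
val = filtered_value(depth, left_half, row=2, col=2)
```

At the edge pixel (2, 2), the masked mean is exactly 1.0 because only the left side of the boundary is averaged, whereas a plain 5x5 box filter at the same pixel would blur the edge to (15·1 + 10·5)/25 = 2.6.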
5. The point cloud optimization method according to claim 1, wherein replacing the original pixel value of each pixel point with the filtered pixel value having the minimum difference from that original pixel value to obtain the optimized two-dimensional depth map comprises:
obtaining the filtered pixel value having the minimum difference from the original pixel value of the pixel point, together with that minimum difference; and
if the minimum difference is not larger than a preset invalid-difference threshold, replacing the original pixel value of the pixel point with the filtered pixel value having the minimum difference from the original pixel value, to obtain the optimized two-dimensional depth map.
6. The point cloud optimization method according to claim 5, further comprising the following step:
if the minimum difference is larger than the preset invalid-difference threshold, replacing the original pixel value of the pixel point with a null value.
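Claims 5 and 6 together describe a validity test on the minimum difference. A hedged NumPy sketch, where the threshold value 0.5 is an assumed example and NaN stands in for the "null value":

```python
import numpy as np

def replace_or_invalidate(depth, best_filtered, invalid_threshold=0.5):
    """Keep the closest filtered value when its difference from the
    original is within the threshold; otherwise write a null (NaN)."""
    min_diff = np.abs(best_filtered - depth)
    return np.where(min_diff <= invalid_threshold, best_filtered, np.nan)

depth = np.array([1.0, 10.0])
best = np.array([1.1, 2.0])                 # closest filtered value per pixel
out = replace_or_invalidate(depth, best)    # second pixel is treated as an outlier
```

Pixels whose depth no directional kernel can explain (e.g. isolated noise spikes) end up null and can be dropped when the depth map is converted back to a point cloud.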
7. The point cloud optimization method according to claim 1, wherein, after the optimized two-dimensional depth map is obtained, the method further comprises the following step:
converting the pixel values of the pixel points in the optimized two-dimensional depth map into depth values of the corresponding points in the three-dimensional point cloud, to obtain an optimized three-dimensional point cloud corresponding to the target object.
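The back-projection in claim 7 can be sketched with a standard pinhole camera model; the intrinsics fx, fy, cx, cy are assumed example parameters (the patent does not specify them), and NaN pixels (null values) are dropped:

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Convert optimized depth-map pixels back to a 3-D point cloud.

    Each valid pixel (u, v) with depth z maps to
    X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z.
    """
    v, u = np.indices(depth.shape)          # pixel coordinate grids
    valid = np.isfinite(depth)              # skip null (NaN) pixels
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) optimized point cloud

# one valid pixel and one null pixel -> a single 3-D point
pts = depth_map_to_points(np.array([[2.0, np.nan]]),
                          fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

This is the inverse of the projection step in claim 1, so running projection, directional filtering, and this conversion in sequence yields the optimized point cloud of the target object.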
8. A point cloud optimization method for a circuit board, characterized by comprising the following steps:
acquiring a three-dimensional point cloud corresponding to a target circuit board;
projecting the three-dimensional point cloud onto a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud, wherein the original pixel value of each pixel point in the two-dimensional depth map is the depth value of the corresponding point in the three-dimensional point cloud;
performing a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filtering kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map, wherein the filtering boundary of each directional filtering kernel represents an image edge of a different form;
replacing the original pixel value of each pixel point with the filtered pixel value having the minimum difference from that original pixel value, to obtain an optimized two-dimensional depth map corresponding to the target circuit board; and
converting the pixel values of the pixel points in the optimized two-dimensional depth map into depth values of the corresponding points in the three-dimensional point cloud, to obtain an optimized three-dimensional point cloud corresponding to the target circuit board.
9. A point cloud optimization device, characterized by comprising:
a first point cloud obtaining unit, configured to acquire a three-dimensional point cloud corresponding to a target object;
a first projection unit, configured to project the three-dimensional point cloud onto a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud, wherein the original pixel value of each pixel point in the two-dimensional depth map is the depth value of the corresponding point in the three-dimensional point cloud;
a first filtering unit, configured to perform a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filtering kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map, wherein the filtering boundary of each directional filtering kernel represents an image edge of a different form; and
a first optimization unit, configured to replace the original pixel value of each pixel point with the filtered pixel value having the minimum difference from that original pixel value, to obtain an optimized two-dimensional depth map.
10. A point cloud optimization device for a circuit board, characterized by comprising:
a second point cloud obtaining unit, configured to acquire a three-dimensional point cloud corresponding to a target circuit board;
a second projection unit, configured to project the three-dimensional point cloud onto a camera imaging plane to obtain a two-dimensional depth map corresponding to the three-dimensional point cloud, wherein the original pixel value of each pixel point in the two-dimensional depth map is the depth value of the corresponding point in the three-dimensional point cloud;
a second filtering unit, configured to perform a convolution filtering operation on the two-dimensional depth map with each of a plurality of preset directional filtering kernels to obtain a plurality of filtered pixel values corresponding to each pixel point in the two-dimensional depth map, wherein the filtering boundary of each directional filtering kernel represents an image edge of a different form;
a second optimization unit, configured to replace the original pixel value of each pixel point with the filtered pixel value having the minimum difference from that original pixel value, to obtain an optimized two-dimensional depth map corresponding to the target circuit board; and
a third point cloud obtaining unit, configured to convert the pixel values of the pixel points in the optimized two-dimensional depth map into depth values of the corresponding points in the three-dimensional point cloud, to obtain an optimized three-dimensional point cloud corresponding to the target circuit board.
11. A point cloud optimization apparatus, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7, or of claim 8.
12. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7, or of claim 8.
CN202011279945.3A 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment Active CN112488910B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279945.3A CN112488910B (en) 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment


Publications (2)

Publication Number Publication Date
CN112488910A true CN112488910A (en) 2021-03-12
CN112488910B CN112488910B (en) 2024-02-13

Family

ID=74931111

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279945.3A Active CN112488910B (en) 2020-11-16 2020-11-16 Point cloud optimization method, device and equipment

Country Status (1)

Country Link
CN (1) CN112488910B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110264425A (en) * 2019-06-21 2019-09-20 杭州一隅千象科技有限公司 Based on the separate unit TOF camera human body noise-reduction method and system for being angled downward direction
CN111583663A (en) * 2020-04-26 2020-08-25 宁波吉利汽车研究开发有限公司 Monocular perception correction method and device based on sparse point cloud and storage medium
CN111612806A (en) * 2020-01-10 2020-09-01 江西理工大学 Building facade window extraction method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Zhenchun et al., "Surface damage identification of guide rails based on point-cloud depth-mapped color", Chinese Journal of Lasers (中国激光), no. 10, pages 1-9 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022233185A1 (en) * 2021-05-07 2022-11-10 奥比中光科技集团股份有限公司 Image filtering method and apparatus, and terminal and computer-readable storage medium
CN114066779A (en) * 2022-01-13 2022-02-18 杭州蓝芯科技有限公司 Depth map filtering method and device, electronic equipment and storage medium
CN114066779B (en) * 2022-01-13 2022-05-06 杭州蓝芯科技有限公司 Depth map filtering method and device, electronic equipment and storage medium
CN114723796A (en) * 2022-04-24 2022-07-08 北京百度网讯科技有限公司 Three-dimensional point cloud generation method and device and electronic equipment
CN116527663A (en) * 2023-04-10 2023-08-01 北京城市网邻信息技术有限公司 Information processing method, information processing device, electronic equipment and storage medium
CN116527663B (en) * 2023-04-10 2024-04-26 北京城市网邻信息技术有限公司 Information processing method, information processing device, electronic equipment and storage medium


Similar Documents

Publication Publication Date Title
CN112488910B (en) Point cloud optimization method, device and equipment
CN110349195B (en) Depth image-based target object 3D measurement parameter acquisition method and system and storage medium
CN108074267B (en) Intersection point detection device and method, camera correction system and method, and recording medium
CN112581629A (en) Augmented reality display method and device, electronic equipment and storage medium
CN110264573B (en) Three-dimensional reconstruction method and device based on structured light, terminal equipment and storage medium
CN110349092B (en) Point cloud filtering method and device
CN111080662A (en) Lane line extraction method and device and computer equipment
US11263356B2 (en) Scalable and precise fitting of NURBS surfaces to large-size mesh representations
WO2023065792A1 (en) Image processing method and apparatus, electronic device, and computer-readable storage medium
CN112484738B (en) Robot mapping method and device, computer readable storage medium and robot
CN110619660A (en) Object positioning method and device, computer readable storage medium and robot
CN114581331A (en) Point cloud noise reduction method and device suitable for multiple scenes
CN116824070B (en) Real-time three-dimensional reconstruction method and system based on depth image
CN116310060B (en) Method, device, equipment and storage medium for rendering data
CN110838167B (en) Model rendering method, device and storage medium
CN106910196B (en) Image detection method and device
CN114170367B (en) Method, apparatus, storage medium, and device for infinite-line-of-sight pyramidal heatmap rendering
US11893744B2 (en) Methods and apparatus for extracting profiles from three-dimensional images
CN115861403A (en) Non-contact object volume measurement method and device, electronic equipment and medium
CN112539712B (en) Three-dimensional imaging method, device and equipment
CN112802175B (en) Large-scale scene shielding and eliminating method, device, equipment and storage medium
CN114494404A (en) Object volume measurement method, system, device and medium
CN114841943A (en) Part detection method, device, equipment and storage medium
EP4158596A1 (en) Geometry-aware augmented reality effects with a real-time depth map
CN112927323B (en) Drawing generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant