WO2021142843A1 - Image scanning method and apparatus, device, and storage medium - Google Patents

Image scanning method and apparatus, device, and storage medium

Info

Publication number
WO2021142843A1
Authority
WO
WIPO (PCT)
Prior art keywords
curved surface
dimensional coordinates
initial
vertex
surface data
Prior art date
Application number
PCT/CN2020/073038
Other languages
English (en)
French (fr)
Inventor
张洪伟
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to CN202080093762.4A (publication CN114981845A)
Priority to PCT/CN2020/073038 (publication WO2021142843A1)
Publication of WO2021142843A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The embodiments of the present application relate to image processing, and in particular to an image scanning method, apparatus, device, and storage medium.
  • Document scanning technology based on image information can be integrated into mobile terminals such as mobile phones, making it portable and easy to use.
  • However, document scanning technology based on image information requires information such as texture and boundaries to calculate the transformation matrix, so it cannot be applied to documents with no borders and little texture.
  • Time-of-flight (TOF) sensors are not affected by lighting changes or object texture, and can reduce cost while still meeting accuracy requirements.
  • With a TOF sensor, document scanning can be made independent of picture information, which greatly broadens the scope of application of document scanning.
  • the embodiments of the present application provide an image scanning method, device, device, and storage medium.
  • An embodiment of the present application provides an image scanning method. The method includes: acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene; performing curved surface detection on the point cloud data to obtain initial curved surface data; optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data; and correcting the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • An embodiment of the present application provides an image scanning device, including: a data acquisition module for acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene; a curved surface detection module for performing curved surface detection on the point cloud data to obtain initial curved surface data; a curved surface optimization module for optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data; and an image correction module for correcting the pixel coordinates of the pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • An embodiment of the present application provides an electronic device, including a memory and a processor, the memory storing a computer program that can be run on the processor; when the processor executes the program, the steps in any of the image scanning methods of the embodiments of the present application are implemented.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in any of the image scanning methods described in the embodiments of the present application are implemented.
  • After obtaining the point cloud data of the scanned scene and the initial scanned image of the scanned scene, the electronic device performs curved surface detection on the point cloud data and optimizes the detected initial curved surface data to obtain the target curved surface data. On the one hand, the electronic device can correct the pixel coordinates in the initial scanned image according to the three-dimensional coordinates in the target surface data, so as to obtain a more accurate and detailed scanning result, that is, the target scanned image. On the other hand, performing surface detection on point cloud data, compared with plane detection, can be applied to more scanning scenes; for example, it can scan the surfaces of cylinders and near-cylinders.
  • FIG. 1 is a schematic diagram of an implementation process of an image scanning method according to an embodiment of the application
  • FIG. 2 is a schematic diagram of comparison between an initial scanned image and a target scanned image in an embodiment of the application
  • FIG. 3 is a schematic diagram of cylinder features detected by an embodiment of this application.
  • FIG. 4 is a schematic diagram of a grid division result according to an embodiment of the application.
  • Figure 5 is a schematic diagram of the reference plane of the embodiment of the application.
  • FIG. 6 is a schematic diagram of radial optimization based on a cube bounding box according to an embodiment of the application.
  • FIG. 7 is a schematic diagram of radial optimization based on Poisson Surface Reconstruction (PSR) according to an embodiment of the application;
  • FIG. 8 is a schematic diagram of radial optimization based on dual-camera data fusion according to an embodiment of the application.
  • FIG. 9 is a schematic diagram of the implementation process of another image scanning method according to an embodiment of the application.
  • FIG. 10 is a schematic structural diagram of an image scanning device according to an embodiment of the application.
  • FIG. 11 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the application.
  • The terms "first", "second", and "third" referred to in the embodiments of this application only distinguish similar objects and do not represent a specific order for the objects. Understandably, where permitted, the specific order or sequence of "first", "second", and "third" may be interchanged, so that the embodiments of the present application described herein can be implemented in a sequence other than that illustrated or described herein.
  • The embodiment of the present application provides an image scanning method, which can be applied to electronic devices with information processing capabilities such as mobile phones, tablet computers, notebook computers, desktop computers, robots, and drones.
  • the functions implemented by the image scanning method can be implemented by a processor in the electronic device calling program code.
  • the program code can be stored in a computer storage medium. It can be seen that the electronic device at least includes a processor and a storage medium.
  • FIG. 1 is a schematic diagram of the implementation process of the image scanning method according to the embodiment of the application. As shown in FIG. 1, the method at least includes the following steps 101 to 104:
  • Step 101 Obtain an initial scanned image of a scanned scene and point cloud data of the scanned scene.
  • The target object scanned by the electronic device may be, for example, a label on the surface of a bottle, a poster on a cylinder, or a curved book surface; understandably, the bottle surface or cylindrical surface itself is a curved surface.
  • The electronic device can collect data from the scanned scene through the TOF sensor, preliminarily filter the sensor data output by the TOF sensor through the processor, and transform the filtered sensor data into three-dimensional coordinates in the camera coordinate system to obtain the point cloud data.
  • The preliminary filtering includes denoising the sensor data output by the TOF sensor. Denoising methods include, for example, point cloud removal based on a position threshold, that is, removing points in the sensor data whose distance is greater than a threshold (for example, 7 meters); or point cloud removal based on mutual distance, that is, removing points whose average distance to surrounding points is greater than the average for the other points.
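  • The two-stage preliminary filtering described above can be sketched in NumPy as follows; this is an illustration, not the patent's implementation. The function name, the neighbour count k, and the distance factor are assumptions, while the 7-meter threshold follows the example in the text.

```python
import numpy as np

def prefilter_point_cloud(points, max_range=7.0, k=8, dist_factor=2.0):
    """Sketch of the two-stage preliminary filtering described above.
    max_range follows the 7-meter example in the text; k and dist_factor
    are illustrative choices."""
    points = np.asarray(points, dtype=float)

    # Stage 1: position-threshold removal -- drop points farther than
    # max_range from the sensor origin.
    points = points[np.linalg.norm(points, axis=1) <= max_range]

    # Stage 2: mutual-distance removal -- drop points whose mean distance
    # to their k nearest neighbours is well above the cloud-wide average.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=2)       # pairwise distance matrix
    dist.sort(axis=1)                         # row-wise ascending; column 0 is self
    mean_knn = dist[:, 1:k + 1].mean(axis=1)  # mean distance to k nearest neighbours
    return points[mean_knn <= dist_factor * mean_knn.mean()]
```

The O(n²) pairwise-distance matrix is fine for a sketch; a real TOF pipeline would use a spatial index (k-d tree) for the neighbour search.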
  • the electronic device may photograph the scanned scene through a Red Green Blue (RGB) sensor, so as to obtain the initial scanned image.
  • the electronic device at least includes a TOF sensor and an RGB sensor, where the TOF sensor is used to obtain point cloud data of the scanned scene, and the RGB sensor is used to photograph the scanned scene to obtain the initial scanned image.
  • Step 102 Perform surface detection on the point cloud data to obtain initial surface data.
  • The electronic device may use a near-cylinder as the fitting model, and determine the rough shape in the point cloud data through cylinder detection, so as to obtain the initial curved surface data.
  • The so-called near-cylinder includes a cylinder, whose upper and lower bottom surfaces are equal, and a truncated cone, whose upper and lower bottom surfaces are not equal.
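  • As an illustration of the near-cylinder model (cylinder or truncated cone), the side surface can be parameterized by a tangential angle and a relative axial position. The following sketch, including its names and parametrization, is hypothetical and not taken from the patent:

```python
import numpy as np

def near_cylinder_point(p0, n0, r_bottom, r_top, height, theta, t):
    """Point on the side surface of a near-cylinder: a cylinder when
    r_bottom == r_top, a truncated cone otherwise.  p0 is the bottom-face
    center, n0 the axial direction, theta the tangential angle, and
    t in [0, 1] the relative axial position."""
    p0 = np.asarray(p0, dtype=float)
    n0 = np.asarray(n0, dtype=float)
    n0 = n0 / np.linalg.norm(n0)
    # Build an orthonormal tangential frame (u, v) perpendicular to the axis.
    helper = np.array([1.0, 0.0, 0.0]) if abs(n0[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(n0, helper)
    u = u / np.linalg.norm(u)
    v = np.cross(n0, u)
    r = (1.0 - t) * r_bottom + t * r_top      # radius varies linearly along the axis
    return p0 + t * height * n0 + r * (np.cos(theta) * u + np.sin(theta) * v)
```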
  • Step 103 Optimize the curved surface represented by the initial curved surface data to obtain target curved surface data.
  • The curved surface represented by the initial curved surface data is usually relatively rough, so the electronic device optimizes it. For example, when the curved surface is the side of a cylinder, the surface is meshed to obtain a mesh surface, and then the vertices of each mesh on the mesh surface are radially optimized to obtain the target surface data.
  • Step 104 Correct the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • The initial scanned image captured by the electronic device is an image that does not yet meet the application requirements. For example, the initial scanned image (that is, the image before correction) may be the image 20 shown in FIG. 2, in which the target object 201 is inclined. After correction, the front-view scan result of the target object 201 can be obtained, that is, the target scanned image 202 shown in FIG. 2.
  • the electronic device may implement step 104 through step 307 and step 308 in the following embodiments to obtain the target scanned image.
  • the embodiment of the present application further provides an image scanning method, the method at least includes the following steps 201 to 205:
  • Step 201 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 202 Perform cylindrical shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object.
  • the target object may be a cylinder.
  • The characteristic parameter values of the cylinder 30 include: the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h; where θ1 is the angle between tangent line 31 and the reference line, θ2 is the angle between tangent line 32 and the reference line, and the axial range h is the height of the cylinder.
  • The electronic device may use, for example, a random sample consensus (RANSAC) cylinder detection algorithm to obtain these parameter values.
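  • A full RANSAC cylinder detector (e.g. PCL's SACMODEL_CYLINDER) estimates the axis, center, and radius jointly. As a simplified illustration of the hypothesize-and-verify idea only, the sketch below fits just the cross-section circle (center and radius) to points already projected along a known axis; names and thresholds are illustrative:

```python
import numpy as np

def ransac_circle(points2d, iters=200, tol=0.05, seed=0):
    """Simplified stand-in for RANSAC cylinder detection: fit a circle to
    the 2-D cross-section of the point cloud, assuming the axis is known."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(iters):
        # Hypothesize: the circle through 3 random points (perpendicular bisectors).
        p1, p2, p3 = points2d[rng.choice(len(points2d), 3, replace=False)]
        A = 2 * np.array([p2 - p1, p3 - p1])
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        if abs(np.linalg.det(A)) < 1e-9:
            continue                          # degenerate (collinear) sample
        c = np.linalg.solve(A, b)
        r = np.linalg.norm(p1 - c)
        # Verify: count inliers within tol of the hypothesized circle.
        inliers = np.abs(np.linalg.norm(points2d - c, axis=1) - r) < tol
        if inliers.sum() > best[2]:
            best = (c, r, inliers.sum())
    return best[0], best[1]
```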
  • Step 203 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • Step 204 optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data
  • Step 205 Correct the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • In this way, the electronic device performs cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the cylinder, and determines the initial curved surface data according to these parameter values. Compared with a complex fitting model, using a near-cylinder as the fitting model greatly simplifies the algorithm and reduces the amount of calculation, and thus the cost of the algorithm, while losing little precision; compared with plane detection, it covers more target objects and adapts to more user scenarios.
  • the embodiment of the present application further provides an image scanning method.
  • the method at least includes the following steps 301 to 308:
  • Step 301 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 302 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; where the shape of the target object is a cylinder, and the multiple characteristic parameter values of the target object, as shown in FIG. 3, include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 303 Determine initial curved surface data from the point cloud data according to the multiple characteristic parameter values.
  • the initial curved surface data is the point cloud data of the target object 201 in the image 20.
  • Step 304 Mesh the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range, and the specific meshing interval to obtain N meshes, where N is an integer greater than 0.
  • the specific grid division interval includes the grid division interval in the axial direction and the grid division interval in the tangential direction.
  • the grid division interval in these two directions may be the same or different.
  • The result of the mesh division is, for example, the mesh surface 401 shown in FIG. 4, which has N meshes 402. Understandably, the size of the mesh division interval determines, to a certain extent, the mesh density of the mesh surface, that is, the value of N: the larger the interval, the fewer the meshes and the coarser the optimization result; the smaller the interval, the more the meshes and the finer the optimization result, but the higher the algorithm complexity.
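  • The meshing of step 304 can be illustrated by gridding the (θ, h) parameter domain of the cylinder side surface; the sketch below is illustrative (names are assumptions), with possibly different division intervals in the two directions, as noted above:

```python
import numpy as np

def mesh_vertices(theta1, theta2, height, d_theta, d_h):
    """Grid the (theta, h) parameter domain of the cylinder side surface
    with division interval d_theta in the tangential direction and d_h in
    the axial direction.  Returns the grid-corner parameters; the N meshes
    are the cells between adjacent corners."""
    thetas = np.arange(theta1, theta2 + 1e-9, d_theta)  # tangential divisions
    hs = np.arange(0.0, height + 1e-9, d_h)             # axial divisions
    return [(t, h) for h in hs for t in thetas]
```

For a tangential range of [0, 1] rad with interval 0.5 and a height of 2 with interval 1, this yields 3 × 3 = 9 corners bounding N = 4 meshes; halving both intervals quadruples N, matching the density trade-off described above.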
  • Step 305 Perform an optimal solution search for each vertex of each grid according to the three-dimensional coordinates of the sampling points in the initial surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex.
  • each vertex is searched for the optimal solution along its radial direction to obtain the optimal three-dimensional coordinates of the corresponding vertex.
  • Three optimal-solution search methods are provided. For example, steps 405 to 407 in the following embodiment search for the optimal solution in the search space where the radial vector of the vertex is located; steps 505 to 508 obtain the optimal solution using a radial optimization method based on PSR; and steps 605 to 608 obtain the optimal solution using a radial optimization method based on data fusion from at least two cameras.
  • Step 306 Determine the optimal three-dimensional coordinates of each vertex as the target curved surface data.
  • The result of step 306 is an optimized mesh surface, in which the coordinates of each vertex of each mesh are the optimal three-dimensional coordinates; that is, the target surface data includes the optimal three-dimensional coordinates of each vertex of each mesh.
  • Step 307 Determine a transformation relationship between the target surface data and the reference surface data according to the optimal three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data.
  • the electronic device determines the position transformation relationship between the optimized mesh surface and the reference surface.
  • the reference surface data can be pre-configured.
  • the reference surface data represents a frontal plane, such as the reference surface 50 shown in FIG. 5, where the reference surface data includes the three-dimensional coordinates of each vertex of each grid in the reference surface 50.
  • the transformation relationship may be characterized by a transformation matrix, a transformation matrix group, or a free mapping relationship.
  • Step 308 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain the target scanned image.
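  • As a hypothetical illustration of the vertex correspondence underlying steps 307 and 308, a point on the cylinder side surface can be flattened onto the frontal reference plane by arc length; this sketch is not the patent's exact transformation, and its names are assumptions:

```python
import numpy as np

def unwrap_to_plane(theta, h, r, theta1):
    """Flatten a point on the cylinder side surface onto the frontal
    reference plane: arc length along the tangential direction becomes x,
    axial position becomes y.  Pairs of mesh vertices and their flattened
    positions are the kind of correspondence from which a transformation
    relationship (matrix, matrix group, or free mapping) could be built."""
    return np.array([r * (theta - theta1), h])   # (x, y) on the reference plane
```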
  • the surface represented by the initial surface data is meshed, and only the vertices of the mesh are searched for the optimal solution, which can greatly reduce the amount of calculation for surface optimization.
  • the embodiment of the present application further provides an image scanning method, the method at least includes the following steps 401 to 410:
  • Step 401 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene;
  • Step 402 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; where the shape of the target object may be a cylinder, and the multiple characteristic parameter values of the target object include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 403 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • step 404 the curved surface is meshed according to the axial direction, the axial range, the tangential direction, the tangential range, and the specific meshing interval to obtain N meshes, where N is greater than An integer of 0;
  • Step 405 Determine the radial vector where the j-th vertex is located according to the position of the curved surface where the j-th vertex of the i-th grid is located, the radius of the cylinder and the center position of the bottom surface of the cylinder; i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4;
  • Step 406 Determine a search space from the initial surface data according to the radial vector.
  • The search space may adopt the cube bounding box shown in formula (1): L ≤ [R T]·p ≤ U, where L and U are the lower boundary point and the upper boundary point in the bounding box coordinate system, p is the three-dimensional coordinate of a point in the initial surface data, and [R T] is the augmented transformation matrix from world coordinates to bounding box coordinates.
  • The j-th vertex may be taken as the center point of the cube bounding box, and the cube bounding box is determined from the initial curved surface data according to the axial length, the grid division interval in the tangential direction, and the size of the radial vector. As shown in FIG. 6, the center point of the bounding box 60 is the j-th vertex, the length of the bounding box is the radius r of the cylinder, the height is the axial length, and the width is the tangential grid division interval.
  • Step 407 Determine the optimal three-dimensional coordinates of the j-th vertex according to the three-dimensional coordinates of the sampling points in the search space.
  • the electronic device may determine the three-dimensional coordinates of the center of gravity of the search space according to the three-dimensional coordinates of the sampling points in the search space; project the three-dimensional coordinates of the center of gravity onto the radial vector to obtain The optimal three-dimensional coordinate of the j-th vertex.
  • Specifically, the center of gravity of the search space is calculated as p_g = (1/K)·Σ p_i, where p_i is the three-dimensional coordinates of a point in the search space and K is a constant (here, the number of points averaged); the calculated three-dimensional coordinates of the center of gravity p_g are then projected onto the radial vector where the j-th vertex is located, for example p* = p_0 + ((p_g − p_0)·n)·n with n the unit radial vector, so as to obtain the optimal three-dimensional coordinate p* of the j-th vertex, where p_0 represents the three-dimensional coordinates of the j-th vertex before optimization.
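  • The center-of-gravity projection just described can be sketched as follows (illustrative names; the radial direction is assumed to be given as a vector through the vertex):

```python
import numpy as np

def radial_optimum(samples, p_vertex, radial_dir):
    """Center-of-gravity radial optimization: average the K sampling points
    inside the bounding box, then orthogonally project the centroid onto
    the radial line through the vertex."""
    n = np.asarray(radial_dir, dtype=float)
    n = n / np.linalg.norm(n)
    p_g = np.mean(samples, axis=0)                 # centroid: (1/K) * sum(p_i)
    # Projection of the centroid onto the radial line through p_vertex.
    return p_vertex + ((p_g - p_vertex) @ n) * n
```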
  • Step 408 Determine the optimal three-dimensional coordinates of each vertex as the target curved surface data
  • Step 409 Determine a transformation relationship between the target surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
  • Step 410 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain a target scanned image.
  • the embodiment of the present application further provides an image scanning method.
  • the method at least includes the following steps 501 to 511:
  • Step 501 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 502 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; where the shape of the target object may be a cylinder, and the multiple characteristic parameter values of the target object include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 503 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • Step 504 mesh the curved surface according to the axial direction, the axial range, the tangential direction, the tangential range, and the specific meshing interval to obtain N meshes, where N is greater than An integer of 0;
  • Step 505 Perform Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an isosurface.
  • The electronic device may use the PSR algorithm to obtain the isosurface; the surface of the object is estimated based on the assumption that object surfaces in the real world are continuous.
  • the PSR algorithm can eliminate the influence of point cloud measurement error on the result to a certain extent, and restore the surface of the real object.
  • Step 506 Determine the radial vector where the j-th vertex is located according to the position on the curved surface where the j-th vertex of the i-th grid is located, the radius of the cylinder, and the center position of the bottom surface of the cylinder; where i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
  • Step 507 Determine the intersection point of the radial vector and the isosurface
  • Step 508 Determine the three-dimensional coordinates of the intersection on the isosurface as the optimal three-dimensional coordinates of the j-th vertex.
  • Specifically, the electronic device first uses the PSR algorithm to process the initial surface data to generate the corresponding mesh (Mesh), then detects the intersection point of the radial line where the j-th vertex is located with the generated Mesh, and uses the three-dimensional coordinates of the Mesh at that intersection point as the optimization result, that is, the optimal three-dimensional coordinates of the j-th vertex.
  • The PSR algorithm can recover the surface of real-world objects from a point cloud containing substantial noise. PSR is derived from the fact that real object surfaces are smooth and continuous, so it matches the scanning characteristics of real objects, and the resulting surface is closer to the true value. Therefore, the PSR algorithm can be used here to remove the influence of noise or misdetection, thereby reducing the influence of deviating values in the point cloud.
  • The input of the PSR algorithm is the initial surface data, and the output is a Mesh.
  • The implementation of the PSR algorithm can include the following steps S1 to S4: S1, perform point cloud normal estimation on the initial surface data; S2, perform spatial meshing of the initial surface data, for example with an octree; S3, find the optimal surface that conforms to the continuity of the surface and the estimated normals; S4, output the optimized optimal surface to form a Mesh grid.
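  • For a triangle Mesh, step 507's intersection of the radial vector with the isosurface reduces to ray-triangle tests over the Mesh faces, keeping the nearest hit. A standard Möller-Trumbore sketch for one triangle (not from the patent; names are illustrative):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.  Returns the intersection
    point of the radial ray with one Mesh triangle, or None if it misses."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = e1 @ pvec
    if abs(det) < eps:
        return None                        # ray parallel to the triangle plane
    inv = 1.0 / det
    tvec = origin - v0
    u = (tvec @ pvec) * inv                # first barycentric coordinate
    if u < 0 or u > 1:
        return None
    qvec = np.cross(tvec, e1)
    v = (direction @ qvec) * inv           # second barycentric coordinate
    if v < 0 or u + v > 1:
        return None
    t = (e2 @ qvec) * inv                  # distance along the ray
    return origin + t * direction if t >= 0 else None
```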
  • Step 509 Determine the optimal three-dimensional coordinates of each of the vertices as the target surface data
  • Step 510 Determine a transformation relationship between the target surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
  • Step 511 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain a target scanned image.
  • the embodiment of the present application further provides an image scanning method.
  • the method at least includes the following steps 601 to 611:
  • Step 601 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 602 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; where the shape of the target object is a cylinder, and the multiple characteristic parameter values of the target object include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 603 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • Step 604 Perform meshing on the curved surface according to the axial direction, the axial range, the tangential direction, the tangential range and the specific meshing interval, to obtain N meshes, where N is an integer greater than 0;
  • Step 605 Back-project the three-dimensional coordinates of the k-th sampling point on the radial vector where the j-th vertex of the i-th grid is located onto the imaging plane of each camera to obtain the corresponding pixel coordinates; where i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4, and k is an integer greater than 0;
  • Step 606 Determine the area block of each pixel coordinate on the image collected by the corresponding camera according to the specific sampling window;
  • Step 607 Determine the degree of correlation between each of the regional blocks
  • Step 608 If the degree of correlation does not meet a specific condition, back-project the three-dimensional coordinates of the next sampling point on the radial vector onto the imaging plane of each camera, until the determined degree of correlation satisfies the specific condition; the three-dimensional coordinates of the corresponding sampling point are then determined as the optimal three-dimensional coordinates of the j-th vertex.
  • Steps 605 to 608 provide another optimization method, that is, a radial optimization method based on at least dual-camera data fusion, which can further improve the robustness.
  • the j-th vertex may be used as a starting point to perform cost function optimization, so as to obtain the optimal three-dimensional coordinates of the j-th vertex.
  • Taking the radial optimization method with dual-camera data fusion as an example, as shown in FIG. 8, the three-dimensional coordinate p_j of the j-th vertex before optimization is used as the starting point, and the gradient descent method or the LM algorithm is used to optimize the cost function shown in formula (4):
  • where p* is the optimal solution, that is, the optimal three-dimensional coordinates of the j-th vertex; p is the position of the search point in the initial surface data, that is, the three-dimensional coordinates of the k-th sampling point; R is a regular function, for example the L2 regular function; π0 and π1 are the back-projection functions that back-project the dual-camera space coordinates to pixel coordinates; W is the sampling window function, for example sampling a 3×3 or 7×7 square pixel neighborhood of the back-projection point; C is the cross-correlation function of the neighboring area blocks of the projection points in the dual-camera images, which may use functions such as NCC or ZNCC; and λ is the scale coefficient that adjusts the dependence on the TOF data.
  • If projection terms for a third and fourth camera are added to formula (4), the input of the cross-correlation function C becomes multiple area blocks, and the cross-correlation can be calculated as the sum, or the average, of the pairwise camera correlations.
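  • The cross-correlation function C and its multi-camera extension can be sketched as follows; this uses the ZNCC variant mentioned above, and the function names and averaging choice are illustrative:

```python
import numpy as np

def ncc(block_a, block_b, eps=1e-12):
    """Zero-normalised cross-correlation (ZNCC) between two equally sized
    area blocks sampled around back-projected pixel coordinates: subtract
    each block's mean, then normalise the dot product."""
    a = block_a - block_a.mean()
    b = block_b - block_b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def multi_camera_score(blocks):
    """With more than two cameras, average the pairwise correlations
    (summing is the other option noted above)."""
    pairs = [(i, j) for i in range(len(blocks)) for j in range(i + 1, len(blocks))]
    return sum(ncc(blocks[i], blocks[j]) for i, j in pairs) / len(pairs)
```

Because ZNCC is invariant to affine brightness changes, a block and a brightened copy of it correlate at 1, which is why it suits blocks taken from different cameras.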
  • Step 609: Determine the optimal three-dimensional coordinates of each vertex as the target curved surface data;
  • Step 610: Determine a transformation relationship between the target curved surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
  • Step 611: Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain a target scanned image.
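The transformation relationship of steps 610 and 611 can, in the relatively uniform case, be a single homography. Below is a minimal sketch under that assumption (plain least-squares DLT; the function names are illustrative, not from the patent):

```python
import numpy as np

def fit_homography(src, dst):
    # Estimate a 3x3 homography H with dst ~ H @ src (DLT, null vector via SVD).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_points(H, pts):
    # Apply the transformation relationship to pixel coordinates.
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With vertex correspondences between the target curved surface data and the reference surface, `fit_homography` yields the transformation and `warp_points` corrects pixel coordinates.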
  • a near-cylinder is used as the fitting model: the rough shape in the point cloud data is determined by cylinder detection, and optimization is then performed along the radial direction of the cylinder to obtain a more refined curved surface result.
  • the document scanning system includes a TOF sensor, an RGB sensor, a transformation parameter generation module, and an image transformation module;
  • the functions of the transformation parameter generation module and the image transformation module can be performed by a processor; wherein,
  • the transformation parameter generation module is used to process the sensor data output by the TOF sensor and generate the transformation parameters required by the image transformation module;
  • the transformation parameter may be one of the following: a transformation matrix, a transformation matrix group, or a free mapping relationship;
  • the image transformation module is used to transform the RGB data (that is, an example of the initial scanned image) obtained from the RGB sensor according to the transformation parameters generated by the transformation parameter generation module, so as to obtain the transformed orthographic scan result, namely the target scanned image.
  • the transformation parameter generation module includes a cylinder detection unit, a mesh division unit, a cylindrical radial optimization unit, a free-form surface generation unit, and a transformation parameter generation unit; wherein,
  • the cylinder detection unit can use the RANSAC cylinder detection algorithm;
  • the mesh division unit can be used to divide the target area of the cylindrical surface into a regular grid at equal intervals along the tangential and axial directions;
  • the cylindrical radial optimization unit can be used to search for an optimal solution for each vertex of the divided mesh along the radial direction of the cylinder;
  • the search for the optimal solution can use one of the following methods: (1) finding, along the radial direction, the point around which the point cloud within the tangential range clusters most densely; (2) computing the intersection of the radial line with the mesh reconstructed from the surface of the whole point cloud; (3) optimizing a back-projection cost function by means of dual-camera data fusion.
  • the free-form surface generating unit is used to take all the optimal solutions in the radial direction as vertices, and connect topologically according to the topological structure of the mesh division to form a free-form surface mesh.
  • the transformation parameter generation unit is used to generate transformation parameters according to the free-form surface mesh and the correction target parameters: according to the spatial position of the free-form surface mesh and the set spatial position of the correction result (that is, the three-dimensional coordinates of the vertices in the reference surface data), transformation parameters such as a transformation matrix, a transformation matrix group, or a free mapping relationship are generated.
  • FIG. 9 is a schematic diagram of the implementation process of the image scanning method according to the embodiment of the present application. As shown in FIG. 9, it may at least include the following steps 901 to 907:
  • Step 901: Perform preliminary filtering on the sensor data output by the TOF sensor, and then transform the preliminarily filtered sensor data into three-dimensional coordinates in the camera coordinate system to obtain three-dimensional point cloud data;
  • the preliminary filtering includes denoising the TOF sensor data according to its characteristics; denoising methods include, for example, point cloud removal based on a position threshold, or point cloud removal based on mutual distance;
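A rough sketch of the two denoising filters just mentioned, assuming a NumPy point cloud of shape (N, 3); the 7 m range and the `k`/`ratio` values are illustrative choices, not values fixed by the application:

```python
import numpy as np

def filter_point_cloud(points, max_range=7.0, k=8, ratio=2.0):
    # Position-threshold removal: drop points farther than max_range metres.
    pts = points[np.linalg.norm(points, axis=1) <= max_range]
    if len(pts) <= k:
        return pts
    # Mutual-distance removal: drop points whose mean distance to their k
    # nearest neighbours is far above the typical value over the cloud.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)      # skip the zero self-distance
    keep = mean_knn <= ratio * np.median(mean_knn)
    return pts[keep]
```

The O(N²) distance matrix is fine for a sketch; a k-d tree would replace it in practice.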
  • Step 902: Perform cylinder detection on the three-dimensional point cloud data to obtain the cylinder fitting parameters, that is, multiple characteristic parameter values of the cylinder, where the parameters include: the center position p_0 of the cylinder bottom surface, the axis direction n_0, the radius r, the tangential range (θ_1, θ_2), and the axial range (h);
  • the RANSAC cylinder detection algorithm may be used to obtain these parameters.
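For illustration, a much-simplified RANSAC variant, assuming the cylinder axis is already aligned with z so that each iteration only fits the circular cross-section (centre and radius) from three sampled points; a full RANSAC cylinder model would estimate the axis direction as well:

```python
import numpy as np

def ransac_cylinder_z(points, n_iter=200, tol=0.01, rng=None):
    # Simplified RANSAC: axis assumed parallel to z, so the problem reduces
    # to fitting a circle to the xy-projection of the point cloud.
    rng = rng if rng is not None else np.random.default_rng(0)
    xy = points[:, :2]
    best, best_count = None, 0
    for _ in range(n_iter):
        i, j, k = rng.choice(len(xy), 3, replace=False)
        a, b, c = xy[i], xy[j], xy[k]
        # Circumcentre of the three sampled points (skip degenerate triples).
        d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
        if abs(d) < 1e-12:
            continue
        ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
        uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
        centre = np.array([ux, uy])
        r = np.linalg.norm(a - centre)
        count = int((np.abs(np.linalg.norm(xy - centre, axis=1) - r) < tol).sum())
        if count > best_count:
            best, best_count = (centre, r), count
    return best
```

The inlier-counting loop is the essence of RANSAC; the circumcentre formula supplies the minimal-sample model.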
  • Step 903: Divide the target area on the cylindrical surface into a regular grid at equal intervals along the tangential and axial directions;
  • Step 904: Search for an optimal solution for each vertex of the divided grid along the radial direction of the cylinder;
  • Step 905: According to the original mesh topology, update the coordinate positions to the optimal positions obtained by the radial optimization, forming an optimized free-form mesh surface;
  • Step 906: Generate the corresponding transformation parameters according to the coordinate correspondences;
  • for example, when the transformation is relatively uniform, a single homography matrix can be used; when the transformation is relatively complex, the coordinate pair relationship can be established directly to generate an interpolation function for use in the subsequent image transformation;
  • Step 907: Transform the input image (that is, the initial scanned image) with the generated transformation parameters to obtain the corrected front-view result, that is, the target scanned image.
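Step 907 can be sketched as an inverse-mapping warp: each target pixel is looked up through the inverse transformation and sampled from the input image. This sketch assumes a single homography and uses nearest-neighbour sampling for brevity; a production version would interpolate:

```python
import numpy as np

def warp_image(img, H, out_shape):
    # For each target pixel, map back through H^-1 and sample the source.
    Hinv = np.linalg.inv(H)
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    tgt = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = Hinv @ tgt
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros((h, w), dtype=img.dtype)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out
```

Inverse mapping avoids holes in the output, which is why warping is done from target to source rather than the other way round.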
  • Based on the foregoing embodiments, an embodiment of the present application provides an image scanning device; the modules included in the device can be implemented by a processor in a computer device, or, of course, by specific logic circuits;
  • the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • FIG. 10 is a schematic diagram of the composition structure of an image scanning device according to an embodiment of the application.
  • the device 100 includes a data acquisition module 101, a curved surface detection module 102, a curved surface optimization module 103, and an image correction module 104, in which:
  • the data acquisition module 101 is configured to acquire the point cloud data of the scanned scene and the initial scanned image of the scanned scene;
  • the curved surface detection module 102 is configured to perform curved surface detection on the point cloud data to obtain initial curved surface data
  • the curved surface optimization module 103 is configured to optimize the curved surface represented by the initial curved surface data to obtain target curved surface data;
  • the image correction module 104 is configured to correct the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • the curved surface detection module 102 is configured to: perform cylindrical shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; and determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values.
  • the shape of the target object is a cylinder
  • the multiple characteristic parameter values include the axial direction, the axial range, the tangential direction, and the tangential range of the cylinder
  • the curved surface optimization module 103 is configured to: mesh the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range, and a specific meshing interval to obtain N meshes, where N is an integer greater than 0; perform an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex; and determine the optimal three-dimensional coordinates of each vertex as the target curved surface data.
  • the multiple characteristic parameter values further include the radius of the cylinder and the center position of the bottom surface of the cylinder; the curved surface optimization module 103 is configured to: determine the radial vector on which the j-th vertex of the i-th mesh lies, according to the position of that vertex on the curved surface, the radius of the cylinder, and the center position of the bottom surface of the cylinder, where i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determine the search space from the initial curved surface data according to the radial vector; and determine the optimal three-dimensional coordinates of the j-th vertex according to the three-dimensional coordinates of the sampling points in the search space.
  • the search space adopts a cubic bounding box, and the curved surface optimization module 103 is configured to: take the j-th vertex as the center point of the cubic bounding box, and determine the cubic bounding box from the initial curved surface data according to the axial length, the tangential meshing interval, and the magnitude of the radial vector.
  • the curved surface optimization module 103 is configured to: determine the three-dimensional coordinates of the center of gravity of the search space according to the three-dimensional coordinates of the sampling points in the search space; and project the three-dimensional coordinates of the center of gravity onto the radial vector to obtain the optimal three-dimensional coordinates of the j-th vertex.
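A compact sketch of this centre-of-gravity projection, assuming the cylinder is described by a point on its axis and the axis direction; the helper name is illustrative:

```python
import numpy as np

def radial_optimum(samples, vertex, axis_point, axis_dir):
    # Centre of gravity of the sampling points inside the search space.
    g = samples.mean(axis=0)
    # Foot of the vertex on the cylinder axis, and the unit radial direction
    # pointing from the axis through the vertex.
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    foot = axis_point + ((vertex - axis_point) @ axis_dir) * axis_dir
    radial = (vertex - foot) / np.linalg.norm(vertex - foot)
    # Project the centre of gravity onto the radial line through the vertex
    # to obtain the optimised vertex position.
    return foot + ((g - foot) @ radial) * radial
```

Because the vertex lies on the same radial line, projecting relative to the axis foot and projecting relative to the vertex itself give the same point.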
  • the curved surface optimization module 103 is configured to: perform Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an isosurface; determine the radial vector on which the j-th vertex of the i-th mesh lies, according to the position of that vertex on the curved surface, the radius of the cylinder, and the center position of the bottom surface of the cylinder, where i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determine the intersection point between the radial vector and the isosurface; and determine the three-dimensional coordinates of the intersection point on the isosurface as the optimal three-dimensional coordinates of the j-th vertex.
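The Poisson reconstruction itself is typically delegated to a library; the remaining step, intersecting the radial ray with a triangle of the reconstructed isosurface mesh, can be sketched with the classic Moeller-Trumbore test (this choice of intersection algorithm is an assumption of the sketch, not one named by the source):

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    # Moeller-Trumbore: return the intersection point of a ray with one
    # mesh triangle, or None when the ray misses it.
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = e1 @ h
    if abs(a) < eps:
        return None                      # ray parallel to the triangle plane
    f = 1.0 / a
    s = origin - v0
    u = f * (s @ h)
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = f * (direction @ q)
    if v < 0 or u + v > 1:
        return None
    t = f * (e2 @ q)
    return origin + t * direction if t > eps else None
```

Looping this test over the isosurface triangles near a vertex (or using a spatial index) yields the optimal vertex position on the radial line.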
  • the curved surface optimization module 103 is configured to: back-project the three-dimensional coordinates of the k-th sampling point on the radial vector of the j-th vertex of the i-th mesh onto the imaging plane of each camera to obtain the corresponding pixel coordinates; determine, according to a specific sampling window, the region block of each pixel coordinate on the image captured by the corresponding camera; determine the degree of correlation between the region blocks; and, when the degree of correlation does not satisfy a specific condition, back-project the three-dimensional coordinates of the next sampling point on the radial vector onto the imaging plane of each camera, until the determined degree of correlation satisfies the specific condition, and determine the three-dimensional coordinates of the corresponding sampling point as the optimal three-dimensional coordinates of the j-th vertex.
  • the image correction module 104 is configured to: determine the transformation relationship between the target curved surface data and the reference surface data according to the optimal three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data; and correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain the target scanned image.
  • the technical solutions of the embodiments of the present application, in essence or in the parts that contribute to the related technologies, can be embodied in the form of a software product;
  • the computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, etc.) to execute all or part of the methods described in the embodiments of the present application;
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, and other media that can store program code; in this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • FIG. 11 is a schematic diagram of a hardware entity of the electronic device according to an embodiment of the application.
  • the hardware entity of the electronic device 110 includes a memory 111 and a processor 112;
  • the memory 111 stores a computer program that can run on the processor 112, and the processor 112 implements the steps in the image scanning method provided in the foregoing embodiments when executing the program;
  • the memory 111 is configured to store instructions and applications executable by the processor 112, and can also cache data (for example, image data, audio data, voice communication data, and video communication data) to be processed or already processed by the processor 112 and the modules of the electronic device 110; it can be implemented by flash memory (FLASH) or random access memory (Random Access Memory, RAM).
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the image scanning method provided in the foregoing embodiments are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules is only a logical function division, and there may be other divisions in actual implementation; for example, multiple modules or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
  • the modules described above as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules; they may be located in one place or distributed over multiple network units; some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the embodiments of the present application may all be integrated into one processing unit, or each module may be used individually as a unit, or two or more modules may be integrated into one unit; the above-mentioned integrated module can be implemented in the form of hardware, or in the form of hardware plus a software functional unit.
  • the foregoing program can be stored in a computer readable storage medium.
  • when executed, the program performs the steps of the foregoing method embodiments; the aforementioned storage media include various media that can store program code, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • if the aforementioned integrated unit of the present application is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium;
  • the computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, etc.) to execute all or part of the methods described in the embodiments of the present application.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks, or optical disks and other media that can store program codes.
  • in the embodiments of the present application, after obtaining the point cloud data of the scanned scene and the initial scanned image of the scanned scene, the electronic device performs curved surface detection on the point cloud data and optimizes the detected initial curved surface data to obtain the target curved surface data; in this way, on the one hand, the electronic device can correct the pixel coordinates in the initial scanned image according to the three-dimensional coordinates in the target curved surface data, so as to obtain a more accurate and detailed scan result (that is, the target scanned image); on the other hand, performing curved surface detection on the point cloud data, compared with plane detection, is applicable to more scanning scenes; for example, it can scan the surfaces of cylinders and near-cylinders.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An image scanning method and apparatus, a device, and a storage medium, wherein the method comprises: acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene (101); performing curved surface detection on the point cloud data to obtain initial curved surface data (102); optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data (103); and correcting pixel coordinates of pixels in the initial scanned image according to three-dimensional coordinates in the target curved surface data to obtain a target scanned image (104).

Description

图像扫描方法及装置、设备、存储介质 技术领域
本申请实施例涉及图像处理方法,尤其涉及图像扫描方法及装置、设备、存储介质。
背景技术
基于图像信息的文档扫描技术可以集成在手机等移动终端中,这样具有携带方便、使用方便的特点。基于图像信息的文档扫描技术,需要纹理、边界等信息来计算变换矩阵,因此该技术无法适用于无边界、少纹理的文档扫描。
飞行时间(Time of flight,TOF)传感器,具有不受光照变化和物体纹理影响的特点,在满足精度要求的前提下,还能够降低成本。而借助TOF的三维数据来辅助获取目标物体的三维信息,计算变换矩阵,可以使文档扫描不依赖于图片信息,从而能够极大地提高文档扫描的适用范围。
然而,目前基于TOF数据的图像扫描技术中,却无法应对扫描场景中的目标对象存在弯曲的情况。
发明内容
有鉴于此,本申请实施例提供图像扫描方法及装置、设备、存储介质。
本申请实施例的技术方案是这样实现的:
第一方面,本申请实施例提供一种图像扫描方法,所述方法包括:获取扫描场景的点云数据和所述扫描场景的初始扫描图像;对所述点云数据进行曲面检测,得到初始曲面数据;对所述初始曲面数据表示的曲面进行优化,得到目标曲面数据;根据所述目标曲面数据中的三维坐标,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
第二方面,本申请实施例提供一种图像扫描装置,包括:数据获取模块,用于获取扫描场景的点云数据和所述扫描场景的初始扫描图像;曲面检测模块,用于对所述点云数据进行曲面检测,得到初始曲面数据;曲面优化模块,用于对所述初始曲面数据表示的曲面进行优化,得到目标曲面数据;图像矫正模块,用于根据所述目标曲面数据中的三维坐标,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
第三方面,本申请实施例提供一种电子设备,包括存储器和处理器,所述存储器存储有可在处理器上运行的计算机程序,所述处理器执行所述程序时实现本申请实施例任一所述图像扫描方法中的步骤。
第四方面,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现本申请实施例任一所述图像扫描方法中的步骤。
在本申请实施例中,电子设备在获得扫描场景的点云数据和所述扫描场景的初始扫描图像之后,对点云数据进行曲面检测,并对检测得到的初始曲面数据进行优化,得到目标曲面数据;如此,一方面,使得电子设备能够根据目标曲面数据中的三维坐标,对初始扫描图像中的像素坐标进行矫正,从而得到更为准确、精细的扫描结果,即目标扫描图像;另一方面,对点云数据进行曲面检测,相比于平面检测,能够适用更多的扫描场景,例如,能够扫描圆柱体和近圆柱体的表面。
附图说明
图1为本申请实施例图像扫描方法的实现流程示意图;
图2为本申请实施例初始扫描图像和目标扫描图像的对比示意图;
图3为本申请实施例检测得到的柱体特征示意图;
图4为本申请实施例网格划分结果示意图;
图5为本申请实施例参考面示意图;
图6为本申请实施例基于立方体包围盒的径向优化示意图;
图7为本申请实施例基于泊松表面重建(Poisson Surface Reconstruction,PSR)的径向优化示意图;
图8为本申请实施例基于双摄数据融合的径向优化示意图;
图9为本申请实施例另一图像扫描方法的实现流程示意图;
图10为本申请实施例图像扫描装置的结构示意图;
图11为本申请实施例电子设备的一种硬件实体示意图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请的具体技术方案做进一步详细描述。以下实施例用于说明本申请,但不用来限制本申请的范围。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
在以下的描述中,涉及到“一些实施例”,其描述了所有可能实施例的子集,但是可以理解,“一些实施例”可以是所有可能实施例的相同子集或不同子集,并且可以在不冲突的情况下相互结合。
需要指出,本申请实施例所涉及的术语“第一\第二\第三”仅仅是是区别类似的对象,不代表针对对象的特定排序,可以理解地,“第一\第二\第三”在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
本申请实施例提供一种图像扫描方法,所述方法可以应用于电子设备,所述电子设备可以是手机、平板电脑、笔记本电脑、台式计算机、机器人、无人机等具有信息处理能力的设备。所述图像扫描方法所实现的功能可以通过所述电子设备中的处理器调用程序代码来实现,当然程序代码可以保存在计算机存储介质中,可见,所述电子设备至少包括处理器和存储介质。
图1为本申请实施例图像扫描方法的实现流程示意图,如图1所示,所述方法至少包括以下步骤101至步骤104:
步骤101,获取扫描场景的初始扫描图像和所述扫描场景的点云数据。
电子设备扫描的目标对象,例如瓶子表面的标签、圆柱上的海报、弯曲放置的书表面等,可以理解地,上述瓶子表面或圆柱表面本身是弯曲的曲面。
在实现时,电子设备可以通过TOF传感器对扫描场景进行数据采集,然后,通过处理器对TOF传感器输出的传感器数据进行初步滤波,并将滤波后的传感器数据变换至相机坐标系下的三维坐标,以得到所述点云数据。
在一些实施例中,初步滤波包括对TOF传感器输出的传感器数据的特性进行去噪。去噪方法,例如,基于位置阈值的点云去除,即,将传感器数据中位置大于阈值(例如7米)的点去除;或者,基于相互距离的点云去除,即,去除与周围的点云的平均距离大于其他点的周围平均值的点。
对于初始扫描图像,电子设备可以通过红绿蓝(Red Green Blue,RGB)传感器对扫描场景进行拍摄,从而得到所述初始扫描图像。
总而言之,电子设备至少包括TOF传感器和RGB传感器,其中,TOF传感器用于 获取扫描场景的点云数据,RGB传感器用于对扫描场景进行拍摄,从而获得初始扫描图像。
步骤102,对所述点云数据进行曲面检测,得到初始曲面数据。
在一些实施例中,电子设备可以采用近圆柱体作为拟合模型,通过圆柱检测来确定点云数据中的概略形状,以得到初始曲面数据。所谓近圆柱体,包括上下底面相等的圆柱体和上下底面不相等的圆台。
步骤103,对所述初始曲面数据表示的曲面进行优化,得到目标曲面数据。
初始曲面数据所表示的曲面通常为比较粗略的曲面,为了能够获得更为精细的曲面,在本申请实施例中,电子设备对初始曲面数据表示的曲面进行优化,例如在曲面为柱体侧面的情况下,对曲面进行网格划分,得到网格曲面,然后对网格曲面上每一网格的顶点进行径向优化,从而得到目标曲面数据。
步骤104,根据所述目标曲面数据中的三维坐标,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
可以理解地,大多数情况,电子设备扫描得到的初始扫描图像为不满足应用条件的图像。例如,图2所示,初始扫描图像,也就是矫正前的图像为图像20,其中的目标对象201是倾斜的。在对其进行矫正之后,可以得到目标对象201的正视扫描结果,即图2所示的目标扫描图像202。
在一些实施例中,电子设备可以通过如下实施例的步骤307和步骤308实现步骤104,以获得目标扫描图像。
在本申请实施例中,电子设备在获得扫描场景的点云数据和所述扫描场景的初始扫描图像之后,对点云数据进行曲面检测,并对检测得到的初始曲面数据进行优化,得到目标曲面数据;如此,一方面,使得电子设备能够根据目标曲面数据中的三维坐标,对初始扫描图像中的像素坐标进行矫正,从而得到更为准确、精细的扫描结果,即目标扫描图像;另一方面,对点云数据进行曲面检测,相比于平面检测,能够适用更多的扫描场景,例如,能够扫描圆柱体和近圆柱体的表面。
本申请实施例再提供一种图像扫描方法,所述方法至少包括以下步骤201至步骤205:
步骤201,获取扫描场景的点云数据和所述扫描场景的初始扫描图像;
步骤202,对所述点云数据进行柱体形状检测,得到目标对象的多个特征参数值。
在一些实施例中,所述目标对象可以为柱体。例如,图3所示,柱体30的特征参 数值包括:柱体底面的中心位置p 0、轴方向n 0、半径r、切向范围(θ 12)和轴向范围(h);其中,θ 1为切向线31与参考线之间的夹角,θ 2为切向线32与参考线之间的夹角;轴向范围h,即为柱体的高度。在一些实施例中,电子设备可以采用如随机抽样一致(RANdom SAmple Consensus,RANSAC)圆柱检测算法,获得这些参数值。
步骤203,根据所述多个特征参数值,从所述点云数据中确定所述初始曲面数据;
步骤204,对所述初始曲面数据表示的曲面进行优化,得到目标曲面数据;
步骤205,根据所述目标曲面数据中的三维坐标,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
在本申请实施例中,电子设备对点云数据进行柱体形状检测,得到柱体的多个特征参数值;并根据所述多个特征参数值,从所述点云数据中确定所述初始曲面数据;如此,相比于复杂的拟合模型,采用近圆柱体作为拟合模型,可以在损失较小精度的同时,大幅度简化算法,降低运算量,降低算法开销成本;而相比于平面检测方法,则能够覆盖更多的目标物体,适应更多的用户场景。
本申请实施例再提供一种图像扫描方法,所述方法至少包括以下步骤301至步骤308:
步骤301,获取扫描场景的点云数据和所述扫描场景的初始扫描图像;
步骤302,对所述点云数据进行柱体形状检测,得到目标对象的多个特征参数值;其中,所述目标对象的形状为柱体,所述目标对象的多个特征参数值,如图3所示,包括柱体底面的中心位置p 0、轴方向n 0、半径r、切向范围(θ 12)和轴向范围(h)。
步骤303,根据所述多个特征参数值,从所述点云数据中确定初始曲面数据。
以图2所示的图像20为例,初始曲面数据即为图像20中目标对象201的点云数据。
步骤304,按照所述轴向、所述轴向范围、所述切向、所述切向范围和特定网格划分间隔,对所述初始曲面数据表示的曲面进行网格划分,得到N个网格,N为大于0的整数。
特定网格划分间隔包括轴向上的网格划分间隔和切向上的网格划分间隔,这两个方向上的网格划分间隔可以相同,也可以不同。网格划分的结果,例如图4所示的网格曲面401,网格曲面401具有N个网格402。可以理解地,网格划分间隔的大小,在一定程度上决定了网格曲面的网格密度,也就是N的值。网格划分间隔约大,得到的网格越少,优化结果越粗略;网格划分间隔约小,得到的网格越多,优化结果越精细,但是算 法复杂度也会越高。
步骤305,根据所述初始曲面数据中采样点的三维坐标,对每一网格的每一顶点进行最优解搜索,得到对应顶点的最优三维坐标。
在一些实施例中,对每一顶点沿其所在的径向进行最优解的搜索,以得到对应顶点的最优三维坐标。在本申请实施例中,提供三种最优解的搜索方式。例如,采用如下实施例中的步骤405至步骤407,在所述顶点的径向向量所在的搜索空间中,搜索最优解;再如,采用如下实施例中的步骤505至步骤508,即,采用基于PSR的径向优化方式获得最优解;又如,采用如下实施例中的步骤605至步骤608,即,采用基于至少双摄数据融合的径向优化方式获得最优解。
步骤306,将每一所述顶点的最优三维坐标,确定为所述目标曲面数据。
可以理解地,通过步骤306得到的是优化后的网格曲面,该曲面由每一网格的每一顶点的坐标为最优三维坐标。也就是说,目标曲面数据包括每一网格的每一顶点的最优三维坐标。
步骤307,根据所述目标曲面数据中每一顶点的最优三维坐标与参考面数据中对应顶点的三维坐标,确定所述目标曲面数据与所述参考面数据之间的变换关系。
实际上,电子设备通过步骤307,确定的是优化后的网格曲面与参考面之间的位置变换关系。参考面数据可以预先配置。例如,参考面数据表示的是一个正视的平面,如图5所示的参考面50,其中,参考面数据包括参考面50中每一网格的每一顶点的三维坐标。
在本申请实施例中,所述变换关系可以通过变换矩阵、变换矩阵群、或者自由映射关系来表征。
步骤308,根据所述变换关系,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
在本申请实施例中,对初始曲面数据表示的曲面进行网格划分,仅对网格的顶点进行最优解搜索,这样可以大大降低曲面优化的计算量。
本申请实施例再提供一种图像扫描方法,所述方法至少包括以下步骤401至步骤410:
步骤401,获取扫描场景的点云数据和所述扫描场景的初始扫描图像;
步骤402,对所述点云数据进行柱体形状检测,得到目标对象的多个特征参数值;其中,所述目标对象的形状可以为柱体,所述目标对象的多个特征参数值,包括柱体底面的中心位置p_0、轴方向n_0、半径r、切向范围(θ_1, θ_2)和轴向范围(h)。
步骤403,根据所述多个特征参数值,从所述点云数据中确定所述初始曲面数据;
步骤404,按照所述轴向、所述轴向范围、所述切向、所述切向范围和特定网格划分间隔,对所述曲面进行网格划分,得到N个网格,N为大于0的整数;
步骤405,根据第i个网格的第j个顶点所在曲面的位置、所述柱体的半径和所述柱体底面的中心位置,确定所述第j个顶点所在的径向向量;其中,i为大于0且小于或等于N的整数,j为大于0且小于或等于4的整数;
步骤406,根据所述径向向量,从所述初始曲面数据中确定搜索空间。
在一些实施例中,所述搜索空间可以采用公式(1)所示的立方体包围盒:
L<[R T]p<U      (1);
式(1)中,L和U分别为包围盒坐标系下的下限边界点和上限边界点,p为初始曲面数据中点的三维坐标,[R T]为世界坐标至包围盒坐标的增广变换矩阵。
电子设备在实现步骤406时,可以以所述第j个顶点为所述立方体包围盒的中心点,根据所述轴向的长度、切向上的网格划分间隔和所述径向向量的大小,从所述初始曲面数据中确定所述立方体包围盒。
例如,图6所示,包围盒60的中心点为所述第j个顶点,包围盒的长度为柱体的半径r,轴向长度(即高度)为r/2,宽度为切向上的网格划分间隔。
步骤407,根据所述搜索空间中的采样点的三维坐标,确定所述第j个顶点的最优三维坐标。
在一些实施例中,电子设备可以根据所述搜索空间中的采样点的三维坐标,确定所述搜索空间的重心的三维坐标;将所述重心的三维坐标投影至所述径向向量上,得到所述第j个顶点的最优三维坐标。
例如,根据如下公式(2)计算搜索空间{p_i | p_i ∈ BOX}的重心p_g的三维坐标:

p_g = (1/K) · Σ_i p_i      (2);

式(2)中,p_i为搜索空间中的点的三维坐标,K为常数(即搜索空间内点的数目)。

然后,根据如下公式(3),将计算所得的重心p_g的三维坐标投影至所述第j个顶点所在的径向向量(其单位向量记为r̂)上,从而得到所述第j个顶点的最优三维坐标p*:

p* = p_0 + ((p_g − p_0) · r̂) · r̂      (3);

式中,p_0表示优化前所述第j个顶点的三维坐标。
需要说明的是,每一顶点的最优三维坐标的确定,均可通过上述步骤405至步骤407实现。
步骤408,将每一所述顶点的最优三维坐标,确定为所述目标曲面数据;
步骤409,根据所述目标曲面数据中每一顶点的三维坐标与参考面数据中对应顶点的三维坐标,确定所述目标曲面数据与所述参考面数据之间的变换关系;
步骤410,根据所述变换关系,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
本申请实施例再提供一种图像扫描方法,所述方法至少包括以下步骤501至步骤511:
步骤501,获取扫描场景的点云数据和所述扫描场景的初始扫描图像;
步骤502,对所述点云数据进行柱体形状检测,得到目标对象的多个特征参数值;其中,所述目标对象的形状可以为柱体,所述目标对象的多个特征参数值,包括柱体底面的中心位置p 0、轴方向n 0、半径r、切向范围(θ 12)和轴向范围(h)。
步骤503,根据所述多个特征参数值,从所述点云数据中确定所述初始曲面数据;
步骤504,按照所述轴向、所述轴向范围、所述切向、所述切向范围和特定网格划分间隔,对所述曲面进行网格划分,得到N个网格,N为大于0的整数;
步骤505,根据所述初始曲面数据中采样点的三维坐标,进行泊松表面重建,得到等值表面。
在一些实施例中,电子设备可以采用PSR算法,得到所述等值表面。基于对现实世界的物体表面为连续这一假设对物体进行表面估计,PSR算法可以一定程度上消除点云测量误差对结果的影响,复原真实物体的表面。
步骤506,根据第i个网格的第j个顶点所在的曲面的位置、所述柱体的半径和所述柱体底面的中心位置,确定所述第j个顶点所在的径向向量;其中,i为大于0且小于或等于N的整数,j为大于0且小于或等于4的整数;
步骤507,确定所述径向向量与所述等值表面的相交点;
步骤508,将所述相交点在所述等值表面上的三维坐标,确定为所述第j个顶点的最优三维坐标。
如图7所示,在径向优化前,电子设备先采用PSR算法对初始曲面数据进行处理,从而生成相应的网格(Mesh),然后检测生成的Mesh中与第j个顶点所在的径向线的相 交点,将相交点对应的Mesh的三维坐标作为优化结果,即第j个顶点的最优三维坐标。
PSR算法具有能从噪音很多的点云当中恢复出符合现实世界物体表面的特点,PSR是利用了现实的物体表面是光滑连续这一特点而推导出来的算法,因此符合对于现实物体扫描特点,复原出来的表面更接近真实值。因此在这里利用PSR算法可以很好的去除噪音或者误检测的影响,从而可以减少点云当中偏离值的影响。
需要说明的是,PSR算法的输入为初始曲面数据,输出即为Mesh。PSR算法的实现步骤可以包括以下步骤S1至步骤S4:S1,对初始曲面数据进行点云法线估计;S2,对初始曲面数据进行空间网格划分,可以采用如八分树法;S3,寻找最优曲面以符合曲面连续性以及估计的法线;S4,输出优化出的最优曲面,形成Mesh网格。
还需要说明的是,每一顶点的最优三维坐标的确定,均是通过上述步骤506至步骤508实现的。
步骤509,将每一所述顶点的最优三维坐标,确定为所述目标曲面数据;
步骤510,根据所述目标曲面数据中每一顶点的三维坐标与参考面数据中对应顶点的三维坐标,确定所述目标曲面数据与所述参考面数据之间的变换关系;
步骤511,根据所述变换关系,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
本申请实施例再提供一种图像扫描方法,所述方法至少包括以下步骤601至步骤611:
步骤601,获取扫描场景的点云数据和所述扫描场景的初始扫描图像;
步骤602,对所述点云数据进行柱体形状检测,得到目标对象的多个特征参数值;其中,所述目标对象的形状为柱体,所述目标对象的多个特征参数值包括:柱体底面的中心位置p 0、轴方向n 0、半径r、切向范围(θ 12)和轴向范围(h)。
步骤603,根据所述多个特征参数值,从所述点云数据中确定所述初始曲面数据;
步骤604,按照轴向、轴向范围、切向、切向范围和特定网格划分间隔,对所述曲面进行网格划分,得到N个网格,N为大于0的整数;
步骤605,将第i个网格的第j个顶点所在径向向量上的第k个采样点的三维坐标,分别反投影至每一相机的成像平面上,得到对应的像素坐标;其中,i为大于0且小于或等于N的整数,j为大于0且小于或等于4的整数,k为大于0的整数;
步骤606,根据特定采样窗口,确定每一所述像素坐标在对应相机采集的图像上的区域块;
步骤607,确定每一所述区域块之间的相关程度;
步骤608,在所述相关程度不满足特定条件的情况下,将所述径向向量的下一个采样点的三维坐标分别反投影至每一相机的成像平面上,直至确定的相关程度满足所述特定条件为止,将对应的采样点的三维坐标确定为所述第j个顶点的最优三维坐标。
步骤605至步骤608,提供了另外一种优化方式,即基于至少双摄数据融合的径向优化方式,如此可以进一步提高鲁棒性。
在一些实施例中,可以以所述第j个顶点为起始点,进行代价函数最优化,从而获得所述第j个顶点的最优三维坐标。以双摄数据融合的径向优化方式为例,如图8所示,以所述第j个顶点优化前的三维坐标p j为起始点,采用梯度下降法或者LM等算法,对公式(4)所示的代价函数进行优化:
p* = argmin_p ( R(p − p_j) + α·C( W(π_0(p)), W(π_1(p)) ) )      (4);
式中,p*为最优解,即第j个顶点的最优三维坐标;p为初始曲面数据中搜寻点的位置,即所述第k个采样点的三维坐标;R为正则函数,例如采用L2正则函数;π_0和π_1为双摄的空间坐标反投影到像素坐标的反投影函数;W为采样窗口函数,例如对反投影点近邻3×3或7×7正方形像素的采样;C为对双摄图像内投影点近邻区域块的互相关性函数,互相关性函数可以采用NCC或者ZNCC等函数;α为比例系数,用于调整对TOF数据的依赖性。
需要说明的是,对于三摄或者三摄以上的情况,如公式(5)所示,代价函数的形式可以保持一致,有区别的是互相关函数的计算:
p* = argmin_p ( R(p − p_j) + α·C( W(π_0(p)), W(π_1(p)), W(π_2(p)), …) )    (5);
也就是,在式子(4)当中增加第三、第四相机等的投影项,计算互相关的函数C的输入变为多个,互相关函数的计算方法可以采用两两相机相关的结果的总和、或者平均值。
步骤609,将每一所述顶点的最优三维坐标,确定为所述目标曲面数据;
步骤610,根据所述目标曲面数据中每一顶点的三维坐标与参考面数据中对应顶点的三维坐标,确定所述目标曲面数据与所述参考面数据之间的变换关系;
步骤611,根据所述变换关系,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
下面将说明本申请实施例在一个实际的应用场景中的示例性应用。
本申请实施例在基于TOF数据的形状检测算法中,针对具有曲面的目标物体的情形,采用近圆柱体作为拟合模型,先通过圆柱检测来确定点云数据中的概略形状,再对圆柱径向进行最优化处理,从而得到更为精细的曲面结果。
相比于复杂的模型拟合,可以在损失较小精度的同时,大幅度简化算法,降低运算量,降低算法开销成本;而相比于平面检测方法,则能够覆盖更多的目标物体,适应更多的用户场景。
在一些实施例中,文档扫描***包括TOF传感器、RGB传感器和变换参数生成模块和图像变换模块,变换参数生成模块和图像变换模块的功能可以由处理器完成;其中,
变换参数生成模块,用于对TOF传感器输出的传感器数据进行计算,生成图像变换模块所需要的变换参数,所述变换参数可以为以下之一:变换矩阵、变换矩阵群、自由映射关系。
图像变换模块,用于按照变换参数生成模块生成的变换参数,对从RGB传感器获得的RGB数据(即所述初始扫描图像的一种示例)进行变换,从而获得经过变换的正视扫描结果,即所述目标扫描图像。
在一些实施例中,变换参数生成模块包括圆柱体检测单元、网格划分单元、圆柱径向优化单元、自由曲面生成单元和变换参数生成单元;其中,
圆柱体检测单元可以采用RANSAC圆柱检测算法;
网格划分单元,可以用于对圆柱表面的目标区域按照切向和轴向进行等间隔规则的网格划分;
圆柱径向优化单元,可以用于对划分出来的网格的每一顶点在圆柱的径向上进行最优解的搜索;
其中,最优解的搜索,可以采用以下方法之一:
(1)在径向向上寻找切向范围内点云聚集最多的点作为最优解的方法;
(2)在径向上计算径向线与整体点云的表面再构成的网格所相交点作为最优解的方法;
(3)利用双摄数据融合的方式进行反投影代价函数优化的计算。
自由曲面生成单元,用于将所有径向上的最优解作为顶点,按照网格划分的拓扑结构进行拓扑连接,形成自由曲面网格。
变换参数生成单元,用于根据自由曲面网格和矫正目标参数来生成变换参数。根据自由曲面网格的空间位置和设定的矫正结果的空间位置(即所述参考面数据中顶点的三 维坐标),生成变换矩阵或变换矩阵群或自由映射关系等变换参数。
本申请实施例再提供一种图像扫描方法,图9是本申请实施例图像扫描方法的实现流程示意图,如图9所示,至少可以包括以下步骤901至步骤907:
步骤901,对TOF输出的传感器数据进行初步滤波,然后将初步滤波后的传感器数据变换至相机坐标系下的三维坐标,得到三维点云数据;
在一些实施例中,初步滤波包括对TOF输出的传感器数据的特性进行去噪,去噪方法,例如,基于位置阈值的点云去除,或者基于相互距离的点云去除;
步骤902,对三维点云数据进行圆柱体检测,得到圆柱拟合的参数,即柱体的多个特征参数值,其中,所述参数包括:柱体底面的中心位置p 0、轴方向n 0、半径r、切向范围(θ 12)和轴向范围(h)。在一些实施例中,可以采用如RANSAC圆柱检测算法,获得这些参数。
步骤903,对圆柱表面的目标区域按照切向和轴向进行等间隔规则网格划分;
步骤904,对划分出来的网格的每一顶点沿圆柱径向进行最优解的搜索;
步骤905,按照原网格拓扑,更新坐标位置为径向优化所得的最优位置,形成优化后的***格曲面;
步骤906,根据坐标对应的关系,生成相应的变换参数;
变换参数例如,在变换较为均匀的情况下可以采用单一的单应性矩阵(Homography);在变换较为复杂的情况下,可以直接建立坐标对关系,生成插值函数为后续的图像变换所使用。
步骤907,用生成的变换参数对输入图像(即初始扫描图像)进行变换,得到矫正后的正视图结果,即目标扫描图像。
基于前述的实施例,本申请实施例提供一种图像扫描装置,该装置包括所包括的各模块,可以通过计算机设备中的处理器来实现;当然也可通过具体的逻辑电路实现;在实施的过程中,处理器可以为中央处理器(CPU)、微处理器(MPU)、数字信号处理器(DSP)或现场可编程门阵列(FPGA)等。
图10为本申请实施例图像扫描装置的组成结构示意图,如图10所示,所述装置100包括数据获取模块101、曲面检测模块102、曲面优化模块103和图像矫正模块104,其中:
数据获取模块101,用于获取扫描场景的点云数据和所述扫描场景的初始扫描图像;
曲面检测模块102,用于对所述点云数据进行曲面检测,得到初始曲面数据;
曲面优化模块103,用于对所述初始曲面数据表示的曲面进行优化,得到目标曲面数据;
图像矫正模块104,用于根据所述目标曲面数据中的三维坐标,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
在一些实施例中,曲面检测模块102,用于:对所述点云数据进行柱体形状检测,得到目标对象的多个特征参数值;根据所述多个特征参数值,从所述点云数据中确定所述初始曲面数据。
在一些实施例中,所述目标对象的形状为柱体,所述多个特征参数值包括所述柱体的轴向、轴向范围、切向和切向范围;曲面优化模块103,用于:按照所述轴向、所述轴向范围、所述切向、所述切向范围和特定网格划分间隔,对所述初始曲面数据表示的曲面进行网格划分,得到N个网格,N为大于0的整数;根据所述初始曲面数据中采样点的三维坐标,对每一网格的每一顶点进行最优解搜索,得到对应顶点的最优三维坐标;将每一所述顶点的最优三维坐标,确定为所述目标曲面数据。
在一些实施例中,所述多个特征参数值还包括所述柱体的半径和柱体底面的中心位置;曲面优化模块103,用于:根据第i个网格的第j个顶点所在曲面的位置、所述柱体的半径和所述柱体底面的中心位置,确定所述第j个顶点所在的径向向量;其中,i为大于0且小于或等于N的整数,j为大于0且小于或等于4的整数;根据所述径向向量,从所述初始曲面数据中确定搜索空间;根据所述搜索空间中的采样点的三维坐标,确定所述第j个顶点的最优三维坐标。
在一些实施例中,所述搜索空间采用立方体包围盒,曲面优化模块103,用于:
以所述第j个顶点为所述立方体包围盒的中心点,根据所述轴向的长度、切向上的网格划分间隔和所述径向向量的大小,从所述初始曲面数据中确定所述立方体包围盒。
在一些实施例中,曲面优化模块103,用于:根据所述搜索空间中的采样点的三维坐标,确定所述搜索空间的重心的三维坐标;将所述重心的三维坐标投影至所述径向向量上,得到所述第j个顶点的最优三维坐标。
在一些实施例中,曲面优化模块103,用于:根据所述初始曲面数据中采样点的三维坐标,进行泊松表面重建,得到等值表面;根据第i个网格的第j个顶点所在的曲面的位置、所述柱体的半径和所述柱体底面的中心位置,确定所述第j个顶点所在的径向向量;其中,i为大于0且小于或等于N的整数,j为大于0且小于或等于4的整数;确定所述径向向量与所述等值表面的相交点;将所述相交点在所述等值表面 上的三维坐标,确定为所述第j个顶点的最优三维坐标。
在一些实施例中,曲面优化模块103,用于:将第i个网格的第j个顶点所在径向向量上的第k个采样点的三维坐标,分别反投影至每一相机的成像平面上,得到对应的像素坐标;其中,i为大于0且小于或等于N的整数,j为大于0且小于或等于4的整数;根据特定采样窗口,确定每一所述像素坐标在对应相机采集的图像上的区域块;确定每一所述区域块之间的相关程度;在所述相关程度不满足特定条件的情况下,将所述径向向量的下一个采样点的三维坐标分别反投影至每一相机的成像平面上,直至确定的相关程度满足所述特定条件为止,将对应的采样点的三维坐标确定为所述第j个顶点的最优三维坐标。
在一些实施例中,图像矫正模块104,用于:根据所述目标曲面数据中每一所述顶点的最优三维坐标与参考面数据中对应顶点的三维坐标,确定所述目标曲面数据与所述参考面数据之间的变换关系;根据所述变换关系,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
以上装置实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请装置实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
需要说明的是,本申请实施例中,如果以软件功能模块的形式实现上述的图像扫描方法,并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得电子设备(可以是手机、平板电脑、笔记本电脑、台式计算机、机器人、无人机等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。这样,本申请实施例不限制于任何特定的硬件和软件结合。
对应地,本申请实施例提供一种电子设备,图11为本申请实施例电子设备的一种硬件实体示意图,如图11所示,该电子设备110的硬件实体包括:包括存储器111和处理器112,所述存储器111存储有可在处理器112上运行的计算机程序,所述处理器112执行所述程序时实现上述实施例中提供的图像扫描方法中的步骤。
存储器111配置为存储由处理器112可执行的指令和应用,还可以缓存待处理器112以及电子设备110中各模块待处理或已经处理的数据(例如,图像数据、音频数据、语 音通信数据和视频通信数据),可以通过闪存(FLASH)或随机访问存储器(Random Access Memory,RAM)实现。
对应地,本申请实施例提供一种计算机可读存储介质,其上存储有计算机程序,该计算机程序被处理器执行时实现上述实施例中提供的图像扫描方法中的步骤。
这里需要指出的是:以上存储介质和设备实施例的描述,与上述方法实施例的描述是类似的,具有同方法实施例相似的有益效果。对于本申请存储介质和设备实施例中未披露的技术细节,请参照本申请方法实施例的描述而理解。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述模块的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个模块或组件可以结合,或可以集成到另一个***,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或模块的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的模块可以是、或也可以不是物理上分开的,作为模块显示的部件可以是、或也可以不是物理模块;既可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部模块来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能模块可以全部集成在一个处理单元中,也可以是各模块分别单独作为一个单元,也可以两个或两个以上模块集成在一个单元中;上述 集成的模块既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(Read Only Memory,ROM)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本申请上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得电子设备(可以是手机、平板电脑、笔记本电脑、台式计算机、机器人、无人机等)执行本申请各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、磁碟或者光盘等各种可以存储程序代码的介质。
本申请所提供的几个方法实施例中所揭露的方法,在不冲突的情况下可以任意组合,得到新的方法实施例。
本申请所提供的几个产品实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的产品实施例。
本申请所提供的几个方法或设备实施例中所揭露的特征,在不冲突的情况下可以任意组合,得到新的方法实施例或设备实施例。
以上所述,仅为本申请的实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。
工业实用性
在本申请实施例中,电子设备在获得扫描场景的点云数据和所述扫描场景的初始扫描图像之后,对点云数据进行曲面检测,并对检测得到的初始曲面数据进行优化,得到目标曲面数据;如此,一方面,使得电子设备能够根据目标曲面数据中的三维坐标,对初始扫描图像中的像素坐标进行矫正,从而得到更为准确、精细的扫描结果(即目标扫描图像);另一方面,对点云数据进行曲面检测,相比于平面检测,能够适用更多的扫描场景,例如,能够扫描圆柱体和近圆柱体的表面。

Claims (12)

  1. 一种图像扫描方法,所述方法包括:
    获取扫描场景的初始扫描图像和所述扫描场景的点云数据;
    对所述点云数据进行曲面检测,得到初始曲面数据;
    对所述初始曲面数据表示的曲面进行优化,得到目标曲面数据;
    根据所述目标曲面数据中的三维坐标,对所述初始扫描图像中像素点的像素坐标进行矫正,得到目标扫描图像。
  2. The method according to claim 1, wherein performing curved surface detection on the point cloud data to obtain the initial curved surface data comprises:
    performing cylinder shape detection on the point cloud data to obtain a plurality of feature parameter values of a target object;
    determining the initial curved surface data from the point cloud data according to the plurality of feature parameter values.
  3. The method according to claim 2, wherein the shape of the target object is a cylinder, the plurality of feature parameter values comprise an axial direction, an axial range, a tangential direction, and a tangential range of the cylinder, and optimizing the curved surface represented by the initial curved surface data to obtain the target curved surface data comprises:
    performing grid division on the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range, and a specific grid division interval, to obtain N grid cells, N being an integer greater than 0;
    performing, according to three-dimensional coordinates of sampling points in the initial curved surface data, an optimal-solution search for each vertex of each grid cell, to obtain optimal three-dimensional coordinates of the corresponding vertex;
    determining the optimal three-dimensional coordinates of each of the vertices as the target curved surface data.
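The grid division described above can be sketched in a few lines of Python. Purely for illustration, the cylinder's axis is assumed to be the z-axis through the origin (the claim does not fix a coordinate frame), and the tangential range is expressed as an angle interval:

```python
import math

def cylinder_grid_vertices(radius, theta_range, z_range, d_theta, d_z):
    """Divide the cylindrical patch theta_range x z_range into grid
    cells of size d_theta x d_z and return the 3D vertex coordinates.
    Assumption of this sketch: axis = z-axis through the origin."""
    t0, t1 = theta_range
    z0, z1 = z_range
    n_t = int(round((t1 - t0) / d_theta))  # tangential subdivisions
    n_z = int(round((z1 - z0) / d_z))      # axial subdivisions
    vertices = []
    for i in range(n_t + 1):
        theta = t0 + i * d_theta
        for j in range(n_z + 1):
            z = z0 + j * d_z
            vertices.append((radius * math.cos(theta),
                             radius * math.sin(theta), z))
    return vertices  # (n_t+1)*(n_z+1) vertices bounding n_t*n_z cells
```

Each interior cell shares its four corner vertices with its neighbors, which is why the later claims search for an optimal position per vertex rather than per cell.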
  4. The method according to claim 3, wherein the plurality of feature parameter values further comprise a radius and a center position of the cylinder, and performing, according to the three-dimensional coordinates of the sampling points in the initial curved surface data, the optimal-solution search for each vertex of each grid cell to obtain the optimal three-dimensional coordinates of the corresponding vertex comprises:
    determining, according to the position on the curved surface of the j-th vertex of the i-th grid cell, the radius of the cylinder, and the center position of the bottom face of the cylinder, a radial vector on which the j-th vertex lies, wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
    determining a search space from the initial curved surface data according to the radial vector;
    determining the optimal three-dimensional coordinates of the j-th vertex according to three-dimensional coordinates of sampling points in the search space.
  5. The method according to claim 4, wherein the search space adopts a cubic bounding box, and determining the search space from the initial curved surface data according to the radial vector comprises:
    determining the cubic bounding box from the initial curved surface data by taking the j-th vertex as the center point of the cubic bounding box, according to the length in the axial direction, the grid division interval in the tangential direction, and the magnitude of the radial vector.
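In code, a cubic bounding box centered on a vertex reduces to an axis-aligned point filter. In the sketch below, the edge length that the claim derives from the axial length, the tangential interval, and the radial-vector magnitude is folded into a single illustrative `half_size` parameter (an assumption of this sketch):

```python
def points_in_cube(points, center, half_size):
    """Collect sampling points inside an axis-aligned cube of edge
    2 * half_size centered on a grid vertex; points outside any of the
    three coordinate slabs are rejected."""
    cx, cy, cz = center
    return [p for p in points
            if abs(p[0] - cx) <= half_size
            and abs(p[1] - cy) <= half_size
            and abs(p[2] - cz) <= half_size]
```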
  6. The method according to claim 4, wherein determining the optimal three-dimensional coordinates of the j-th vertex according to the three-dimensional coordinates of the sampling points in the search space comprises:
    determining three-dimensional coordinates of a barycenter of the search space according to the three-dimensional coordinates of the sampling points in the search space;
    projecting the three-dimensional coordinates of the barycenter onto the radial vector to obtain the optimal three-dimensional coordinates of the j-th vertex.
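The barycenter-projection step can be sketched directly: average the sampling points, then project the average onto the vertex's radial direction so the refined vertex stays on its radial line. The sketch assumes, for simplicity, that the radial vector passes through the origin:

```python
import math

def project_barycenter_on_radial(samples, radial_dir):
    """Refine a vertex position: take the barycenter of the sampling
    points in the search space and project it onto the radial direction
    (assumed here to pass through the origin)."""
    n = len(samples)
    g = tuple(sum(p[k] for p in samples) / n for k in range(3))  # barycenter
    norm = math.sqrt(sum(c * c for c in radial_dir))
    u = tuple(c / norm for c in radial_dir)          # unit radial direction
    t = sum(g[k] * u[k] for k in range(3))           # scalar projection
    return tuple(t * u[k] for k in range(3))         # point on the radial line
```

The projection discards the components of the noise that are orthogonal to the radial direction, which is what makes the search one-dimensional.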
  7. The method according to claim 3, wherein the plurality of feature parameter values further comprise a radius and a center position of the cylinder, and performing, according to the three-dimensional coordinates of the sampling points in the initial curved surface data, the optimal-solution search for each vertex of each grid cell to obtain the optimal three-dimensional coordinates of the corresponding vertex comprises:
    performing Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an iso-surface;
    determining, according to the position on the curved surface of the j-th vertex of the i-th grid cell, the radius of the cylinder, and the center position of the bottom face of the cylinder, a radial vector on which the j-th vertex lies, wherein i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4;
    determining an intersection point of the radial vector with the iso-surface;
    determining the three-dimensional coordinates of the intersection point on the iso-surface as the optimal three-dimensional coordinates of the j-th vertex.
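The ray/iso-surface intersection can be found numerically. The sketch below marches along the radial ray until the implicit field changes sign and then refines the crossing by bisection; an analytic sphere stands in for the Poisson-reconstructed iso-surface, which in practice would be queried from the reconstruction (an assumption of this sketch):

```python
import math

def ray_isosurface_intersection(origin, direction, f, t_max, steps=100):
    """Return the first point along the ray where the implicit field f
    crosses zero (f == 0 on the iso-surface), or None if no crossing is
    found within t_max."""
    norm = math.sqrt(sum(c * c for c in direction))
    u = [c / norm for c in direction]
    def point(t):
        return tuple(origin[k] + t * u[k] for k in range(3))
    prev_t, prev_v = 0.0, f(point(0.0))
    for s in range(1, steps + 1):
        t = t_max * s / steps
        v = f(point(t))
        if prev_v * v <= 0:              # sign change: surface crossed
            a, b, fa = prev_t, t, prev_v
            for _ in range(60):          # bisection refinement
                mid = 0.5 * (a + b)
                fm = f(point(mid))
                if fa * fm <= 0:
                    b = mid
                else:
                    a, fa = mid, fm
            return point(0.5 * (a + b))
        prev_t, prev_v = t, v
    return None
```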
  8. The method according to claim 3, wherein performing, according to the three-dimensional coordinates of the sampling points in the initial curved surface data, the optimal-solution search for each vertex of each grid cell to obtain the optimal three-dimensional coordinates of the corresponding vertex comprises:
    back-projecting three-dimensional coordinates of a k-th sampling point on the radial vector of the j-th vertex of the i-th grid cell onto an imaging plane of each camera respectively, to obtain corresponding pixel coordinates, wherein i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4, and k is an integer greater than 0;
    determining, according to a specific sampling window, a region block of each of the pixel coordinates on the image captured by the corresponding camera;
    determining a degree of correlation between the region blocks;
    in a case where the degree of correlation does not satisfy a specific condition, back-projecting three-dimensional coordinates of a next sampling point on the radial vector onto the imaging plane of each camera respectively, until a determined degree of correlation satisfies the specific condition, and determining the three-dimensional coordinates of the corresponding sampling point as the optimal three-dimensional coordinates of the j-th vertex.
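The claim does not name a specific correlation measure; normalized cross-correlation (NCC) is a common choice for comparing region blocks across cameras and is used here purely as an illustration. A value near 1.0 indicates that the sampling point projects onto consistent image content in both views:

```python
import math

def ncc(block_a, block_b):
    """Normalized cross-correlation between two equally sized region
    blocks (flattened grayscale patches); 1.0 means a perfect match,
    -1.0 a perfect inversion."""
    n = len(block_a)
    mean_a = sum(block_a) / n
    mean_b = sum(block_b) / n
    da = [a - mean_a for a in block_a]
    db = [b - mean_b for b in block_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0
```

Because NCC subtracts the mean and normalizes by the standard deviation, it is insensitive to brightness and contrast differences between the two cameras, which is why it suits this cross-view matching step.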
  9. The method according to any one of claims 3 to 8, wherein correcting, according to the three-dimensional coordinates in the target curved surface data, the pixel coordinates of the pixel points in the initial scanned image to obtain the target scanned image comprises:
    determining a transformation relationship between the target curved surface data and reference surface data according to the optimal three-dimensional coordinates of each of the vertices in the target curved surface data and three-dimensional coordinates of a corresponding vertex in the reference surface data;
    correcting the pixel coordinates of the pixel points in the initial scanned image according to the transformation relationship, to obtain the target scanned image.
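A transformation relationship of this kind is typically estimated from vertex correspondences by least squares. As a simplified illustration (the claim relates 3D surface data to reference data; a 2D affine fit stands in here, and all function names are inventions of this sketch), the code below solves the normal equations for an affine transform and applies it to a pixel coordinate:

```python
def fit_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src -> dst point pairs;
    returns the 2x3 matrix [[a, b, tx], [c, d, ty]]."""
    def solve3(m, v):
        # Gaussian elimination with partial pivoting on a 3x3 system.
        for col in range(3):
            piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
            m[col], m[piv] = m[piv], m[col]
            v[col], v[piv] = v[piv], v[col]
            for r in range(col + 1, 3):
                f = m[r][col] / m[col][col]
                for c in range(col, 3):
                    m[r][c] -= f * m[col][c]
                v[r] -= f * v[col]
        x = [0.0] * 3
        for r in (2, 1, 0):
            x[r] = (v[r] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
        return x

    rows = [(x, y, 1.0) for x, y in src]         # basis (x, y, 1) per point
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    result = []
    for k in (0, 1):  # solve separately for the x-row and the y-row
        atb = [sum(r[i] * d[k] for r, d in zip(rows, dst)) for i in range(3)]
        result.append(solve3([row[:] for row in ata], atb[:]))
    return result

def apply_affine(m, p):
    """Apply the fitted 2x3 transform to one pixel coordinate."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

With exact correspondences the fit reproduces the underlying transform exactly; with noisy vertex positions it returns the least-squares compromise, which is the usual behavior wanted when correcting every pixel of the scanned image.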
  10. An image scanning apparatus, comprising:
    a data acquisition module, configured to acquire point cloud data of a scanned scene and an initial scanned image of the scanned scene;
    a curved surface detection module, configured to perform curved surface detection on the point cloud data to obtain initial curved surface data;
    a curved surface optimization module, configured to optimize a curved surface represented by the initial curved surface data to obtain target curved surface data;
    an image correction module, configured to correct, according to three-dimensional coordinates in the target curved surface data, pixel coordinates of pixel points in the initial scanned image to obtain a target scanned image.
  11. An electronic device, comprising a memory and a processor, wherein the memory stores a computer program executable on the processor, and the processor, when executing the program, implements the steps in the image scanning method according to any one of claims 1 to 9.
  12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the image scanning method according to any one of claims 1 to 9.
PCT/CN2020/073038 2020-01-19 2020-01-19 Image scanning method and apparatus, device, and storage medium WO2021142843A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080093762.4A CN114981845A (zh) 2020-01-19 2020-01-19 Image scanning method and apparatus, device, and storage medium
PCT/CN2020/073038 WO2021142843A1 (zh) 2020-01-19 2020-01-19 Image scanning method and apparatus, device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073038 WO2021142843A1 (zh) 2020-01-19 2020-01-19 Image scanning method and apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021142843A1 true WO2021142843A1 (zh) 2021-07-22

Family

ID=76863441

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073038 WO2021142843A1 (zh) 2020-01-19 2020-01-19 Image scanning method and apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN114981845A (zh)
WO (1) WO2021142843A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116500379B (zh) * 2023-05-15 2024-03-08 Accurate localization method for voltage sags in an STS device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489218A (zh) * 2013-09-17 2014-01-01 Method and system for automatic optimization of point cloud data quality
CN107767442A (zh) * 2017-10-16 2018-03-06 Foot three-dimensional reconstruction and measurement method based on Kinect and binocular vision
US10089781B2 (en) * 2015-12-30 2018-10-02 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Three-dimensional point cloud model reconstruction method, computer readable storage medium and device
CN109344786A (zh) * 2018-10-11 2019-02-15 Target recognition method and apparatus, and computer-readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SPREITZER GABRIEL; TUNNICLIFFE JON; FRIEDRICH HEIDE: "Large wood (LW) 3D accumulation mapping and assessment using structure from Motion photogrammetry in the laboratory", JOURNAL OF HYDROLOGY, ELSEVIER, AMSTERDAM, NL, vol. 581, 5 December 2019 (2019-12-05), AMSTERDAM, NL, XP085987121, ISSN: 0022-1694, DOI: 10.1016/j.jhydrol.2019.124430 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115270378A (zh) * 2022-09-28 2022-11-01 Method for generating an outer flow-field mesh for a bow shock wave
CN115270378B (zh) * 2022-09-28 2022-12-30 Method for generating an outer flow-field mesh for a bow shock wave
CN115661104A (zh) * 2022-11-04 2023-01-31 Method, apparatus, device and medium for evaluating the overall integrity of a power battery
CN115661104B (zh) * 2022-11-04 2023-08-11 Method, apparatus, device and medium for evaluating the overall integrity of a power battery

Also Published As

Publication number Publication date
CN114981845A (zh) 2022-08-30

Similar Documents

Publication Publication Date Title
WO2020206903A1 (zh) Image matching method and apparatus, and computer-readable storage medium
CN111243093B (zh) Method, apparatus, device, and storage medium for generating a three-dimensional face mesh
WO2021142843A1 (zh) Image scanning method and apparatus, device, and storage medium
WO2020063139A1 (zh) Face modeling method and apparatus, electronic device, and computer-readable medium
WO2016065632A1 (zh) Image processing method and device
CN109801374B (zh) Method, medium, and system for reconstructing a three-dimensional model from multi-angle image sets
CN115205489A (zh) Three-dimensional reconstruction method, system, and apparatus for large scenes
CN110648397B (zh) Scene map generation method and apparatus, storage medium, and electronic device
CN108010123B (zh) Three-dimensional point cloud acquisition method preserving topology information
CN109472820B (zh) Real-time face reconstruction method and apparatus using a monocular RGB-D camera
CN111144349B (zh) Indoor visual relocalization method and system
WO2022021782A1 (zh) Automatic generation method, system, terminal, and storage medium for a six-dimensional pose dataset
CN111524168A (zh) Point cloud data registration method, system, apparatus, and computer storage medium
WO2021035627A1 (zh) Method and apparatus for obtaining a depth map, and computer storage medium
CN116129037B (zh) Visuo-tactile sensor and three-dimensional reconstruction method, system, device, and storage medium therefor
CN109003307B (zh) Fishing net mesh size design method based on underwater binocular vision measurement
CN113643414A (зh) Three-dimensional image generation method and apparatus, electronic device, and storage medium
TW202244680A (zh) Pose acquisition method, electronic device, and computer-readable storage medium
CN114202632A (zh) Mesh linear structure recovery method and apparatus, electronic device, and storage medium
CN113808269A (zh) Map generation method, positioning method, system, and computer-readable storage medium
JP6086491B2 (ja) Image processing apparatus and database construction apparatus therefor
CN111161138B (зh) Target detection method, apparatus, device, and medium for two-dimensional panoramic images
CN111091117B (zh) Target detection method, apparatus, device, and medium for two-dimensional panoramic images
CN117726747A (zh) Three-dimensional reconstruction method, apparatus, storage medium, and device for completing weakly textured scenes
CN115086625B (zh) Projection picture correction method, apparatus, and system, correction device, and projection device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20914043
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20914043
    Country of ref document: EP
    Kind code of ref document: A1