WO2021142843A1 - Image scanning method and device, apparatus and storage medium - Google Patents

Image scanning method and device, apparatus and storage medium


Publication number
WO2021142843A1
Authority
WO
WIPO (PCT)
Prior art keywords
curved surface
dimensional coordinates
initial
vertex
surface data
Prior art date
Application number
PCT/CN2020/073038
Other languages
English (en)
Chinese (zh)
Inventor
张洪伟
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to CN202080093762.4A priority Critical patent/CN114981845A/zh
Priority to PCT/CN2020/073038 priority patent/WO2021142843A1/fr
Publication of WO2021142843A1 publication Critical patent/WO2021142843A1/fr


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • The embodiments of the present application relate to image processing technologies, and in particular to image scanning methods, devices, equipment, and storage media.
  • Document scanning technology based on image information can be integrated in mobile terminals such as mobile phones, which is convenient to carry and easy to use.
  • Document scanning technology based on image information requires texture and boundary information to calculate the transformation matrix, so this technology cannot be applied to scanning borderless documents with little texture.
  • Time-of-flight (TOF) sensors are not affected by lighting changes or object textures, and can reduce costs on the premise of meeting accuracy requirements.
  • With TOF data, document scanning can be made independent of picture information, which greatly broadens the scope of application of document scanning.
  • The embodiments of the present application provide an image scanning method, device, equipment, and storage medium.
  • An embodiment of the present application provides an image scanning method, the method including: acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene; performing curved surface detection on the point cloud data to obtain initial curved surface data; optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data; and rectifying the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • An embodiment of the present application provides an image scanning device, including: a data acquisition module for acquiring point cloud data of a scanned scene and an initial scanned image of the scanned scene; a curved surface detection module for performing curved surface detection on the point cloud data to obtain initial curved surface data; a curved surface optimization module for optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data; and an image correction module for rectifying the pixel coordinates of the pixel points in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • An embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program that can be run on the processor, and when the processor executes the program, the steps in any of the image scanning methods of the embodiments of the present application are implemented.
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in any of the image scanning methods described in the embodiments of the present application are implemented.
  • In the embodiments of the present application, after obtaining the point cloud data of the scanned scene and the initial scanned image of the scanned scene, the electronic device performs curved surface detection on the point cloud data and optimizes the detected initial curved surface data to obtain the target curved surface data. On the one hand, the electronic device can correct the pixel coordinates in the initial scanned image according to the three-dimensional coordinates in the target surface data, so as to obtain a more accurate and detailed scanning result, that is, the target scanned image; on the other hand, performing surface detection on the point cloud data, compared with plane detection, can be applied to more scanning scenes, for example, scanning the surfaces of cylinders and near-cylinders.
  • FIG. 1 is a schematic diagram of an implementation process of an image scanning method according to an embodiment of the present application;
  • FIG. 2 is a schematic diagram of a comparison between an initial scanned image and a target scanned image in an embodiment of the present application;
  • FIG. 3 is a schematic diagram of cylinder features detected by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a grid division result according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the reference plane of an embodiment of the present application;
  • FIG. 6 is a schematic diagram of radial optimization based on a cube bounding box according to an embodiment of the present application;
  • FIG. 7 is a schematic diagram of radial optimization based on Poisson Surface Reconstruction (PSR) according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of radial optimization based on dual-camera data fusion according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of the implementation process of another image scanning method according to an embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of an image scanning device according to an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a hardware entity of an electronic device according to an embodiment of the present application.
  • "First", "second", and "third" referred to in the embodiments of this application only distinguish similar objects and do not represent a specific order for the objects. Understandably, where permitted, the specific order or sequence of "first", "second", and "third" can be interchanged, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
  • the embodiment of the present application provides an image scanning method, which can be applied to electronic devices, which can be devices with information processing capabilities such as mobile phones, tablet computers, notebook computers, desktop computers, robots, and drones.
  • the functions implemented by the image scanning method can be implemented by a processor in the electronic device calling program code.
  • the program code can be stored in a computer storage medium. It can be seen that the electronic device at least includes a processor and a storage medium.
  • FIG. 1 is a schematic diagram of the implementation process of the image scanning method according to the embodiment of the application. As shown in FIG. 1, the method at least includes the following steps 101 to 104:
  • Step 101 Obtain an initial scanned image of a scanned scene and point cloud data of the scanned scene.
  • The target object scanned by the electronic device may be, for example, a label on the surface of a bottle, a poster on a column, or a curved book surface; it is understandable that the above-mentioned bottle surface or cylindrical surface is itself a curved surface.
  • The electronic device can collect data from the scanned scene through the TOF sensor, preliminarily filter the sensor data output by the TOF sensor through the processor, and transform the filtered sensor data into three-dimensional coordinates in the camera coordinate system to obtain the point cloud data.
  • The preliminary filtering includes denoising the sensor data output by the TOF sensor. Denoising methods include, for example, point cloud removal based on a position threshold, that is, removing points in the sensor data whose distance is greater than a threshold (for example, 7 meters); or point cloud removal based on mutual distance, that is, removing points whose average distance to surrounding points is greater than the corresponding average for the other points.
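Both removal strategies can be sketched as follows. This is an illustrative assumption, not code from the patent: the function name, the neighbour count `k`, and the `ratio` cutoff are made up for the example; only the 7-meter threshold comes from the text.

```python
import numpy as np

def prefilter_point_cloud(points, max_range=7.0, k=8, ratio=2.0):
    """Pre-filter a TOF point cloud (N x 3, camera coordinates).

    1. Position-threshold removal: drop points farther than `max_range`
       meters from the sensor origin.
    2. Mutual-distance removal: drop points whose mean distance to their
       k nearest neighbours exceeds `ratio` times the cloud-wide average.
    """
    # 1. position threshold (e.g. 7 m, as in the text)
    dist = np.linalg.norm(points, axis=1)
    points = points[dist <= max_range]

    # 2. mean distance to k nearest neighbours (brute force for clarity)
    diff = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diff, axis=2)            # pairwise distances
    d_knn = np.sort(d, axis=1)[:, 1:k + 1]      # skip the self-distance 0
    mean_knn = d_knn.mean(axis=1)
    keep = mean_knn <= ratio * mean_knn.mean()
    return points[keep]
```

A k-d tree would replace the brute-force distance matrix for large clouds; the O(N²) version is kept here only for readability.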
  • the electronic device may photograph the scanned scene through a Red Green Blue (RGB) sensor, so as to obtain the initial scanned image.
  • the electronic device at least includes a TOF sensor and an RGB sensor, where the TOF sensor is used to obtain point cloud data of the scanned scene, and the RGB sensor is used to photograph the scanned scene to obtain the initial scanned image.
  • Step 102 Perform surface detection on the point cloud data to obtain initial surface data.
  • The electronic device may use a near-cylinder as a fitting model and determine the rough shape in the point cloud data through cylinder detection, so as to obtain the initial curved surface data.
  • The so-called near-cylindrical body includes cylinders, whose upper and lower bottom surfaces are equal, and truncated cones, whose upper and lower bottom surfaces are not equal.
  • Step 103 Optimize the curved surface represented by the initial curved surface data to obtain target curved surface data.
  • the curved surface represented by the initial curved surface data is usually a relatively rough curved surface.
  • The electronic device optimizes the curved surface represented by the initial curved surface data. For example, when the curved surface is the side of a cylinder, the surface is meshed to obtain a mesh surface, and the vertices of each mesh on the mesh surface are then radially optimized to obtain the target surface data.
  • Step 104 Correct the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • the initial scanned image scanned by the electronic device is an image that does not meet the application conditions.
  • The initial scanned image, that is, the image before correction, may be, for example, the image 20 shown in FIG. 2, in which the target object 201 is inclined.
  • After correction, the front-view scan result of the target object 201 can be obtained, that is, the target scanned image 202 shown in FIG. 2.
  • the electronic device may implement step 104 through step 307 and step 308 in the following embodiments to obtain the target scanned image.
  • In the embodiments of the present application, after obtaining the point cloud data of the scanned scene and the initial scanned image of the scanned scene, the electronic device performs curved surface detection on the point cloud data and optimizes the detected initial curved surface data to obtain the target curved surface data. On the one hand, the electronic device can correct the pixel coordinates in the initial scanned image according to the three-dimensional coordinates in the target surface data, so as to obtain a more accurate and detailed scanning result, that is, the target scanned image; on the other hand, performing surface detection on the point cloud data, compared with plane detection, can be applied to more scanning scenes, for example, scanning the surfaces of cylinders and near-cylinders.
  • the embodiment of the present application further provides an image scanning method, the method at least includes the following steps 201 to 205:
  • Step 201 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 202 Perform cylindrical shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object.
  • the target object may be a cylinder.
  • The characteristic parameter values of the cylinder 30 include: the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h; where θ1 is the angle between the tangent line 31 and the reference line, θ2 is the angle between the tangent line 32 and the reference line, and the axial range h is the height of the cylinder.
  • The electronic device may use, for example, a RANdom SAmple Consensus (RANSAC) cylinder detection algorithm to obtain these parameter values.
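A simplified sketch of such a RANSAC-style fit follows. It rests on an assumption made only for illustration: the axis direction is taken as known, which reduces the problem to a RANSAC circle fit in the plane perpendicular to the axis. A full RANSAC cylinder estimator (as in PCL's cylinder model) would also sample the axis from surface normals.

```python
import numpy as np

def ransac_circle(points_2d, iters=200, tol=0.01, rng=None):
    """Toy RANSAC circle fit: sample 3 points, solve their circumscribed
    circle, count inliers within `tol` of the circle, keep the best."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = (None, 0.0, -1)                     # (center, radius, inliers)
    n = len(points_2d)
    for _ in range(iters):
        a, b, c = points_2d[rng.choice(n, 3, replace=False)]
        # circumcenter from the perpendicular-bisector equations
        A = 2.0 * np.array([b - a, c - a])
        y = np.array([b @ b - a @ a, c @ c - a @ a])
        if abs(np.linalg.det(A)) < 1e-12:
            continue                           # collinear sample, skip
        center = np.linalg.solve(A, y)
        r = np.linalg.norm(a - center)
        d = np.abs(np.linalg.norm(points_2d - center, axis=1) - r)
        inliers = int((d < tol).sum())
        if inliers > best[2]:
            best = (center, r, inliers)
    return best

def fit_cylinder(points, axis):
    """Fit a cylinder whose axis direction is assumed known: project the
    points onto a plane perpendicular to the axis and fit a circle,
    giving the radius and the axis position in that plane."""
    axis = axis / np.linalg.norm(axis)
    u = np.cross(axis, [1.0, 0.0, 0.0])        # span the normal plane
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(axis, u)
    pts2d = np.stack([points @ u, points @ v], axis=1)
    center2d, r, _ = ransac_circle(pts2d)
    return center2d, r
```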
  • Step 203 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • Step 204 optimizing the curved surface represented by the initial curved surface data to obtain target curved surface data
  • Step 205 Correct the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • The electronic device performs cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the cylinder, and determines the initial curved surface data from the point cloud data according to the multiple characteristic parameter values. In this way, compared with a complex fitting model, using a near-cylinder as the fitting model can greatly simplify the algorithm and reduce the amount of calculation, lowering the cost of the algorithm while losing little precision; compared with plane detection, it can cover more target objects and adapt to more user scenarios.
  • the embodiment of the present application further provides an image scanning method.
  • the method at least includes the following steps 301 to 308:
  • Step 301 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 302 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; wherein the shape of the target object is a cylinder, and the multiple characteristic parameter values of the target object, as shown in FIG. 3, include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 303 Determine initial curved surface data from the point cloud data according to the multiple characteristic parameter values.
  • the initial curved surface data is the point cloud data of the target object 201 in the image 20.
  • Step 304 Mesh the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range, and the specific meshing interval, to obtain N meshes, where N is an integer greater than 0.
  • the specific grid division interval includes the grid division interval in the axial direction and the grid division interval in the tangential direction.
  • the grid division interval in these two directions may be the same or different.
  • The result of the mesh division is, for example, the mesh surface 401 shown in FIG. 4; the mesh surface 401 has N meshes 402. Understandably, the size of the mesh division interval determines, to a certain extent, the mesh density of the mesh surface, that is, the value of N. The larger the grid division interval, the fewer grids are obtained and the coarser the optimization result; the smaller the grid division interval, the more grids are obtained and the finer the optimization result, but the higher the algorithm complexity.
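The division described above can be sketched as follows (an illustrative assumption, not code from the patent; the parameter names p0, n0, r, theta_range, h follow the notation of FIG. 3, and the bottom-plane basis vectors are chosen arbitrarily):

```python
import numpy as np

def mesh_cylinder_surface(p0, n0, r, theta_range, h, d_theta, d_h):
    """Divide the side surface of a cylinder into meshes and return the
    mesh vertices as an (A, T, 3) array: A axial rows spaced by d_h,
    T tangential columns spaced by d_theta (radians)."""
    n0 = n0 / np.linalg.norm(n0)
    # two unit vectors spanning the plane of the bottom surface
    u = np.cross(n0, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(n0, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n0, u)
    thetas = np.arange(theta_range[0], theta_range[1] + 1e-9, d_theta)
    heights = np.arange(0.0, h + 1e-9, d_h)
    # points on the bottom circle for each tangential angle
    ring = r * (np.cos(thetas)[:, None] * u + np.sin(thetas)[:, None] * v)
    # offset each ring along the axis for each axial step
    return p0 + heights[:, None, None] * n0 + ring[None, :, :]
```

The two division intervals d_theta and d_h correspond to the tangential and axial grid division intervals of the text and may differ from each other.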
  • Step 305 Perform an optimal solution search for each vertex of each grid according to the three-dimensional coordinates of the sampling points in the initial surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex.
  • each vertex is searched for the optimal solution along its radial direction to obtain the optimal three-dimensional coordinates of the corresponding vertex.
  • Three optimal-solution search methods are provided. For example, steps 405 to 407 in the following embodiments search for the optimal solution in the search space where the radial vector of the vertex is located; steps 505 to 508 obtain the optimal solution using a radial optimization method based on PSR; and steps 605 to 608 obtain the optimal solution using a radial optimization method based on data fusion from at least two cameras.
  • Step 306 Determine the optimal three-dimensional coordinates of each vertex as the target curved surface data.
  • The result of step 306 is an optimized mesh surface, where the coordinates of each vertex of each mesh are the optimal three-dimensional coordinates; that is, the target surface data includes the optimal three-dimensional coordinates of each vertex of each mesh.
  • Step 307 Determine a transformation relationship between the target surface data and the reference surface data according to the optimal three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data.
  • the electronic device determines the position transformation relationship between the optimized mesh surface and the reference surface.
  • the reference surface data can be pre-configured.
  • the reference surface data represents a frontal plane, such as the reference surface 50 shown in FIG. 5, where the reference surface data includes the three-dimensional coordinates of each vertex of each grid in the reference surface 50.
  • the transformation relationship may be characterized by a transformation matrix, a transformation matrix group, or a free mapping relationship.
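For the transformation-matrix case, one common way to estimate a rigid transformation from corresponding vertices of the target surface and the reference surface is a least-squares (Kabsch/Procrustes) alignment. The sketch below is an assumption about how such a matrix could be computed; the patent does not prescribe a specific algorithm, and a free mapping or per-mesh matrix group would be handled differently.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    computed from corresponding vertices (N x 3 arrays) via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t
```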
  • Step 308 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain the target scanned image.
  • the surface represented by the initial surface data is meshed, and only the vertices of the mesh are searched for the optimal solution, which can greatly reduce the amount of calculation for surface optimization.
  • the embodiment of the present application further provides an image scanning method, the method at least includes the following steps 401 to 410:
  • Step 401 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene;
  • Step 402 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; wherein the shape of the target object may be a cylinder, and the multiple characteristic parameter values of the target object include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 403 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • step 404 the curved surface is meshed according to the axial direction, the axial range, the tangential direction, the tangential range, and the specific meshing interval to obtain N meshes, where N is greater than An integer of 0;
  • Step 405 Determine the radial vector where the j-th vertex is located according to the position of the curved surface where the j-th vertex of the i-th grid is located, the radius of the cylinder and the center position of the bottom surface of the cylinder; i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4;
  • Step 406 Determine a search space from the initial surface data according to the radial vector.
  • The search space may adopt the cube bounding box shown in formula (1):
  • L ≤ [R T]·(p, 1)ᵀ ≤ U (1)
  • where L and U are the lower boundary point and the upper boundary point in the bounding box coordinate system, p is the three-dimensional coordinate of a point in the initial surface data, and [R T] is the augmented transformation matrix from world coordinates to bounding box coordinates.
  • The j-th vertex may be taken as the center point of the cube bounding box, and the cube bounding box is determined from the initial curved surface data according to the length in the axial direction, the grid division interval in the tangential direction, and the size of the radial vector.
  • As shown in FIG. 6, the center point of the bounding box 60 is the j-th vertex, the length of the bounding box is the radius r of the cylinder, the height is the axial grid division interval, and the width is the tangential grid division interval.
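The membership test of formula (1) can be sketched as follows; this is a minimal, assumption-level example in which R, T, L, and U are supplied by the caller exactly as defined above:

```python
import numpy as np

def points_in_box(points, R, T, L, U):
    """Keep the points p satisfying L <= [R T](p, 1) <= U, i.e. the
    points that fall inside the cube bounding box after transforming
    from world coordinates to bounding-box coordinates."""
    q = points @ R.T + T                   # apply [R T] to every point
    inside = np.all((q >= L) & (q <= U), axis=1)
    return points[inside]
```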
  • Step 407 Determine the optimal three-dimensional coordinates of the j-th vertex according to the three-dimensional coordinates of the sampling points in the search space.
  • The electronic device may determine the three-dimensional coordinates of the center of gravity of the search space according to the three-dimensional coordinates of the sampling points in the search space, and project the three-dimensional coordinates of the center of gravity onto the radial vector to obtain the optimal three-dimensional coordinate of the j-th vertex.
  • For the search space, the center of gravity p_g = (1/K)·Σ p_i is calculated, where p_i is the three-dimensional coordinate of a point in the search space and K is a constant.
  • The calculated three-dimensional coordinates of the center of gravity p_g are projected onto the radial vector where the j-th vertex is located, so as to obtain the optimal three-dimensional coordinate p* of the j-th vertex; here p_0 represents the three-dimensional coordinates of the j-th vertex before optimization.
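The centroid-and-projection step described above can be put into a short numeric sketch. The orthogonal projection used here is an assumption, since the text does not spell out the projection formula:

```python
import numpy as np

def optimize_vertex(search_points, p_vertex, radial_dir):
    """Radial optimization of one mesh vertex: average the K sampling
    points of the search space into a center of gravity p_g, then
    project p_g onto the radial line through the vertex to get p*."""
    p_g = search_points.mean(axis=0)               # p_g = (1/K) * sum(p_i)
    d = radial_dir / np.linalg.norm(radial_dir)
    return p_vertex + ((p_g - p_vertex) @ d) * d   # orthogonal projection
```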
  • Step 408 Determine the optimal three-dimensional coordinates of each vertex as the target curved surface data
  • Step 409 Determine a transformation relationship between the target surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
  • Step 410 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain a target scanned image.
  • the embodiment of the present application further provides an image scanning method.
  • the method at least includes the following steps 501 to 511:
  • Step 501 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 502 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; wherein the shape of the target object may be a cylinder, and the multiple characteristic parameter values of the target object include the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 503 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • Step 504 mesh the curved surface according to the axial direction, the axial range, the tangential direction, the tangential range, and the specific meshing interval to obtain N meshes, where N is greater than An integer of 0;
  • Step 505 Perform Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an equivalent surface.
  • The electronic device may use the PSR algorithm to obtain the isosurface; based on the assumption that real-world object surfaces are continuous, the surface of the object is estimated.
  • the PSR algorithm can eliminate the influence of point cloud measurement error on the result to a certain extent, and restore the surface of the real object.
  • Step 506 Determine the radial vector where the j-th vertex is located according to the position of the curved surface where the j-th vertex of the i-th grid is located, the radius of the cylinder, and the center position of the bottom surface of the cylinder; , I is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4;
  • Step 507 Determine the intersection point of the radial vector and the isosurface
  • Step 508 Determine the three-dimensional coordinates of the intersection on the isosurface as the optimal three-dimensional coordinates of the j-th vertex.
  • The electronic device first uses the PSR algorithm to process the initial surface data to generate the corresponding mesh (Mesh), and then detects the intersection of the radial line where the j-th vertex is located with the generated Mesh; the three-dimensional coordinates of the Mesh at the intersection point are used as the optimization result, that is, the optimal three-dimensional coordinates of the j-th vertex.
  • the PSR algorithm has the characteristics of being able to recover the surface of real-world objects from the point cloud with a lot of noise.
  • PSR is an algorithm derived from the fact that real object surfaces are smooth and continuous, so it matches the scanning characteristics of real objects, and the resulting surface is closer to the true value. Therefore, the PSR algorithm can be used here to remove the influence of noise or false detections, thereby reducing the influence of outlier values in the point cloud.
  • The input of the PSR algorithm is the initial surface data, and the output is a Mesh.
  • The implementation steps of the PSR algorithm can include the following steps S1 to S4: S1, perform point cloud normal estimation on the initial surface data; S2, perform spatial meshing on the initial surface data, for example with the octree method; S3, find the optimal surface conforming to surface continuity and the estimated normals; S4, output the optimized optimal surface as a Mesh grid.
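The intersection test of steps 507 and 508 can be sketched with a standard ray/triangle routine (Möller–Trumbore). This is an illustrative assumption, since the patent does not name a specific intersection algorithm; the Mesh produced by PSR is assumed to be given as triangles:

```python
import numpy as np

def ray_triangle_intersect(orig, direc, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test: returns the 3-D intersection
    point of the ray orig + t*direc with triangle (v0, v1, v2), or None
    if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direc, e2)
    det = e1 @ pvec
    if abs(det) < eps:                   # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    tvec = orig - v0
    u = (tvec @ pvec) * inv
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = (direc @ qvec) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = (e2 @ qvec) * inv
    if t < eps:                          # intersection behind the origin
        return None
    return orig + t * direc
```

For the radial optimization, `orig` would be the pre-optimization vertex and `direc` its radial vector; the routine is run over the Mesh triangles until a hit is found.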
  • Step 509 Determine the optimal three-dimensional coordinates of each of the vertices as the target surface data
  • Step 510 Determine a transformation relationship between the target surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
  • Step 511 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain a target scanned image.
  • the embodiment of the present application further provides an image scanning method.
  • the method at least includes the following steps 601 to 611:
  • Step 601 Obtain point cloud data of a scanned scene and an initial scanned image of the scanned scene
  • Step 602 Perform cylinder shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; wherein the shape of the target object is a cylinder, and the multiple characteristic parameter values of the target object include: the center position p0 of the bottom surface of the cylinder, the axial direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • Step 603 Determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values
  • Step 604 Perform meshing on the curved surface according to the axial direction, the axial range, the tangential direction, the tangential range and the specific meshing interval, to obtain N meshes, where N is an integer greater than 0;
  • Step 605 Back-project the three-dimensional coordinates of the k-th sampling point on the radial vector where the j-th vertex of the i-th grid is located onto the imaging plane of each camera to obtain the corresponding pixel coordinates; where i is an integer greater than 0 and less than or equal to N, j is an integer greater than 0 and less than or equal to 4, and k is an integer greater than 0;
  • Step 606 Determine the area block of each pixel coordinate on the image collected by the corresponding camera according to the specific sampling window;
  • Step 607 Determine the degree of correlation between each of the regional blocks
  • Step 608 In the case that the degree of correlation does not meet a specific condition, back-project the three-dimensional coordinates of the next sampling point on the radial vector onto the imaging plane of each camera, until the determined degree of correlation satisfies the specific condition; the three-dimensional coordinates of the corresponding sampling point are then determined as the optimal three-dimensional coordinates of the j-th vertex.
  • Steps 605 to 608 provide another optimization method, that is, a radial optimization method based on at least dual-camera data fusion, which can further improve the robustness.
  • the j-th vertex may be used as a starting point to perform cost function optimization, so as to obtain the optimal three-dimensional coordinates of the j-th vertex.
  • Taking the radial optimization method of dual-camera data fusion as an example, as shown in FIG. 8, the three-dimensional coordinate p_j of the j-th vertex before optimization is used as the starting point, and the gradient descent method or the LM algorithm is used to optimize the cost function shown in formula (4), which, from the definitions below, has the form:
  • p* = argmin_p [ −C(W(π₀(p)), W(π₁(p))) + λ·R(p − p_j) ] (4)
  • p * is the optimal solution, that is, the optimal three-dimensional coordinates of the j-th vertex
  • p is the position of the search point in the initial surface data, that is, the three-dimensional coordinates of the k-th sampling point
  • R is a regular function, for example L 2 regular function is used
  • π₀ and π₁ are the back-projection functions that back-project the dual-camera space coordinates to pixel coordinates
  • W is the sampling window function, for example, sampling a 3×3 or 7×7 square pixel neighborhood of the back-projection point
  • C is the cross-correlation function of the neighboring area blocks of the projection points in the dual-camera images.
  • The cross-correlation function can use functions such as NCC or ZNCC
  • λ is the scale coefficient used to adjust the dependence on TOF data.
  • When projection terms for a third and a fourth camera are added to equation (4), the input of the cross-correlation function C becomes multiple area blocks, and the cross-correlation can be calculated as the sum, or the average, of the pairwise camera correlations.
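As an illustration of the window-correlation term C, a ZNCC implementation over two sampling windows might look as follows. This is an assumption-level sketch: the extraction of the patches around the back-projected points is omitted, and the function name is made up for the example.

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-12):
    """Zero-mean normalized cross-correlation between two equally sized
    image patches (e.g. the 3x3 or 7x7 sampling windows W around the two
    back-projected points). Returns a value in [-1, 1]; 1.0 means the
    patches are perfectly correlated up to brightness and contrast."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```

ZNCC is invariant to affine brightness changes between the two camera images, which is why it (or plain NCC) is a natural choice for the dual-camera correlation here.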
  • Step 609 Determine the optimal three-dimensional coordinates of each vertex as the target surface data
  • Step 610 Determine a transformation relationship between the target surface data and the reference surface data according to the three-dimensional coordinates of each vertex in the target surface data and the three-dimensional coordinates of the corresponding vertex in the reference surface data;
  • Step 611 Correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain a target scanned image.
  • a nearly cylindrical body is used as the fitting model: the rough shape in the point cloud data is first determined by cylinder detection, the diameter of the cylinder is determined, and then optimization is performed along the radial direction to obtain a more refined surface result.
  • the document scanning system includes a TOF sensor, an RGB sensor, a transformation parameter generation module, and an image transformation module.
  • the functions of the transformation parameter generation module and the image transformation module can be performed by a processor; wherein,
  • the transformation parameter generation module is used to calculate the sensor data output by the TOF sensor to generate transformation parameters required by the image transformation module.
  • the transformation parameter may be one of the following: transformation matrix, transformation matrix group, and free mapping relationship.
  • the image transformation module is used to transform the RGB data (an example of the initial scanned image) obtained from the RGB sensor according to the transformation parameters generated by the transformation parameter generation module, so as to obtain the transformed orthographic scan result, namely the target scanned image.
  • the transformation parameter generation module includes a cylinder detection unit, a mesh division unit, a cylindrical radial optimization unit, a free-form surface generation unit, and a transformation parameter generation unit; wherein,
  • the cylinder detection unit can use the RANSAC cylinder detection algorithm
  • the mesh division unit can be used to divide the target area of the cylindrical surface at regular intervals according to the tangential and axial directions;
  • the cylindrical radial optimization unit can be used to search for the optimal solution for each vertex of the divided mesh along the radial direction of the cylinder;
  • the search for the optimal solution can use one of the methods described above, for example center-of-gravity projection, isosurface intersection, or cost function optimization;
  • the free-form surface generating unit is used to take all the optimal solutions in the radial direction as vertices and connect them according to the topological structure of the mesh division to form a free-form surface mesh.
  • the transformation parameter generation unit is used to generate transformation parameters according to the free-form surface mesh and the correction target parameters: according to the spatial position of the free-form surface mesh and the set spatial position of the correction result (that is, the three-dimensional coordinates of the vertices in the reference surface data), transformation parameters such as a transformation matrix, a transformation matrix group, or a free mapping relationship are generated.
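Generating a transformation matrix from such vertex correspondences can be illustrated with the standard direct linear transform (DLT). The sketch below, which assumes at least four 2-D point pairs, is illustrative only and does not fix the specific method used by the transformation parameter generation unit:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src (in homogeneous
    coordinates) from >= 4 point correspondences via the DLT system."""
    src = np.asarray(src, dtype=np.float64)
    dst = np.asarray(dst, dtype=np.float64)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows)
    # H is the null vector of A, i.e. the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With noise-free correspondences the recovered matrix matches the true homography up to scale; with noisy correspondences the SVD gives the least-squares solution.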
  • FIG. 9 is a schematic diagram of the implementation process of the image scanning method according to the embodiment of the present application. As shown in FIG. 9, it may at least include the following steps 901 to 907:
  • Step 901 Perform preliminary filtering on the sensor data output by the TOF, and then transform the preliminary filtered sensor data to three-dimensional coordinates in the camera coordinate system to obtain three-dimensional point cloud data;
  • the preliminary filtering includes denoising tailored to the characteristics of the sensor data output by the TOF sensor; denoising methods include, for example, point cloud removal based on a position threshold, or point cloud removal based on mutual distance;
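The two denoising strategies of step 901 can be sketched as follows; the depth range, neighbour count k, distance threshold, and function name are illustrative assumptions rather than values from the embodiment:

```python
import numpy as np

def filter_point_cloud(points, z_range=(0.1, 3.0), k=4, dist_thresh=0.05):
    """Two denoising passes on an (N, 3) TOF point cloud:
    1) position threshold: drop points whose depth is outside z_range;
    2) mutual distance: drop isolated points whose distance to their
       k-th nearest neighbour exceeds dist_thresh (brute-force here)."""
    pts = np.asarray(points, dtype=np.float64)
    mask = (pts[:, 2] >= z_range[0]) & (pts[:, 2] <= z_range[1])
    pts = pts[mask]
    if len(pts) <= k:
        return pts
    # Full pairwise distance matrix; a KD-tree would replace this
    # for large clouds.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    kth = d[:, k]  # column 0 is each point's distance to itself (0)
    return pts[kth <= dist_thresh]
```

Points far outside the working depth range and points with no close neighbours are both typical TOF artifacts, which is why the two passes are combined.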
  • Step 902 Perform cylinder detection on the three-dimensional point cloud data to obtain cylinder fitting parameters, that is, multiple characteristic parameter values of the cylinder, where the parameters include: the center position p0 of the cylinder bottom surface, the axis direction n0, the radius r, the tangential range (θ1, θ2), and the axial range h.
  • the RANSAC cylinder detection algorithm may be used to obtain these parameters.
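A greatly simplified RANSAC-style sketch of this detection step follows. For brevity it assumes the cylinder axis is already aligned with the Z axis, so only the circular cross-section (center and radius) is estimated; the full RANSAC cylinder detection algorithm does not have this restriction, and all names and thresholds here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def circle_from_3(p):
    """Circle (cx, cy, r) through three 2-D points via the
    perpendicular-bisector linear system; None for collinear triples."""
    (x1, y1), (x2, y2), (x3, y3) = p
    a = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]])
    b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                        x3**2 - x1**2 + y3**2 - y1**2])
    if abs(np.linalg.det(a)) < 1e-12:
        return None
    cx, cy = np.linalg.solve(a, b)
    return cx, cy, np.hypot(x1 - cx, y1 - cy)

def ransac_cylinder_xy(points, iters=200, tol=0.01):
    """RANSAC fit of a vertical cylinder: repeatedly fit a circle to the
    XY projection of 3 random points and keep the hypothesis with the
    most radially consistent inliers."""
    xy = np.asarray(points, dtype=np.float64)[:, :2]
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = xy[rng.choice(len(xy), 3, replace=False)]
        model = circle_from_3(sample)
        if model is None:
            continue
        cx, cy, r = model
        resid = np.abs(np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r)
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers
```

A sample contaminated by outliers yields a circle that explains few points, so the hypothesis with the largest inlier count is, with high probability, fitted from surface points only.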
  • Step 903 Divide the target area on the cylindrical surface into regular grids at regular intervals along the tangential and axial directions;
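The regular meshing of step 903 can be illustrated as follows; the helper-vector construction of the tangential basis and the grid resolution parameters are illustrative choices, not values from the embodiment:

```python
import numpy as np

def mesh_cylinder_patch(p0, n0, r, theta_range, h, n_theta=8, n_h=6):
    """Divide the target area of a cylindrical surface into a regular
    grid with equal tangential (theta) and axial steps. p0: bottom-face
    center, n0: axis direction, r: radius, theta_range: (theta1, theta2),
    h: axial extent. Returns an (n_h+1, n_theta+1, 3) vertex array."""
    p0 = np.asarray(p0, dtype=np.float64)
    n0 = np.asarray(n0, dtype=np.float64)
    n0 = n0 / np.linalg.norm(n0)
    # Build an orthonormal basis (u, v) in the plane normal to the axis.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(n0 @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(n0, helper)
    u /= np.linalg.norm(u)
    v = np.cross(n0, u)
    thetas = np.linspace(theta_range[0], theta_range[1], n_theta + 1)
    heights = np.linspace(0.0, h, n_h + 1)
    grid = np.empty((n_h + 1, n_theta + 1, 3))
    for i, hh in enumerate(heights):
        for j, th in enumerate(thetas):
            grid[i, j] = p0 + hh * n0 + r * (np.cos(th) * u + np.sin(th) * v)
    return grid
```

Each grid vertex lies exactly on the detected cylinder; the later radial optimization then moves every vertex along its own radial direction.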
  • Step 904 searching for an optimal solution for each vertex of the divided grid along the radial direction of the cylinder;
  • Step 905 According to the original mesh topology, update the vertex coordinates to the optimal positions obtained by the radial optimization to form an optimized free-form mesh surface;
  • Step 906 Generate corresponding transformation parameters according to the corresponding relationship of the coordinates
  • Transformation parameters: for example, when the transformation is relatively uniform, a single homography matrix can be used; when the transformation is relatively complex, the coordinate-pair relationship can be used directly to generate an interpolation function for the subsequent image transformation.
  • Step 907 Transform the input image (ie, the initial scanned image) with the generated transformation parameters to obtain the corrected front view result, that is, the target scanned image.
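For the uniform case of steps 906 and 907, applying a single homography to pixel coordinates can be sketched as follows (a full image transformation would additionally resample pixel values, which is omitted here for brevity; the function name is illustrative):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates,
    i.e. the coordinate correction of step 907 expressed on points."""
    pts = np.asarray(pts, dtype=np.float64)
    # Lift to homogeneous coordinates, apply H, then dehomogenize.
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice the inverse mapping is usually applied per output pixel, so each target pixel samples the initial scanned image at the warped location.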
  • the embodiment of the present application provides an image scanning device; each module included in the device can be implemented by a processor in a computer device, and of course can also be implemented by a specific logic circuit;
  • the processor can be a central processing unit (CPU), a microprocessor (MPU), a digital signal processor (DSP), or a field programmable gate array (FPGA).
  • FIG. 10 is a schematic diagram of the composition structure of an image scanning device according to an embodiment of the application.
  • the device 100 includes a data acquisition module 101, a curved surface detection module 102, a curved surface optimization module 103, and an image correction module 104, in which:
  • the data acquisition module 101 is configured to acquire the point cloud data of the scanned scene and the initial scanned image of the scanned scene;
  • the curved surface detection module 102 is configured to perform curved surface detection on the point cloud data to obtain initial curved surface data
  • the curved surface optimization module 103 is configured to optimize the curved surface represented by the initial curved surface data to obtain target curved surface data;
  • the image correction module 104 is configured to correct the pixel coordinates of the pixels in the initial scanned image according to the three-dimensional coordinates in the target curved surface data to obtain the target scanned image.
  • the curved surface detection module 102 is configured to: perform cylindrical shape detection on the point cloud data to obtain multiple characteristic parameter values of the target object; and determine the initial curved surface data from the point cloud data according to the multiple characteristic parameter values.
  • the shape of the target object is a cylinder
  • the multiple characteristic parameter values include the axial direction, the axial range, the tangential direction, and the tangential range of the cylinder
  • the curved surface optimization module 103 is configured to: mesh the curved surface represented by the initial curved surface data according to the axial direction, the axial range, the tangential direction, the tangential range, and a specific meshing interval to obtain N meshes, where N is an integer greater than 0; perform an optimal solution search on each vertex of each mesh according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain the optimal three-dimensional coordinates of the corresponding vertex; and determine the optimal three-dimensional coordinates of each vertex as the target curved surface data.
  • the multiple characteristic parameter values further include the radius of the cylinder and the center position of the bottom surface of the cylinder; the curved surface optimization module 103 is configured to: determine the radial vector on which the j-th vertex of the i-th mesh lies according to the position of that vertex on the curved surface, the radius of the cylinder, and the center position of the bottom surface of the cylinder, where i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determine a search space from the initial curved surface data according to the radial vector; and determine the optimal three-dimensional coordinates of the j-th vertex according to the three-dimensional coordinates of the sampling points in the search space.
  • the search space adopts a cubic bounding box
  • the curved surface optimization module 103 is used to: determine the cubic bounding box from the initial curved surface data.
  • the curved surface optimization module 103 is configured to: determine the three-dimensional coordinates of the center of gravity of the search space according to the three-dimensional coordinates of the sampling points in the search space; and project the three-dimensional coordinates of the center of gravity onto the radial vector to obtain the optimal three-dimensional coordinates of the j-th vertex.
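A minimal sketch of this center-of-gravity variant follows; the function name and the convention of giving the radial line as an origin plus a direction are illustrative assumptions:

```python
import numpy as np

def optimize_vertex_by_centroid(samples, origin, radial_dir):
    """Average the sampling points inside the search space, then project
    that centroid onto the radial line (origin + t * radial_dir) to get
    the optimized vertex position."""
    samples = np.asarray(samples, dtype=np.float64)
    origin = np.asarray(origin, dtype=np.float64)
    d = np.asarray(radial_dir, dtype=np.float64)
    d = d / np.linalg.norm(d)
    centroid = samples.mean(axis=0)
    # Scalar projection of the centroid onto the unit radial direction.
    t = (centroid - origin) @ d
    return origin + t * d
```

Projecting onto the radial direction keeps the mesh topology intact: each vertex only slides along its own radius, never sideways.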
  • the curved surface optimization module 103 is configured to: perform Poisson surface reconstruction according to the three-dimensional coordinates of the sampling points in the initial curved surface data to obtain an isosurface; determine the radial vector on which the j-th vertex of the i-th mesh lies according to the position of that vertex on the curved surface, the radius of the cylinder, and the center position of the bottom surface of the cylinder, where i is an integer greater than 0 and less than or equal to N, and j is an integer greater than 0 and less than or equal to 4; determine the intersection point between the radial vector and the isosurface; and determine the three-dimensional coordinates of the intersection point on the isosurface as the optimal three-dimensional coordinates of the j-th vertex.
  • the curved surface optimization module 103 is used to back-project the three-dimensional coordinates of the k-th sampling point on the radial vector, where the j-th vertex of the i-th grid is located, onto the imaging plane of each camera.
  • the image correction module 104 is configured to: determine the transformation relationship between the target curved surface data and the reference curved surface data according to the optimal three-dimensional coordinates of each vertex in the target curved surface data and the three-dimensional coordinates of the corresponding vertex in the reference curved surface data; and correct the pixel coordinates of the pixels in the initial scanned image according to the transformation relationship to obtain the target scanned image.
  • the technical solutions of the embodiments of the present application, in essence, or the parts that contribute to the related technologies, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, etc.) to execute all or part of the method described in each embodiment of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, an optical disk, and other media that can store program codes. In this way, the embodiments of the present application are not limited to any specific combination of hardware and software.
  • FIG. 11 is a schematic diagram of a hardware entity of the electronic device according to an embodiment of the application.
  • the hardware entity of the electronic device 110 includes: a memory 111 and a processor 112.
  • the memory 111 stores a computer program that can run on the processor 112, and the processor 112 implements the steps of the image scanning method provided in the foregoing embodiments when executing the program.
  • the memory 111 is configured to store instructions and applications executable by the processor 112, and can also cache data to be processed or already processed by the processor 112 and the modules in the electronic device 110 (for example, image data, audio data, voice communication data, and video communication data); it can be implemented by a flash memory (FLASH) or a random access memory (Random Access Memory, RAM).
  • an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps in the image scanning method provided in the foregoing embodiments are implemented.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules is only a logical function division, and there may be other divisions in actual implementation; for example, multiple modules or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or modules, and may be electrical, mechanical, or in other forms.
  • the modules described above as separate components may or may not be physically separate, and the components displayed as modules may or may not be physical modules; they may be located in one place or distributed over multiple network units; some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional modules in the embodiments of the present application may all be integrated into one processing unit, or each module may individually serve as a unit, or two or more modules may be integrated into one unit; the above-mentioned integrated module can be implemented in the form of hardware, or in the form of hardware plus software functional units.
  • the foregoing program can be stored in a computer-readable storage medium; when the program is executed, the steps of the foregoing method embodiments are performed; and the foregoing storage medium includes: various media that can store program codes, such as a removable storage device, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk.
  • if the aforementioned integrated unit of the present application is implemented in the form of a software function module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions to enable an electronic device (which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a robot, a drone, etc.) to execute all or part of the method described in each embodiment of the present application.
  • the aforementioned storage media include: removable storage devices, ROMs, magnetic disks, or optical disks and other media that can store program codes.
  • after obtaining the point cloud data of the scanned scene and the initial scanned image of the scanned scene, the electronic device performs curved surface detection on the point cloud data and optimizes the detected initial curved surface data to obtain the target curved surface data; in this way, on the one hand, the electronic device can correct the pixel coordinates in the initial scanned image according to the three-dimensional coordinates in the target curved surface data, so as to obtain a more accurate and detailed scan result (i.e., the target scanned image); on the other hand, performing curved surface detection on the point cloud data, compared with plane detection, can be applied to more scanning scenes, for example, scanning the surfaces of cylinders and nearly cylindrical objects.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image scanning method and device, as well as an apparatus and a storage medium. The method comprises: acquiring point cloud data of a scanning scenario and an initial scanned image of the scanning scenario (101); performing curved surface detection on the point cloud data so as to obtain initial curved surface data (102); performing optimization on a curved surface represented by the initial curved surface data so as to obtain target curved surface data (103); and correcting, according to three-dimensional coordinates in the target curved surface data, the coordinates of pixels in the initial scanned image so as to obtain a target scanned image (104).
PCT/CN2020/073038 2020-01-19 2020-01-19 Procédé et dispositif de balayage d'image, appareil et support de stockage WO2021142843A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080093762.4A CN114981845A (zh) 2020-01-19 2020-01-19 图像扫描方法及装置、设备、存储介质
PCT/CN2020/073038 WO2021142843A1 (fr) 2020-01-19 2020-01-19 Procédé et dispositif de balayage d'image, appareil et support de stockage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/073038 WO2021142843A1 (fr) 2020-01-19 2020-01-19 Procédé et dispositif de balayage d'image, appareil et support de stockage

Publications (1)

Publication Number Publication Date
WO2021142843A1 true WO2021142843A1 (fr) 2021-07-22

Family

ID=76863441

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/073038 WO2021142843A1 (fr) 2020-01-19 2020-01-19 Procédé et dispositif de balayage d'image, appareil et support de stockage

Country Status (2)

Country Link
CN (1) CN114981845A (fr)
WO (1) WO2021142843A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115270378A (zh) * 2022-09-28 2022-11-01 中国空气动力研究与发展中心计算空气动力研究所 一种弓形激波外场网格的生成方法
CN115661104A (zh) * 2022-11-04 2023-01-31 广东杰成新能源材料科技有限公司 动力电池的整体完整度评估方法、装置、设备及介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116500379B (zh) * 2023-05-15 2024-03-08 珠海中瑞电力科技有限公司 一种sts装置电压跌落精准定位方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489218A (zh) * 2013-09-17 2014-01-01 中国科学院深圳先进技术研究院 点云数据质量自动优化方法及***
CN107767442A (zh) * 2017-10-16 2018-03-06 浙江工业大学 一种基于Kinect和双目视觉的脚型三维重建与测量方法
US10089781B2 (en) * 2015-12-30 2018-10-02 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Three-dimensional point cloud model reconstruction method, computer readable storage medium and device
CN109344786A (zh) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 目标识别方法、装置及计算机可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489218A (zh) * 2013-09-17 2014-01-01 中国科学院深圳先进技术研究院 点云数据质量自动优化方法及***
US10089781B2 (en) * 2015-12-30 2018-10-02 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Three-dimensional point cloud model reconstruction method, computer readable storage medium and device
CN107767442A (zh) * 2017-10-16 2018-03-06 浙江工业大学 一种基于Kinect和双目视觉的脚型三维重建与测量方法
CN109344786A (zh) * 2018-10-11 2019-02-15 深圳步智造科技有限公司 目标识别方法、装置及计算机可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SPREITZER GABRIEL; TUNNICLIFFE JON; FRIEDRICH HEIDE: "Large wood (LW) 3D accumulation mapping and assessment using structure from Motion photogrammetry in the laboratory", JOURNAL OF HYDROLOGY, ELSEVIER, AMSTERDAM, NL, vol. 581, 5 December 2019 (2019-12-05), AMSTERDAM, NL, XP085987121, ISSN: 0022-1694, DOI: 10.1016/j.jhydrol.2019.124430 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115270378A (zh) * 2022-09-28 2022-11-01 中国空气动力研究与发展中心计算空气动力研究所 一种弓形激波外场网格的生成方法
CN115270378B (zh) * 2022-09-28 2022-12-30 中国空气动力研究与发展中心计算空气动力研究所 一种弓形激波外场网格的生成方法
CN115661104A (zh) * 2022-11-04 2023-01-31 广东杰成新能源材料科技有限公司 动力电池的整体完整度评估方法、装置、设备及介质
CN115661104B (zh) * 2022-11-04 2023-08-11 广东杰成新能源材料科技有限公司 动力电池的整体完整度评估方法、装置、设备及介质

Also Published As

Publication number Publication date
CN114981845A (zh) 2022-08-30

Similar Documents

Publication Publication Date Title
WO2020206903A1 (fr) Procédé et dispositif de mise en correspondance d'images et support de mémoire lisible par ordinateur
KR101921672B1 (ko) 이미지 처리 방법 및 장치
WO2021142843A1 (fr) Procédé et dispositif de balayage d'image, appareil et support de stockage
CN109801374B (zh) 一种通过多角度图像集重构三维模型的方法、介质及***
CN115205489A (zh) 一种大场景下的三维重建方法、***及装置
CN108010123B (zh) 一种保留拓扑信息的三维点云获取方法
CN109472820B (zh) 单目rgb-d相机实时人脸重建方法及装置
CN111144349B (zh) 一种室内视觉重定位方法及***
CN111524168A (zh) 点云数据的配准方法、***、装置及计算机存储介质
WO2022021782A1 (fr) Procédé et système de génération automatique d'ensemble de données de posture en six dimensions, terminal, et support de stockage
WO2021035627A1 (fr) Procédé et dispositif d'acquisition de carte de profondeur et support de stockage informatique
CN116129037B (zh) 视触觉传感器及其三维重建方法、***、设备及存储介质
CN109003307B (zh) 基于水下双目视觉测量的捕鱼网目尺寸设计方法
CN111161138B (zh) 用于二维全景图像的目标检测方法、装置、设备、介质
CN113643414A (zh) 一种三维图像生成方法、装置、电子设备及存储介质
CN114202632A (zh) 网格线性结构恢复方法、装置、电子设备及存储介质
CN116051736A (zh) 一种三维重建方法、装置、边缘设备和存储介质
TW202244680A (zh) 位置姿勢獲取方法、電子設備及電腦可讀儲存媒體
CN113808269A (zh) 地图生成方法、定位方法、***及计算机可读存储介质
JP6086491B2 (ja) 画像処理装置およびそのデータベース構築装置
CN111091117B (zh) 用于二维全景图像的目标检测方法、装置、设备、介质
CN117726747A (zh) 补全弱纹理场景的三维重建方法、装置、存储介质和设备
CN115086625B (zh) 投影画面的校正方法、装置、***、校正设备和投影设备
CN117152330A (zh) 一种基于深度学习的点云3d模型贴图方法和装置
WO2021208630A1 (fr) Procédé d'étalonnage, appareil d'étalonnage et dispositif électronique l'utilisant

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20914043

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20914043

Country of ref document: EP

Kind code of ref document: A1