WO2021017314A1 - Information processing method, positioning method and apparatus, electronic device, and storage medium - Google Patents


Info

Publication number
WO2021017314A1
WO2021017314A1 (PCT/CN2019/118453)
Authority
WO
WIPO (PCT)
Prior art keywords
three-dimensional point, point cloud, image, three-dimensional, information
Application number
PCT/CN2019/118453
Other languages
English (en)
French (fr)
Inventor
冯友计
叶智超
金嘉诚
章国锋
Original Assignee
浙江商汤科技开发有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 浙江商汤科技开发有限公司
Priority to JP2021574903A (patent JP7328366B2)
Publication of WO2021017314A1
Priority to US17/551,865 (patent US11983820B2)


Classifications

    All within G — Physics; G06 — Computing, calculating or counting; G06T — Image data processing or generation, in general:

    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T3/20 Linear translation of whole images or parts thereof, e.g. panning
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/60 Rotation of whole images or parts thereof
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/30244 Camera pose
    • G06T2219/008 Cut plane or projection plane definition

Definitions

  • the present disclosure relates to the field of computer vision technology, and in particular to an information processing method, positioning method and device, electronic equipment, and storage medium.
  • Three-dimensional reconstruction is an emerging technology with extremely broad applications across fields such as industry, medicine, daily life, and entertainment.
  • Three-dimensional reconstruction can rebuild three-dimensional objects in a scene from images collected by an image acquisition device such as a camera, so that the objects are presented on the images in an intuitive manner.
  • Image-based three-dimensional reconstruction generates a three-dimensional point cloud of the scene, whose point coordinates are usually defined in the coordinate system of a particular image acquisition device and therefore carry no real geographic meaning. Making such point clouds usable in practical applications, such as visual positioning, is of great significance.
  • the present disclosure proposes a technical solution for information processing and positioning.
  • an information processing method, including: acquiring three-dimensional point information of a three-dimensional point cloud; generating, based on the three-dimensional point information, a two-dimensional point cloud image by projecting the three-dimensional point cloud onto a horizontal plane; and determining, based on the degree of consistency between the two-dimensional point cloud image and a reference plan, the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan, where the reference plan is a projection map with reference coordinates representing the target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object.
  • before generating the two-dimensional point cloud image projected from the three-dimensional point cloud onto the horizontal plane based on the three-dimensional point information, the method further includes: acquiring at least two pieces of pose information recorded in the process of acquiring image information by an image acquisition device, where the image information is used to construct the three-dimensional point cloud; and determining the horizontal plane of the three-dimensional point cloud projection according to the at least two pieces of pose information of the image acquisition device.
  • the pose information includes orientation information and position information;
  • the determining the horizontal plane of the three-dimensional point cloud projection based on at least two pieces of pose information of the image acquisition device includes: determining, according to at least two pieces of position information of the image acquisition device, the displacement between any two positions of the device in the process of acquiring image information; and determining the horizontal plane of the three-dimensional point cloud projection according to at least two pieces of orientation information of the device and the displacement between the two positions.
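As a rough illustration of how camera trajectory data can determine the projection plane: if the device moves approximately within a horizontal plane, the displacements between capture positions span that plane, and its normal is the null direction of the displacement matrix. This sketch uses only positions (the claim additionally uses orientation information); the helper name and SVD approach are illustrative assumptions, not the patent's method.

```python
import numpy as np

def estimate_projection_plane(positions):
    """Estimate the unit normal of the horizontal projection plane from
    camera centers recorded during capture, assuming roughly planar motion.
    Hypothetical helper; the claim also incorporates orientation information.

    positions: (N, 3) array of camera centers.
    """
    positions = np.asarray(positions, dtype=float)
    # Displacements between consecutive capture positions span the plane.
    disp = positions[1:] - positions[:-1]
    # The right-singular vector with the smallest singular value is
    # orthogonal to all displacements, i.e. it is the plane normal.
    _, _, vt = np.linalg.svd(disp)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

# A camera moving on the z = 1.5 plane: the recovered normal is the z axis.
poses = [(0, 0, 1.5), (1, 0.2, 1.5), (2, 1.0, 1.5), (3, 0.5, 1.5)]
n = estimate_projection_plane(poses)
print(np.abs(n))
```

With real trajectories the displacements are only approximately planar, which is why the claim bounds the device's height variation to a preset range.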
  • the image acquisition device satisfies at least one of the following preset conditions: its horizontal axis remains parallel to the horizontal plane of the three-dimensional point cloud projection while it acquires image information; and its height above the ground varies within a preset height range while it acquires image information.
  • the generating, based on the three-dimensional point information, the two-dimensional point cloud image projected from the three-dimensional point cloud onto a horizontal plane includes: determining at least one plane contained in the three-dimensional point cloud according to the three-dimensional point information; determining the three-dimensional points to be filtered out according to the number of three-dimensional points contained in each of the at least one plane and the normal direction of each plane; deleting the three-dimensional points to be filtered out from the three-dimensional point cloud to obtain the remaining three-dimensional points; and projecting the remaining three-dimensional points onto the horizontal plane according to their three-dimensional point information to generate the two-dimensional point cloud image of the three-dimensional point cloud projection.
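The filtering step removes large planes with vertical normals (floors and ceilings) so they do not blot out the wall structure in the 2D projection. A minimal sketch using RANSAC plane fitting; the thresholds, iteration count, and function name are illustrative assumptions, not values from the patent.

```python
import numpy as np

def filter_ground_ceiling(points, up=np.array([0., 0., 1.]),
                          dist_thresh=0.05, min_points=100, seed=0):
    """Remove the dominant plane whose normal is (near) vertical, e.g. a
    floor or ceiling, before projecting the cloud to 2D. Sketch only."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    best_normal = None
    for _ in range(200):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n = n / norm
        dist = np.abs((pts - sample[0]) @ n)
        inliers = dist < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_normal = inliers, n
    # Drop the plane's points only if it is large and its normal is vertical.
    if (best_normal is not None and best_inliers.sum() > min_points
            and abs(best_normal @ up) > 0.95):
        return pts[~best_inliers]
    return pts

# Synthetic cloud: 500 floor points (z = 0) plus 50 wall points; the floor
# is detected and removed, leaving (almost exactly) the wall points.
rng = np.random.default_rng(1)
floor = np.c_[rng.uniform(0, 5, 500), rng.uniform(0, 5, 500), np.zeros(500)]
wall = np.c_[np.zeros(50), rng.uniform(0, 5, 50), rng.uniform(0, 3, 50)]
remaining = filter_ground_ceiling(np.vstack([floor, wall]))
print(len(remaining))
```

A vertical wall plane would fail the normal test and be kept, which matches the claim's intent of preserving wall structure.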
  • the determining the three-dimensional points to be filtered out in the three-dimensional point cloud according to the number of three-dimensional points contained in each of the at least one plane and the normal direction of each plane includes: determining, according to the number of three-dimensional points contained in each plane, a first plane that contains the largest number of three-dimensional points, the number being greater than a first threshold; judging whether the normal direction of the first plane is perpendicular to the horizontal plane; and, if it is, determining the three-dimensional points contained in the first plane as the three-dimensional points to be filtered out.
  • the determining, according to the coordinate information of the two-dimensional point cloud, a target straight line in the two-dimensional point cloud that satisfies a straight-line condition includes: determining at least one straight line contained in the two-dimensional point cloud according to the coordinate information, where each of the at least one straight line contains more than a second threshold of two-dimensional points; counting the number of two-dimensional points contained in each straight line and sorting the straight lines by that number to obtain a sorting result; successively taking the current straight line from the at least one straight line according to the sorting result and determining the number of straight lines perpendicular to the current straight line; and, when the number of perpendicular straight lines is greater than a third threshold, determining the current straight line as the target straight line that satisfies the straight-line condition.
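The sort-and-vote rule above can be sketched compactly once lines have been detected (e.g. by a Hough transform). The input format, angle tolerance, and threshold values below are illustrative stand-ins for the patent's second/third thresholds.

```python
def pick_target_line(lines, min_points=30, min_perp=2, tol_deg=3.0):
    """Pick the target line used to fix the cloud's rotation: the
    best-supported line that has more than `min_perp` (roughly)
    perpendicular lines among the detections. `lines` is a list of
    (angle_degrees, num_points) pairs from any 2D line detector."""
    # Keep only well-supported lines and sort by support, descending.
    strong = [l for l in lines if l[1] > min_points]
    strong.sort(key=lambda l: l[1], reverse=True)
    for angle, _count in strong:
        perp = 0
        for other_angle, _ in strong:
            # Fold the angular difference into [0, 90] degrees.
            d = abs(angle - other_angle) % 180.0
            d = min(d, 180.0 - d)
            if abs(d - 90.0) <= tol_deg:
                perp += 1
        if perp > min_perp:
            return angle
    return None

# Walls of a Manhattan-style floor plan: lines near 0 and 90 degrees
# dominate, so the strongest line (0 degrees) wins the perpendicular vote.
detected = [(0.0, 120), (90.0, 95), (90.5, 80), (89.2, 60),
            (45.0, 40), (10.0, 20)]
print(pick_target_line(detected))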
  • the determining, based on the degree of consistency between the two-dimensional point cloud image and the reference plan, the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan includes: performing at least one similarity transformation on the two-dimensional point cloud image; determining, after each similarity transformation, the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points in the reference plan; determining, according to the degree of consistency determined after the at least one similarity transformation, the transformation relationship from the three-dimensional points in the three-dimensional point cloud to the reference points in the reference plan; and, based on that transformation relationship, projecting the three-dimensional point cloud onto the reference plan to obtain the projection coordinates of the three-dimensional point cloud in the reference coordinate system of the reference plan.
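The matching loop above can be pictured as a search over candidate similarity transforms, scoring each by how well the transformed cloud overlaps the plan. A toy exhaustive-search sketch; the search ranges and scoring function are illustrative assumptions (the real transformation range would come from the sizes of the cloud and the plan).

```python
import numpy as np

def score(points, plan):
    """Fraction of 2D points landing on an occupied plan cell (a simple
    stand-in for the 'degree of consistency')."""
    h, w = plan.shape
    ij = np.round(points).astype(int)
    ok = (ij[:, 0] >= 0) & (ij[:, 0] < h) & (ij[:, 1] >= 0) & (ij[:, 1] < w)
    hits = plan[ij[ok, 0], ij[ok, 1]]
    return hits.sum() / len(points)

def best_similarity(points, plan, scales, angles_deg, shifts):
    """Exhaustive search over a small family of similarity transforms
    (scale, rotation, translation), returning the highest-scoring one."""
    best = (-1.0, None)
    for s in scales:
        for a in angles_deg:
            t = np.deg2rad(a)
            R = np.array([[np.cos(t), -np.sin(t)],
                          [np.sin(t),  np.cos(t)]])
            for tx in shifts:
                for ty in shifts:
                    p = s * points @ R.T + np.array([tx, ty])
                    c = score(p, plan)
                    if c > best[0]:
                        best = (c, (s, a, tx, ty))
    return best

# Plan: one occupied row of a 20x20 grid. Cloud: the same row shifted by 3
# rows; the search recovers the shift with perfect consistency.
plan = np.zeros((20, 20), dtype=bool)
plan[5, 2:18] = True
cloud = np.array([[2.0, c] for c in range(2, 18)])
consistency, (s, a, tx, ty) = best_similarity(
    cloud, plan, scales=[1.0], angles_deg=[0], shifts=range(-4, 5))
print(consistency, tx, ty)
```

In practice the rotation is largely fixed beforehand by the target-line alignment, which keeps this search space small.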
  • the performing at least one similarity transformation on the two-dimensional point cloud image includes: determining a transformation range for the similarity transformation of the two-dimensional point cloud image; and performing at least one similarity transformation on the two-dimensional point cloud image within the transformation range.
  • the similarity transformation includes translational transformation;
  • the determining the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan after each similarity transformation includes: for the two-dimensional point cloud image after each translation transformation, down-sampling the image a preset number of times to obtain a first sampled image after each down-sampling; in descending order of the number of down-samplings, determining for each first sampled image the degree of consistency between the two-dimensional points in the first sampled image and the reference points in a second sampled image, where the second sampled image is obtained by applying the same down-sampling to the reference plan; and determining, according to the degree of consistency determined after the first down-sampling, the degree of consistency between the two-dimensional points in the two-dimensional point cloud image after each translation transformation and the reference points in the reference plan.
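The coarse-to-fine idea above (score at the coarsest down-sampled level first, then refine the promising candidates at finer levels) can be sketched as follows. The pooling scheme, candidate count, and refinement neighbourhood are illustrative choices, not the patent's parameters.

```python
import numpy as np

def downsample(img):
    """2x max-pool keeping 'occupied' pixels, so thin point traces survive."""
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]
    return (img[0::2, 0::2] | img[1::2, 0::2] |
            img[0::2, 1::2] | img[1::2, 1::2])

def overlap(a, b):
    return np.logical_and(a, b).sum() / max(a.sum(), 1)

def coarse_to_fine_shift(cloud_img, plan_img, levels=2, keep=4):
    """Find the best translation by scoring all shifts at the coarsest
    pyramid level and refining only the `keep` best candidates downward."""
    pyr = [(cloud_img, plan_img)]
    for _ in range(levels):
        c, p = pyr[-1]
        pyr.append((downsample(c), downsample(p)))
    # Coarsest level: exhaustive shift search.
    c, p = pyr[-1]
    r = max(c.shape)
    cands = [(overlap(np.roll(c, (dy, dx), (0, 1)), p), dy, dx)
             for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    cands.sort(reverse=True)
    cands = cands[:keep]
    # Each level up doubles the shift; test a small neighbourhood around it.
    for c, p in reversed(pyr[:-1]):
        refined = []
        for _, dy, dx in cands:
            for ey in (-1, 0, 1):
                for ex in (-1, 0, 1):
                    ny, nx = 2 * dy + ey, 2 * dx + ex
                    refined.append(
                        (overlap(np.roll(c, (ny, nx), (0, 1)), p), ny, nx))
        refined.sort(reverse=True)
        cands = refined[:keep]
    return cands[0]

# The cloud image equals the plan shifted by (6, 4); the pyramid search
# recovers that shift with a perfect overlap score.
plan = np.zeros((32, 32), dtype=bool)
plan[8:24, 10:12] = True
cloud = np.roll(plan, (-6, -4), (0, 1))
s, dy, dx = coarse_to_fine_shift(cloud, plan)
print(s, dy, dx)
```

The payoff is that the full-resolution consistency score is only ever computed for a handful of candidate translations instead of every possible one.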
  • the determining the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan after each similarity transformation includes: traversing the first pixels of the two-dimensional point cloud image, where a first pixel is a pixel constituting a two-dimensional point in that image; determining the first image area in the reference plan corresponding to each first pixel; when a second pixel representing a reference point exists in the first image area, determining the first pixel as a first target pixel; determining a first ratio of the number of first target pixels to the number of first pixels contained in the two-dimensional point cloud image; and determining, according to the first ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
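Concretely, the "first ratio" counts cloud pixels that have a plan pixel within a small corresponding area. A sketch with a square neighbourhood; the neighbourhood radius is an illustrative choice for the claim's "first image area".

```python
import numpy as np

def first_ratio(cloud_img, plan_img, radius=1):
    """Consistency as the fraction of cloud pixels that have a plan pixel
    within `radius` cells of them (the 'first ratio' in the claims)."""
    ys, xs = np.nonzero(cloud_img)
    h, w = plan_img.shape
    hits = 0
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        if plan_img[y0:y1, x0:x1].any():
            hits += 1
    return hits / max(len(ys), 1)

plan = np.zeros((10, 10), dtype=bool)
plan[4, 2:8] = True        # a wall in the plan
cloud = np.zeros((10, 10), dtype=bool)
cloud[5, 2:8] = True       # the same wall, one pixel off
cloud[0, 0] = True         # one stray point with no wall nearby
ratio = first_ratio(cloud, plan)
print(ratio)
```

The "second ratio" of the next claim is the symmetric statistic, computed by traversing plan pixels and looking for nearby cloud pixels instead.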
  • the determining the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan after each similarity transformation includes: after each similarity transformation of the two-dimensional point cloud image, traversing the second pixels of the reference plan, where a second pixel is a pixel constituting a reference point in the reference plan; determining the second image area in the two-dimensional point cloud image corresponding to each second pixel; when a first pixel representing a two-dimensional point exists in the second image area, determining the second pixel as a second target pixel; determining a second ratio of the number of second target pixels to the number of second pixels contained in the reference plan; and determining, according to the second ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
  • the determining the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan after each similarity transformation includes: after each similarity transformation of the two-dimensional point cloud image, determining the first pixels located in the non-enclosed area of the two-dimensional point cloud image, where a first pixel is a pixel constituting a two-dimensional point in that image; determining a third ratio of the number of first pixels located in the non-enclosed area to the number of first pixels contained in the two-dimensional point cloud image; and determining, according to the third ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
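The "non-enclosed area" is the region of the plan not enclosed by walls, i.e. outside the building outline: cloud points landing there penalise a candidate alignment. One simple way to obtain that area, shown below, is a flood fill from the image border; the flood-fill approach itself is an illustrative assumption, not the patent's stated method.

```python
from collections import deque
import numpy as np

def outside_ratio(cloud_img, plan_img):
    """Fraction of cloud pixels falling in the plan's non-enclosed region
    (reachable from the image border without crossing a plan wall): a
    stand-in for the 'third ratio' of the claims."""
    h, w = plan_img.shape
    outside = np.zeros((h, w), dtype=bool)
    # Seed the fill with every border cell that is not a wall pixel.
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and not plan_img[y, x])
    for y, x in q:
        outside[y, x] = True
    while q:                      # 4-connected BFS flood fill
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w
                    and not plan_img[ny, nx] and not outside[ny, nx]):
                outside[ny, nx] = True
                q.append((ny, nx))
    ys, xs = np.nonzero(cloud_img)
    if len(ys) == 0:
        return 0.0
    return outside[ys, xs].sum() / len(ys)

# A closed 6x6 room in a 10x10 plan; 3 cloud points inside, 1 outside.
plan = np.zeros((10, 10), dtype=bool)
plan[2, 2:8] = plan[7, 2:8] = True
plan[2:8, 2] = plan[2:8, 7] = True
cloud = np.zeros((10, 10), dtype=bool)
cloud[4, 4] = cloud[5, 5] = cloud[3, 3] = True
cloud[0, 9] = True
print(outside_ratio(cloud, plan))
```

The "fourth ratio" of the next claim applies the same outside-region test to the projected camera positions rather than to the cloud points.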
  • the determining the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan after each similarity transformation includes: after each similarity transformation of the two-dimensional point cloud image, determining the third pixels onto which the image acquisition device is projected in the two-dimensional point cloud image, according to the pose information recorded in the process of acquiring, by the image acquisition device, the image information used to construct the three-dimensional point cloud; determining a fourth ratio of the number of third pixels located in the non-enclosed area to the number of third pixels contained in the two-dimensional point cloud image; and determining, according to the fourth ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
  • the determining a transformation relationship from the three-dimensional points in the three-dimensional point cloud to the reference points in the reference plan according to the degree of consistency determined after the at least one similarity transformation includes: determining, according to that degree of consistency, the two-dimensional transformation matrix that matches the two-dimensional points in the two-dimensional point cloud image to the reference points in the reference plan; and determining, based on the two-dimensional transformation matrix, the transformation relationship from the three-dimensional points in the three-dimensional point cloud to the reference points in the reference plan.
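One natural way to turn the recovered 2D similarity into a 3D transformation is to embed it in a 4x4 matrix that acts on the x-y (horizontal) components and leaves heights un-rotated. The embedding below is a sketch under that assumption, not the patent's stated construction.

```python
import numpy as np

def lift_similarity_to_3d(s, theta, tx, ty):
    """Embed the 2D similarity (scale s, rotation theta, translation
    (tx, ty)) found between the cloud image and the plan into a 4x4
    homogeneous transform acting on the x-y plane. Heights are scaled by s
    so the whole cloud stays in one metric scale, but are not rotated."""
    c, si = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = s * np.array([[c, -si], [si, c]])
    T[2, 2] = s
    T[0, 3], T[1, 3] = tx, ty
    return T

# A point at (1, 0, 2) under scale 2, rotation 90 degrees, shift (5, 0):
T = lift_similarity_to_3d(2.0, np.pi / 2, 5.0, 0.0)
p = T @ np.array([1.0, 0.0, 2.0, 1.0])
print(np.round(p[:3], 6))
```

Applying this transform to every 3D point and dropping the z component yields exactly the projection coordinates in the plan's reference coordinate system.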
  • a positioning method, including: acquiring target image information collected by an image acquisition device for a target object; comparing the collected target image information with the three-dimensional points in a three-dimensional point cloud, where the three-dimensional point cloud represents the three-dimensional spatial information of the target object, the three-dimensional points in the three-dimensional point cloud correspond to projection coordinates, the projection coordinates are determined based on the degree of consistency between a two-dimensional point cloud image and a reference plan, the two-dimensional point cloud image is generated by projecting the three-dimensional point cloud onto the horizontal plane, and the reference plan is a projection map with reference coordinates representing the target object projected onto the horizontal plane; and positioning the image acquisition device according to the projection coordinates corresponding to the three-dimensional points matched by the target image information.
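Once the cloud-to-plan transform has been stored, positioning reduces to a coordinate change: estimate the camera's position in the point-cloud frame (e.g. by feature matching plus PnP against the 3D points, which is out of scope here) and map it into the plan. The function name and flow below are an illustrative sketch of that final step only.

```python
import numpy as np

def locate_on_plan(camera_center_cloud, T_cloud_to_plan):
    """Map a camera position estimated in the point-cloud frame into the
    reference plan's coordinate system, using the 4x4 transform recovered
    by the cloud-to-plan matching step. Returns 2D plan coordinates."""
    p = np.append(np.asarray(camera_center_cloud, dtype=float), 1.0)
    q = T_cloud_to_plan @ p
    return q[:2]   # the z (height) component is discarded for positioning

# With an identity rotation and a (10, 20) offset between the frames,
# a camera at (1, 2, 1.5) lands at plan coordinates (11, 22).
T = np.eye(4)
T[0, 3], T[1, 3] = 10.0, 20.0
xy = locate_on_plan([1.0, 2.0, 1.5], T)
print(xy)
```

Because the plan carries real reference coordinates, this final 2D point is directly meaningful to a user, unlike the raw point-cloud coordinates.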
  • an information processing device, including: an acquisition module for acquiring three-dimensional point information of a three-dimensional point cloud; a generating module for generating, based on the three-dimensional point information, a two-dimensional point cloud image projected from the three-dimensional point cloud onto a horizontal plane; and a determining module for determining, based on the degree of consistency between the two-dimensional point cloud image and a reference plan, the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan, where the reference plan is a projection map with reference coordinates representing the target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object.
  • the device further includes: a pose acquisition module for acquiring at least two pieces of pose information recorded in the process of acquiring image information by the image acquisition device, where the image information is used to construct the three-dimensional point cloud; and a plane determination module for determining the horizontal plane of the three-dimensional point cloud projection according to the at least two pieces of pose information of the image acquisition device.
  • the pose information includes orientation information and position information;
  • the plane determination module is specifically configured to: determine, according to at least two pieces of position information of the image acquisition device, the displacement between any two positions of the device in the process of collecting image information; and determine the horizontal plane of the three-dimensional point cloud projection according to at least two pieces of orientation information of the device and the displacement between the two positions.
  • the image acquisition device satisfies at least one of the following preset conditions: its horizontal axis remains parallel to the horizontal plane of the three-dimensional point cloud projection while it acquires image information; and its height above the ground varies within a preset height range while it acquires image information.
  • the generating module is specifically configured to: determine at least one plane contained in the three-dimensional point cloud according to the three-dimensional point information of the three-dimensional point cloud; determine the three-dimensional points to be filtered out according to the number of three-dimensional points contained in each of the at least one plane and the normal direction of each plane; delete the three-dimensional points to be filtered out from the three-dimensional point cloud to obtain the remaining three-dimensional points; and project the remaining three-dimensional points onto the horizontal plane according to their three-dimensional point information to generate the two-dimensional point cloud image of the three-dimensional point cloud projection.
  • the generating module is specifically configured to: determine, according to the number of three-dimensional points contained in each of the at least one plane, a first plane that contains the largest number of three-dimensional points, the number being greater than a first threshold; judge whether the normal direction of the first plane is perpendicular to the horizontal plane; and, if it is, determine the three-dimensional points contained in the first plane as the three-dimensional points to be filtered out.
  • the three-dimensional point information includes a three-dimensional coordinate vector; the generating module is specifically configured to: determine the two-dimensional point cloud according to the three-dimensional coordinate vectors of the three-dimensional point cloud and the horizontal plane of the projection; determine the rotation angle of the two-dimensional point cloud according to its positional relationship with the coordinate axis; and rotate the two-dimensional point cloud by the rotation angle to obtain the two-dimensional point cloud image projected from the three-dimensional point cloud onto the horizontal plane.
  • the generating module is specifically configured to: determine at least one straight line contained in the two-dimensional point cloud according to the coordinate information of the two-dimensional point cloud, where each of the at least one straight line contains more than a second threshold of two-dimensional points; count the number of two-dimensional points contained in each straight line and sort the straight lines by that number to obtain a sorting result; successively take the current straight line from the at least one straight line according to the sorting result and determine the number of straight lines perpendicular to the current straight line; and, when the number of perpendicular straight lines is greater than a third threshold, determine the current straight line as the target straight line that satisfies the straight-line condition.
  • the determining module is specifically configured to: perform at least one similarity transformation on the two-dimensional point cloud image; determine, after each similarity transformation, the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points in the reference plan; determine, according to the degree of consistency determined after the at least one similarity transformation, the transformation relationship from the three-dimensional points in the three-dimensional point cloud to the reference points in the reference plan; and, based on that transformation relationship, project the three-dimensional point cloud onto the reference plan to obtain the projection coordinates of the three-dimensional point cloud in the reference coordinate system of the reference plan.
  • the determining module is specifically configured to: determine a transformation range for the similarity transformation of the two-dimensional point cloud image; and perform at least one similarity transformation on the two-dimensional point cloud image within the transformation range.
  • the similarity transformation includes a translation transformation; the determining module is specifically configured to: for the two-dimensional point cloud image after each translation transformation, down-sample the image a preset number of times to obtain a first sampled image after each down-sampling; and, in descending order of the number of down-samplings, determine for each first sampled image the degree of consistency between the two-dimensional points in the first sampled image and the reference points in the correspondingly down-sampled reference plan.
  • the determining module is specifically configured to: for the two-dimensional point cloud image after each similarity transformation, traverse the first pixels of the two-dimensional point cloud image, where a first pixel is a pixel constituting a two-dimensional point in that image; determine the first image area in the reference plan corresponding to each first pixel; when a second pixel representing a reference point exists in the first image area, determine the first pixel as a first target pixel; determine a first ratio of the number of first target pixels to the number of first pixels contained in the two-dimensional point cloud image; and determine, according to the first ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
  • the determining module is specifically configured to: after each similarity transformation of the two-dimensional point cloud image, traverse the second pixels of the reference plan, where a second pixel is a pixel constituting a reference point in the reference plan; determine the second image area in the two-dimensional point cloud image corresponding to each second pixel; when a first pixel representing a two-dimensional point exists in the second image area, determine the second pixel as a second target pixel; determine a second ratio of the number of second target pixels to the number of second pixels contained in the reference plan; and determine, according to the second ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
  • the determining module is specifically configured to: after each similarity transformation of the two-dimensional point cloud image, determine the first pixels located in the non-enclosed area of the two-dimensional point cloud image, where a first pixel is a pixel constituting a two-dimensional point in that image; determine a third ratio of the number of first pixels located in the non-enclosed area to the number of first pixels contained in the two-dimensional point cloud image; and determine, according to the third ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
  • the determining module is specifically configured to: after each similarity transformation of the two-dimensional point cloud image, determine the third pixels onto which the image acquisition device is projected in the two-dimensional point cloud image, according to the pose information recorded in the process of acquiring the image information by the image acquisition device; determine a fourth ratio of the number of third pixels located in the non-enclosed area to the number of third pixels contained in the two-dimensional point cloud image; and determine, according to the fourth ratio, the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation.
  • the determining module is specifically configured to: determine, according to the degree of consistency determined after the at least one similarity transformation, the two-dimensional transformation matrix that matches the two-dimensional points in the two-dimensional point cloud image to the reference points in the reference plan; and determine, based on the two-dimensional transformation matrix, the transformation relationship from the three-dimensional points in the three-dimensional point cloud to the reference points in the reference plan.
  • a positioning device, including: an acquisition module for acquiring target image information collected by an image acquisition device for a target object; a comparison module for comparing the target image information with the three-dimensional points in a three-dimensional point cloud, where the three-dimensional point cloud represents the three-dimensional spatial information of the target object, the three-dimensional points in the three-dimensional point cloud correspond to projection coordinates, the projection coordinates are determined based on the degree of consistency between a two-dimensional point cloud image and a reference plan, the two-dimensional point cloud image is generated by projecting the three-dimensional point cloud onto the horizontal plane, and the reference plan is a projection map with reference coordinates representing the target object projected onto the horizontal plane; and a positioning module for positioning the image acquisition device according to the projection coordinates corresponding to the three-dimensional points matched by the target image information.
  • an electronic device including: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to execute the above-mentioned information processing method.
  • a computer-readable storage medium having computer program instructions stored thereon, and when the computer program instructions are executed by a processor, the foregoing information processing method is implemented.
  • a computer program, where the computer program includes computer-readable code, and when the computer-readable code runs in an electronic device, a processor in the electronic device executes it to implement some or all of the steps of any method in the first aspect of the embodiments of the present disclosure.
  • three-dimensional point information of a three-dimensional point cloud can be obtained, and based on the three-dimensional point information, a two-dimensional point cloud image projected from the three-dimensional point cloud onto a horizontal plane can be generated, so that the three-dimensional point cloud is converted into a two-dimensional point cloud image. Then, based on the degree of consistency between the two-dimensional point cloud image and the reference plan, the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan can be determined, where the reference plan is used to represent a projection map with reference coordinates of the target object projected on the horizontal plane, and the three-dimensional point cloud is used to represent the three-dimensional space information of the target object.
  • the 3D point cloud can be automatically matched to the reference plan, so that the 3D point cloud can be correctly marked on the reference plan, and the efficiency and accuracy of matching the 3D point cloud to the reference plan can be improved.
  • the user's position in the reference coordinate system can be determined, and the user can be positioned.
  • Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present disclosure.
  • Fig. 2 shows a block diagram of a two-dimensional point cloud image of a three-dimensional point cloud projection according to an embodiment of the present disclosure.
  • Fig. 3 shows a block diagram of a projection image of a three-dimensional point cloud in a reference coordinate system according to an embodiment of the present disclosure.
  • Fig. 4 shows a flowchart of a positioning method according to an embodiment of the present disclosure.
  • FIG. 5 shows a block diagram of an information processing device according to an embodiment of the present disclosure.
  • Fig. 6 shows a block diagram of a positioning device according to an embodiment of the present disclosure.
  • Fig. 7 shows a block diagram of an example of an electronic device according to an embodiment of the present disclosure.
  • the term "at least one" in this document means any one of multiple items or any combination of at least two of them; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set formed by A, B, and C.
  • numerous specific details are given in the following specific embodiments. Those skilled in the art should understand that the present disclosure can also be implemented without some specific details. In some instances, the methods, means, elements, and circuits well-known to those skilled in the art have not been described in detail in order to highlight the gist of the present disclosure.
  • the information processing solution provided by the embodiments of the present disclosure can obtain the three-dimensional point information of the three-dimensional point cloud obtained by three-dimensional reconstruction, and then use the three-dimensional point information of the three-dimensional point cloud to generate a two-dimensional point cloud image projected from the three-dimensional point cloud to the horizontal plane.
  • the degree of consistency between the generated two-dimensional point cloud image and the reference plan is used to determine the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan, so that the three-dimensional point cloud in the coordinate system of the image acquisition device can be transformed into the reference coordinate system of the reference plan, the 3D points contained in the 3D point cloud are automatically matched to the corresponding positions of the reference plan, and the 2D points projected from the 3D points are aligned with the reference points representing the same target object in the reference plan.
  • the reference plan is used to represent the projection image of the target object on the horizontal plane with reference coordinates.
  • the three-dimensional point cloud is used to represent the three-dimensional spatial information of the target object.
  • in the related art, the 3D points of the 3D point cloud are matched to the reference plan by manual means, for example to an indoor map: using visual clues such as shape, boundary, and corner, the scale, rotation, and translation of the 3D point cloud are adjusted manually to align it with the reference plan.
  • This method has low efficiency and is not conducive to handling large-scale tasks.
  • in contrast, the information processing solution provided by the embodiments of the present disclosure can automatically match the 3D points in the 3D point cloud to the reference plan according to the degree of consistency between the 2D point cloud image corresponding to the 3D point cloud and the reference plan, which not only saves a great deal of manpower and improves matching efficiency, but also improves the accuracy of matching the 3D point cloud to the reference plan.
  • the information processing solution provided by the embodiments of the present disclosure can be applied to any scene where three-dimensional points are projected onto a plane, for example, a three-dimensional point cloud corresponding to an indoor scene of a large building is automatically matched to a plan view of a building. It can also be applied to scenes that use three-dimensional points for positioning and navigation. For example, users can obtain three-dimensional point information from images taken by devices such as mobile phones to estimate the user's position in the current scene to achieve visual positioning.
  • the information processing solution provided by the present disclosure will be described below through embodiments.
  • Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present disclosure.
  • the information processing method can be executed by a terminal device, a server, or other information processing device, where the terminal device can be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the information processing method may be implemented by a processor invoking computer-readable instructions stored in the memory.
  • the information processing method in the embodiments of the present disclosure will be described below by taking an information processing device as an example.
  • the information processing method includes the following steps:
  • Step S11 acquiring three-dimensional point information of the three-dimensional point cloud.
  • the information processing device may obtain a three-dimensional reconstructed three-dimensional point cloud, and obtain three-dimensional point information of the three-dimensional point cloud.
  • the three-dimensional point cloud may be a set of three-dimensional points formed by a plurality of three-dimensional points, and the three-dimensional points in the set may be obtained by the image acquisition device collecting image information of a certain scene.
  • the three-dimensional point information of the three-dimensional point cloud may include the position information of the three-dimensional point.
  • the position information may be the position information of the three-dimensional point in the coordinate system of the image acquisition device, which may be expressed as the three-dimensional coordinates in the coordinate system of the image acquisition device, or expressed as The three-dimensional vector in the coordinate system of the image acquisition device.
  • the three-dimensional point cloud can be used to represent the three-dimensional space information where the target object is located, for example, the three-dimensional space information of a certain scene where the target object is located.
  • the target object may be any object existing in the scene, for example, fixed objects such as walls, columns, tables and chairs, signs, buildings, etc.
  • the target object may be moving objects such as vehicles and pedestrians.
  • the three-dimensional points contained in the three-dimensional point cloud can be obtained from image information collected by one or more imaging devices.
  • the image acquisition device can photograph the target object in the scene from different angles, and the image information of the target object captured by the image acquisition device can be used to form three-dimensional points corresponding to the target object; multiple three-dimensional points form the three-dimensional point cloud of the scene.
  • the formed three-dimensional points have corresponding coordinates in the three-dimensional space coordinate system, so that the three-dimensional points of the target object, arranged in the three-dimensional space coordinate system according to the corresponding coordinates, form a three-dimensional model; this three-dimensional model is the three-dimensional point cloud.
  • Step S12 based on the three-dimensional point information, generate a two-dimensional point cloud image in which the three-dimensional point cloud is projected onto a horizontal plane.
  • the three-dimensional point cloud can be projected on a horizontal plane based on the acquired three-dimensional point information of the three-dimensional point cloud.
  • the horizontal plane here may be a virtual plane determined according to the shooting plane where the image acquisition device is located during shooting. Projecting the three-dimensional point cloud on the horizontal plane can generate a two-dimensional point cloud image after the three-dimensional point cloud projection.
  • the shooting plane of the image acquisition device during the shooting process can be determined according to the pose information of the image acquisition device, and the horizontal plane of the projection can then be determined according to the shooting plane, so that the three-dimensional point cloud can be projected onto that horizontal plane to generate the two-dimensional point cloud image of the three-dimensional point cloud.
  • the horizontal plane here can be the plane in the coordinate system of the image acquisition device, which can be the same as or different from the horizontal plane in the real three-dimensional space.
  • in this way, the three-dimensional point cloud is projected in the coordinate system of the image acquisition device to generate a two-dimensional point cloud image after the projection.
  • Fig. 2 shows a block diagram of a two-dimensional point cloud image of a three-dimensional point cloud projection according to an embodiment of the present disclosure.
  • as shown in Fig. 2, the horizontal plane of the 3D point cloud projection is not perpendicular to the Z axis of the real 3D space. After the three-dimensional point cloud is projected onto this horizontal plane, the two-dimensional point cloud image of the three-dimensional point cloud can be obtained.
  • Step S13 based on the degree of consistency between the two-dimensional point cloud image and the reference plan, determine the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan, wherein the reference plan uses To represent a projection map with reference coordinates projected on the horizontal plane of the target object, the three-dimensional point cloud is used to represent the three-dimensional space information of the target object.
  • the degree of consistency between the two-dimensional point cloud image and the reference plan can be understood as the degree of mutual matching, within the same image area, between the two-dimensional points in the two-dimensional point cloud image and the reference points in the reference plan.
  • the similarity transformation matching the two-dimensional point cloud image to the reference plan can be determined, and then, based on the determined similarity transformation, the two-dimensional point cloud image can be aligned with the reference plan.
  • further, the projection transformation of the three-dimensional point cloud into the reference coordinate system of the reference plan can be determined, so that the three-dimensional point cloud can be projected onto the reference plan through this projection transformation to obtain the projected image of the three-dimensional point cloud in the reference coordinate system.
  • here, the similarity transformation refers to the transformation relationship from the two-dimensional point cloud image to the reference plan.
  • the similarity transformation matching the two-dimensional point cloud image to the reference plan may include, but is not limited to, image transformations such as rotation, translation, and scaling of the two-dimensional point cloud image.
  • the two-dimensional point cloud image can be matched to the corresponding position of the reference plan, so that the two-dimensional point representing a certain target object in the two-dimensional point cloud image is aligned with the reference point representing the target object in the reference plan.
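As a hedged illustration of the similarity transformation described above (rotation, translation, and scaling of the two-dimensional point cloud image), a minimal NumPy sketch might look as follows; the function name and parameters are illustrative assumptions, not part of the disclosed method:

```python
import numpy as np

def similarity_transform(points, scale, theta, tx, ty):
    """Apply a 2D similarity transform (scale, rotation by theta, translation).

    points: (N, 2) array of 2D point-cloud coordinates.
    Returns the transformed (N, 2) array.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])          # 2D rotation matrix
    return scale * points @ R.T + np.array([tx, ty])
```

Candidate transforms of this form can then be scored by how well the transformed two-dimensional points land on the reference points of the plan, i.e., by the degree of consistency.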
  • the reference plan may be a plan view of the target object projected to a horizontal plane, for example, a plan view of a building, a two-dimensional map of surveying and mapping, and the like.
  • prominent structures such as walls and columns can be used to automatically match the three-dimensional point cloud to the reference plan.
  • the reference plan here can be a simplified plan, that is, the reference plan can retain only the reference points or reference line segments that indicate significant structures such as walls and columns.
  • the pixel value of the retained reference points or reference line segments can be set to 1, and the other pixels can be set to 0, which simplifies the reference plan.
  • Fig. 3 shows a block diagram of a projection image of a three-dimensional point cloud in a reference coordinate system according to an embodiment of the present disclosure.
  • the two-dimensional point cloud image is automatically aligned with the reference plane image.
  • the three-dimensional point cloud can be projected on the reference plane according to the degree of consistency between the two-dimensional point cloud image and the reference plane, so that the three-dimensional point cloud in the coordinate system of the image acquisition device can be automatically transformed Going to the reference coordinate system of the reference plan can save a lot of manpower and improve the matching efficiency.
  • the embodiment of the present disclosure provides a possible implementation for generating the horizontal plane of the three-dimensional point cloud projection.
  • the above information processing method also includes the following steps:
  • Step S121 acquiring at least two pose information in the process of acquiring image information by the image acquisition device, where the image information is used to construct the three-dimensional point cloud;
  • Step S122 Determine the horizontal plane of the three-dimensional point cloud projection according to the at least two pose information of the image acquisition device.
  • since the coordinate system of the image acquisition device may be different from the coordinate system of the actual three-dimensional space, the horizontal plane onto which the three-dimensional point cloud is projected can be determined first.
  • the coordinate system of the image acquisition device may be a coordinate system established on the plane where the image sensor of the image acquisition device is located.
  • the coordinate system of the actual three-dimensional space may be the world coordinate system.
  • the pose information corresponding to at least two moments in the shooting process of the image acquisition device can be acquired, where the pose information at each moment is one piece of pose information.
  • alternatively, the pose information of two image acquisition devices during the shooting process may be acquired, where the pose information of each image acquisition device is one piece of pose information.
  • the pose information may include position information and orientation information of the image acquisition device, where the position information may be the position in the coordinate system of the image acquisition device.
  • the shooting plane of the image acquisition device can be determined, and the horizontal plane of the three-dimensional point cloud projection can be determined according to the shooting plane.
  • the three-dimensional point information of the three-dimensional point cloud can be projected under the coordinate system of the horizontal plane to generate a two-dimensional point cloud image of the three-dimensional point cloud.
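Once the horizontal plane is known, projecting the three-dimensional point information into its coordinate system reduces to a change of basis. A sketch under the assumption that the plane is spanned by two unit basis vectors (names are illustrative):

```python
import numpy as np

def project_to_plane(points3d, x_dir, y_dir):
    """Project (N, 3) points onto the plane spanned by unit vectors
    x_dir and y_dir, returning (N, 2) plane coordinates."""
    basis = np.stack([x_dir, y_dir], axis=1)  # shape (3, 2)
    return points3d @ basis                   # dot with each basis vector
```

For example, with `x_dir` and `y_dir` equal to the world x- and y-axes, the projection simply drops the z coordinate.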
  • step S122 may include the following steps:
  • Step S1221 Determine the displacement between any two positions of the image acquisition device in the process of acquiring image information according to at least two position information of the image acquisition device;
  • Step S1222 Determine the horizontal plane of the three-dimensional point cloud projection according to the at least two orientation information of the image acquisition device and the displacement between any two positions.
  • generally, the horizontal plane of the three-dimensional point cloud projection is parallel to the horizontal axis of the image acquisition device, and parallel to the plane on which the image acquisition device moves. Therefore, the orientations corresponding to the at least two pieces of orientation information of the image acquisition device are parallel to the horizontal plane, and the displacement determined from the at least two pieces of position information of the image acquisition device is parallel to the horizontal plane. Accordingly, the displacement between any two positions of the image acquisition device in the process of collecting image information can be determined according to the at least two pieces of position information, and the horizontal plane of the three-dimensional point cloud projection can then be determined according to the parallel relationship between the horizontal plane and the orientations and displacements of the image acquisition device.
  • the above-mentioned image acquisition device satisfies at least one of the following preset basic conditions: the horizontal axis of the image acquisition device during image information collection is parallel to the horizontal plane of the three-dimensional point cloud projection; the collection height of the image acquisition device during image information collection changes within a preset height range.
  • the horizontal axis of the image acquisition device being parallel to the horizontal plane of the 3D point cloud projection during image information collection can indicate that the image acquisition device is held level when photographing the image information used to reconstruct the 3D point cloud, that is, the x-axis or y-axis of the coordinate system of the image acquisition device, determined based on the orientation of the image acquisition device, is parallel to the horizontal plane of shooting.
  • the center of the imaging plane of the image acquisition device can be regarded as the origin of the image acquisition device coordinate system, and the direction perpendicular to the imaging plane of the image acquisition device and passing through the above origin can be regarded as the z axis of the image acquisition device coordinate system.
  • Any two perpendicular directions of the plane where the imaging plane is located can be used as the x-axis or y-axis of the coordinate system of the image acquisition device.
  • the acquisition height of the image acquisition device in the process of acquiring image information changes within the preset height range, which can indicate that the height of the image acquisition device can be approximately fixed, so that the displacement of the image acquisition device is parallel to the horizontal plane.
  • the horizontal plane of the three-dimensional point cloud projection can be determined according to at least one of the aforementioned basic conditions. That is, when the horizontal axis of the image acquisition device is parallel to the horizontal plane of the three-dimensional point cloud projection, the horizontal plane can be determined from the plane formed by the horizontal axes at at least two moments during image collection. Alternatively, when the collection height of the image acquisition device changes within the preset height range, the horizontal plane can be determined from the displacement vectors in different directions during image collection. In this way, even if the coordinate system of the image acquisition device in which the 3D point cloud is located differs from the world coordinate system, the horizontal plane of the three-dimensional point cloud projection can still be determined quickly, providing the projection conditions for generating the two-dimensional point cloud image projected onto the horizontal plane.
  • the x-axis direction of the horizontal plane of the three-dimensional point cloud projection can be determined based on the above basic conditions, and then the y-axis and z-axis directions of the horizontal plane can be determined.
  • assume A is a matrix determined based on the orientations of the image acquisition device, where each row of A represents the transpose of the x-axis direction vector of the image acquisition device at one capture time; the x-axis direction vector at a capture time can be obtained from the first row vector of the rotation matrix R, i.e., as R^T (1,0,0)^T.
  • assume B is a matrix obtained based on the displacements of the image acquisition device between pairs of capture times, where only displacements larger than a certain threshold are kept; the threshold can be set to 0.2 times the maximum displacement between any two capture times of the image acquisition device, so as to filter out displacements that are too small.
  • n is the normal vector of the horizontal plane.
  • formula (1), which can be written as the homogeneous system An = 0, Bn = 0, can indicate that the normal direction n of the horizontal plane to be obtained is perpendicular to the x-axis of the image acquisition device and, at the same time, perpendicular to the displacements of the image acquisition device.
  • the above formula (1) can be used to obtain the least square solution of n through Singular Value Decomposition (SVD).
  • the x-axis direction and the y-axis direction of the horizontal plane of the three-dimensional point cloud projection can also be determined.
  • the other two singular vectors in the above V, namely V1 and V2, can be used as the x-axis direction vector and the y-axis direction vector of the projection horizontal plane, respectively.
  • the coordinate system where the horizontal plane of the three-dimensional point cloud projection is located can be determined, so that the three-dimensional point cloud can be further projected to the determined horizontal plane.
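Under the assumption that formula (1) stacks the camera x-axis rows A and the displacement rows B into one homogeneous system [A; B] n = 0, the least-squares normal and the in-plane axes can be obtained via SVD. A sketch (function and variable names are illustrative):

```python
import numpy as np

def fit_projection_plane(x_axes, displacements):
    """Solve [A; B] n = 0 in the least-squares sense via SVD.

    x_axes:        (K, 3) rows, camera x-axis directions (matrix A).
    displacements: (M, 3) rows, displacements between positions (matrix B).
    Returns (n, x_dir, y_dir): plane normal and the two in-plane axes.
    """
    M = np.vstack([x_axes, displacements])  # each row should be ⟂ n
    _, _, Vt = np.linalg.svd(M)
    n = Vt[-1]                  # right singular vector of smallest singular value
    x_dir, y_dir = Vt[0], Vt[1]  # remaining singular vectors span the plane
    return n, x_dir, y_dir
```

NumPy returns singular values in descending order, so the last row of `Vt` minimizes ||M n|| over unit vectors, matching the least-squares solution described above.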
  • step S12 may include the following steps:
  • Step S123 Determine at least one plane included in the three-dimensional point cloud according to the three-dimensional point information of the three-dimensional point cloud;
  • Step S124 determining the to-be-filtered three-dimensional points in the three-dimensional point cloud according to the number of three-dimensional points included in each plane in the at least one plane and the normal direction of each plane;
  • Step S125 deleting the three-dimensional points to be filtered from the three-dimensional point cloud to obtain the remaining three-dimensional points of the three-dimensional point cloud;
  • Step S126 Project the remaining three-dimensional points on the horizontal plane according to the three-dimensional point information of the remaining three-dimensional points to generate a two-dimensional point cloud image of the three-dimensional point cloud projection.
  • identifying structures such as walls and columns can be used to match the three-dimensional point cloud to the reference plan.
  • the three-dimensional point cloud can include three-dimensional points of any one or more target objects in the indoor scene. If all the 3D points in the 3D point cloud are projected onto the horizontal plane, the 3D points corresponding to non-identifying structures will also be projected, interfering with the 2D points generated by projecting the 3D points of the identifying structures onto the horizontal plane and making it harder to distinguish the two-dimensional points that represent landmark structures such as walls and columns.
  • therefore, some three-dimensional points in the three-dimensional point cloud can be filtered out; for example, the three-dimensional points representing objects such as the ceiling and the ground can be removed from the three-dimensional point cloud, which greatly reduces the number of three-dimensional points corresponding to non-identifying structures.
  • one or more planes formed by the 3D point cloud can be determined according to the position information in the 3D point information; the number of 3D points included in each formed plane can then be counted, and the normal direction of each plane obtained.
  • in this way, the planes of the ceiling, the ground, and other such objects can be determined, and the three-dimensional points included in those planes can be determined as the three-dimensional points to be filtered, so that these three-dimensional points are filtered out of the three-dimensional point cloud to obtain the remaining three-dimensional points. Then, according to the position information of the remaining three-dimensional points, the remaining three-dimensional points can be projected onto the horizontal plane to generate the two-dimensional point cloud image of the three-dimensional point cloud projection.
  • the above step S124 may include: determining, according to the number of three-dimensional points included in each plane of the at least one plane, a first plane whose number of three-dimensional points is the largest among the at least one plane and greater than a first threshold; judging whether the normal direction of the first plane is perpendicular to the horizontal plane; and, in the case that the normal direction of the first plane is perpendicular to the horizontal plane, determining the three-dimensional points included in the first plane as the three-dimensional points to be filtered out.
  • that is, according to the number of three-dimensional points included in each plane, the first plane with the largest number of three-dimensional points, greater than the first threshold, can be determined among the one or more planes formed by the three-dimensional points in the three-dimensional point set. It can then be judged whether the normal direction of the first plane is perpendicular to the horizontal plane; if it is, the first plane can be considered to represent the plane where the ceiling or the ground is located, and the three-dimensional points included in the first plane are the three-dimensional points to be filtered out of the three-dimensional point cloud.
  • further, the three-dimensional points on the first plane can be transferred from the above three-dimensional point set to a reserved three-dimensional point set to obtain the remaining three-dimensional points in the three-dimensional point set, and the step of determining the first plane with the largest number of three-dimensional points greater than the first threshold among the planes formed by the three-dimensional points in the set can be repeated, until the number of three-dimensional points in the three-dimensional point set is less than or equal to a preset remaining-number threshold.
  • the remaining three-dimensional points may then be composed of the three-dimensional points in the reserved three-dimensional point set and the three-dimensional points remaining in the three-dimensional point set.
  • the first threshold can be set according to actual application scenarios.
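The iterative filtering described above can be sketched as follows. This is a hedged illustration, not the disclosed implementation: `detect_plane` stands in for an external plane-detection routine (e.g., RANSAC), which the sketch takes as a parameter, and all names and thresholds are assumptions.

```python
import numpy as np

def filter_horizontal_planes(points, detect_plane, min_inliers, min_left,
                             up=np.array([0.0, 0.0, 1.0]), ang_tol=0.1):
    """Iteratively remove ceiling/floor-like planes from a point cloud.

    points: (N, 3) array.  detect_plane(points) -> (inlier_mask, normal)
    is an assumed plane-detection routine returning the largest plane.
    Planes whose normal is (nearly) parallel to `up` — i.e. perpendicular
    to the projection plane — are filtered out; other planes are reserved.
    """
    reserved = np.empty((0, 3))
    while len(points) > min_left:
        mask, normal = detect_plane(points)
        if mask.sum() <= min_inliers:
            break                         # no sufficiently large plane left
        plane_pts = points[mask]
        points = points[~mask]
        # keep the plane only if its normal is not vertical (not ceiling/floor)
        if abs(normal @ up) < 1.0 - ang_tol:
            reserved = np.vstack([reserved, plane_pts])
    return np.vstack([reserved, points])  # reserved planes + leftover points
```

The loop mirrors the description: repeatedly extract the largest plane, discard it if its normal is perpendicular to the horizontal plane (ceiling or ground), otherwise move its points to the reserved set, stopping when too few points remain.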
  • step S12 may include the following steps:
  • Step S12a Determine the coordinate information of the two-dimensional point cloud projected by the three-dimensional point cloud on the horizontal plane according to the three-dimensional coordinate vector of the three-dimensional point cloud and the horizontal plane of the projection;
  • Step S12b according to the coordinate information of the two-dimensional point cloud, determine the target straight line that meets the straight line condition included in the two-dimensional point cloud;
  • Step S12c determining the rotation angle of the two-dimensional point cloud according to the positional relationship between the target straight line and the coordinate axis of the horizontal plane;
  • Step S12d Rotate the two-dimensional point cloud according to the rotation angle to obtain a two-dimensional point cloud image projected by the three-dimensional point cloud onto the horizontal plane.
  • the three-dimensional point information may include a three-dimensional coordinate vector
  • the three-dimensional coordinate vector may be a coordinate vector in the coordinate system of the image acquisition device.
  • the three-dimensional point cloud can be projected into the horizontal plane according to the image position of the identifying structure in the reference plan.
  • that is, based on the characteristic that identifying structures such as walls or columns in the reference plan are usually parallel to the x-axis or y-axis of the reference plan coordinate system, the three-dimensional point cloud can be projected into the horizontal plane.
  • the two-dimensional point cloud is fitted into at least one straight line, the target straight line meeting the straight line condition is determined from the fitted straight lines, and the two-dimensional points on the target straight line meeting the straight line condition are determined as two-dimensional points that represent an identifying structure.
  • then, the included angle between the target straight line and the x-axis or y-axis of the horizontal plane can be determined and used as the rotation angle of the two-dimensional point cloud; the two-dimensional point cloud is rotated by this angle so that the target straight line is parallel or perpendicular to a coordinate axis of the horizontal plane, obtaining the final two-dimensional point cloud image of the three-dimensional point cloud projection.
  • assuming the rotation angle of the two-dimensional point cloud is r_ini, the two-dimensional point cloud can be rotated by r_ini so that the target straight line is parallel to the x-axis or y-axis of the horizontal plane.
  • the extreme value of the coordinates of the two-dimensional point cloud can be determined, and the extreme value coordinates of the obtained two-dimensional point cloud can be expressed as (x l , y t ).
  • the length and width of the rectangular area where the two-dimensional point cloud is located are expressed as w and h, respectively, and the rectangular area may contain at least one two-dimensional point of the two-dimensional point cloud.
  • a two-dimensional point cloud image with a certain length can be generated.
  • the size of the two-dimensional point cloud image can be adjusted according to the resolution of the reference plan.
  • the length of the two-dimensional point cloud image can be set to the length of a certain area in the reference plan
  • the pixel value of the location of the two-dimensional point in the two-dimensional point cloud image can be set to 1
  • the pixel value of other locations can be set to 0. In this way, a two-dimensional point cloud image of the three-dimensional point cloud projection can be obtained.
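  • The rasterization described above can be sketched as follows; the helper name and the resolution value are illustrative assumptions, not taken from the disclosure:

```python
import numpy as np

def rasterize_point_cloud(points_2d, resolution=0.05):
    """Rasterize projected 2D points into a binary image.

    points_2d: (N, 2) array of 2D coordinates on the horizontal plane.
    resolution: plane units per pixel (hypothetical value).
    """
    # Extreme-value coordinates of the point cloud bound a w x h rectangle
    mins = points_2d.min(axis=0)
    maxs = points_2d.max(axis=0)
    w, h = maxs - mins
    cols = int(np.ceil(w / resolution)) + 1
    rows = int(np.ceil(h / resolution)) + 1
    image = np.zeros((rows, cols), dtype=np.uint8)
    # Pixel value 1 where a 2D point falls, 0 elsewhere
    idx = np.floor((points_2d - mins) / resolution).astype(int)
    image[idx[:, 1], idx[:, 0]] = 1
    return image

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
img = rasterize_point_cloud(pts, resolution=0.5)
```

  • The resolution parameter plays the role of matching the image size to the reference plan's resolution, as described above.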
  • determining, according to the coordinate information of the two-dimensional point cloud, the target straight line that satisfies the straight line condition included in the two-dimensional point cloud may include: determining, according to the coordinate information of the two-dimensional point cloud, at least one straight line included in the two-dimensional point cloud; counting the number of two-dimensional points contained in each of the at least one straight line, and sorting the at least one straight line according to the number of two-dimensional points to obtain a sorting result; successively acquiring the current straight line in the at least one straight line according to the sorting result, and determining the number of straight lines perpendicular to the current straight line in the at least one straight line; and in the case that the number of straight lines perpendicular to the current straight line is greater than the third threshold, determining that the current straight line is the target straight line that satisfies the straight line condition.
  • the number of two-dimensional points included in each of the at least one straight line is greater than the second threshold.
  • prominent structures such as walls and columns are usually parallel to the x-axis or the y-axis.
  • Step 1 Straight-line fitting can be performed on the two-dimensional points in the two-dimensional point set of the two-dimensional point cloud, for example using the RANSAC algorithm. Obtain the straight line containing the largest number of two-dimensional points, where that number is greater than the second threshold; put the straight line into the straight line queue, and remove the two-dimensional points on the straight line from the two-dimensional point set.
  • the largest number of two-dimensional points can be understood as the peak of the number of two-dimensional points.
  • Step 2 If the number of remaining two-dimensional points in the two-dimensional point set is greater than a certain remaining-number threshold, repeat the previous step. In this way, at least one straight line whose number of two-dimensional points is greater than the second threshold can be determined.
  • Step 3 According to the counts of two-dimensional points contained in each straight line, take the current straight line at the top of the straight line queue, that is, the straight line containing the most two-dimensional points. Calculate the angle between the current straight line and the other straight lines. If the number of straight lines perpendicular to the current straight line is greater than the third threshold, the current straight line can be considered to represent an identifying structure parallel to the x-axis or the y-axis, and it is determined to be a target straight line that meets the straight line condition. Otherwise, repeat step 3 until a target straight line meeting the straight line condition appears or the straight line queue is empty. If no straight line in the queue satisfies the straight line condition, the first straight line added to the queue, that is, the straight line containing the most two-dimensional points, can be used as the target straight line.
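  • Steps 1–3 above can be sketched as follows; the function names, thresholds, and the perpendicularity tolerance are illustrative assumptions:

```python
import numpy as np

def fit_line_ransac(points, iters=200, tol=0.05, rng=None):
    """Fit one straight line to 2D points with RANSAC.
    Returns the unit direction of the best line and its inlier mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_dir, best_mask = None, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        normal = np.array([-d[1], d[0]])
        dist = np.abs((points - points[i]) @ normal)   # point-to-line distances
        mask = dist < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_dir, best_mask = d, mask
    return best_dir, best_mask

def extract_target_line(points, second_thresh=10, third_thresh=1, min_remaining=10):
    """Steps 1-3: repeatedly peel off the largest line whose inlier count
    exceeds `second_thresh`, then return the direction of the first line in
    the queue perpendicular to more than `third_thresh` other lines."""
    queue, pts = [], points.copy()
    while len(pts) > min_remaining:
        d, mask = fit_line_ransac(pts)
        if d is None or mask.sum() <= second_thresh:
            break
        queue.append(d)                 # queue is built in descending inlier order
        pts = pts[~mask]
    perp_tol = np.cos(np.deg2rad(85))   # |cos| below this => roughly perpendicular
    for d in queue:
        n_perp = sum(1 for e in queue if abs(float(d @ e)) < perp_tol)
        if n_perp > third_thresh:
            return d
    return queue[0] if queue else None  # fall back to the largest line
```

  • For a point set containing one long wall and one shorter perpendicular wall, the returned direction is axis-aligned, which then yields the rotation angle r_ini.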
  • the embodiments of the present disclosure also provide a possible implementation manner for determining the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system.
  • the above step S13 is described below, and the above step S13 may include the following steps:
  • Step S131 Perform at least one similarity transformation on the two-dimensional point cloud image
  • Step S132 determining the degree of consistency between the two-dimensional point in the two-dimensional point cloud image and the reference point of the reference plan after each similarity transformation
  • Step S133 Determine a transformation relationship from a three-dimensional point in the three-dimensional point cloud to a reference point in the reference plan view according to the degree of consistency determined after the at least one similarity transformation;
  • Step S134 based on the transformation relationship, project the three-dimensional point cloud onto the reference plan view to obtain the projection coordinates of the three-dimensional point cloud in the reference coordinate system of the reference plan view.
  • the two-dimensional point cloud image and the reference plan may not match in size and position, so the two-dimensional point cloud image needs to undergo at least one similarity transformation, so that the images of the same object in the two-dimensional point cloud image and the reference plan are aligned.
  • the similarity transformation can include rotation, scaling and translation. After each similarity transformation, the degree of consistency between the two-dimensional point in the two-dimensional point cloud image after the similarity transformation and the reference point in the reference plan can be determined, and the similarity transformation with the highest degree of agreement may be the final similarity transformation.
  • the transformation relationship between the three-dimensional point in the three-dimensional point cloud and the reference point in the reference plan can be determined according to the final similar transformation.
  • the three-dimensional transformation relationship is determined according to the two-dimensional similarity transformation, and the three-dimensional point cloud can be matched to the reference plane according to the three-dimensional transformation relationship to obtain the projection coordinates of the three-dimensional point cloud in the reference coordinate system.
  • step S131 may include the following steps:
  • Step S1311 Determine the transformation range for similar transformation of the two-dimensional point cloud image
  • Step S1312 Perform at least one similarity transformation on the two-dimensional point cloud image within the transformation range.
  • the transformation range for similar transformation of the two-dimensional point cloud image can be determined first.
  • the transformation range of the similar transformation here can include the rotation angle, the zoom scale, and the translation interval.
  • within the transformation range, the two-dimensional point cloud image undergoes one or more similarity transformations to match the two-dimensional point cloud image with the reference plan.
  • the two-dimensional point cloud image can be rotated by the above-mentioned rotation angle r_ini.
  • since the two-dimensional points in the two-dimensional point cloud image that represent landmark structures such as walls are parallel to the x-axis or the y-axis, and the reference points of such landmark structures in the reference plan are also parallel to the x-axis or y-axis, the rotation angle of the two-dimensional point cloud image can take 4 values, that is, the rotation angle can be {0°, 90°, 180°, 270°}.
  • the scaling scale can be varied at equal intervals in the interval [0.55, 1.4], and the interval can be set to 0.05.
  • the translation interval can be set as a rectangular area around the center of the reference plan, assuming that the translation vector is (t_x, t_y),
  • the change interval of the translation vector can be 1 pixel.
  • w_f represents the width of the reference plan
  • h_f represents the height of the reference plan
  • the remaining two symbols represent the x coordinate and the y coordinate of the center of the two-dimensional point cloud image, respectively.
  • the translation interval may mean that the center of the two-dimensional point cloud image is moved to a rectangular area around the center of the reference plan, and the rectangular area is the same size as the reference plan.
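  • The search grid described above can be enumerated as in the following sketch; the generator name and the exact parameterization of the translation (moving the point-cloud-image centre over a plan-sized rectangle in 1-pixel steps) are assumptions consistent with the text:

```python
def candidate_transforms(w_f, h_f, cx, cy):
    """Yield (rotation_deg, scale, (t_x, t_y)) over the similarity search grid.
    (w_f, h_f): reference-plan size; (cx, cy): point-cloud-image centre."""
    rotations = [0.0, 90.0, 180.0, 270.0]                    # applied on top of r_ini
    scales = [round(0.55 + 0.05 * k, 2) for k in range(18)]  # 0.55 .. 1.40, step 0.05
    for r in rotations:
        for s in scales:
            # move the point-cloud-image centre (cx, cy) to every pixel of a
            # plan-sized rectangle, in 1-pixel translation steps
            for px in range(int(w_f)):
                for py in range(int(h_f)):
                    yield r, s, (px - cx, py - cy)
```

  • Each yielded triple is one candidate similarity transformation to be scored by the degree of consistency described below.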
  • the similarity transformation includes translational transformation;
  • the above step S132 may include: for the two-dimensional point cloud image after each translational transformation, performing down-sampling processing on the two-dimensional point cloud image a preset number of times to obtain the first sampled image after each down-sampling processing; for the first sampled image after each down-sampling processing, determining the degree of consistency between the two-dimensional points in the first sampled image and the reference points in the second sampled image, where the second sampled image is obtained from the reference plane image through the same down-sampling processing as the first sampled image; and determining, according to the degree of consistency between the first sampled image and the second sampled image determined after the first down-sampling processing, the degree of consistency between the two-dimensional points in the two-dimensional point cloud image after each translation and the reference points of the reference plane image.
  • a coarse-to-fine method can be used to determine the degree of consistency between the two-dimensional point cloud image after each translational transformation and the reference plan. That is, for the two-dimensional point cloud image after each translational transformation, the two-dimensional point cloud image can be down-sampled a preset number of times, and after each down-sampling processing the first sampled image corresponding to the two-dimensional point cloud image can be obtained.
  • a preset number of down-sampling processing is performed on the reference plan, and after each down-sampling processing, a second sampled image corresponding to the reference plan can be obtained.
  • Multiple first sampled images and two-dimensional point cloud images can form an image pyramid.
  • the image pyramid includes multiple layers.
  • the bottom layer can represent the two-dimensional point cloud image, and the other layers can represent the first sampled images obtained by down-sampling the two-dimensional point cloud image, for example by a maximum pooling operation. The higher the layer, the more down-sampling operations the corresponding first sampled image has undergone.
  • multiple second sampled images and reference plane images can form an image pyramid.
  • the bottom layer of the image pyramid can represent the reference plane image, and the other layers can represent the second sampled images obtained by down-sampling the reference plane image; the number of layers of the image pyramid corresponding to the reference plane image is the same as that of the image pyramid corresponding to the two-dimensional point cloud image.
  • the degree of consistency between the first sampled image and the second sampled image of each layer is determined in sequence, that is, the degree of consistency between the first sampled image and the second sampled image with the same number of down-sampling operations is determined in descending order of the number of down-sampling operations; in other words, the degree of consistency between the pixel positions in the first sampled image and the second sampled image of each layer is determined.
  • at each layer, the best 20 candidate positions can be retained, and at the next layer the degree of consistency between the first sampled image and the second sampled image is determined only within the 7×7-pixel neighborhood around each candidate position, down to the bottom layer, that is, the degree of consistency between the two-dimensional point cloud image and the reference plane image is determined. In this way, the efficiency of determining the best similarity transformation can be improved.
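  • The image pyramids described above can be sketched as follows, using the maximum pooling operation mentioned in the text; the 2×2 pooling window and the padding behaviour for odd sizes are illustrative assumptions:

```python
import numpy as np

def max_pool2(img):
    """One down-sampling step: 2x2 max pooling (zero-pads odd sizes)."""
    h, w = img.shape
    ph, pw = h + h % 2, w + w % 2
    padded = np.zeros((ph, pw), dtype=img.dtype)
    padded[:h, :w] = img
    return padded.reshape(ph // 2, 2, pw // 2, 2).max(axis=(1, 3))

def build_pyramid(img, levels=3):
    """Bottom layer is the image itself; each higher layer is pooled once more."""
    pyramid = [img]
    for _ in range(levels):
        pyramid.append(max_pool2(pyramid[-1]))
    return pyramid
```

  • Max pooling preserves 'on' pixels through the pyramid, so a coarse match at a high layer reliably narrows the translation candidates for the finer layers below it.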
  • step S132 may include the following steps:
  • Step S1321 For the two-dimensional point cloud image after each similarity transformation, traverse the first pixels of the two-dimensional point cloud image, where the first pixels are the pixels constituting the two-dimensional points in the two-dimensional point cloud image;
  • Step S1322 Determine a first image area corresponding to the first pixel in the reference plan view
  • Step S1323 in the case that there is a second pixel representing the reference point in the first image area, determining that the first pixel is a first target pixel;
  • Step S1324 determining a first ratio of the number of first target pixels included in the two-dimensional point cloud image to the number of first pixels included in the two-dimensional point cloud image;
  • Step S1325 Determine the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation according to the first ratio.
  • the degree of consistency between the two-dimensional point cloud image after each similar transformation and the reference plan can be determined.
  • the degree of consistency here may be the degree of consistency between the two-dimensional point cloud image and the reference plan in the same image area.
  • each first pixel point representing a two-dimensional point in the two-dimensional point cloud image can be traversed.
  • for any first pixel, the image position of the first pixel is determined, and then the first image area is determined at the same image position in the reference plan.
  • the neighborhood of the same image position can be used as the first image area. It is then determined whether a second pixel representing a reference point exists in the first image area. If so, the first pixel can be determined as the first target pixel, and the first ratio of the number of first target pixels in the two-dimensional point cloud image to the number of first pixels is calculated; the first ratio can be determined as the degree of consistency between the two-dimensional point cloud image and the reference plane image.
  • C_p2f can be used to indicate the degree of consistency between the two-dimensional point cloud image and the reference plane image.
  • the pixel representing the two-dimensional point may be the first pixel, and the first pixel may be considered as a meaningful pixel.
  • the pixel value of the first pixel in the two-dimensional point cloud image may be 1, and the pixel value of other pixels except the first pixel may be set to 0.
  • the same location can be adjusted to a nearby location; for example, the nearby location can be set to a neighborhood of 7×7 pixels.
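  • A possible sketch of the first-ratio computation follows; the function name, its symmetric use for the second ratio, and the radius parameter (3, giving the 7×7 neighborhood mentioned above) are assumptions:

```python
import numpy as np

def consistency(src, dst, radius=3):
    """Fraction of 'on' pixels in `src` that have an 'on' pixel of `dst`
    within a (2*radius+1)^2 neighbourhood (7x7 for radius=3).
    consistency(cloud, plan) gives the first ratio (C_p2f);
    consistency(plan, cloud) gives the second ratio (C_f2p)."""
    ys, xs = np.nonzero(src)
    if len(ys) == 0:
        return 0.0
    h, w = dst.shape
    hits = 0
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        if dst[y0:y1, x0:x1].any():   # a matching point exists nearby
            hits += 1
    return hits / len(ys)
```

  • Running the two directions and summing them gives the mutual degree of agreement used to rank candidate similarity transformations.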
  • the above step S132 may include: each time the two-dimensional point cloud image is similarity-transformed, traversing the second pixels of the reference plane image, where the second pixels are the pixels constituting the reference points in the reference plan; determining the second image area corresponding to the second pixel in the two-dimensional point cloud image; in the case that a first pixel representing a two-dimensional point is present in the second image area, determining that the second pixel is a second target pixel; determining the second ratio of the number of second target pixels contained in the reference plan to the number of second pixels contained in the reference plan; and determining, according to the second ratio, the degree of consistency between the two-dimensional point cloud image after each similarity transformation and the reference plan.
  • the degree of consistency between the two-dimensional point cloud image and the reference plan may be the degree of consistency between the reference plan and the two-dimensional point cloud image in the same image area.
  • each second pixel representing a reference point in the reference plan can be traversed. For any second pixel, the image position of the second pixel is determined, and then the second image area is determined at the same image position of the two-dimensional point cloud image; for example, the neighborhood of the same image position can be used as the second image area. It is then determined whether a first pixel representing a two-dimensional point exists in the second image area.
  • if so, the second pixel can be determined as the second target pixel, and then the second ratio of the number of second target pixels in the reference plan to the number of second pixels is calculated; the second ratio can indicate the degree of consistency between the reference plan and the two-dimensional point cloud image.
  • C_f2p can be used to indicate the degree of consistency between the reference plane image and the two-dimensional point cloud image.
  • C_p2f + C_f2p can be used to indicate the degree of mutual agreement between the two-dimensional point cloud image and the reference plane image. The greater the degree of agreement, the higher the degree of alignment between the two-dimensional point cloud image and the reference plane image.
  • step S132 may further include the following steps:
  • Step S132a after each similar transformation of the two-dimensional point cloud image, determine a first pixel in the two-dimensional point cloud image located in an unclosed area, wherein the first pixel is the two-dimensional The pixel points constituting the two-dimensional point in the point cloud image;
  • Step S132b determining a third ratio of the number of first pixels located in the non-closed area to the number of first pixels contained in the two-dimensional point cloud image;
  • Step S132c Determine the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation according to the third ratio.
  • the constraint condition of the limited area in the projection of the 3D point cloud can be considered when determining the degree of consistency between the 2D point cloud image and the reference plan. It can be understood that the three-dimensional points in the three-dimensional point cloud should not appear in certain areas, for example, they should not appear in certain enclosed image spaces. Correspondingly, the two-dimensional points of the three-dimensional point projection should not appear in some image areas.
  • the first pixels located in the non-closed area in the two-dimensional point cloud image can be counted, and then the third ratio of the number of first pixels located in the non-closed area to the number of first pixels contained in the two-dimensional point cloud image can be calculated; the third ratio can indicate the degree of consistency between the two-dimensional point cloud image and the reference plan.
  • the above step S132 may further include: after each similarity transformation of the two-dimensional point cloud image, determining, according to the pose information of the image acquisition device in the process of acquiring the image information, the third pixels onto which the image acquisition device is projected in the two-dimensional point cloud image, where the image information is used to construct the three-dimensional point cloud; determining the fourth ratio of the number of third pixels located in the non-closed area to the number of third pixels contained in the two-dimensional point cloud image; and determining, according to the fourth ratio, the degree of consistency between the two-dimensional point cloud image after each similarity transformation and the reference plan.
  • the constraint condition of the image acquisition device in the process of acquiring image information can also be considered, that is, the image acquisition device should not appear in certain spaces, for example, it should not appear in certain enclosed spaces. Accordingly, the two-dimensional points projected by the image acquisition device onto the two-dimensional point cloud image should not appear in certain areas.
  • the third pixels onto which the image acquisition device is projected in the two-dimensional point cloud image can be determined according to the pose information in the process of image acquisition by the image acquisition device; then the number of third pixels located in the non-closed area can be counted, and the fourth ratio of the number of third pixels located in the non-closed area to the number of third pixels contained in the two-dimensional point cloud image can be calculated; the fourth ratio can indicate the degree of consistency between the two-dimensional point cloud image and the reference plan.
  • one or more of the above-mentioned first ratio, second ratio, third ratio, and fourth ratio may also be considered to determine the degree of consistency between the two-dimensional point cloud image and the reference plan. The greater the degree of consistency, the higher the degree of alignment between the two-dimensional point cloud image and the reference plan.
  • the determination of the degree of consistency between the two-dimensional point cloud image and the reference plan after each similarity transformation may be based on the foregoing first ratio, second ratio, third ratio, and fourth ratio, and the corresponding expression for the degree of consistency can be shown in formula (2):
  • C = C_p2f + C_f2p + C_lc + C_lp formula (2); where C can be the degree of consistency between the two-dimensional point cloud image and the reference plan.
  • C_p2f can represent the aforementioned first ratio;
  • C_f2p can represent the aforementioned second ratio;
  • C_lc can represent the aforementioned third ratio;
  • C_lp can represent the aforementioned fourth ratio.
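  • Formula (2) and the area-constraint terms can be sketched as follows; representing the non-closed area as a boolean free-space mask is an assumption:

```python
import numpy as np

def fraction_in_free_space(pixels, free_mask):
    """Ratio of the given pixel coordinates (rows, cols) that fall in the
    non-closed (free) area; usable for both C_lc and C_lp."""
    if len(pixels) == 0:
        return 0.0
    inside = free_mask[pixels[:, 0], pixels[:, 1]]
    return float(inside.mean())

def total_consistency(c_p2f, c_f2p, c_lc, c_lp):
    # Formula (2): C = C_p2f + C_f2p + C_lc + C_lp; larger C means better alignment
    return c_p2f + c_f2p + c_lc + c_lp
```

  • C_lc would be computed over the first pixels of the two-dimensional point cloud image and C_lp over the third pixels projected from the acquisition-device poses.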
  • step S133 may include the following steps:
  • Step S1331 Determine a two-dimensional transformation matrix that matches a two-dimensional point in the two-dimensional point cloud image to a reference point in the reference plan view according to the degree of consistency determined after the at least one similarity transformation;
  • Step S1332 based on the two-dimensional transformation matrix, determine a transformation relationship from matching three-dimensional points in the three-dimensional point cloud to reference points in the reference plan view.
  • the similarity transformation with the highest degree of consistency among the at least one similarity transformation can be used as the final similarity transformation, and the two-dimensional transformation matrix matching the two-dimensional point cloud image to the reference plan can be determined according to the final similarity transformation. Then, based on the two-dimensional transformation matrix, the transformation relationship from the three-dimensional points in the three-dimensional point cloud to the reference points in the reference plan can be obtained, and the transformation relationship can be characterized by the three-dimensional transformation matrix. For example, the rotation angle corresponding to the similarity transformation with the highest degree of consistency can be r_best, and the scaling scale can be s_best.
  • r_best may already include the initial rotation angle r_ini
  • s_best may already include the initial scaling scale s_ini.
  • the best translation under this rotation angle and zoom scale may be t_best.
  • a two-dimensional transformation matrix for the similarity transformation from the two-dimensional points in the two-dimensional point cloud to the reference points in the reference plan can be obtained, and the two-dimensional transformation matrix S_2D can be as shown in formula (3):
  • R(r_best) can represent a 2×2 rotation matrix with a rotation angle of r_best.
  • the three-dimensional transformation matrix can be obtained from the two-dimensional similarity matrix.
  • the three-dimensional transformation matrix S_3D can be as shown in formula (4):
  • R_z(r_best) can represent a three-dimensional rotation matrix that rotates by r_best with the z axis as the rotation axis
  • V can be the singular vector matrix in step S1222, and the three column vectors V_1, V_2 and V_3 of V can be, respectively, the x-axis, y-axis and z-axis of the projection horizontal plane.
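  • Since formulas (3) and (4) themselves are not reproduced in this text, the following sketch assumes the standard homogeneous form of a 2D similarity transform and one plausible lifting to 3D using R_z(r_best) and the singular-vector matrix V; treat both constructions as assumptions rather than the disclosed matrices:

```python
import numpy as np

def similarity_2d(r_best, s_best, t_best):
    """Assumed form of formula (3): a 3x3 homogeneous matrix combining
    rotation R(r_best), scale s_best and translation t_best."""
    c, s = np.cos(r_best), np.sin(r_best)
    S = np.eye(3)
    S[:2, :2] = s_best * np.array([[c, -s], [s, c]])
    S[:2, 2] = t_best
    return S

def lift_to_3d(r_best, s_best, t_best, V):
    """Plausible form of formula (4): rotate about the z axis by r_best in
    the plane spanned by V's columns (the projection axes from step S1222),
    scale, and translate in x/y. V is the 3x3 singular-vector matrix."""
    c, s = np.cos(r_best), np.sin(r_best)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    S = np.eye(4)
    S[:3, :3] = s_best * Rz @ V.T      # V's columns are the plane's x, y, z axes
    S[:2, 3] = t_best
    return S
```

  • With r_best = 0, s_best = 1, zero translation and V = I, the lifted matrix reduces to the identity, as expected of a similarity transform in homogeneous coordinates.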
  • the projected coordinates of any three-dimensional point in the three-dimensional point cloud in the reference plane can be obtained according to the three-dimensional point information of the three-dimensional point cloud, which can improve the efficiency and accuracy of matching the three-dimensional point cloud to the reference plane.
  • Fig. 4 shows a flowchart of a positioning method according to an embodiment of the present disclosure.
  • the positioning method can be executed by a terminal device, a server, or other information processing equipment.
  • the terminal device can be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the positioning method may be implemented by a processor calling computer-readable instructions stored in the memory.
  • the positioning method includes the following steps:
  • Step S21 acquiring target image information collected by the image acquisition device on the target object
  • Step S22 comparing the acquired target image information with the three-dimensional points in a three-dimensional point cloud, where the three-dimensional point cloud is used to represent the three-dimensional space information of the target object, the three-dimensional points in the three-dimensional point cloud correspond to projection coordinates, the projection coordinates are determined based on the degree of consistency between a two-dimensional point cloud image and a reference plan, the two-dimensional point cloud image is generated by the projection of the three-dimensional point cloud onto the horizontal plane, and the reference plan is used to represent a projection map with reference coordinates of the target object projected on the horizontal plane;
  • Step S23 Position the image acquisition device according to the projection coordinates corresponding to the three-dimensional point matching the target image information.
  • the positioning device can acquire the target image information of the target object in the current scene collected by the image acquisition device, and then compare the acquired target image with the three-dimensional points in the three-dimensional point cloud of the current scene to determine the three-dimensional points matching the target image information. Then, the projection coordinates of the matched three-dimensional points in the reference plan can be determined according to their three-dimensional point information; for example, the three-dimensional transformation matrix described above can be used to determine the projection coordinates of a three-dimensional point in the reference plan. Then, according to the projection coordinates of the three-dimensional points, the position of the image acquisition device in the current scene can be determined. For example, the user can use the image acquisition device to photograph the target object, and the positioning device can determine the user's position in the reference plan of the current scene according to the target image information captured by the image acquisition device, thereby realizing the positioning of the user.
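  • As a minimal sketch of step S23 (assuming a 4×4 three-dimensional transformation matrix of the kind described in the earlier embodiment is available), a matched three-dimensional point can be mapped to its plan position:

```python
import numpy as np

def locate_in_plan(point_3d, s_3d):
    """Apply the 4x4 transformation to a matched 3D point (homogeneous
    coordinates); the first two components are the plan-view position."""
    p = s_3d @ np.append(np.asarray(point_3d, dtype=float), 1.0)
    return p[:2]
```

  • In practice the positioning device would apply this to the three-dimensional points matched against the target image information and report the resulting plan coordinates as the device position.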
  • the present disclosure also provides information processing devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any information processing method provided in the present disclosure.
  • the information processing solution provided by the embodiments of the present disclosure can project the three-dimensional point cloud onto the reference plan according to the degree of consistency between the two-dimensional point cloud image and the reference plan, so that the three-dimensional point cloud in the coordinate system of the image acquisition device can be automatically transformed into the reference coordinate system of the reference plan, which can save a lot of manpower and improve the matching efficiency. In addition, through preprocessing that filters the three-dimensional point cloud and the combination of multiple constraint conditions, automatic matching and registration are performed to improve the accuracy of matching.
  • an image of the scene can be collected first for three-dimensional reconstruction, and then the three-dimensional point cloud obtained by the three-dimensional reconstruction can be automatically matched to the plan view of the building using the information processing solution provided by the embodiment of the present disclosure.
  • based on the projection map obtained after the matching, the user can estimate his or her position in the floor plan of the building, that is, the position in the current scene, through images taken by a mobile phone or other device, and realize visual positioning.
  • the writing order of the steps does not imply a strict execution order, nor does it constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible inner logic.
  • FIG. 5 shows a block diagram of an information processing device according to an embodiment of the present disclosure. As shown in FIG. 5, the information processing device includes:
  • the obtaining module 31 is used to obtain the three-dimensional point information of the three-dimensional point cloud; the generating module 32 is used to generate, based on the three-dimensional point information, a two-dimensional point cloud image of the three-dimensional point cloud projected onto the horizontal plane; the determining module 33 is used to determine, based on the degree of agreement between the two-dimensional point cloud image and the reference plan, the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan, wherein the reference plan is used to represent a projection map with reference coordinates of a target object projected on the horizontal plane, and the three-dimensional point cloud is used to represent the three-dimensional spatial information of the target object.
  • Fig. 6 shows a block diagram of a positioning device according to an embodiment of the present disclosure.
  • the positioning device includes: an acquisition module 41 for acquiring target image information collected by the image acquisition device on a target object; a comparison module 42 for comparing the collected target image information with the three-dimensional points in the three-dimensional point cloud, where the three-dimensional point cloud is used to represent the three-dimensional space information of the target object, the three-dimensional points in the three-dimensional point cloud correspond to projection coordinates, the projection coordinates are determined based on the degree of consistency between the two-dimensional point cloud image and the reference plan, the two-dimensional point cloud image is generated by the projection of the three-dimensional point cloud onto the horizontal plane, and the reference plan is used to represent the projection map with reference coordinates of the target object projected on the horizontal plane; and a positioning module 43 for positioning the image acquisition device according to the projection coordinates corresponding to the three-dimensional points matching the target image information.
  • the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments.
  • An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the above method.
  • the electronic device can be provided as a terminal, server or other form of device.
  • Fig. 7 is a block diagram showing an electronic device 1900 according to an exemplary embodiment.
  • the electronic device 1900 may be provided as a server.
  • the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932 for storing instructions executable by the processing component 1922, such as application programs.
  • the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
  • In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
  • the present disclosure may be a system, method, and/or computer program product.
  • the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
  • the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • Computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in a groove with instructions stored thereon, and any suitable combination of the foregoing.
  • the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (for example, light pulses through fiber-optic cables), or electrical signals transmitted through wires.
  • the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
  • the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • Computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by utilizing the state information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device, thereby producing a machine, such that when these instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks of the flowchart and/or block diagram is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions includes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowchart and/or block diagram.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of an instruction, and the module, program segment, or part of an instruction contains one or more executable instructions for implementing the specified logical function. In some alternative implementations, the functions marked in the blocks may also occur in an order different from the order marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flowchart, and combinations of blocks in the block diagram and/or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

An information processing method, a positioning method and device, an electronic device, and a storage medium. The method includes: obtaining three-dimensional point information of a three-dimensional point cloud (S11); generating, based on the three-dimensional point information, a two-dimensional point cloud image of the three-dimensional point cloud projected onto a horizontal plane (S12); and determining, based on the degree of consistency between the two-dimensional point cloud image and a reference plan view, projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan view (S13), where the reference plan view is a projection map with reference coordinates representing a target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object. The method can improve the efficiency of matching three-dimensional points into the reference plan view.

Description

Information processing method, positioning method and device, electronic device, and storage medium
The present disclosure claims priority to the Chinese patent application filed with the China Patent Office on July 29, 2019 with application number 201910690235.0 and entitled "Information processing method, positioning method and device, electronic device, and storage medium", the entire contents of which are incorporated into the present disclosure by reference.
Technical Field
The present disclosure relates to the field of computer vision technology, and in particular to an information processing method, a positioning method and device, an electronic device, and a storage medium.
Background
Three-dimensional reconstruction is one of the emerging technologies of recent years and is used extremely widely, with applications in industry, medicine, and even daily life and entertainment. Three-dimensional reconstruction can rebuild the three-dimensional objects in a scene: from the images collected by an image acquisition device such as a camera, a three-dimensional representation of an object can be reconstructed, so that the object is presented on the image in an intuitive way.
Image-based three-dimensional reconstruction can generate a three-dimensional point cloud of a scene, in which the coordinates of the three-dimensional points are usually defined in the coordinate system of some image acquisition device and carry no real geographic meaning. Making use of such point clouds in practical applications (such as visual positioning) is of great significance.
Summary
The present disclosure proposes an information processing and positioning technical solution.
According to one aspect of the present disclosure, an information processing method is provided, including: obtaining three-dimensional point information of a three-dimensional point cloud; generating, based on the three-dimensional point information, a two-dimensional point cloud image of the three-dimensional point cloud projected onto a horizontal plane; and determining, based on the degree of consistency between the two-dimensional point cloud image and a reference plan view, projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan view, where the reference plan view is a projection map with reference coordinates representing a target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object.
In a possible implementation, before generating, based on the three-dimensional point information, the two-dimensional point cloud image of the three-dimensional point cloud projected onto the horizontal plane, the method further includes: obtaining at least two pieces of pose information of an image acquisition device during its collection of image information, where the image information is used to construct the three-dimensional point cloud; and determining, according to the at least two pieces of pose information of the image acquisition device, the horizontal plane onto which the three-dimensional point cloud is projected.
In a possible implementation, the pose information includes orientation information and position information; determining the horizontal plane onto which the three-dimensional point cloud is projected according to the at least two pieces of pose information includes: determining, according to at least two pieces of position information of the image acquisition device, the displacement between any two positions of the image acquisition device during the collection of image information; and determining the horizontal plane of the projection according to at least two pieces of orientation information of the image acquisition device and the displacements between any two positions.
In a possible implementation, the image acquisition device satisfies at least one of the following preset basic conditions: the horizontal axis of the image acquisition device during the collection of image information is parallel to the horizontal plane onto which the three-dimensional point cloud is projected; and the height of the image acquisition device above the ground varies within a preset height range during the collection of image information.
In a possible implementation, generating the two-dimensional point cloud image includes: determining, according to the three-dimensional point information of the three-dimensional point cloud, at least one plane included in the three-dimensional point cloud; determining the three-dimensional points to be filtered out according to the number of three-dimensional points included in each of the at least one plane and the normal direction of each plane; deleting the three-dimensional points to be filtered out from the three-dimensional point cloud to obtain the remaining three-dimensional points; and projecting the remaining three-dimensional points onto the horizontal plane according to their three-dimensional point information to generate the two-dimensional point cloud image of the projection.
In a possible implementation, determining the three-dimensional points to be filtered out according to the number of three-dimensional points included in each plane and the normal direction of each plane includes: determining, according to the number of three-dimensional points included in each of the at least one plane, a first plane that contains the largest number of three-dimensional points among the at least one plane, the number being greater than a first threshold; judging whether the normal direction of the first plane is perpendicular to the horizontal plane; and in the case that the normal direction of the first plane is perpendicular to the horizontal plane, determining the three-dimensional points included in the first plane as the three-dimensional points to be filtered out.
In a possible implementation, the three-dimensional point information includes three-dimensional coordinate vectors; generating the two-dimensional point cloud image includes: determining, according to the three-dimensional coordinate vectors of the three-dimensional point cloud and the horizontal plane of the projection, coordinate information of the two-dimensional point cloud projected from the three-dimensional point cloud onto the horizontal plane; determining, according to the coordinate information of the two-dimensional point cloud, a target straight line in the two-dimensional point cloud that satisfies a straight-line condition; determining a rotation angle of the two-dimensional point cloud according to the positional relationship between the target straight line and the coordinate axes of the horizontal plane; and rotating the two-dimensional point cloud by the rotation angle to obtain the two-dimensional point cloud image of the three-dimensional point cloud projected onto the horizontal plane.
In a possible implementation, determining the target straight line that satisfies the straight-line condition according to the coordinate information of the two-dimensional point cloud includes: determining at least one straight line included in the two-dimensional point cloud according to its coordinate information, where the number of two-dimensional points on each of the at least one straight line is greater than a second threshold; counting the number of two-dimensional points contained on each straight line and sorting the at least one straight line by the number of two-dimensional points to obtain a sorting result; taking the current straight line from the at least one straight line one by one according to the sorting result and determining the number of straight lines perpendicular to the current straight line; and in the case that the number of straight lines perpendicular to the current straight line is greater than a third threshold, determining the current straight line as the target straight line that satisfies the straight-line condition.
In a possible implementation, determining the projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan view based on the degree of consistency between the two-dimensional point cloud image and the reference plan view includes: performing at least one similarity transformation on the two-dimensional point cloud image; determining, after each similarity transformation, the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan view; determining, according to the degrees of consistency determined after the at least one similarity transformation, a transform relationship by which the three-dimensional points of the three-dimensional point cloud are matched to the reference points of the reference plan view; and projecting the three-dimensional point cloud onto the reference plan view based on the transform relationship to obtain the projection coordinates of the three-dimensional point cloud in the reference coordinate system of the reference plan view.
In a possible implementation, performing at least one similarity transformation on the two-dimensional point cloud image includes: determining a transformation range for the similarity transformation of the two-dimensional point cloud image; and performing at least one similarity transformation on the two-dimensional point cloud image within the transformation range.
In a possible implementation, the similarity transformation includes a translation transformation; determining the degree of consistency after each similarity transformation includes: for the two-dimensional point cloud image after each translation transformation, downsampling the two-dimensional point cloud image a preset number of times to obtain a first sampled image after each downsampling; in descending order of the number of downsampling operations, determining in turn, for the first sampled image after each downsampling, the degree of consistency between the two-dimensional points in that first sampled image and the reference points in a second sampled image, where the second sampled image is obtained by applying the same downsampling to the reference plan view as applied to that first sampled image; and determining the degree of consistency between the two-dimensional points of the translated two-dimensional point cloud image and the reference points of the reference plan view according to the degree of consistency of the first sampled image and the second sampled image determined after the first downsampling.
In a possible implementation, determining the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan view after each similarity transformation includes: for the two-dimensional point cloud image after each similarity transformation, traversing the first pixels of the two-dimensional point cloud image, where a first pixel is a pixel constituting a two-dimensional point in the two-dimensional point cloud image; determining a first image region in the reference plan view corresponding to the first pixel; in the case that a second pixel representing a reference point exists in the first image region, determining the first pixel as a first target pixel; determining a first ratio of the number of first target pixels contained in the two-dimensional point cloud image to the number of first pixels contained in the two-dimensional point cloud image; and determining the degree of consistency between the two-dimensional point cloud image and the reference plan view after each similarity transformation according to the first ratio.
In a possible implementation, determining the degree of consistency after each similarity transformation includes: after each similarity transformation of the two-dimensional point cloud image, traversing the second pixels of the reference plan view, where a second pixel is a pixel constituting a reference point in the reference plan view; determining a second image region in the two-dimensional point cloud image corresponding to the second pixel; in the case that a first pixel representing a two-dimensional point exists in the second image region, determining the second pixel as a second target pixel; determining a second ratio of the number of second target pixels contained in the reference plan view to the number of second pixels contained in the reference plan view; and determining the degree of consistency between the two-dimensional point cloud image and the reference plan view after each similarity transformation according to the second ratio.
In a possible implementation, determining the degree of consistency after each similarity transformation includes: after each similarity transformation of the two-dimensional point cloud image, determining the first pixels of the two-dimensional point cloud image that are located in non-enclosed regions, where a first pixel is a pixel constituting a two-dimensional point in the two-dimensional point cloud image; determining a third ratio of the number of first pixels located in the non-enclosed regions to the number of first pixels contained in the two-dimensional point cloud image; and determining the degree of consistency between the two-dimensional point cloud image and the reference plan view after each similarity transformation according to the third ratio.
In a possible implementation, determining the degree of consistency after each similarity transformation includes: after each similarity transformation of the two-dimensional point cloud image, determining, according to the pose information of the image acquisition device during its collection of image information, third pixels onto which the image acquisition device is projected in the two-dimensional point cloud image, where the image information is used to construct the three-dimensional point cloud; determining a fourth ratio of the number of third pixels located in the non-enclosed regions to the number of third pixels contained in the two-dimensional point cloud image; and determining the degree of consistency between the two-dimensional point cloud image and the reference plan view after each similarity transformation according to the fourth ratio.
In a possible implementation, determining the transform relationship by which the three-dimensional points of the three-dimensional point cloud are matched to the reference points of the reference plan view according to the degrees of consistency determined after the at least one similarity transformation includes: determining, according to the degrees of consistency determined after the at least one similarity transformation, a two-dimensional transformation matrix by which the two-dimensional points of the two-dimensional point cloud image are matched to the reference points of the reference plan view; and determining, based on the two-dimensional transformation matrix, the transform relationship by which the three-dimensional points of the three-dimensional point cloud are matched to the reference points of the reference plan view.
According to another aspect of the present disclosure, a positioning method is provided, including: acquiring target image information collected by an image acquisition device on a target object; comparing the collected target image information with the three-dimensional points in a three-dimensional point cloud, where the three-dimensional point cloud represents the three-dimensional spatial information of the target object, the three-dimensional points in the three-dimensional point cloud correspond to projection coordinates, the projection coordinates are determined based on the consistency between a two-dimensional point cloud image and a reference plan view, the two-dimensional point cloud image is generated by projecting the three-dimensional point cloud onto a horizontal plane, and the reference plan view is a projection map with reference coordinates representing the target object projected onto the horizontal plane; and positioning the image acquisition device according to the projection coordinates corresponding to the three-dimensional points that match the target image information.
According to another aspect of the present disclosure, an information processing device is provided, including: an obtaining module, configured to obtain three-dimensional point information of a three-dimensional point cloud; a generating module, configured to generate, based on the three-dimensional point information, a two-dimensional point cloud image of the three-dimensional point cloud projected onto a horizontal plane; and a determining module, configured to determine, based on the degree of consistency between the two-dimensional point cloud image and a reference plan view, projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan view, where the reference plan view is a projection map with reference coordinates representing a target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object.
In a possible implementation, the device further includes: a pose obtaining module, configured to obtain at least two pieces of pose information of an image acquisition device during its collection of image information, where the image information is used to construct the three-dimensional point cloud; and a plane determining module, configured to determine, according to the at least two pieces of pose information, the horizontal plane onto which the three-dimensional point cloud is projected.
In a possible implementation, the pose information includes orientation information and position information; the plane determining module is specifically configured to: determine, according to at least two pieces of position information of the image acquisition device, the displacement between any two positions of the image acquisition device during the collection of image information; and determine the horizontal plane of the projection according to at least two pieces of orientation information of the image acquisition device and the displacements between any two positions.
In a possible implementation, the image acquisition device satisfies at least one of the following preset basic conditions: the horizontal axis of the image acquisition device during the collection of image information is parallel to the horizontal plane onto which the three-dimensional point cloud is projected; and the height of the image acquisition device above the ground varies within a preset height range during the collection of image information.
In a possible implementation, the generating module is specifically configured to: determine, according to the three-dimensional point information, at least one plane included in the three-dimensional point cloud; determine the three-dimensional points to be filtered out according to the number of three-dimensional points included in each of the at least one plane and the normal direction of each plane; delete the three-dimensional points to be filtered out from the three-dimensional point cloud to obtain the remaining three-dimensional points; and project the remaining three-dimensional points onto the horizontal plane according to their three-dimensional point information to generate the two-dimensional point cloud image.
In a possible implementation, the generating module is specifically configured to: determine, according to the number of three-dimensional points included in each of the at least one plane, a first plane that contains the largest number of three-dimensional points among the at least one plane, the number being greater than a first threshold; judge whether the normal direction of the first plane is perpendicular to the horizontal plane; and in the case that the normal direction of the first plane is perpendicular to the horizontal plane, determine the three-dimensional points included in the first plane as the three-dimensional points to be filtered out.
In a possible implementation, the three-dimensional point information includes three-dimensional coordinate vectors; the generating module is specifically configured to: determine, according to the three-dimensional coordinate vectors and the horizontal plane of the projection, coordinate information of the two-dimensional point cloud projected onto the horizontal plane; determine, according to the coordinate information, a target straight line in the two-dimensional point cloud that satisfies a straight-line condition; determine a rotation angle of the two-dimensional point cloud according to the positional relationship between the target straight line and the coordinate axes of the horizontal plane; and rotate the two-dimensional point cloud by the rotation angle to obtain the two-dimensional point cloud image.
In a possible implementation, the generating module is specifically configured to: determine at least one straight line included in the two-dimensional point cloud according to its coordinate information, where the number of two-dimensional points on each straight line is greater than a second threshold; count the number of two-dimensional points contained on each straight line and sort the at least one straight line by that number to obtain a sorting result; take the current straight line from the at least one straight line one by one according to the sorting result and determine the number of straight lines perpendicular to the current straight line; and in the case that the number of straight lines perpendicular to the current straight line is greater than a third threshold, determine the current straight line as the target straight line.
In a possible implementation, the determining module is specifically configured to: perform at least one similarity transformation on the two-dimensional point cloud image; determine, after each similarity transformation, the degree of consistency between the two-dimensional points in the two-dimensional point cloud image and the reference points of the reference plan view; determine, according to the degrees of consistency determined after the at least one similarity transformation, a transform relationship by which the three-dimensional points of the three-dimensional point cloud are matched to the reference points of the reference plan view; and project the three-dimensional point cloud onto the reference plan view based on the transform relationship to obtain the projection coordinates in the reference coordinate system of the reference plan view.
In a possible implementation, the determining module is specifically configured to: determine a transformation range for the similarity transformation of the two-dimensional point cloud image; and perform at least one similarity transformation on the two-dimensional point cloud image within the transformation range.
In a possible implementation, the similarity transformation includes a translation transformation; the determining module is specifically configured to: for the two-dimensional point cloud image after each translation transformation, downsample the image a preset number of times to obtain a first sampled image after each downsampling; in descending order of the number of downsampling operations, determine in turn, for each first sampled image, the degree of consistency between its two-dimensional points and the reference points of a second sampled image, where the second sampled image is obtained by applying the same downsampling to the reference plan view; and determine the degree of consistency between the translated two-dimensional point cloud image and the reference plan view according to the degree of consistency determined after the first downsampling.
In a possible implementation, the determining module is specifically configured to: for the two-dimensional point cloud image after each similarity transformation, traverse the first pixels of the image, where a first pixel is a pixel constituting a two-dimensional point; determine a first image region in the reference plan view corresponding to the first pixel; in the case that a second pixel representing a reference point exists in the first image region, determine the first pixel as a first target pixel; determine a first ratio of the number of first target pixels to the number of first pixels contained in the image; and determine the degree of consistency according to the first ratio.
In a possible implementation, the determining module is specifically configured to: after each similarity transformation, traverse the second pixels of the reference plan view, where a second pixel is a pixel constituting a reference point; determine a second image region in the two-dimensional point cloud image corresponding to the second pixel; in the case that a first pixel representing a two-dimensional point exists in the second image region, determine the second pixel as a second target pixel; determine a second ratio of the number of second target pixels to the number of second pixels contained in the reference plan view; and determine the degree of consistency according to the second ratio.
In a possible implementation, the determining module is specifically configured to: after each similarity transformation, determine the first pixels of the two-dimensional point cloud image located in non-enclosed regions, where a first pixel is a pixel constituting a two-dimensional point; determine a third ratio of the number of first pixels located in the non-enclosed regions to the number of first pixels contained in the image; and determine the degree of consistency according to the third ratio.
In a possible implementation, the determining module is specifically configured to: after each similarity transformation, determine, according to the pose information of the image acquisition device during its collection of image information, third pixels onto which the image acquisition device is projected in the two-dimensional point cloud image, where the image information is used to construct the three-dimensional point cloud; determine a fourth ratio of the number of third pixels located in the non-enclosed regions to the number of third pixels contained in the image; and determine the degree of consistency according to the fourth ratio.
In a possible implementation, the determining module is specifically configured to: determine, according to the degrees of consistency determined after the at least one similarity transformation, a two-dimensional transformation matrix by which the two-dimensional points of the image are matched to the reference points of the reference plan view; and determine, based on the two-dimensional transformation matrix, the transform relationship by which the three-dimensional points of the three-dimensional point cloud are matched to the reference points of the reference plan view.
According to another aspect of the present disclosure, a positioning device is provided, including: an acquisition module, configured to acquire target image information collected by an image acquisition device on a target object; a comparison module, configured to compare the collected target image information with the three-dimensional points in a three-dimensional point cloud, where the three-dimensional point cloud represents the three-dimensional spatial information of the target object, the three-dimensional points correspond to projection coordinates, the projection coordinates are determined based on the consistency between a two-dimensional point cloud image and a reference plan view, the two-dimensional point cloud image is generated by projecting the three-dimensional point cloud onto a horizontal plane, and the reference plan view is a projection map with reference coordinates representing the target object projected onto the horizontal plane; and a positioning module, configured to position the image acquisition device according to the projection coordinates corresponding to the three-dimensional points that match the target image information.
According to another aspect of the present disclosure, an electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor, where the processor is configured to execute the above information processing method.
According to one aspect of the present disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above information processing method is implemented.
According to one aspect of the present disclosure, a computer program is provided, where the computer program includes computer-readable code, and when the computer-readable code runs in an electronic device, a processor in the electronic device executes some or all of the steps described in any method of the first aspect of the embodiments of the present disclosure.
In the embodiments of the present disclosure, three-dimensional point information of a three-dimensional point cloud can be obtained, and based on the three-dimensional point information a two-dimensional point cloud image of the point cloud projected onto a horizontal plane can be generated, so that the three-dimensional point cloud is converted into a two-dimensional point cloud image. Then, based on the degree of consistency between the two-dimensional point cloud image and a reference plan view, the projection coordinates of the three-dimensional points contained in the point cloud in the reference coordinate system of the reference plan view can be determined, where the reference plan view is a projection map with reference coordinates representing a target object projected onto the horizontal plane, and the point cloud represents the three-dimensional spatial information of the target object. In this way, the three-dimensional point cloud can be automatically matched to the reference plan view, so that it is correctly annotated on the plan view, improving the efficiency and accuracy of matching the point cloud onto the plan view. In addition, through a user's three-dimensional point information, the user's position in the reference coordinate system can be determined, realizing positioning of the user.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the present disclosure.
Other features and aspects of the present disclosure will become clear from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Brief Description of the Drawings
The drawings here are incorporated into and constitute a part of this specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure.
Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present disclosure.
Fig. 2 shows a diagram of a two-dimensional point cloud image projected from a three-dimensional point cloud according to an embodiment of the present disclosure.
Fig. 3 shows a diagram of a projection image of a three-dimensional point cloud in the reference coordinate system according to an embodiment of the present disclosure.
Fig. 4 shows a flowchart of a positioning method according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an information processing device according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of a positioning device according to an embodiment of the present disclosure.
Fig. 7 shows a block diagram of an example of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. The same reference numerals in the drawings indicate elements with the same or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless otherwise indicated. The dedicated word "exemplary" here means "serving as an example, embodiment, or illustration"; any embodiment described here as "exemplary" need not be interpreted as superior to or better than other embodiments. The term "and/or" in this document merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean three cases: A exists alone, both A and B exist, and B exists alone. In addition, the term "at least one" here means any one of multiple items or any combination of at least two of multiple items; for example, including at least one of A, B, and C may mean including any one or more elements selected from the set consisting of A, B, and C. In addition, in order to better explain the present disclosure, numerous specific details are given in the following detailed description. Those skilled in the art should understand that the present disclosure can also be implemented without certain specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail in order to highlight the gist of the present disclosure.
The information processing solution provided by the embodiments of the present disclosure can obtain the three-dimensional point information of a three-dimensional point cloud obtained by three-dimensional reconstruction, then use the three-dimensional point information to generate a two-dimensional point cloud image of the point cloud projected onto a horizontal plane, and, based on the degree of consistency between the generated two-dimensional point cloud image and a reference plan view, determine the projection coordinates of the three-dimensional points contained in the point cloud in the reference coordinate system of the reference plan view. In this way, the three-dimensional point cloud in the coordinate system of the image acquisition device can be transformed into the reference coordinates of the reference plan view, so that the three-dimensional points contained in the point cloud are automatically matched to the corresponding positions of the reference plan view, and the two-dimensional points projected from the three-dimensional points are aligned with the reference points representing the same target object in the reference plan view. Here, the reference plan view is a projection map with reference coordinates representing a target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object.
In the related art, when matching a three-dimensional point cloud to a reference plan view, the three-dimensional points of the point cloud are matched to the reference plan view manually, for example, matched to an indoor map: some visual clues such as shapes, boundaries, and corners are observed with the naked eye, and the scale, rotation, and translation of the point cloud are adjusted by hand to align it with the reference plan view. This method is inefficient and unsuitable for large-scale tasks; moreover, manual processing has no unified standard, and the accuracy achieved by different operators may vary greatly. The information processing solution provided by the embodiments of the present disclosure can automatically match the three-dimensional points of the point cloud to the reference plan view through the degree of consistency between the corresponding two-dimensional point cloud image and the reference plan view, which not only saves a great deal of manpower and improves matching efficiency, but also improves the accuracy of matching the point cloud to the reference plan view.
The information processing solution provided by the embodiments of the present disclosure can be applied to any scenario in which three-dimensional points are projected onto a plane, for example, automatically matching the three-dimensional point cloud of a large indoor building scene to the floor plan of the building. It can also be applied to scenarios in which three-dimensional points are used for positioning and navigation; for example, the three-dimensional point information obtained from images taken with a device such as a mobile phone can be used to estimate the user's position in the current scene, realizing visual positioning. The information processing solution provided by the present disclosure is described below through embodiments.
Fig. 1 shows a flowchart of an information processing method according to an embodiment of the present disclosure. The information processing method can be executed by a terminal device, a server, or other information processing device, where the terminal device can be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and so on. In some possible implementations, the information processing method can be implemented by a processor invoking computer-readable instructions stored in a memory. The information processing method of the embodiments of the present disclosure is described below taking an information processing device as an example.
As shown in Fig. 1, the information processing method includes the following steps:
Step S11: obtain three-dimensional point information of a three-dimensional point cloud.
In the embodiments of the present disclosure, the information processing device can obtain a three-dimensionally reconstructed point cloud and obtain its three-dimensional point information. The three-dimensional point cloud may be a set of three-dimensional points; the points in the set may be obtained from image information of a scene collected by an image acquisition device. The three-dimensional point information may include position information of the three-dimensional points, which may be position information in the coordinate system of the image acquisition device, expressed as three-dimensional coordinates or as three-dimensional vectors in that coordinate system. The three-dimensional point cloud may represent the three-dimensional spatial information of the space where the target object is located, for example, of the scene where the target object is located. The target object may be any object in the scene, for example, a fixed object such as a wall, a pillar, a table or chair, a sign, or a building, or a moving object such as a vehicle or a pedestrian.
Here, the three-dimensional points contained in the point cloud may be obtained from image information collected by one or more image acquisition devices, which may photograph the target object in the scene from different angles; the image information of the target object photographed by the image acquisition device can form the three-dimensional points corresponding to the target object, and multiple three-dimensional points can form the point cloud of the scene. In a specific implementation, the formed three-dimensional points carry corresponding coordinates in the three-dimensional spatial coordinate system, so that the target object's three-dimensional points are arranged in the coordinate system according to their coordinates and compose a stereoscopic three-dimensional model, which is the three-dimensional point cloud.
Step S12: generate, based on the three-dimensional point information, a two-dimensional point cloud image of the three-dimensional point cloud projected onto a horizontal plane.
In the embodiments of the present disclosure, the point cloud can be projected onto the horizontal plane based on the obtained three-dimensional point information. The horizontal plane here may be a virtual plane determined from the shooting plane in which the image acquisition device lies during shooting; projecting the point cloud onto this plane generates the two-dimensional point cloud image of the projected point cloud.
Here, before generating the two-dimensional point cloud image, the shooting plane of the image acquisition device can be determined from its pose information, and the horizontal plane of the projection can then be determined from that shooting plane, so that the point cloud can be projected onto the determined horizontal plane to generate the two-dimensional point cloud image. The horizontal plane here may be a plane in the coordinate system of the image acquisition device, which may or may not coincide with the horizontal plane of the real three-dimensional space; the point cloud is projected in the coordinate system of the image acquisition device, generating the projected two-dimensional point cloud image.
Fig. 2 shows a diagram of a two-dimensional point cloud image projected from a three-dimensional point cloud according to an embodiment of the present disclosure. As shown in Fig. 2, the horizontal plane of the projection is not perpendicular to the Z axis of the real three-dimensional space. After the point cloud is projected onto the horizontal plane, the two-dimensional point cloud image of the point cloud is obtained.
Step S13: determine, based on the degree of consistency between the two-dimensional point cloud image and a reference plan view, projection coordinates of the three-dimensional points contained in the three-dimensional point cloud in the reference coordinate system of the reference plan view, where the reference plan view is a projection map with reference coordinates representing the target object projected onto the horizontal plane, and the three-dimensional point cloud represents the three-dimensional spatial information of the target object.
In the embodiments of the present disclosure, the degree of consistency between the two-dimensional point cloud image and the reference plan view can be understood as the degree of mutual matching, within the same image region, between the two-dimensional points in the point cloud image and the reference points in the plan view. According to this degree of consistency, the similarity transformation that matches the point cloud image to the plan view can be determined, and based on the determined similarity transformation the point cloud image can be aligned with the plan view, yielding the projection coordinates of the three-dimensional points of the point cloud in the reference coordinate system of the plan view. Alternatively, based on the determined similarity transformation, the projection transformation by which the point cloud is projected in the reference coordinate system of the plan view can be determined, so that the point cloud can be projected onto the plan view using that projection transformation, yielding the projection image of the point cloud in the reference coordinate system. Here, the similarity transformation refers to the transform relationship from the two-dimensional point cloud image to the reference plan view. Specifically, the similarity transformation matching the point cloud image to the plan view may include, but is not limited to, image transformations such as rotation, translation, and scaling of the point cloud image. Through the similarity transformation, the point cloud image can be matched to the corresponding position of the plan view, so that the two-dimensional points representing a target object in the point cloud image are aligned with the reference points representing that object in the plan view.
Here, the reference plan view may be a plan view of the target object projected onto the horizontal plane, for example, the floor plan of a building or a surveyed two-dimensional map. In the embodiments of the present disclosure, salient structures such as walls and pillars can be used to automatically match the point cloud to the reference plan view; to reduce the influence of irrelevant information, the reference plan view here may be a simplified plan view, that is, one in which the reference points or reference line segments representing salient structures such as walls and pillars are retained. In the reference plan view, the pixel values of the retained reference points or line segments can be set to 1 and the other pixels to 0, thereby simplifying the plan view.
Fig. 3 shows a diagram of a projection image of a three-dimensional point cloud in the reference coordinate system according to an embodiment of the present disclosure. As shown in Fig. 3, after the point cloud is projected onto the horizontal plane to obtain the two-dimensional point cloud image, the point cloud image is automatically aligned with the reference plan view. Through the information processing solution provided by the embodiments of the present disclosure, the point cloud can be projected onto the reference plan view according to the degree of consistency between the point cloud image and the plan view, so that the point cloud in the coordinate system of the image acquisition device is automatically transformed into the reference coordinate system of the plan view, saving a great deal of manpower and improving matching efficiency.
The embodiments of the present disclosure provide a possible implementation of generating the horizontal plane onto which the three-dimensional point cloud is projected. The above information processing method further includes the following steps:
Step S121: obtain at least two pieces of pose information of the image acquisition device during its collection of image information, where the image information is used to construct the three-dimensional point cloud;
Step S122: determine, according to the at least two pieces of pose information of the image acquisition device, the horizontal plane onto which the three-dimensional point cloud is projected.
In this implementation, since the three-dimensional point cloud is based on the coordinate system of the image acquisition device, which may differ from the coordinate system of the actual three-dimensional space, the horizontal plane of the projection can be determined first. Here, the coordinate system of the image acquisition device may be a coordinate system established on the plane where the image sensor of the device lies; the coordinate system of the actual three-dimensional space may be the world coordinate system. Pose information corresponding to at least two moments during shooting can thus be obtained, the pose at each moment being one piece of pose information; alternatively, the pose information of two image acquisition devices during shooting can be obtained, each device contributing one piece. The pose information may include position information and orientation information of the image acquisition device, the position information being a position in the device's coordinate system. From the at least two pieces of pose information, the shooting plane of the device can be determined, and from the shooting plane the horizontal plane of the projection can be determined, so that the three-dimensional point information of the point cloud can be projected into the coordinate system of that plane to generate the two-dimensional point cloud image.
In a possible implementation, step S122 may include the following steps:
Step S1221: determine, according to at least two pieces of position information of the image acquisition device, the displacement between any two positions of the device during the collection of image information;
Step S1222: determine the horizontal plane of the projection of the three-dimensional point cloud according to at least two pieces of orientation information of the device and the displacements between any two positions.
In this implementation, the horizontal plane of the projection can be considered parallel to the horizontal axis of the image acquisition device and parallel to the plane in which the device moves. Thus, the orientations corresponding to the at least two pieces of orientation information are parallel to the horizontal plane, and the displacements determined by the at least two pieces of position information are parallel to the horizontal plane, so that the displacement between any two positions during collection can be determined from the position information, and the horizontal plane of the projection can then be determined from the parallel relationship between the horizontal plane and the device's orientations and displacements.
Here, the image acquisition device satisfies at least one of the following preset basic conditions: the horizontal axis of the device during the collection of image information is parallel to the horizontal plane of the projection; and the collection height of the device varies within a preset height range during the collection of image information. The horizontal axis of the device being parallel to the projection plane indicates that the device is level when shooting the image information used to reconstruct the point cloud, that is, the x or y axis of the device coordinate system determined by the device's orientation is parallel to the shooting horizontal plane.
Here, the center of the imaging plane of the image acquisition device can be taken as the origin of the device coordinate system; the direction perpendicular to the imaging plane and passing through the origin can be taken as the z axis of the device coordinate system; and any two mutually perpendicular directions in the plane of the imaging plane can be taken as the x and y axes. The collection height varying within a preset height range indicates that the shooting height is roughly fixed, so that the device's displacements are parallel to the horizontal plane. In this way, the horizontal plane of the projection can be determined from at least one of the above basic conditions: when the horizontal axis of the device is parallel to the projection plane, the projection plane can be determined from the plane formed by the device's horizontal axes at at least two moments during collection; or, when the collection height varies within a preset height range, the projection plane can be determined from the displacement vectors in different directions during collection. Thus, even if the device coordinate system of the point cloud differs from the world coordinate system, the horizontal plane of the projection can be determined quickly, providing projection conditions for generating the two-dimensional point cloud image.
For example, based on the above basic conditions, the x-axis direction of the horizontal plane of the projection can be determined first, and then the y-axis and z-axis directions. Suppose A is a matrix determined from the orientations of the image acquisition device, each row of A being the transpose of the direction vector of the device's horizontal x axis at one collection moment; if the rotation matrix of the device's pose at that moment is R, the direction vector of the device's x axis at that moment is the first row vector of R, that is, R^T(1, 0, 0)^T written as a column vector. Suppose B is a matrix obtained from the displacements of the device between pairs of collection moments; to improve stability, only pairs of moments whose displacement exceeds a certain threshold are used, and the threshold can be set, for example, to 0.2 times the maximum displacement between any two collection moments, so that excessively small displacements are filtered out. From the matrices A and B, a linear relationship about the normal vector of the horizontal plane can be established, as shown in formula (1):
A n = 0 and B n = 0, that is, [A; B] n = 0    (1)
where n is the normal vector of the horizontal plane. Formula (1) expresses that the normal direction of the horizontal plane to be found is perpendicular to the x axis of the image acquisition device and, at the same time, perpendicular to the displacements of the device.
Formula (1) can be solved for the least-squares solution of n by singular value decomposition (SVD). Suppose
M = [A; B]
After SVD, M can be expressed as M = U D V^T, where U is an m×m unitary matrix, D is a positive semi-definite m×3 diagonal matrix, and V is a 3×3 matrix of singular vectors. The singular vector of V corresponding to the smallest singular value is the least-squares solution of n; if the singular values are sorted in descending order and the singular vectors of V are V_1, V_2, and V_3, then n = V_3.
To project the three-dimensional point cloud onto the horizontal plane and generate the two-dimensional point cloud image, in addition to the normal direction of the projection plane, the x-axis and y-axis directions of the projection plane can also be determined. For example, the other two singular vectors of V, namely V_1 and V_2, can be taken as the direction vectors of the x axis and the y axis of the projection plane, respectively.
In the above way, the coordinate system of the horizontal plane of the projection can be determined, so that the three-dimensional point cloud can then be projected onto the determined plane.
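The plane estimation described above can be sketched as follows. This is a minimal sketch, assuming poses are given as 3×3 rotation matrices (whose first row is the camera's x-axis direction) and position vectors, with the 0.2×-maximum-displacement filter mentioned in the text; the function and parameter names are illustrative, not from the original.

```python
import numpy as np

def projection_plane_axes(rotations, positions, disp_ratio=0.2):
    """Estimate the projection plane's x/y axes (V1, V2) and normal (V3 = n)
    from camera poses, following Eq. (1): A n = 0 and B n = 0."""
    # Rows of A: camera x-axis direction at each collection moment
    # (first row of each rotation matrix).
    A = np.array([R[0, :] for R in rotations])
    # Rows of B: displacements between pose pairs, keeping only those longer
    # than disp_ratio times the maximum displacement, for stability.
    P = np.asarray(positions, dtype=float)
    disps = np.array([P[j] - P[i]
                      for i in range(len(P)) for j in range(i + 1, len(P))])
    norms = np.linalg.norm(disps, axis=1)
    B = disps[norms > disp_ratio * norms.max()]
    M = np.vstack([A, B])
    # Least-squares solution of M n = 0: the right singular vector of the
    # smallest singular value (singular values come sorted in descending order).
    _, _, Vt = np.linalg.svd(M)
    V1, V2, V3 = Vt[0], Vt[1], Vt[2]
    return V1, V2, V3  # plane x axis, y axis, and normal n
```

With level cameras moving in a horizontal plane, the recovered normal is the vertical direction, and V1, V2 span the projection plane.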
In a possible implementation, step S12 may include the following steps:
Step S123: determine, according to the three-dimensional point information of the three-dimensional point cloud, at least one plane included in the point cloud;
Step S124: determine the three-dimensional points to be filtered out according to the number of three-dimensional points included in each of the at least one plane and the normal direction of each plane;
Step S125: delete the three-dimensional points to be filtered out from the point cloud to obtain the remaining three-dimensional points;
Step S126: project the remaining three-dimensional points onto the horizontal plane according to their three-dimensional point information to generate the two-dimensional point cloud image of the projection.
In this implementation, in an indoor scene, landmark structures such as walls and pillars can be used to match the point cloud to the reference plan view. Since the point cloud may include three-dimensional points representing any one or more target objects in the scene, if all the points of the point cloud are projected onto the horizontal plane, the points corresponding to some non-landmark structures will, after projection, interfere with the two-dimensional points generated by projecting the points of the landmark structures, making it harder to distinguish the two-dimensional points representing walls, pillars, and other landmark structures. Therefore, to improve the matching of the two-dimensional point cloud image to the reference plan view, the points of the point cloud can be filtered before projection; for example, the points representing the ceiling, the ground, and similar objects can be filtered out, reducing the large number of points corresponding to non-landmark structures. When filtering, one or more planes formed by the point cloud can be determined from the position information in the three-dimensional point information; the number of points included in each formed plane can then be counted, and the normal direction of each plane obtained. It is generally considered that the planes of the ceiling and the ground contain relatively many points, and that the normal directions of those planes are perpendicular to the ground. Therefore, according to the number of points and the normal direction of each plane, the planes of objects such as the ceiling and the ground can be determined, and the points included in those planes can be determined as the points to be filtered out, so that they are removed from the point cloud to obtain the remaining points. The remaining points can then be projected onto the horizontal plane according to their position information to generate the two-dimensional point cloud image.
In one example of this implementation, step S124 may include: determining, according to the number of three-dimensional points included in each of the at least one plane, a first plane that contains the largest number of points among the at least one plane, the number being greater than a first threshold; judging whether the normal direction of the first plane is perpendicular to the horizontal plane; and in the case that the normal direction of the first plane is perpendicular to the horizontal plane, determining the points included in the first plane as the points to be filtered out.
In this example, according to the obtained number of points in each plane, the first plane with the most points, the number being greater than the first threshold, can be determined among the one or more planes formed by the points in the point set. It can then be judged whether the normal direction of the first plane is perpendicular to the horizontal plane; if it is, the first plane can be considered to represent the plane of the ceiling or the ground, and its points are the points to be filtered out; otherwise, the points on the first plane are transferred from the point set to a retained point set, yielding the points remaining in the point set, and the step of determining the first plane with the most points greater than the first threshold is repeated, until the number of points in the point set is less than or equal to a preset remaining-number threshold. Here, the remaining points may consist of the points remaining in the point set together with the points in the retained point set. The first threshold can be set according to the actual application scenario.
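The iterative filtering loop above can be sketched as follows. This is a sketch under stated assumptions: the plane detector (for example a RANSAC plane fit) is supplied as a callback `detect_largest_plane`, and "normal perpendicular to the horizontal plane" is tested as the plane normal being parallel to the projection plane's normal; all names and thresholds are illustrative.

```python
import numpy as np

def filter_points(points, normal, detect_largest_plane,
                  min_plane_pts=50, min_remaining=100, ang_tol=0.1):
    """Repeatedly take the largest detected plane; if its normal is (near)
    parallel to the projection-plane normal `normal`, the plane is horizontal
    (ceiling/floor) and its points are dropped; otherwise they are kept.
    `detect_largest_plane(pts) -> (inlier_mask, plane_normal)` is assumed."""
    pts = np.asarray(points, dtype=float)
    kept = []
    while len(pts) > min_remaining:
        mask, pn = detect_largest_plane(pts)
        if mask.sum() <= min_plane_pts:
            break
        inliers, pts = pts[mask], pts[~mask]
        if abs(abs(np.dot(pn, normal)) - 1.0) < ang_tol:
            continue          # horizontal plane: discard its points
        kept.append(inliers)  # vertical structure (wall/pillar): keep
    kept.append(pts)          # points left in the set are also retained
    return np.vstack(kept)
```

The remaining points returned here are what step S126 projects onto the horizontal plane.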
In a possible implementation, step S12 may include the following steps:
Step S12a: determine, according to the three-dimensional coordinate vectors of the point cloud and the horizontal plane of the projection, coordinate information of the two-dimensional point cloud projected from the point cloud onto the horizontal plane;
Step S12b: determine, according to the coordinate information of the two-dimensional point cloud, a target straight line in the two-dimensional point cloud that satisfies a straight-line condition;
Step S12c: determine a rotation angle of the two-dimensional point cloud according to the positional relationship between the target straight line and the coordinate axes of the horizontal plane;
Step S12d: rotate the two-dimensional point cloud by the rotation angle to obtain the two-dimensional point cloud image of the point cloud projected onto the horizontal plane.
In this implementation, the three-dimensional point information may include three-dimensional coordinate vectors, which may be coordinate vectors in the coordinate system of the image acquisition device. When generating the two-dimensional point cloud image, the point cloud can be projected onto the horizontal plane according to the image positions of landmark structures in the reference plan view; for example, using the property that landmark structures such as walls and pillars in the plan view are usually parallel to the x or y axis of the plan view's coordinate system. The three-dimensional coordinate vector of each point in the point cloud can then be projected onto the horizontal plane to obtain the coordinate information of the projected two-dimensional points: for a three-dimensional point i with coordinate vector X_i, the coordinates of its projection x_i onto the horizontal plane are (x_i, y_i), with x_i = V_1 · X_i and y_i = V_2 · X_i. Then, according to the coordinate information of the two-dimensional point cloud, the two-dimensional points are fitted into at least one straight line, among which a target line satisfying the straight-line condition is determined, and the two-dimensional points on that target line are taken as points representing a landmark structure. The angle between the target line and the x or y axis of the horizontal plane can then be determined from their positional relationship and taken as the rotation angle of the two-dimensional point cloud; the point cloud is rotated by this angle so that the target line is parallel or perpendicular to the x or y coordinate axis of the horizontal plane, yielding the final two-dimensional point cloud image of the projection.
For example, suppose the rotation angle of the two-dimensional point cloud is r_ini; the point cloud can then be rotated by r_ini so that the target line is parallel to the x or y axis of the horizontal plane. Then, according to the coordinate information of the two-dimensional point cloud, the extreme values of its coordinates can be determined; the extreme coordinates of the two-dimensional point cloud can be expressed as (x_l, y_t). The length and width of the rectangular region where the two-dimensional point cloud lies are denoted w and h, and this region may contain at least one point of the two-dimensional point cloud. Keeping the aspect ratio of this rectangle unchanged and scaling it by s_ini, a two-dimensional point cloud image of a certain length can be generated. Here, the size of the image can be adjusted according to the resolution of the reference plan view; for example, the length of the point cloud image can be set to the length of a region in the plan view, the pixel values at the positions of two-dimensional points in the image can be set to 1, and those at other positions to 0, thereby obtaining the two-dimensional point cloud image of the projection.
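The rotation and rasterization of this example can be sketched as below. It is a sketch, not the original implementation: the output size `width`, the aspect handling, and the function name are illustrative assumptions; the projection x_i = V_1·X_i, y_i = V_2·X_i is taken as already done.

```python
import math

def rasterize(points_2d, rot, width=256):
    """Rotate projected 2D points by `rot` (radians) and rasterize them into
    a binary image whose longer side is `width` pixels; pixels covered by a
    point get value 1 (represented here as a set of lit pixel coordinates)."""
    c, s = math.cos(rot), math.sin(rot)
    pts = [(c * x - s * y, s * x + c * y) for x, y in points_2d]
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    xl, yt = min(xs), min(ys)              # extreme coordinates (x_l, y_t)
    w, h = max(xs) - xl, max(ys) - yt      # bounding rectangle of the cloud
    scale = (width - 1) / max(w, h, 1e-9)  # s_ini, keeping the aspect ratio
    img = set()
    for x, y in pts:
        img.add((round((x - xl) * scale), round((y - yt) * scale)))
    return img
```

The resulting binary image is then matched against the (similarly binarized) reference plan view.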
In one example of this implementation, determining the target straight line satisfying the straight-line condition according to the coordinate information of the two-dimensional point cloud may include: determining at least one straight line included in the two-dimensional point cloud according to its coordinate information; counting the number of two-dimensional points contained on each of the at least one straight line and sorting the lines by that number to obtain a sorting result; taking the current line from the at least one line one by one according to the sorting result and determining the number of lines perpendicular to the current line; and in the case that the number of lines perpendicular to the current line is greater than a third threshold, determining the current line as the target line satisfying the straight-line condition. The number of two-dimensional points on each of the at least one line is greater than a second threshold. In this example, in an indoor reference plan view, salient structures such as walls and pillars are usually parallel to the x or y axis. On this basis, determining the target line of the two-dimensional point cloud may include the following steps:
Step 1: for the two-dimensional point set of the point cloud, fit straight lines to the points, for example using the RANSAC algorithm. Take the line with the most points, the number being greater than the second threshold, put it into the line queue, and remove the points on this line from the point set. Here, "the most points" can be understood as the number of points reaching its peak.
Step 2: if the number of points remaining in the point set is greater than a certain remaining-number threshold, repeat the previous step. In this way, at least one line whose number of points is greater than the second threshold can be determined.
Step 3: according to the counted number of points on each line, take the front-most current line from the line queue; the front-most current line can be understood as the line with the most points. Compute the angles between the current line and the other lines. If the number of lines perpendicular to the current line is greater than the third threshold, the current line can be considered to represent a landmark structure parallel to the x or y axis, and it is determined as the target line satisfying the straight-line condition. Otherwise, repeat step 3 until a target line satisfying the condition appears or the line queue is empty. If no line in the queue satisfies the condition, the line that entered the queue first, that is, the line containing the most points, can be taken as the target line.
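The selection in step 3 can be sketched as follows, assuming the RANSAC fitting of steps 1-2 has already produced lines as (angle, point-count) pairs; the perpendicularity tolerance and parameter names are illustrative.

```python
def pick_target_line(lines, min_perp=2, tol_deg=5.0):
    """`lines`: (angle_deg, n_points) per fitted line, already filtered to
    those with enough points. In order of decreasing point count, pick the
    first line perpendicular to more than `min_perp` other lines; fall back
    to the most populated line if none qualifies."""
    order = sorted(lines, key=lambda l: -l[1])
    for cur in order:
        n_perp = sum(
            1 for other in order
            if other is not cur
            and abs(abs(cur[0] - other[0]) % 180 - 90) < tol_deg)
        if n_perp > min_perp:
            return cur
    return order[0]  # no line qualified: take the line with the most points
```

A line perpendicular to many other fitted lines is likely a wall parallel to one of the plan view's axes, which is exactly the heuristic the text describes.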
The embodiments of the present disclosure also provide a possible implementation of determining the projection coordinates, in the reference coordinate system, of the three-dimensional points contained in the point cloud. Step S13 is described below and may include the following steps:
Step S131: perform at least one similarity transformation on the two-dimensional point cloud image;
Step S132: determine, after each similarity transformation, the degree of consistency between the two-dimensional points in the point cloud image and the reference points of the reference plan view;
Step S133: determine, according to the degrees of consistency determined after the at least one similarity transformation, a transform relationship by which the three-dimensional points of the point cloud are matched to the reference points of the plan view;
Step S134: project the point cloud onto the reference plan view based on the transform relationship to obtain the projection coordinates of the point cloud in the reference coordinate system of the plan view.
In this implementation, since the point cloud image and the plan view may not match in size and position, the point cloud image needs to undergo at least one similarity transformation so that it is aligned with the image of the same objects in the plan view. Here, the similarity transformation may include rotation, scaling, and translation. After each similarity transformation, the degree of consistency between the two-dimensional points of the transformed point cloud image and the reference points of the plan view can be determined; the similarity transformation with the highest consistency can be the finally determined one. Since the finally determined similarity transformation is the two-dimensional similarity transformation matching the point cloud image to the plan view, the transform relationship matching the three-dimensional points of the point cloud to the reference points of the plan view can be determined from it; that is, the three-dimensional transform relationship can be determined from the two-dimensional similarity transformation, and according to this three-dimensional relationship the point cloud can be matched to the plan view, yielding the projection coordinates of the point cloud in the reference coordinate system.
In a possible implementation, step S131 may include the following steps:
Step S1311: determine a transformation range for the similarity transformation of the two-dimensional point cloud image;
Step S1312: perform at least one similarity transformation on the two-dimensional point cloud image within the transformation range.
In this implementation, the transformation range of the similarity transformation of the point cloud image can be determined first; it may include the rotation angles, the scaling scales, and the translation interval, and within the determined range one or more similarity transformations can be performed on the point cloud image so that it matches the plan view.
For example, the point cloud image may already have undergone the above rotation by the angle r_ini, so that the two-dimensional points representing landmark structures such as walls are parallel to the x or y axis; the reference points representing such structures in the plan view are likewise parallel to the x or y axis, so the rotation angles of the point cloud image can include four angles, namely {0°, 90°, 180°, 270°}. The scaling scale can vary over the interval [0.55, 1.4] at equal steps, with the step set to 0.05.
The translation interval can be set to a rectangular region around the center of the reference plan view. Suppose the translation vector is (t_x, t_y); then
t_x ∈ [x_c^f − x_c^p − w_f/2, x_c^f − x_c^p + w_f/2],
t_y ∈ [y_c^f − y_c^p − h_f/2, y_c^f − y_c^p + h_f/2],
with the translation vector varying at an interval of 1 pixel, where w_f is the width of the reference plan view; h_f is its height; x_c^f and y_c^f are the x and y coordinates of the center of the reference plan view; and x_c^p and y_c^p are the x and y coordinates of the center of the two-dimensional point cloud image. This translation interval means moving the center of the point cloud image within a rectangular region around the center of the plan view, the region having the same size as the plan view.
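The search grid over rotations, scales, and translations can be enumerated as below. This is a sketch: the center coordinates and sizes follow the symbols in the text, and the function name is illustrative.

```python
def candidate_transforms(wf, hf, xcf, ycf, xcp, ycp):
    """Enumerate the similarity-transform search grid: four rotations,
    scales in [0.55, 1.4] with step 0.05, and integer translations that move
    the point-cloud image's center over a wf x hf rectangle centered on the
    reference plan view's center (xcf, ycf); (xcp, ycp) is the point-cloud
    image's center."""
    rotations = [0, 90, 180, 270]
    scales = [round(0.55 + 0.05 * k, 2) for k in range(18)]  # 0.55 .. 1.40
    dx, dy = xcf - xcp, ycf - ycp  # center-to-center offset
    translations = [(tx, ty)
                    for tx in range(int(dx - wf // 2), int(dx + wf // 2) + 1)
                    for ty in range(int(dy - hf // 2), int(dy + hf // 2) + 1)]
    return rotations, scales, translations
```

Each (rotation, scale, translation) triple is one candidate similarity transformation to be scored in step S132.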
In a possible implementation, the similarity transformation includes a translation transformation; step S132 may include: for the point cloud image after each translation transformation, downsampling the image a preset number of times to obtain a first sampled image after each downsampling; for the first sampled image after each downsampling, determining the degree of consistency between the two-dimensional points in that first sampled image and the reference points in a second sampled image, where the second sampled image is obtained by applying the same downsampling to the reference plan view; and determining the degree of consistency between the translated point cloud image and the plan view according to the degree of consistency determined after the first downsampling.
In this implementation, to improve the efficiency of determining the best similarity transformation, a coarse-to-fine approach can be adopted when translating the point cloud image to determine the degree of consistency after each translation. That is, for the point cloud image after each translation, the image can be downsampled a preset number of times, each downsampling yielding a corresponding first sampled image; at the same time, the reference plan view is downsampled the same preset number of times, each downsampling yielding a corresponding second sampled image. The multiple first sampled images and the point cloud image can form an image pyramid with multiple levels: the bottom level represents the point cloud image, and the other levels represent the first sampled images obtained by downsampling the point cloud image, for example by a max-pooling operation; the higher the level, the more downsampling operations the first sampled image corresponds to. Correspondingly, the multiple second sampled images and the plan view can form an image pyramid whose bottom level represents the plan view and whose other levels represent the second sampled images obtained by downsampling the plan view; the number of levels of the plan view's pyramid equals that of the point cloud image's pyramid. Starting from the top of the pyramids, the degree of consistency between the first and second sampled images of each level is determined in turn; that is, in descending order of the number of downsampling operations, the consistency of the first and second sampled images with the same number of downsamplings is determined. At each level, the consistency at each pixel position is determined, and the best 20 candidate positions can be retained; the next level determines the consistency of its first and second sampled images within the 7×7-pixel neighborhood around the retained candidate positions, and so on down to the bottom level, that is, determining the consistency of the point cloud image and the plan view. In this way, the efficiency of determining the best similarity transformation can be improved.
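The coarse-to-fine translation search can be sketched as follows. It is a sketch under stated assumptions: binary images are plain nested lists, the score is only the point-to-plan hit fraction standing in for the full consistency measure, and `keep=20` candidates are refined in a 7×7 window per the text; all names are illustrative.

```python
def max_pool(img):
    """2x downsample a binary grid by max-pooling 2x2 cells."""
    h, w = len(img), len(img[0])
    return [[max(img[2 * r][2 * c],
                 img[2 * r][min(2 * c + 1, w - 1)],
                 img[min(2 * r + 1, h - 1)][2 * c],
                 img[min(2 * r + 1, h - 1)][min(2 * c + 1, w - 1)])
             for c in range((w + 1) // 2)] for r in range((h + 1) // 2)]

def score(pts, ref, ty, tx):
    """Fraction of point pixels landing on a reference pixel after the
    shift (ty, tx) -- a stand-in for the full consistency measure."""
    h, w = len(ref), len(ref[0])
    hits = tot = 0
    for r, row in enumerate(pts):
        for c, v in enumerate(row):
            if v:
                tot += 1
                rr, cc = r + ty, c + tx
                hits += 0 <= rr < h and 0 <= cc < w and ref[rr][cc]
    return hits / tot if tot else 0.0

def coarse_to_fine(pts, ref, levels=2, keep=20):
    """Score all shifts at the coarsest pyramid level, then refine the best
    `keep` candidates in a 7x7 neighborhood at each finer level."""
    pyr_p, pyr_r = [pts], [ref]
    for _ in range(levels):
        pyr_p.append(max_pool(pyr_p[-1]))
        pyr_r.append(max_pool(pyr_r[-1]))
    h, w = len(pyr_r[-1]), len(pyr_r[-1][0])
    cands = [(ty, tx) for ty in range(-h, h) for tx in range(-w, w)]
    for lvl in range(levels, -1, -1):
        scored = sorted(((score(pyr_p[lvl], pyr_r[lvl], ty, tx), (ty, tx))
                         for ty, tx in cands), reverse=True)[:keep]
        if lvl:  # upsample survivors and search their 7x7 neighborhoods
            cands = {(2 * ty + dy, 2 * tx + dx)
                     for _, (ty, tx) in scored
                     for dy in range(-3, 4) for dx in range(-3, 4)}
    return scored[0]  # (best score, best (ty, tx)) at full resolution
```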
In a possible implementation, step S132 may include the following steps:
Step S1321: for the point cloud image after each similarity transformation, traverse the first pixels of the point cloud image, where a first pixel is a pixel constituting a two-dimensional point in the point cloud image;
Step S1322: determine a first image region in the reference plan view corresponding to the first pixel;
Step S1323: in the case that a second pixel representing a reference point exists in the first image region, determine the first pixel as a first target pixel;
Step S1324: determine a first ratio of the number of first target pixels contained in the point cloud image to the number of first pixels contained in the point cloud image;
Step S1325: determine the degree of consistency between the point cloud image and the reference plan view after each similarity transformation according to the first ratio.
In this implementation, the degree of consistency between the point cloud image after each similarity transformation and the plan view can be determined. The degree of consistency here may be the consistency from the point cloud image to the plan view within the same image region. Each first pixel representing a two-dimensional point in the point cloud image can thus be traversed: for any first pixel, its image position is determined, and then at the same image position of the plan view a first image region is determined; for example, the neighborhood of the same image position can be taken as the first image region. It is then judged whether a second pixel representing a reference point exists in the first image region; if so, the first pixel is determined as a first target pixel, and the first ratio between the number of first target pixels and the number of first pixels in the point cloud image is computed; this first ratio can be determined as the degree of consistency from the point cloud image to the plan view.
For example, C_p2f can denote the degree of consistency from the point cloud image to the plan view. The pixels representing two-dimensional points in the point cloud image can be the first pixels, which can be considered the meaningful pixels; for example, the pixel values of the first pixels in the point cloud image can be set to 1 and those of the other pixels to 0. At the image position of any first pixel of the point cloud image, it is judged whether the pixel of the plan view at the same position is a second pixel; if so, the first pixel is a first target pixel. The first ratio of first target pixels to first pixels is then determined. To improve fault tolerance, the same position can be relaxed to nearby positions; for example, the nearby positions can be set to a neighborhood of 7×7 pixels.
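The C_p2f measure with the 7×7 tolerance neighborhood can be computed as below; a minimal sketch over same-size binary grids, with illustrative names.

```python
def consistency_p2f(pts, ref, radius=3):
    """C_p2f: fraction of point pixels (value 1 in `pts`) that have at least
    one reference pixel within a (2*radius+1)^2 neighborhood in `ref`
    (radius=3 gives the 7x7 neighborhood from the text)."""
    h, w = len(ref), len(ref[0])
    total = hit = 0
    for r in range(len(pts)):
        for c in range(len(pts[0])):
            if not pts[r][c]:
                continue
            total += 1
            hit += any(ref[rr][cc]
                       for rr in range(max(0, r - radius), min(h, r + radius + 1))
                       for cc in range(max(0, c - radius), min(w, c + radius + 1)))
    return hit / total if total else 0.0
```

The reverse measure C_f2p is obtained by swapping the roles of the two images in the same computation.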
In a possible implementation, step S132 may include: after each similarity transformation of the point cloud image, traversing the second pixels of the reference plan view, where a second pixel is a pixel constituting a reference point in the plan view; determining a second image region in the point cloud image corresponding to the second pixel; in the case that a first pixel representing a two-dimensional point exists in the second image region, determining the second pixel as a second target pixel; determining a second ratio of the number of second target pixels contained in the plan view to the number of second pixels contained in the plan view; and determining the degree of consistency between the point cloud image and the plan view after each similarity transformation according to the second ratio.
In this implementation, the degree of consistency between the point cloud image and the plan view may be the consistency from the plan view to the point cloud image within the same image region. Each second pixel representing a reference point in the plan view can thus be traversed: for any second pixel, its image position is determined, and then at the same image position of the point cloud image a second image region is determined; for example, the neighborhood of the same image position can be taken as the second image region. It is then judged whether a first pixel representing a two-dimensional point exists in the second image region; if so, the second pixel is determined as a second target pixel, and the second ratio between the number of second target pixels in the plan view and the number of second pixels is computed; this second ratio can represent the degree of consistency from the plan view to the point cloud image. Correspondingly, C_f2p can denote the degree of consistency from the plan view to the point cloud image. In some implementations, C_p2f + C_f2p can represent the mutual degree of consistency between the point cloud image and the plan view; the larger the degree of consistency, the higher the degree of alignment between the point cloud image and the plan view.
In a possible implementation, step S132 may further include the following steps:
Step S132a: after each similarity transformation of the point cloud image, determine the first pixels of the point cloud image located in non-enclosed regions, where a first pixel is a pixel constituting a two-dimensional point in the point cloud image;
Step S132b: determine a third ratio of the number of first pixels located in the non-enclosed regions to the number of first pixels contained in the point cloud image;
Step S132c: determine the degree of consistency between the point cloud image and the plan view after each similarity transformation according to the third ratio.
In this implementation, to improve the robustness of the projection of the point cloud in the reference coordinate system, the constraint that the projection of the point cloud is subject to restricted regions can be considered when determining the consistency between the point cloud image and the plan view; that is, the three-dimensional points of the point cloud should not appear in certain regions, for example, in certain enclosed image spaces. Correspondingly, the two-dimensional points projected from the three-dimensional points should not appear in certain image regions either. The first pixels of the point cloud image located in non-enclosed regions can thus be counted, and the third ratio between the number of first pixels located in non-enclosed regions and the number of first pixels contained in the point cloud image can be computed; this third ratio can represent the degree of consistency between the point cloud image and the plan view.
In a possible implementation, step S132 may further include: after each similarity transformation of the point cloud image, determining, according to the pose information of the image acquisition device during its collection of image information, the third pixels onto which the image acquisition device is projected in the point cloud image, where the image information is used to construct the point cloud; determining a fourth ratio of the number of third pixels located in the non-enclosed regions to the number of third pixels contained in the point cloud image; and determining the degree of consistency between the point cloud image and the plan view after each similarity transformation according to the fourth ratio.
In this implementation, when determining the consistency between the point cloud image and the plan view, the constraints on the image acquisition device during its collection of image information can also be considered; that is, the device should not appear in certain spaces during collection, for example, in certain enclosed spaces. Correspondingly, the pixels onto which the device is projected in the point cloud image should not appear in certain regions. The third pixels onto which the device is projected in the point cloud image can thus be determined from the device's pose information during collection; the number of third pixels located in non-enclosed regions is counted, and the fourth ratio between that number and the number of third pixels contained in the point cloud image is computed; this fourth ratio can represent the degree of consistency between the point cloud image and the plan view.
In some implementations, to determine the degree of consistency between the point cloud image and the plan view more comprehensively, one or more of the above first, second, third, and fourth ratios can be considered in determining the consistency; the larger the degree of consistency, the higher the degree of alignment between the point cloud image and the plan view. Based on the above implementations, in one example, when determining the consistency after each similarity transformation, the first, second, third, and fourth ratios can jointly determine the consistency between the point cloud image and the plan view, and the corresponding expression of the degree of consistency is given by formula (2):
C = C_p2f + C_f2p + C_lc + C_lp    (2)
where C is the degree of consistency between the point cloud image and the plan view (the larger the consistency, the higher the alignment); C_p2f denotes the above first ratio; C_f2p denotes the above second ratio; C_lc denotes the above third ratio; and C_lp denotes the above fourth ratio.
在一种可能的实现方式中,上述步骤S133可以包括以下步骤:
步骤S1331,根据所述至少一次相似变换后确定的一致程度,确定所述二维点云图像中二维点匹配到所述参考平面图中参考点的二维变换矩阵;
步骤S1332,基于所述二维变换矩阵,确定所述三维点云中三维点匹配到所述参考平面图中参考点的变换关系。
在该种可能的实现方式中,可以将至少一次相似变换中一致程度最高的相似变换作为最终的相似变换,根据最终的相似变换可以确定二维点云图像匹配到参考平面图的二维变换矩阵。然后基于二维变换矩阵可以得到由三维点云中三维点匹配到参考平面图中参考点的变换关系,该变换关系可以用三维变换矩阵进行表征。举例来说,一致程度最高的相似变换对应的旋转角可以为r best,放缩尺度可以为s best。其中,r best可以已经包括初始的旋转角r ini,s best可以已经包括初始的放缩尺度s ini。该旋转角和缩放尺度下的最佳平移可以为t best。从而可以得到由二维点云图中二维点到参考平面图中参考点相似变换的二维变换矩阵,该二维变换矩阵S 2D可以如公式(3)所示:
$$S_{2D}=\begin{bmatrix} s_{best}\,R(r_{best}) & t_{best} \\ \mathbf{0}^{T} & 1 \end{bmatrix}\qquad 公式(3)$$
其中，R(r_best)可以表示旋转角为r_best的2×2的旋转矩阵。
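公式(3)的二维变换矩阵可以按如下方式构造（示意性草稿，假设输入为上文搜索得到的r_best、s_best与t_best）：

```python
import numpy as np

def similarity_matrix_2d(r_best, s_best, t_best):
    # 构造齐次形式的二维相似变换矩阵S_2D：
    # 左上2×2块为s_best*R(r_best)，右上2×1块为平移t_best
    c, s = np.cos(r_best), np.sin(r_best)
    R = np.array([[c, -s], [s, c]])
    S = np.eye(3)
    S[:2, :2] = s_best * R
    S[:2, 2] = t_best
    return S
```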
得到二维变换矩阵之后，可以由二维变换矩阵得到三维变换矩阵，三维变换矩阵S_3D可以如公式(4)所示：
$$S_{3D}=\begin{bmatrix} s_{best}\,R_{z}(r_{best})\,V^{T} & \begin{bmatrix} t_{best} \\ 0 \end{bmatrix} \\ \mathbf{0}^{T} & 1 \end{bmatrix}\qquad 公式(4)$$
其中，R_z(r_best)可以表示以z轴为旋转轴旋转r_best的三维旋转矩阵，V可以是上述步骤S1222中的奇异向量矩阵，V的三个列向量V_1、V_2和V_3可以分别为投影的水平面的x轴、y轴和z轴。
通过上述三维变换矩阵，可以根据三维点云的三维点信息，得到三维点云中任意一个三维点在参考平面图中的投影坐标，可以提高将三维点云匹配到参考平面图上的效率和精度。
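三维变换与投影坐标的计算可以示意如下（示意性草稿：假设S_3D按公式(4)的形式由R_z(r_best)、奇异向量矩阵V与t_best组合而成，且V为标准正交矩阵、平移只作用于平面内坐标；这些组合方式均为本示例的假设）：

```python
import numpy as np

def similarity_matrix_3d(r_best, s_best, t_best, V):
    # 由绕z轴的旋转R_z(r_best)、投影轴矩阵V与平移t_best
    # 组合出齐次形式的三维变换矩阵S_3D
    c, s = np.cos(r_best), np.sin(r_best)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    S = np.eye(4)
    S[:3, :3] = s_best * Rz @ V.T
    S[:2, 3] = t_best
    return S

def project_point(S3d, X):
    # 三维点X经S_3D变换后，取前两维作为其在参考平面图中的投影坐标
    Xh = np.append(np.asarray(X, dtype=float), 1.0)
    return (S3d @ Xh)[:2]
```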
基于本公开实施例提供的上述信息处理方法,本公开实施例还提供了一种定位方法。图4示出根据本公开实施例的定位方法的流程图。该定位方法可以由终端设备、服务器或其它信息处理设备执行,其中,终端设备可以为用户设备(User Equipment,UE)、移动设备、用户终端、终端、蜂窝电话、无绳电话、个人数字处理(Personal Digital Assistant,PDA)、手持设备、计算设备、车载设备、可穿戴设备等。在一些可能的实现方式中,该定位方法可以通过处理器调用存储器中存储的计算机可读指令的方式来实现。
如图4所示,所述定位方法包括以下步骤:
步骤S21,获取图像采集装置对目标物体采集的目标图像信息;
步骤S22，将采集的所述目标图像信息与三维点云中的三维点进行比对，其中，所述三维点云用于表示所述目标物体的三维空间信息，所述三维点云中的三维点与投影坐标对应，所述投影坐标是基于二维点云图像与参考平面图的一致性确定的，所述二维点云图像为所述三维点云向水平面投影生成的，所述参考平面图用于表示所述目标物体在所述水平面投影的带有参考坐标的投影图；
步骤S23,根据与所述目标图像信息相匹配的三维点所对应的投影坐标,对所述图像采集装置进行定位。
在本公开实施例中，定位装置可以获取图像采集装置采集的当前场景中目标物体的目标图像信息，然后可以将采集的目标图像信息与当前场景的三维点云中的三维点进行对比，确定与采集的目标图像信息相匹配的三维点。然后可以根据该三维点的三维点信息，确定该三维点在参考平面图中的投影坐标，例如，利用上述三维变换矩阵，确定该三维点在参考平面图中的投影坐标。然后根据该三维点的投影坐标，可以确定图像采集装置在当前场景中的位置。举例来说，用户可以使用图像采集装置对目标物体进行拍照，定位装置可以根据图像采集装置拍摄的目标图像信息，确定用户在当前场景的参考平面图中的位置，实现对用户进行定位。
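上述定位流程可以用如下草稿示意（示意性假设：三维点云中每个三维点带有特征描述子，且已预先计算其在参考平面图中的投影坐标；用最近邻匹配并取匹配点投影坐标的均值来估计位置只是简化示意，实际实现通常还需求解相机位姿）：

```python
import numpy as np

def locate(query_descriptors, cloud_descriptors, cloud_proj_coords, max_dist=0.7):
    # 将目标图像的特征描述子与三维点的描述子做最近邻匹配，
    # 再用匹配到的三维点在参考平面图中的投影坐标估计位置
    matched = []
    for d in query_descriptors:
        dists = np.linalg.norm(cloud_descriptors - d, axis=1)
        i = int(np.argmin(dists))
        if dists[i] < max_dist:
            matched.append(cloud_proj_coords[i])
    if not matched:
        return None                       # 无可靠匹配时无法定位
    return np.mean(matched, axis=0)       # 简化：以匹配点投影坐标均值作位置估计
```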
可以理解,本公开提及的上述各个方法实施例,在不违背原理逻辑的情况下,均可以彼此相互结合形成结合后的实施例,限于篇幅,本公开不再赘述。
此外，本公开还提供了信息处理装置、电子设备、计算机可读存储介质、程序，上述均可用来实现本公开提供的任一种信息处理方法，相应技术方案和描述参见方法部分的相应记载，不再赘述。
本公开实施例提供的信息处理方案,可以根据二维点云图像与参考平面图的一致程度,将三维点云投影在参考平面图中,使在图像采集装置的坐标系下的三维点云自动变换到参考平面图的参考坐标系下,可以节省大量的人力,提高匹配效率。并且通过对三维点云进行滤除的预处理以及多种约束条件结合的方式来进行自动匹配注册,提高匹配的准确率。对于某个大型建筑室内场景,可以先采集场景的图像进行三维重建,然后使用本公开实施例提供的信息处理方案将三维重建得到的三维点云自动匹配到建筑物的平面图上。基于匹配后得到的投影图像,用户可以通过手机等设备拍摄图像估计出用户在建筑物的平面图中的位置,即在当前场景中的位置,实现视觉定位。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的具体执行顺序应当以其功能和可能的内在逻辑确定。
图5示出根据本公开实施例的信息处理装置的框图,如图5所示,所述信息处理装置包括:
获取模块31,用于获取三维点云的三维点信息;生成模块32,用于基于所述三维点信息,生成所述三维点云向水平面投影的二维点云图像;确定模块33,用于基于所述二维点云图像与参考平面图的一致程度,确定所述三维点云中包含的三维点在所述参考平面图的参考坐标系下的投影坐标,其中,所述参考平面图用于表示目标物体在所述水平面投影的带有参考坐标的投影图,所述三维点云用于表示所述目标物体的三维空间信息。
图6示出根据本公开实施例的定位装置的框图，所述定位装置包括：获取模块41，用于获取图像采集装置对目标物体采集的目标图像信息；对比模块42，用于将采集的所述目标图像信息与三维点云中的三维点进行比对，其中，所述三维点云用于表示所述目标物体的三维空间信息，所述三维点云中的三维点与投影坐标对应，所述投影坐标是基于二维点云图像与参考平面图的一致性确定的，所述二维点云图像为所述三维点云向水平面投影生成的，所述参考平面图用于表示所述目标物体在所述水平面投影的带有参考坐标的投影图；定位模块43，用于根据与所述目标图像信息相匹配的三维点所对应的投影坐标，对所述图像采集装置进行定位。
在一些实施例中,本公开实施例提供的装置具有的功能或包含的模块可以用于执行上文方法实施例描述的方法,其具体实现可以参照上文方法实施例的描述,为了简洁,这里不再赘述。
本公开实施例还提出一种电子设备，包括：处理器；用于存储处理器可执行指令的存储器；其中，所述处理器被配置为执行上述方法。
电子设备可以被提供为终端、服务器或其它形态的设备。
图7是根据一示例性实施例示出的一种电子设备1900的框图。例如，电子设备1900可以被提供为一服务器。参照图7，电子设备1900包括处理组件1922，其进一步包括一个或多个处理器，以及由存储器1932所代表的存储器资源，用于存储可由处理组件1922执行的指令，例如应用程序。存储器1932中存储的应用程序可以包括一个或一个以上模块，每一个模块对应于一组指令。此外，处理组件1922被配置为执行指令，以执行上述方法。
电子设备1900还可以包括一个电源组件1926，被配置为执行电子设备1900的电源管理；一个有线或无线网络接口1950，被配置为将电子设备1900连接到网络；和一个输入输出(I/O)接口1958。电子设备1900可以操作基于存储在存储器1932中的操作系统，例如Windows ServerTM，Mac OS XTM，UnixTM，LinuxTM，FreeBSDTM或类似。
在示例性实施例中,还提供了一种非易失性计算机可读存储介质,例如包括计算机程序指令的存储器1932,上述计算机程序指令可由电子设备1900的处理组件1922执行以完成上述方法。
本公开可以是系统、方法和/或计算机程序产品。计算机程序产品可以包括计算机可读存储介质，其上载有用于使处理器实现本公开的各个方面的计算机可读程序指令。
计算机可读存储介质可以是可以保持和存储由指令执行设备使用的指令的有形设备。计算机可读存储介质例如可以是――但不限于――电存储设备、磁存储设备、光存储设备、电磁存储设备、半导体存储设备或者上述的任意合适的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、静态随机存取存储器(SRAM)、便携式压缩盘只读存储器(CD-ROM)、数字多功能盘(DVD)、记忆棒、软盘、机械编码设备、例如其上存储有指令的打孔卡或凹槽内凸起结构、以及上述的任意合适的组合。这里所使用的计算机可读存储介质不被解释为瞬时信号本身,诸如无线电波或者其他自由传播的电磁波、通过波导或其他传输媒介传播的电磁波(例如,通过光纤电缆的光脉冲)、或者通过电线传输的电信号。
这里所描述的计算机可读程序指令可以从计算机可读存储介质下载到各个计算/处理设备,或者通过网络、例如因特网、局域网、广域网和/或无线网下载到外部计算机或外部存储设备。网络可以包括铜传输电缆、光纤传输、无线传输、路由器、防火墙、交换机、网关计算机和/或边缘服务器。每个计算/处理设备中的网络适配卡或者网络接口从网络接收计算机可读程序指令,并转发该计算机可读程序指令,以供存储在各个计算/处理设备中的计算机可读存储介质中。
用于执行本公开操作的计算机程序指令可以是汇编指令、指令集架构(ISA)指令、机器指令、机器相关指令、微代码、固件指令、状态设置数据、或者以一种或多种编程语言的任意组合编写的源代码或目标代码，所述编程语言包括面向对象的编程语言—诸如Smalltalk、C++等，以及常规的过程式编程语言—诸如"C"语言或类似的编程语言。计算机可读程序指令可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中，远程计算机可以通过任意种类的网络—包括局域网(LAN)或广域网(WAN)—连接到用户计算机，或者，可以连接到外部计算机（例如利用因特网服务提供商来通过因特网连接）。在一些实施例中，通过利用计算机可读程序指令的状态信息来个性化定制电子电路，例如可编程逻辑电路、现场可编程门阵列(FPGA)或可编程逻辑阵列(PLA)，该电子电路可以执行计算机可读程序指令，从而实现本公开的各个方面。
这里参照根据本公开实施例的方法、装置（系统）和计算机程序产品的流程图和/或框图描述了本公开的各个方面。应当理解，流程图和/或框图的每个方框以及流程图和/或框图中各方框的组合，都可以由计算机可读程序指令实现。
这些计算机可读程序指令可以提供给通用计算机、专用计算机或其它可编程数据处理装置的处理器,从而生产出一种机器,使得这些指令在通过计算机或其它可编程数据处理装置的处理器执行时,产生了实现流程图和/或框图中的一个或多个方框中规定的功能/动作的装置。也可以把这些计算机可读程序指令存储在计算机可读存储介质中,这些指令使得计算机、可编程数据处理装置和/或其他设备以特定方式工作,从而,存储有指令的计算机可读介质则包括一个制造品,其包括实现流程图和/或框图中的一个或多个方框中规定的功能/动作的各个方面的指令。
也可以把计算机可读程序指令加载到计算机、其它可编程数据处理装置、或其它设备上,使得在计算机、其它可编程数据处理装置或其它设备上执行一系列操作步骤,以产生计算机实现的过程,从而使得在计算机、其它可编程数据处理装置、或其它设备上执行的指令实现流程图和/或框图中的一个或多个方框中规定的功能/动作。
附图中的流程图和框图显示了根据本公开的多个实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上，流程图或框图中的每个方框可以代表一个模块、程序段或指令的一部分，所述模块、程序段或指令的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。在有些作为替换的实现中，方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如，两个连续的方框实际上可以基本并行地执行，它们有时也可以按相反的顺序执行，这依所涉及的功能而定。也要注意的是，框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合，可以用执行规定的功能或动作的专用的基于硬件的系统来实现，或者可以用专用硬件与计算机指令的组合来实现。
以上已经描述了本公开的各实施例,上述说明是示例性的,并非穷尽性的,并且也不限于所披露的各实施例。在不偏离所说明的各实施例的范围和精神的情况下,对于本技术领域的普通技术人员来说许多修改和变更都是显而易见的。本文中所用术语的选择,旨在最好地解释各实施例的原理、实际应用或对市场中技术的技术改进,或者使本技术领域的其它普通技术人员能理解本文披露的各实施例。

Claims (37)

  1. 一种信息处理方法,包括:
    获取三维点云的三维点信息;
    基于所述三维点信息,生成所述三维点云向水平面投影的二维点云图像;
    基于所述二维点云图像与参考平面图的一致程度,确定所述三维点云中包含的三维点在所述参考平面图的参考坐标系下的投影坐标,其中,所述参考平面图用于表示目标物体在所述水平面投影的带有参考坐标的投影图,所述三维点云用于表示所述目标物体的三维空间信息。
  2. 根据权利要求1所述的方法,其特征在于,所述基于所述三维点信息,生成所述三维点云向水平面投影的二维点云图像之前,还包括:
    获取图像采集装置采集图像信息过程中的至少两个位姿信息,其中,所述图像信息用于构建所述三维点云;
    根据所述图像采集装置的至少两个位姿信息,确定所述三维点云投影的水平面。
  3. 根据权利要求2所述的方法,其特征在于,所述位姿信息包括朝向信息和位置信息;
    所述根据所述图像采集装置的至少两个位姿信息,确定所述三维点云投影的水平面,包括:
    根据所述图像采集装置的至少两个位置信息,确定所述图像采集装置在采集图像信息过程中的任意两个位置之间的位移;
    根据所述图像采集装置的至少两个朝向信息以及在任意两个位置之间的位移,确定所述三维点云投影的水平面。
  4. 根据权利要求2或3所述的方法,其特征在于,所述图像采集装置满足以下至少一个预设的基础条件:
    所述图像采集装置在采集图像信息过程中所在的水平轴与所述三维点云投影的水平面平行;
    在采集图像信息过程中所述图像采集装置到地面的高度在预设高度范围内变化。
  5. 根据权利要求1至4任意一项所述的方法,其特征在于,所述基于所述三维点信息,生成所述三维点云向水平面投影的二维点云图像,包括:
    根据所述三维点云的三维点信息,确定所述三维点云中包括的至少一个平面;
    根据所述至少一个平面中每个平面包括的三维点的数量以及每个平面的法线方向,确定所述三维点云中的待滤除三维点;
    在所述三维点云中删除所述待滤除三维点,得到所述三维点云的剩余三维点;
    根据剩余三维点的三维点信息,将剩余三维点投影在所述水平面上,生成所述三维点云投影的二维点云图像。
  6. 根据权利要求5所述的方法,其特征在于,所述根据所述至少一个平面中每个平面包括的三维点的数量以及每个平面的法线方向,确定所述三维点云中的待滤除三维点,包括:
    根据所述至少一个平面中每个平面包括的三维点的数量,确定所述至少一个平面中三维点的数量最多并且大于第一阈值的第一平面;
    判断所述第一平面的法线方向是否垂直于所述水平面;
    在所述第一平面的法线方向垂直于所述水平面的情况下,确定所述第一平面包括的三维点为所述待滤除三维点。
  7. 根据权利要求2所述的方法,其特征在于,所述三维点信息包括三维坐标向量;所述基于所述三维点信息,生成所述三维点云向水平面投影的二维点云图像,包括:
    根据所述三维点云的三维坐标向量以及投影的水平面,确定所述三维点云在所述水平面投影的二维点云的坐标信息;
    根据所述二维点云的坐标信息,确定所述二维点云包括的满足直线条件的目标直线;
    根据所述目标直线与所述水平面的坐标轴的位置关系,确定所述二维点云的旋转角;
    按照所述旋转角对所述二维点云进行旋转,得到所述三维点云向所述水平面投影的二维点云图像。
  8. 根据权利要求7所述的方法,其特征在于,所述根据所述二维点云的坐标信息,确定所述二维点云包括的满足直线条件的目标直线,包括:
    根据所述二维点云的坐标信息,确定所述二维点云中包括的至少一条直线;其中,所述至少一条直线中每条直线包括的二维点的数量大于第二阈值;
    统计所述至少一条直线中每条直线所包含的二维点的数量,按照所述二维点的数量对所述至少一条直线排序,得到排序结果;
    根据所述排序结果逐次获取所述至少一条直线中的当前直线,确定所述至少一个直线中与所述当前直线垂直的直线的数量;
    在与当前直线垂直的直线的数量大于第三阈值的情况下,确定当前直线为满足直线条件的目标直线。
  9. 根据权利要求1至8任意一项所述的方法,其特征在于,所述基于所述二维点云图像与参考平面图的一致程度,确定所述三维点云中包含的三维点在所述参考平面图的参考坐标系下的投影坐标,包括:
    对所述二维点云图像进行至少一次相似变换;
    确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度;
    根据所述至少一次相似变换后确定的一致程度,确定所述三维点云中三维点匹配到所述参考平面图中参考点的变换关系;
    基于所述变换关系,将所述三维点云向所述参考平面图进行投影,得到所述三维点云在所述参考平面图的参考坐标系下的投影坐标。
  10. 根据权利要求9所述的方法,其特征在于,所述对所述二维点云图像进行至少一次相似变换,包括:
    确定所述二维点云图像进行相似变换的变换范围;
    在所述变换范围内对所述二维点云图像进行至少一次相似变换。
  11. 根据权利要求9所述的方法,其特征在于,所述相似变换包括平移变换;所述确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度,包括:
    针对每次平移变换后的二维点云图像,对所述二维点云图像进行预设次数的下采样处理,得到每次下采样处理后的第一采样图像;
    针对每次下采样处理后的第一采样图像,确定该第一采样图像中二维点与第二采样图像中参考点的一致程度;其中,第二采样图像为所述参考平面图经过与该第一采样图像相同的下采样处理得到的;
    根据第一次下采样处理后确定的第一采样图像与第二采样图像的一致程度,确定每次平移变换后的二维点云图像中二维点与参考平面图的参考点的一致程度。
  12. 根据权利要求9所述的方法,其特征在于,所述确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度,包括:
    针对每次相似变换后的二维点云图像,遍历所述二维点云图像的第一像素点,其中,所述第一像素点为所述二维点云图像中构成所述二维点的像素点;
    确定所述参考平面图中对应于所述第一像素点的第一图像区域;
    在所述第一图像区域内存在表示所述参考点的第二像素点的情况下,确定所述第一像素点为第一目标像素点;
    确定所述二维点云图像中包含的第一目标像素点的数量与所述二维点云图像中包含的第一像素点的数量的第一比例;
    根据所述第一比例确定每次相似变换后所述二维点云图像与所述参考平面图的一致程度。
  13. 根据权利要求9所述的方法,其特征在于,所述确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度,包括:
    每次对所述二维点云图像相似变换后,遍历所述参考平面图的第二像素点,其中,所述第二像素点为所述参考平面图中构成所述参考点的像素点;
    确定所述二维点云图像中对应于所述第二像素点的第二图像区域;
    在所述第二图像区域内存在表示所述二维点的第一像素点的情况下，确定所述第二像素点为第二目标像素点；
    确定所述参考平面图中包含的第二目标像素点的数量与所述参考平面图中包含的第二像素点的数量的第二比例；
    根据所述第二比例确定每次相似变换后所述二维点云图像与参考平面图的一致程度。
  14. 根据权利要求9所述的方法,其特征在于,所述确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度,包括:
    每次对所述二维点云图像相似变换后,确定所述二维点云图像中位于非封闭区域内的第一像素点,其中,所述第一像素点为所述二维点云图像中构成所述二维点的像素点;
    确定位于所述非封闭区域的第一像素点的数量与所述二维点云图像中包含的第一像素点的数量的第三比例;
    根据所述第三比例确定每次相似变换后所述二维点云图像与参考平面图的一致程度。
  15. 根据权利要求9所述的方法,其特征在于,所述确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度,包括:
    每次对所述二维点云图像相似变换后,根据所述图像采集装置采集图像信息过程中的位姿信息,确定所述图像采集装置在所述二维点云图像中投影的第三像素点;其中,所述图像信息用于构建所述三维点云;
    确定位于非封闭区域的第三像素点的数量与所述二维点云图像中包含的第三像素点的数量的第四比例;
    根据所述第四比例确定每次相似变换后所述二维点云图像与参考平面图的一致程度。
  16. 根据权利要求9所述的方法,其特征在于,所述根据所述至少一次相似变换后确定的一致程度,确定所述三维点云中三维点匹配到所述参考平面图中参考点的变换关系,包括:
    根据所述至少一次相似变换后确定的一致程度,确定所述二维点云图像中二维点匹配到所述参考平面图中参考点的二维变换矩阵;
    基于所述二维变换矩阵,确定所述三维点云中三维点匹配到所述参考平面图中参考点的变换关系。
  17. 一种定位方法,所述方法包括:
    获取图像采集装置对目标物体采集的目标图像信息;
    将采集的所述目标图像信息与三维点云中的三维点进行比对，其中，所述三维点云用于表示所述目标物体的三维空间信息，所述三维点云中的三维点与投影坐标对应，所述投影坐标是基于二维点云图像与参考平面图的一致性确定的，所述二维点云图像为所述三维点云向水平面投影生成的，所述参考平面图用于表示所述目标物体在所述水平面投影的带有参考坐标的投影图；
    根据与所述目标图像信息相匹配的三维点所对应的投影坐标,对所述图像采集装置进行定位。
  18. 一种信息处理装置,包括:
    获取模块,用于获取三维点云的三维点信息;
    生成模块,用于基于所述三维点信息,生成所述三维点云向水平面投影的二维点云图像;
    确定模块,用于基于所述二维点云图像与参考平面图的一致程度,确定所述三维点云中包含的三维点在所述参考平面图的参考坐标系下的投影坐标,其中,所述参考平面图用于表示目标物体在所述水平面投影的带有参考坐标的投影图,所述三维点云用于表示所述目标物体的三维空间信息。
  19. 根据权利要求18所述的装置,其特征在于,所述装置还包括:
    位姿获取模块,用于获取图像采集装置采集图像信息过程中的至少两个位姿信息,其中,所述图像信息用于构建所述三维点云;
    平面确定模块,用于根据所述图像采集装置的至少两个位姿信息,确定所述三维点云投影的水平面。
  20. 根据权利要求19所述的装置,其特征在于,所述位姿信息包括朝向信息和位置信息;所述平面确定模块,具体用于,
    根据所述图像采集装置的至少两个位置信息，确定所述图像采集装置在采集图像信息过程中的任意两个位置之间的位移；
    根据所述图像采集装置的至少两个朝向信息以及在任意两个位置之间的位移,确定所述三维点云投影的水平面。
  21. 根据权利要求19或20所述的装置,其特征在于,所述图像采集装置满足以下至少一个预设的基础条件:
    所述图像采集装置在采集图像信息过程中所在的水平轴与所述三维点云投影的水平面平行;
    在采集图像信息过程中所述图像采集装置到地面的高度在预设高度范围内变化。
  22. 根据权利要求18至21任意一项所述的装置,其特征在于,所述生成模块,具体用于,
    根据所述三维点云的三维点信息,确定所述三维点云中包括的至少一个平面;
    根据所述至少一个平面中每个平面包括的三维点的数量以及每个平面的法线方向,确定所述三维点云中的待滤除三维点;
    在所述三维点云中删除所述待滤除三维点,得到所述三维点云的剩余三维点;
    根据剩余三维点的三维点信息,将剩余三维点投影在所述水平面上,生成所述三维点云投影的二维点云图像。
  23. 根据权利要求22所述的装置,其特征在于,所述生成模块,具体用于,
    根据所述至少一个平面中每个平面包括的三维点的数量,确定所述至少一个平面中三维点的数量最多并且大于第一阈值的第一平面;
    判断所述第一平面的法线方向是否垂直于所述水平面;
    在所述第一平面的法线方向垂直于所述水平面的情况下,确定所述第一平面包括的三维点为所述待滤除三维点。
  24. 根据权利要求19所述的装置,其特征在于,所述三维点信息包括三维坐标向量;所述生成模块,具体用于,
    根据所述三维点云的三维坐标向量以及投影的水平面,确定所述三维点云在所述水平面投影的二维点云的坐标信息;
    根据所述二维点云的坐标信息,确定所述二维点云包括的满足直线条件的目标直线;
    根据所述目标直线与所述水平面的坐标轴的位置关系,确定所述二维点云的旋转角;
    按照所述旋转角对所述二维点云进行旋转,得到所述三维点云向所述水平面投影的二维点云图像。
  25. 根据权利要求24所述的装置,其特征在于,所述生成模块,具体用于,
    根据所述二维点云的坐标信息,确定所述二维点云中包括的至少一条直线;其中,所述至少一条直线中每条直线包括的二维点的数量大于第二阈值;
    统计所述至少一条直线中每条直线所包含的二维点的数量,按照所述二维点的数量对所述至少一条直线排序,得到排序结果;
    根据所述排序结果逐次获取所述至少一条直线中的当前直线,确定所述至少一个直线中与所述当前直线垂直的直线的数量;
    在与当前直线垂直的直线的数量大于第三阈值的情况下,确定当前直线为满足直线条件的目标直线。
  26. 根据权利要求18至25任意一项所述的装置,其特征在于,所述确定模块,具体用于,
    对所述二维点云图像进行至少一次相似变换;
    确定每次相似变换后所述二维点云图像中二维点与参考平面图的参考点的一致程度;
    根据所述至少一次相似变换后确定的一致程度,确定所述三维点云中三维点匹配到所述参考平面图中参考点的变换关系;
    基于所述变换关系,将所述三维点云向所述参考平面图进行投影,得到所述三维点云在所述参考平面图的参考坐标系下的投影坐标。
  27. 根据权利要求26所述的装置,其特征在于,所述确定模块,具体用于,
    确定所述二维点云图像进行相似变换的变换范围;
    在所述变换范围内对所述二维点云图像进行至少一次相似变换。
  28. 根据权利要求27所述的装置,其特征在于,所述相似变换包括平移变换;所述确定模块,具体用于,
    针对每次平移变换后的二维点云图像,对所述二维点云图像进行预设次数的下采样处理,得到每次下采样处理后的第一采样图像;
    按照下采样处理次数由大到小的顺序,依次针对每次下采样处理后的第一采样图像,确定该第一采样图像中二维点与第二采样图像中参考点的一致程度;其中,第二采样图像为所述参考平面图经过与该第一采样图像相同的下采样处理得到的;
    根据第一次下采样处理后确定的第一采样图像与第二采样图像的一致程度,确定每次平移变换后的二维点云图像中二维点与参考平面图的参考点的一致程度。
  29. 根据权利要求27所述的装置,其特征在于,所述确定模块,具体用于,
    针对每次相似变换后的二维点云图像,遍历所述二维点云图像的第一像素点,其中,所述第一像素点为所述二维点云图像中构成所述二维点的像素点;
    确定所述参考平面图中对应于所述第一像素点的第一图像区域;
    在所述第一图像区域内存在表示所述参考点的第二像素点的情况下,确定所述第一像素点为第一目标像素点;
    确定所述二维点云图像中包含的第一目标像素点的数量与所述二维点云图像中包含的第一像素点的数量的第一比例;
    根据所述第一比例确定每次相似变换后所述二维点云图像与所述参考平面图的一致程度。
  30. 根据权利要求27所述的装置,其特征在于,所述确定模块,具体用于,
    每次对所述二维点云图像相似变换后,遍历所述参考平面图的第二像素点,其中,所述第二像素点为所述参考平面图中构成所述参考点的像素点;
    确定所述二维点云图像中对应于所述第二像素点的第二图像区域;
    在所述第二图像区域内存在表示所述二维点的第一像素点的情况下,确定所述第二像素点为第二目标像素点;
    确定所述参考平面图中包含的第二目标像素点的数量与所述参考平面图中包含的第二像素点的数量的第二比例；
    根据所述第二比例确定每次相似变换后所述二维点云图像与参考平面图的一致程度。
  31. 根据权利要求27所述的装置,其特征在于,所述确定模块,具体用于,
    每次对所述二维点云图像相似变换后,确定所述二维点云图像中位于非封闭区域内的第一像素点,其中,所述第一像素点为所述二维点云图像中构成所述二维点的像素点;
    确定位于所述非封闭区域的第一像素点的数量与所述二维点云图像中包含的第一像素点的数量的第三比例;
    根据所述第三比例确定每次相似变换后所述二维点云图像与参考平面图的一致程度。
  32. 根据权利要求27所述的装置,其特征在于,所述确定模块,具体用于,
    每次对所述二维点云图像相似变换后,根据所述图像采集装置采集图像信息过程中的位姿信息,确定所述图像采集装置在所述二维点云图像中投影的第三像素点;其中,所述图像信息用于构建所述三维点云;
    确定位于非封闭区域的第三像素点的数量与所述二维点云图像中包含的第三像素点的数量的第四比例;
    根据所述第四比例确定每次相似变换后所述二维点云图像与参考平面图的一致程度。
  33. 根据权利要求27所述的装置,其特征在于,所述确定模块,具体用于,
    根据所述至少一次相似变换后确定的一致程度,确定所述二维点云图像中二维点匹配到所述参考平面图中参考点的二维变换矩阵;
    基于所述二维变换矩阵,确定所述三维点云中三维点匹配到所述参考平面图中参考点的变换关系。
  34. 一种定位装置,所述装置包括:
    获取模块,用于获取图像采集装置对目标物体采集的目标图像信息;
    对比模块，用于将采集的所述目标图像信息与三维点云中的三维点进行比对，其中，所述三维点云用于表示所述目标物体的三维空间信息，所述三维点云中的三维点与投影坐标对应，所述投影坐标是基于二维点云图像与参考平面图的一致性确定的，所述二维点云图像为所述三维点云向水平面投影生成的，所述参考平面图用于表示所述目标物体在所述水平面投影的带有参考坐标的投影图；
    定位模块,用于根据与所述目标图像信息相匹配的三维点所对应的投影坐标,对所述图像采集装置进行定位。
  35. 一种电子设备,包括:
    处理器;
    用于存储处理器可执行指令的存储器;
    其中,所述处理器被配置为调用所述存储器存储的指令,以执行权利要求1至16中任意一项所述的方法,或者,以执行权利要求17所述的方法。
  36. 一种计算机可读存储介质,其上存储有计算机程序指令,所述计算机程序指令被处理器执行时实现权利要求1至16中任意一项所述的方法,或者,所述计算机程序指令被处理器执行时实现权利要求17所述的方法。
  37. 一种计算机程序,所述计算机程序包括计算机可读代码,当所述计算机可读代码在电子设备中运行时,所述电子设备中的处理器执行用于实现权利要求1至权利要求16中任意一项所述的方法,或者,用于实现权利要求17所述的方法。
PCT/CN2019/118453 2019-07-29 2019-11-14 信息处理方法、定位方法及装置、电子设备和存储介质 WO2021017314A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021574903A JP7328366B2 (ja) 2019-07-29 2019-11-14 情報処理方法、測位方法及び装置、電子機器並びに記憶媒体
US17/551,865 US11983820B2 (en) 2019-07-29 2021-12-15 Information processing method and device, positioning method and device, electronic device and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910690235.0 2019-07-29
CN201910690235.0A CN112381919B (zh) 2019-07-29 2019-07-29 信息处理方法、定位方法及装置、电子设备和存储介质

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/551,865 Continuation US11983820B2 (en) 2019-07-29 2021-12-15 Information processing method and device, positioning method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
WO2021017314A1 true WO2021017314A1 (zh) 2021-02-04

Family

ID=74228306

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118453 WO2021017314A1 (zh) 2019-07-29 2019-11-14 信息处理方法、定位方法及装置、电子设备和存储介质

Country Status (5)

Country Link
US (1) US11983820B2 (zh)
JP (1) JP7328366B2 (zh)
CN (1) CN112381919B (zh)
TW (1) TWI743645B (zh)
WO (1) WO2021017314A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113077500A (zh) * 2021-03-12 2021-07-06 上海杰图天下网络科技有限公司 基于平面图的全景视点定位定姿方法、***、设备及介质
CN113223179A (zh) * 2021-05-12 2021-08-06 武汉中仪物联技术股份有限公司 管道选定线路长度的确定方法和装置
CN113657074A (zh) * 2021-08-13 2021-11-16 杭州安恒信息技术股份有限公司 三维空间内的线性文本布局方法、电子装置及存储介质
CN113970295A (zh) * 2021-09-28 2022-01-25 湖南三一中益机械有限公司 一种摊铺厚度测量方法、装置及摊铺机
CN114442101A (zh) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 基于成像毫米波雷达的车辆导航方法、装置、设备及介质
CN114781056A (zh) * 2022-04-13 2022-07-22 南京航空航天大学 一种基于特征匹配的飞机整机外形测量方法
WO2023045271A1 (zh) * 2021-09-24 2023-03-30 奥比中光科技集团股份有限公司 一种二维地图生成方法、装置、终端设备及存储介质

Families Citing this family (12)

Publication number Priority date Publication date Assignee Title
US11574485B2 (en) * 2020-01-17 2023-02-07 Apple Inc. Automatic measurements based on object classification
JP2023525538A (ja) * 2020-05-11 2023-06-16 コグネックス・コーポレイション 3d画像の体積を決定する方法及び装置
CN113744409B (zh) * 2021-09-09 2023-08-15 上海柏楚电子科技股份有限公司 工件定位方法、装置、***、设备与介质
CN113607166B (zh) * 2021-10-08 2022-01-07 广东省科学院智能制造研究所 基于多传感融合的自主移动机器人室内外定位方法及装置
CN113587930B (zh) * 2021-10-08 2022-04-05 广东省科学院智能制造研究所 基于多传感融合的自主移动机器人室内外导航方法及装置
CN114202684B (zh) * 2021-11-29 2023-06-16 哈尔滨工程大学 一种适用于水面环境的点云数据投影方法、***及装置
US12017657B2 (en) * 2022-01-07 2024-06-25 Ford Global Technologies, Llc Vehicle occupant classification using radar point cloud
CN114963025B (zh) * 2022-04-19 2024-03-26 深圳市城市公共安全技术研究院有限公司 泄漏点定位方法、装置、电子设备及可读存储介质
CN115423933B (zh) * 2022-08-12 2023-09-29 北京城市网邻信息技术有限公司 户型图生成方法、装置、电子设备及存储介质
CN115330652B (zh) * 2022-08-15 2023-06-16 北京城市网邻信息技术有限公司 点云拼接方法、设备及存储介质
CN116030103B (zh) * 2023-03-07 2024-02-27 北京百度网讯科技有限公司 确定砌筑质量的方法、装置、设备和介质
CN116542659B (zh) * 2023-04-10 2024-06-04 北京城市网邻信息技术有限公司 资源分配方法、装置、电子设备及存储介质

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104217458A (zh) * 2014-08-22 2014-12-17 长沙中科院文化创意与科技产业研究院 一种三维点云的快速配准方法
CN109035329A (zh) * 2018-08-03 2018-12-18 厦门大学 基于深度特征的相机姿态估计优化方法
CN109872350A (zh) * 2019-02-18 2019-06-11 重庆市勘测院 一种新的点云自动配准方法
US10353073B1 (en) * 2019-01-11 2019-07-16 Nurulize, Inc. Point cloud colorization system with real-time 3D visualization

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
JP5471626B2 (ja) * 2010-03-09 2014-04-16 ソニー株式会社 情報処理装置、マップ更新方法、プログラム及び情報処理システム
US9025861B2 (en) * 2013-04-09 2015-05-05 Google Inc. System and method for floorplan reconstruction and three-dimensional modeling
CN104422406A (zh) * 2013-08-30 2015-03-18 鸿富锦精密工业(深圳)有限公司 平面度量测***及方法
US20150279075A1 (en) * 2014-03-27 2015-10-01 Knockout Concepts, Llc Recording animation of rigid objects using a single 3d scanner
TWI550425B (zh) * 2014-12-24 2016-09-21 財團法人工業技術研究院 三維點雲融合二維影像的方法、裝置與儲存媒體
CN105469388B (zh) * 2015-11-16 2019-03-15 集美大学 基于降维的建筑物点云配准方法
CN108472089B (zh) * 2015-12-15 2021-08-17 圣犹达医疗用品国际控股有限公司 电磁传感器跟踪***的运动框可视化
US10724848B2 (en) * 2016-08-29 2020-07-28 Beijing Qingying Machine Visual Technology Co., Ltd. Method and apparatus for processing three-dimensional vision measurement data
CN108932475B (zh) * 2018-05-31 2021-11-16 中国科学院西安光学精密机械研究所 一种基于激光雷达和单目视觉的三维目标识别***及方法
CN108830894B (zh) * 2018-06-19 2020-01-17 亮风台(上海)信息科技有限公司 基于增强现实的远程指导方法、装置、终端和存储介质
CN109900338B (zh) 2018-12-25 2020-09-01 西安中科天塔科技股份有限公司 一种路面坑槽体积测量方法及装置
CN109993793B (zh) * 2019-03-29 2021-09-07 北京易达图灵科技有限公司 视觉定位方法及装置


Cited By (10)

Publication number Priority date Publication date Assignee Title
CN113077500A (zh) * 2021-03-12 2021-07-06 上海杰图天下网络科技有限公司 基于平面图的全景视点定位定姿方法、***、设备及介质
CN113223179A (zh) * 2021-05-12 2021-08-06 武汉中仪物联技术股份有限公司 管道选定线路长度的确定方法和装置
CN113657074A (zh) * 2021-08-13 2021-11-16 杭州安恒信息技术股份有限公司 三维空间内的线性文本布局方法、电子装置及存储介质
WO2023045271A1 (zh) * 2021-09-24 2023-03-30 奥比中光科技集团股份有限公司 一种二维地图生成方法、装置、终端设备及存储介质
CN113970295A (zh) * 2021-09-28 2022-01-25 湖南三一中益机械有限公司 一种摊铺厚度测量方法、装置及摊铺机
CN113970295B (zh) * 2021-09-28 2024-04-16 湖南三一中益机械有限公司 一种摊铺厚度测量方法、装置及摊铺机
CN114442101A (zh) * 2022-01-28 2022-05-06 南京慧尔视智能科技有限公司 基于成像毫米波雷达的车辆导航方法、装置、设备及介质
CN114442101B (zh) * 2022-01-28 2023-11-14 南京慧尔视智能科技有限公司 基于成像毫米波雷达的车辆导航方法、装置、设备及介质
CN114781056A (zh) * 2022-04-13 2022-07-22 南京航空航天大学 一种基于特征匹配的飞机整机外形测量方法
CN114781056B (zh) * 2022-04-13 2023-02-03 南京航空航天大学 一种基于特征匹配的飞机整机外形测量方法

Also Published As

Publication number Publication date
TW202105328A (zh) 2021-02-01
TWI743645B (zh) 2021-10-21
CN112381919A (zh) 2021-02-19
JP2022537984A (ja) 2022-08-31
US20220108528A1 (en) 2022-04-07
US11983820B2 (en) 2024-05-14
CN112381919B (zh) 2022-09-27
JP7328366B2 (ja) 2023-08-16

Similar Documents

Publication Publication Date Title
WO2021017314A1 (zh) 信息处理方法、定位方法及装置、电子设备和存储介质
US20220262039A1 (en) Positioning method, electronic device, and storage medium
US20240169660A1 (en) Visual localisation
CN108986161B (zh) 一种三维空间坐标估计方法、装置、终端和存储介质
TWI434567B (zh) An image processing apparatus, an image processing method, an image processing program, and a recording medium
TW202143100A (zh) 圖像處理方法、電子設備及電腦可讀儲存介質
US20150227808A1 (en) Constructing Contours from Imagery
TWI587241B (zh) Method, device and system for generating two - dimensional floor plan
CN106462943A (zh) 将全景成像与航拍成像对齐
WO2023280038A1 (zh) 一种三维实景模型的构建方法及相关装置
CN108801225B (zh) 一种无人机倾斜影像定位方法、***、介质及设备
WO2021142843A1 (zh) 图像扫描方法及装置、设备、存储介质
CN113298871B (zh) 地图生成方法、定位方法及其***、计算机可读存储介质
CN113808269A (zh) 地图生成方法、定位方法、***及计算机可读存储介质
WO2021170051A1 (zh) 一种数字摄影测量方法、电子设备及***
CN113902802A (zh) 视觉定位方法及相关装置、电子设备和存储介质
US9852542B1 (en) Methods and apparatus related to georeferenced pose of 3D models
CN110135474A (zh) 一种基于深度学习的倾斜航空影像匹配方法和***
KR102146839B1 (ko) 실시간 가상현실 구축을 위한 시스템 및 방법
CN117635875B (zh) 一种三维重建方法、装置及终端
Sahin The geometry and usage of the supplementary fisheye lenses in smartphones
Chen et al. The power of indoor crowd: Indoor 3D maps from the crowd
CN109029365B (zh) 电力走廊异侧影像连接点提取方法、***、介质及设备
Alsadik Targetless Coregistration of Terrestrial Laser Scanning Point Clouds Using a Multi Surrounding Scan Image-Based Technique
Liu et al. An automated 3D reconstruction method of UAV images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19939788

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021574903

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19939788

Country of ref document: EP

Kind code of ref document: A1