CN112184815A - Method and device for determining position and posture of panoramic image in three-dimensional model - Google Patents
Method and device for determining position and posture of panoramic image in three-dimensional model
- Publication number: CN112184815A (application CN202011035368.3A)
- Authority: CN (China)
- Prior art keywords: panoramic image, information, dimensional model, virtual, real
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/344—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
Abstract
The embodiment of the invention provides a method and a device for determining the position and posture of a panoramic image in a three-dimensional model, wherein the method comprises the following steps: acquiring position point information in a three-dimensional model, and rendering a virtual panorama and virtual depth map information for the position point; extracting feature information of a real panorama and feature information of the virtual panorama, and performing feature matching between the virtual panorama features and the real panorama features to obtain target matching point information; and determining the position information and posture information of the panoramic image in the three-dimensional model according to the target matching point information. By selecting a viewpoint in the model, rendering a panorama and obtaining its corresponding depth map, obtaining matching points against the panorama collected in reality by image registration, and reading from those matches the coordinate value of each matching point's corresponding point in the three-dimensional model, the position and posture information of the panorama in the three-dimensional model is resolved by graphic image processing, without the need to move a measuring system.
Description
Technical Field
The invention relates to the technical field of panoramic images, and in particular to a method and a device for determining the position and posture of a panoramic image in a three-dimensional model.
Background
Existing space display methods mainly comprise panorama display and three-dimensional model display. Although a panorama uses images of the real scene and therefore constructs a realistic scene, the panoramic picture lacks three-dimensional data, and its representation of the scene lacks a global view. Conversely, although the spatial structure of a three-dimensional model is realistic, its texture details are imperfect.
In the prior art, a display mode combining a spatial 3D view with panoramic images is often used, so that the details of the space are displayed finely and realistically while the space as a whole retains a global view. However, this display mode requires recording, when the three-dimensional model and the panoramas are acquired, the accurate 3D position of each panoramic acquisition point in the three-dimensional model and the pose of the corresponding panoramic picture.
Currently, such information is generally acquired with a panoramic three-dimensional all-in-one machine, or the position and posture are determined with other hardware devices that include an inertial measurement unit. These hardware solutions are not suitable for large-scale popularization.
Therefore, how better to determine the position and posture of a panoramic image in a three-dimensional model has become an urgent problem for the industry to solve.
Disclosure of Invention
Embodiments of the present invention provide a method and an apparatus for determining a position and a pose of a panoramic image in a three-dimensional model, so as to solve the technical problems mentioned in the above background art, or at least partially solve the technical problems mentioned in the above background art.
In a first aspect, an embodiment of the present invention provides a method for determining a position and a pose of a panoramic image in a three-dimensional model, including:
acquiring position point information in a three-dimensional model, and rendering a virtual panorama and virtual depth map information for the position point;
extracting feature information of a real panorama and feature information of a virtual panorama, and performing feature matching between the virtual panorama features and the real panorama features to obtain target matching point information;
and determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
More specifically, the target matching point information includes: the virtual panorama target matching point and the real panorama target matching point.
More specifically, the step of determining the position information and the pose information of the panoramic image in the three-dimensional model according to the target matching point information specifically includes:
acquiring the space coordinates of the virtual panorama target matching points under the three-dimensional model coordinates according to the pixel coordinates of the virtual panorama target matching points and the virtual depth map information, and acquiring the space coordinates of the real panorama target matching points under a local coordinate system according to the pixel coordinates of the real panorama target matching points and the virtual depth map information;
rotating the space coordinates of the target matching points of the real panoramic image under the local coordinate system according to a preset rotation matrix to obtain the rotation space coordinates of the real panoramic image;
and determining the position information and the posture information of the real panoramic image in the three-dimensional model according to the space coordinate of the virtual panoramic image target matching point under the three-dimensional model coordinate and the real panoramic image rotation space coordinate.
More specifically, the step of determining the position information and the posture information of the real panorama in the three-dimensional model according to the space coordinate of the virtual panorama target matching point in the three-dimensional model coordinate and the real panorama rotation space coordinate specifically includes:
calculating a distance value between a space coordinate of the virtual panorama target matching point under the three-dimensional model coordinate and the real panorama rotation space coordinate;
calculating a coordinate difference value between the real panorama rotating space coordinate and the space coordinate of the real panorama target matching point under the local coordinate system according to the distance value;
and obtaining a preset rotation matrix and center coordinates in the three-dimensional model from the coordinate difference values combined with a least squares method, so as to determine the posture information and position information of the real panorama in the three-dimensional model according to the preset rotation matrix and the center coordinates.
More specifically, the step of obtaining the spatial coordinates of the target matching point of the virtual panorama under the three-dimensional model coordinates according to the pixel coordinates of the target matching point of the virtual panorama and the information of the virtual depth map specifically includes:
and converting the pixel coordinates of the virtual panorama target matching points into spherical coordinates according to the number of rows of the panorama, converting the spherical coordinates into rectangular coordinates, and converting the rectangular coordinates into coordinates in the world coordinate system, to obtain the space coordinates of the virtual panorama target matching points under the three-dimensional model coordinates.
More specifically, the step of obtaining the location point information in the three-dimensional model specifically includes:
acquiring bounding box size information of a three-dimensional model, selecting uniformly distributed viewpoints in the three-dimensional model as position points, and determining position point information according to the bounding box size information of the three-dimensional model.
More specifically, the step of performing feature matching between the feature information of the virtual panorama and the feature information of the real panorama to obtain target matching point information specifically includes:
performing feature matching between the feature information of the virtual panorama and the feature information of the real panorama to obtain matching result information, and evaluating the correctness of the matching result information to obtain the target matching point information.
In a second aspect, an embodiment of the present invention provides an apparatus for determining a position and a pose of a panoramic image in a three-dimensional model, including:
the rendering module is used for acquiring position point information in the three-dimensional model and rendering a virtual panorama and virtual depth map information for the position point;
the matching module is used for extracting feature information of the real panorama and feature information of the virtual panorama, and performing feature matching between them to obtain target matching point information;
and the determining module is used for determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method for determining the position and orientation of a panoramic image in a three-dimensional model according to the first aspect when executing the program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method for determining the position and orientation of a panoramic image in a three-dimensional model according to the first aspect.
According to the method and device for determining the position and posture of a panoramic image in a three-dimensional model provided by the embodiment of the invention, a viewpoint is selected in the model, a panorama is rendered and its corresponding depth map obtained, matching points with the actually acquired panorama are obtained by image registration, and the coordinate value of each matching point's corresponding point in the three-dimensional model is known from those matches, so that the position and posture information of the panorama in the three-dimensional model is solved by graphic image processing without moving a measuring system. This provides the necessary basic pose information for a space display method that combines a three-dimensional model, capable of displaying a global 3D view of the space, with a panorama.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
FIG. 1 is a flow chart illustrating a method for determining a position and a pose of a panoramic image in a three-dimensional model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a coordinate mapping according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for determining the position and posture of a panoramic image in a three-dimensional model according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a method for determining a position and a pose of a panoramic image in a three-dimensional model according to an embodiment of the present invention, as shown in fig. 1, including:
step S1, position point information in the three-dimensional model is obtained, and a virtual panoramic image and virtual depth map information of the position point information are rendered;
specifically, in the embodiment of the present invention, values are uniformly taken on the rectangular bounding box of the bounding box in the single space of the three-dimensional model, for example, two points at the rectangular bounding box 1/3 and at the rectangular bounding box 2/3 are taken as the position point information in the three-dimensional model.
The virtual depth map information described in the embodiment of the invention records, for each pixel point in the panorama, the depth from the rendering viewpoint to the corresponding position of the three-dimensional model; the smaller that depth value, the closer the pixel is to black in the image. The virtual depth map information helps determine the positional relationship between the position point and the coordinate center point of the three-dimensional model.
In the embodiment of the invention, rendering is performed with the camera placed at the position point (cx_v, cy_v, cz_v), rendering in the order of the z direction, the x direction, the z direction and the x direction in the three-dimensional model.
Step S2, extracting feature information of the real panorama and feature information of the virtual panorama, and performing feature matching between the virtual panorama features and the real panorama features to obtain target matching point information;
Specifically, the feature extraction described in the embodiment of the invention extracts feature points from an image using a feature extraction algorithm, for example the SURF (Speeded-Up Robust Features) operator or the SIFT (Scale-Invariant Feature Transform) operator, recording each feature point's position and which image it was extracted from. This yields the feature information of the real panorama and the feature information of the virtual panorama. Feature matching between the two then finds, for each extracted feature, its best matching point, producing the target matching point information. Multiple target matching points may exist, and the target matching point information includes the virtual panorama target matching points and the real panorama target matching points.
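A minimal sketch of the matching step, assuming descriptors have already been extracted (by SIFT, SURF, or any other operator) as (N, D) arrays; the ratio-test threshold of 0.75 is an assumption, not specified by the text:

```python
import numpy as np

def match_features(desc_virtual, desc_real, ratio=0.75):
    """Brute-force nearest-neighbour matching with a ratio test:
    for each virtual-panorama descriptor, keep its closest real-panorama
    descriptor only if it is clearly better than the second closest."""
    matches = []
    for i, d in enumerate(desc_virtual):
        dists = np.linalg.norm(desc_real - d, axis=1)
        order = np.argsort(dists)
        best, second = int(order[0]), int(order[1])
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))  # (virtual index, real index)
    return matches
```

In practice the descriptor arrays would come from an off-the-shelf extractor; the matcher itself is extractor-agnostic.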
And step S3, determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
Specifically, the space coordinates of the virtual panorama target matching points under the three-dimensional model coordinates are obtained from the pixel coordinates of the virtual panorama target matching points and the virtual depth map information, and the direction vector of the target matching point in the local coordinate system of the real panorama, or its coordinates on the unit sphere, is determined according to the space coordinates of the target matching point under the three-dimensional model coordinates;
the space coordinates of the real panorama target matching points in the local coordinate system are rotated according to a preset rotation matrix to obtain the real panorama rotation space coordinates, and the distance value between the space coordinates of the virtual panorama target matching points under the three-dimensional model coordinates and the real panorama center coordinates is calculated;
the distance value is multiplied by the direction vector in the rotated real panorama coordinate system, and the difference against the matching point's offset from the real panorama center point in the three-dimensional model coordinate system is taken; a preset rotation matrix and the center coordinates are then obtained from the coordinate differences combined with a least squares method, so that the posture information and position information are determined from the preset rotation matrix and the center coordinates.
According to the embodiment of the invention, a viewpoint is selected in the model, a panorama is rendered and its corresponding depth map obtained, matching points with the actually acquired panorama are obtained by image registration, and the coordinate value of each matching point's corresponding point in the three-dimensional model is known from those matches, so that the position and posture information of the panorama in the three-dimensional model is solved by graphic image processing without moving a measurement system. This provides the necessary basic pose information for a space display method that combines a three-dimensional model, capable of displaying a global 3D view of the space, with a panorama.
On the basis of the above embodiment, the target matching point information includes: the virtual panorama target matching points and the real panorama target matching points.
The step of determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information specifically includes:
acquiring the space coordinates of the virtual panorama target matching points under the three-dimensional model coordinates according to the pixel coordinates of the virtual panorama target matching points and the virtual depth map information, and acquiring the space coordinates of the real panorama target matching points under a local coordinate system according to the pixel coordinates of the real panorama target matching points and the virtual depth map information;
rotating the space coordinates of the target matching points of the real panoramic image under the local coordinate system according to a preset rotation matrix to obtain the rotation space coordinates of the real panoramic image;
and determining the position information and the posture information of the real panoramic image in the three-dimensional model according to the space coordinate of the virtual panoramic image target matching point under the three-dimensional model coordinate and the real panoramic image rotation space coordinate.
Specifically, fig. 2 is a schematic diagram of coordinate mapping according to an embodiment of the invention. As shown in fig. 2, the panorama in the embodiment of the invention is a mapping of a three-dimensional scene: each point on the panorama corresponds to a point p in three-dimensional space. Each pixel point on the panorama can be converted to angles θ and ψ in a spherical coordinate system, and the depth map provides the distance r from the panorama center to the three-dimensional point p. θ is the angle between the line op and the oxy plane, ranging from -90° to 90°. ψ is the angle between op and the z-axis, measured counterclockwise from the z-axis as viewed from the positive y direction, ranging from 0° to 360°. Feature points on the virtual panorama are converted from image coordinates to coordinates in the three-dimensional model coordinate system.
The image coordinates of a matching point in the virtual panorama are (xcoord, ycoord) and the number of rows of the panorama is imgheight. They are converted to spherical coordinates as ψ = xcoord / (2·imgheight) · 2π + π and θ = (ycoord / imgheight - 0.5) · π.
These are converted to rectangular coordinates as x_v = r·cosθ·sinψ, y_v = r·sinθ, z_v = r·cosθ·cosψ, and then to the world coordinate system as x_v = r·cosθ·sinψ + cx_v, y_v = r·sinθ + cy_v, z_v = r·cosθ·cosψ + cz_v, finally giving the space coordinates of the virtual panorama target matching point under the three-dimensional model coordinates.
The real panorama's local coordinate system is defined as a right-handed coordinate system: the direction from the center of the panorama sphere to the center point of the panorama is the z direction, and up is the y direction. The image coordinates of a matching point in the real panorama are (xcoord, ycoord) and the number of rows is imgheight; they are converted to spherical coordinates in the same way, ψ = xcoord / (2·imgheight) · 2π + π and θ = (ycoord / imgheight - 0.5) · π.
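The two conversions above can be sketched as follows. The formulas are as reconstructed from the text, with y as the up axis and ψ measured from the z-axis; treat the exact ψ offset as an assumption:

```python
import numpy as np

def pixel_to_sphere(xcoord, ycoord, imgheight):
    """Equirectangular pixel coordinates -> spherical angles (psi, theta)."""
    psi = xcoord / (2.0 * imgheight) * 2.0 * np.pi + np.pi
    theta = (ycoord / imgheight - 0.5) * np.pi
    return psi, theta

def sphere_to_model(psi, theta, r, center):
    """Spherical angles plus depth r -> model coordinates, by adding the
    rendering viewpoint (cx_v, cy_v, cz_v) to the rectangular coordinates."""
    cxv, cyv, czv = center
    xv = r * np.cos(theta) * np.sin(psi) + cxv
    yv = r * np.sin(theta) + cyv
    zv = r * np.cos(theta) * np.cos(psi) + czv
    return xv, yv, zv
```

For real-panorama points, r is not available from a depth map, so r = 1 yields the unit-sphere coordinates used later.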
Let the center coordinates of the real panorama in the three-dimensional model be (cx, cy, cz). The real panorama coordinate system is rotated to align with the axes of the three-dimensional model; the Euler angles of the rotation matrix are denoted (rx, ry, rz), giving the real panorama rotation space coordinates. Obtaining the center coordinates and the rotation matrix is then cast as a least squares problem, and the position and posture information of the real panorama in the three-dimensional model is finally determined from the space coordinates of the virtual panorama target matching points under the three-dimensional model coordinates and the real panorama rotation space coordinates.
Wherein the preset rotation matrix uses the abbreviations
s1 = sin(ry), s2 = sin(rz), s3 = sin(rx),
c1 = cos(ry), c2 = cos(rz), c3 = cos(rx).
the characteristic points on the virtual panoramic image are mapped to the space coordinate x under the three-dimensional model coordinate system by the image coordinatev,yv,zv。
The characteristic point on the real panoramic picture is mapped to the space coordinate x under the local coordinate system by the image coordinater,yr,zr。
For each pair of matching points there is the relationship that the rotated space coordinates of the real panorama feature point satisfy (x_t, y_t, z_t)^T = rotmatrix · (x_r, y_r, z_r)^T.
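The rotation in this relationship can be sketched as below; the exact matrix entries appear only as a figure in the original, so the Rz·Ry·Rx composition order here is an assumption:

```python
import numpy as np

def rotmatrix(rx, ry, rz):
    """Rotation matrix from Euler angles (rx, ry, rz) about the x, y, z axes,
    composed here as Rz @ Ry @ Rx (assumed order)."""
    s3, c3 = np.sin(rx), np.cos(rx)  # s3/c3 for rx, matching the text
    s1, c1 = np.sin(ry), np.cos(ry)  # s1/c1 for ry
    s2, c2 = np.sin(rz), np.cos(rz)  # s2/c2 for rz
    Rx = np.array([[1, 0, 0], [0, c3, -s3], [0, s3, c3]])
    Ry = np.array([[c1, 0, s1], [0, 1, 0], [-s1, 0, c1]])
    Rz = np.array([[c2, -s2, 0], [s2, c2, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# rotated coordinates (x_t, y_t, z_t) of a real-panorama point (x_r, y_r, z_r)
p_t = rotmatrix(0.0, 0.0, np.pi / 2) @ np.array([1.0, 0.0, 0.0])
```

Any orthonormal rotation parameterization would serve; only consistency between the solve and the final pose readout matters.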
On the basis of the above embodiment, the step of determining the position information and the pose information of the real panorama in the three-dimensional model according to the space coordinate of the virtual panorama target matching point in the three-dimensional model coordinate and the real panorama rotation space coordinate specifically includes:
calculating a distance value between a space coordinate of the virtual panorama target matching point under the three-dimensional model coordinate and the real panorama rotation space coordinate;
calculating a coordinate difference value between the real panorama rotating space coordinate and the space coordinate of the real panorama target matching point under the local coordinate system according to the distance value;
and obtaining the preset rotation matrix and center coordinates in the three-dimensional model from the coordinate difference values combined with a least squares method, so as to determine the posture information and position information of the real panorama in the three-dimensional model according to the preset rotation matrix and the center coordinates.
Specifically, calculating a distance value between a space coordinate of the virtual panorama target matching point under the three-dimensional model coordinate and the real panorama rotation space coordinate specifically includes:
distance = sqrt((x_v - cx)^2 + (y_v - cy)^2 + (z_v - cz)^2)
calculating a coordinate difference value between the real panorama rotating space coordinate and the space coordinate of the real panorama target matching point under the local coordinate system according to the distance value, wherein the coordinate difference value is specifically as follows:
dx = x_t * distance - (x_v - cx),
dy = y_t * distance - (y_v - cy),
dz = z_t * distance - (z_v - cz),
and the distance difference is diff = dx^2 + dy^2 + dz^2.
The method for obtaining the preset rotation matrix and the three-dimensional model center coordinate by combining the coordinate difference value with the least square method specifically comprises the following steps:
if there are N pairs of matching points, then the obtained result is obtainedThe minimum preset rotation matrix and the three-dimensional model center coordinate, and the preset rotation obtained at the momentThe matrix is a preset rotation matrix and a three-dimensional model center coordinate, and the obtained three-dimensional model center coordinate is position information of the three-dimensional model.
On the basis of the above embodiment, the step of obtaining the location point information in the three-dimensional model specifically includes:
acquiring bounding box size information of a three-dimensional model, selecting uniformly distributed viewpoints in the three-dimensional model as position points, and determining position point information according to the bounding box size information of the three-dimensional model.
Specifically, in the embodiment of the invention, points are taken uniformly within the rectangular bounding box of a single space of the three-dimensional model; for example, the two points at 1/3 and at 2/3 of the rectangular bounding box are taken as the position point information in the three-dimensional model.
By uniformly selecting position points on the bounding box, the embodiment of the invention obtains position point information more effectively and facilitates the subsequent steps.
On the basis of the above embodiment, the step of performing feature matching between the feature information of the virtual panorama and the feature information of the real panorama to obtain target matching point information specifically includes:
performing feature matching between the feature information of the virtual panorama and the feature information of the real panorama to obtain matching result information, and evaluating the correctness of the matching result information to obtain the target matching point information.
Specifically, the feature extraction described in the embodiment of the invention extracts feature points from an image using a feature extraction algorithm, for example the SURF (Speeded-Up Robust Features) operator or the SIFT (Scale-Invariant Feature Transform) operator, recording each feature point's position and which image it was extracted from, to obtain the feature information of the real panorama and of the virtual panorama. Feature matching between the two then finds the best matching point for each extracted feature, producing the target matching point information; multiple target matching points may exist.
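The text does not fix how the correctness of the matching result is evaluated; one common, minimal strategy is a mutual (cross-check) filter, sketched here under that assumption:

```python
def mutual_matches(matches_vr, matches_rv):
    """Keep a (virtual, real) match only if the reverse matching also
    paired the same real feature back to the same virtual feature."""
    reverse = {(v, r) for r, v in matches_rv}
    return [m for m in matches_vr if m in reverse]
```

Stronger evaluations, such as geometric consistency checks over the spherical coordinates, could be layered on top of this filter.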
Fig. 3 is a schematic diagram of a position and orientation determining apparatus in a three-dimensional model according to an embodiment of the present invention, as shown in fig. 3, including: a rendering module 310, a matching module 320, and a determining module 330; the rendering module 310 is configured to obtain location point information in a three-dimensional model, and render a virtual panorama and virtual depth map information of the location point information; the matching module 320 is configured to extract feature information of a real panorama and feature information of a virtual panorama, perform feature matching on the feature information of the virtual panorama and the feature information of the real panorama, and obtain target matching point information; the determining module 330 is configured to determine position information and pose information of the panoramic image in the three-dimensional model according to the target matching point information.
The apparatus provided in the embodiment of the present invention is used for executing the above method embodiments, and for details of the process and the details, reference is made to the above embodiments, which are not described herein again.
According to the embodiment of the invention, a viewpoint is selected in the model, a panorama is rendered together with its corresponding depth map, and matching points are obtained by image registration against the actually collected panorama. Since the coordinate value of each matching point's corresponding point in the three-dimensional model is known, the position and posture information of the panorama in the three-dimensional model is solved by graphic image processing, without a mobile measurement system. This provides the necessary basic pose information for a space display method that combines a panorama with a three-dimensional model capable of displaying a global 3D view of the space.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 4, the electronic device may include: a processor (processor) 410, a communication interface 420, a memory (memory) 430 and a communication bus 440, wherein the processor 410, the communication interface 420 and the memory 430 communicate with one another via the communication bus 440. The processor 410 may call logic instructions in the memory 430 to perform the following method: acquiring position point information in a three-dimensional model, and rendering virtual panorama and virtual depth map information of the position point information; extracting the feature information of a real panoramic image and the feature information of a virtual panoramic image, and performing feature matching on the feature information of the virtual panoramic image and the feature information of the real panoramic image to obtain target matching point information; and determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
In addition, the logic instructions in the memory 430 may be implemented in the form of software functional units and, when sold or used as independent products, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
An embodiment of the present invention discloses a computer program product, which includes a computer program stored on a non-transitory computer readable storage medium, the computer program including program instructions, when the program instructions are executed by a computer, the computer can execute the methods provided by the above method embodiments, for example, the method includes: acquiring position point information in a three-dimensional model, and rendering virtual panorama and virtual depth map information of the position point information; extracting the characteristic information of a real panoramic image and the characteristic information of a virtual panoramic image, and performing characteristic matching on the characteristic information of the virtual panoramic image and the characteristic information of the real panoramic image to obtain target matching point information; and determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
Embodiments of the present invention provide a non-transitory computer-readable storage medium storing server instructions, where the server instructions cause a computer to execute the method provided in the foregoing embodiments, for example, the method includes: acquiring position point information in a three-dimensional model, and rendering virtual panorama and virtual depth map information of the position point information; extracting the characteristic information of a real panoramic image and the characteristic information of a virtual panoramic image, and performing characteristic matching on the characteristic information of the virtual panoramic image and the characteristic information of the real panoramic image to obtain target matching point information; and determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method for determining a position and a pose of a panoramic image in a three-dimensional model, comprising:
acquiring position point information in a three-dimensional model, and rendering virtual panorama and virtual depth map information of the position point information;
extracting the characteristic information of a real panoramic image and the characteristic information of a virtual panoramic image, and performing characteristic matching on the characteristic information of the virtual panoramic image and the characteristic information of the real panoramic image to obtain target matching point information;
and determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
2. The method of determining the position and orientation of the panoramic image in the three-dimensional model according to claim 1, wherein the target matching point information includes: the virtual panorama target matching point and the real panorama target matching point.
3. The method for determining the position and orientation of the panoramic image in the three-dimensional model according to claim 2, wherein the step of determining the position information and the orientation information of the panoramic image in the three-dimensional model according to the target matching point information specifically comprises:
acquiring the space coordinates of the target matching points of the virtual panoramic image under the three-dimensional model coordinates according to the pixel coordinates of the target matching points of the virtual panoramic image and the information of the virtual depth image, and acquiring the space coordinates of the target matching points of the real panoramic image under a local coordinate system according to the pixel coordinates of the target matching points of the real panoramic image and the information of the virtual depth image;
rotating the space coordinates of the target matching points of the real panoramic image under the local coordinate system according to a preset rotation matrix to obtain the rotation space coordinates of the real panoramic image;
and determining the position information and the posture information of the real panoramic image in the three-dimensional model according to the space coordinate of the virtual panoramic image target matching point under the three-dimensional model coordinate and the real panoramic image rotation space coordinate.
4. The method for determining the position and orientation of the panoramic image in the three-dimensional model according to claim 3, wherein the step of determining the position information and the orientation information of the real panoramic image in the three-dimensional model according to the spatial coordinates of the target matching point of the virtual panoramic image in the three-dimensional model coordinates and the rotation spatial coordinates of the real panoramic image comprises:
calculating a distance value between a space coordinate of the virtual panorama target matching point under the three-dimensional model coordinate and the real panorama rotation space coordinate;
calculating a coordinate difference value between the real panorama rotating space coordinate and the space coordinate of the real panorama target matching point under the local coordinate system according to the distance value;
and obtaining a preset rotation matrix and a three-dimensional model center coordinate according to the coordinate difference value by combining a least square method, so as to determine attitude information and position information of the three-dimensional model according to the preset rotation matrix and the three-dimensional model center coordinate.
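One common way to realize the least-squares solution described in claim 4 is the Kabsch/SVD method for fitting a rigid transform between matched point sets; the sketch below is an illustration under that assumption, not the patent's own implementation:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) such that
    R @ src_i + t ~= dst_i, via the Kabsch/SVD method."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Here `src` would hold the real-panorama matching points and `dst` the corresponding model-space points; `R` and `t` then give the posture and position information.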
5. The method for determining the position and orientation of the panoramic image in the three-dimensional model according to claim 3, wherein the step of obtaining the spatial coordinates of the target matching point of the virtual panoramic image under the three-dimensional model coordinates according to the pixel coordinates of the target matching point of the virtual panoramic image and the information of the virtual depth map specifically comprises:
and converting the pixel coordinates of the target matching points of the virtual panoramic image into spherical coordinates according to the numbers of rows and columns of the panoramic image, converting the spherical coordinates into rectangular coordinate system coordinates, and converting the rectangular coordinate system coordinates into coordinates under a world coordinate system to obtain the space coordinates of the target matching points of the virtual panoramic image under the three-dimensional model coordinates.
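The conversion chain in claim 5 (pixel → spherical → rectangular → world) can be sketched as follows; the longitude/latitude conventions and the axis order are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def panorama_pixel_to_world(u, v, depth, width, height, R_wc, cam_center):
    """Map an equirectangular-panorama pixel (u, v) with a depth value to a
    world-space point: pixel -> spherical angles -> camera-frame ray ->
    world frame."""
    lon = (u / width) * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi   # latitude in [pi/2, -pi/2]
    # Spherical -> rectangular (camera frame), scaled by the depth value.
    ray = np.array([np.cos(lat) * np.cos(lon),
                    np.cos(lat) * np.sin(lon),
                    np.sin(lat)]) * depth
    # Camera frame -> world (three-dimensional model) frame.
    return R_wc @ ray + np.asarray(cam_center, dtype=float)
```

For instance, the center pixel of a 1024×512 panorama with depth 2.0, an identity rotation and the camera at the origin maps to the point (2, 0, 0) in front of the viewpoint.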
6. The method for determining the position and the posture of the panoramic image in the three-dimensional model according to claim 1, wherein the step of obtaining the position point information in the three-dimensional model specifically comprises:
acquiring bounding box size information of a three-dimensional model, selecting uniformly distributed viewpoints in the three-dimensional model as position points, and determining position point information according to the bounding box size information of the three-dimensional model.
7. The method for determining the position and the posture of the panoramic image in the three-dimensional model according to claim 1, wherein the step of performing feature matching on the feature information of the virtual panoramic image and the feature information of the real panoramic image to obtain target matching point information specifically comprises:
and performing characteristic matching on the characteristic information of the virtual panoramic image and the characteristic information of the real panoramic image to obtain matching result information, and performing correctness evaluation on the matching result information to obtain target matching point information.
8. An apparatus for determining a position and orientation of a panoramic image in a three-dimensional model, comprising:
the rendering module is used for acquiring position point information in the three-dimensional model and rendering virtual panorama and virtual depth map information of the position point information;
the matching module is used for extracting the characteristic information of the real panoramic image and the characteristic information of the virtual panoramic image, and performing characteristic matching on the characteristic information of the virtual panoramic image and the characteristic information of the real panoramic image to obtain target matching point information;
and the determining module is used for determining the position information and the posture information of the panoramic image in the three-dimensional model according to the target matching point information.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, carries out the steps of a method for determining the position and orientation of a panoramic image in a three-dimensional model according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for determining the position and orientation of a panoramic image in a three-dimensional model according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011035368.3A CN112184815A (en) | 2020-09-27 | 2020-09-27 | Method and device for determining position and posture of panoramic image in three-dimensional model |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112184815A true CN112184815A (en) | 2021-01-05 |
Family
ID=73944645
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103017653A (en) * | 2012-11-27 | 2013-04-03 | 武汉海达数云技术有限公司 | Registration and measurement method of spherical panoramic image and three-dimensional laser scanning point cloud |
KR101770648B1 (en) * | 2016-05-18 | 2017-08-23 | 주식회사 큐픽스 | A method for providing indoor virtual experience based on panorama pictures and a 3d floor plan, a portable terminal and a method for providing floor plans using the portable terminal |
CN110458871A (en) * | 2019-08-14 | 2019-11-15 | 上海霁目信息科技有限公司 | The method for registering of model and panorama sketch, system, equipment and medium and map |
Non-Patent Citations (3)
Title |
---|
JINDONG ZHANG et al.: "An improved vehicle panoramic image generation algorithm", Multimedia Tools and Applications, vol. 78, 25 June 2019 (2019-06-25), pages 27663, XP036882824, DOI: 10.1007/s11042-019-07890-w *
LIU Quanhai et al.: "Research on registration methods for laser point clouds and panoramic images", Modern Surveying and Mapping (《现代测绘》), no. 1, 25 January 2016 (2016-01-25), pages 24-28 *
WANG Jingxue et al.: "Adaptive-window stereo image matching algorithm based on local affine transformation", Science of Surveying and Mapping (《测绘科学》), 7 May 2019 (2019-05-07), pages 125-131 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113554577A (en) * | 2021-07-13 | 2021-10-26 | 咪咕文化科技有限公司 | Three-dimensional image synthesis method, three-dimensional image synthesis device, storage medium and storage apparatus |
CN114895796A (en) * | 2022-07-15 | 2022-08-12 | 杭州易绘科技有限公司 | Space interaction method and device based on panoramic image and application |
CN116778127A (en) * | 2023-07-05 | 2023-09-19 | 广州视景医疗软件有限公司 | Panoramic view-based three-dimensional digital scene construction method and system |
CN116778127B (en) * | 2023-07-05 | 2024-01-05 | 广州视景医疗软件有限公司 | Panoramic view-based three-dimensional digital scene construction method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||