CN111145095A - VR (virtual reality) diagram generation method with scale measurement and data acquisition device - Google Patents


Info

Publication number
CN111145095A
CN111145095A (application CN201911362662.2A)
Authority
CN
China
Prior art keywords
frame
point cloud
coordinate system
panorama
coordinates
Prior art date
Legal status
Granted
Application number
CN201911362662.2A
Other languages
Chinese (zh)
Other versions
CN111145095B (en)
Inventor
陈诺
洪涛
卢雄辉
欧阳若愚
Current Assignee
Shenzhen Nuoda Communication Technology Co ltd
Original Assignee
Shenzhen Wujing Intelligent Robot Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Wujing Intelligent Robot Co ltd filed Critical Shenzhen Wujing Intelligent Robot Co ltd
Priority to CN201911362662.2A
Publication of CN111145095A
Application granted
Publication of CN111145095B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a VR diagram generation method with scale measurement and a data acquisition device. The method comprises the following steps: S1, collecting the VR panorama and a three-dimensional laser point set with the data acquisition device, so that a corresponding three-dimensional laser point can be found for the pixel at each angle of the VR panorama; S2, generating a local point cloud map; and S3, generating a depth map capable of scale measurement from the VR panorama and the local point cloud map. The invention provides a portable, easy-to-operate data acquisition device, realizes the fusion of point cloud data with the VR panorama, and realizes the measurement of scale within the VR panorama.

Description

VR (virtual reality) diagram generation method with scale measurement and data acquisition device
Technical Field
The invention relates to the technical field of VR (virtual reality), in particular to a VR (virtual reality) diagram generation method with scale measurement and a data acquisition device.
Background
In recent years, VR technology has developed rapidly and is used for online house viewing in the real estate industry, for reconstructing crime scenes in criminal investigation, for increasing the realism of games, and so on. However, because VR content is collected with a panoramic camera and lacks environmental scale information, the application of the technology in some industries is greatly restricted, especially in criminal investigation. In criminal investigation, reconstructing the scene of a case is key to solving it. At present, scene information can be obtained through VR technology, but dimensional information cannot, so critical clues such as the size of footprints, the distance between footprints, the size of bloodstains, and the distance between a weapon and a target object cannot be obtained effectively. Such measurements must be made manually, which is time-consuming and labor-intensive, leaves more records to be archived, and is prone to oversights; if a new investigator takes over an earlier case, searching and organizing these records is itself no small task. Similarly, in the real estate industry, a client viewing a house online can only see the scene information of the environment and cannot effectively obtain the dimensions of each room, window, and so on, and therefore cannot truly understand the property.
Existing VR technology mainly acquires the scene information of an environment and carries no scale information. It therefore cannot provide comprehensive environmental information in industries such as criminal investigation and real estate, which greatly limits the use of VR technology.
Disclosure of Invention
To address these defects, the invention provides a data acquisition device that acquires three-dimensional scale information of the environment while acquiring the VR panoramic information of the environment, and then combines the two to generate a scaled VR panorama and to measure the dimensions of objects in the panorama.
In order to achieve this purpose, the specific technical solution of the invention is as follows:
The invention first provides a data acquisition device comprising a panoramic VR camera, a three-dimensional lidar, a tripod, a turntable, and a stepper motor; the panoramic VR camera is mounted on top of the tripod, the three-dimensional lidar is carried on the turntable, which is mounted on the tripod, and the turntable is driven by the stepper motor.
The invention provides a VR image generation method with scale measurement based on the data acquisition device, which comprises the following steps:
s1, collecting the VR panorama and the three-dimensional laser point set by using a data collection device, so that the corresponding three-dimensional laser point can be found for the pixel of each angle of the VR panorama;
s2, generating a local point cloud map;
and S3, generating a depth map capable of carrying out scale measurement according to the VR panoramic image and the local point cloud map.
Further, in step S1, the method for enabling the pixel at each angle of the VR panorama to find the corresponding three-dimensional laser point is as follows:
selecting a speed of the stepper motor and determining a stepper motor rotation time, wherein:
the stepper motor speed calculation formula is as follows:
v_M = 360 ÷ W ÷ f_l    (1)
where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional lidar; this formula effectively ensures that a corresponding three-dimensional laser point can be found for the panorama pixel corresponding to each rotation angle of the three-dimensional lidar;
the rotation duration of the stepper motor is calculated as follows:
t_M > 360 ÷ v_M    (2)
where t_M is the rotation duration of the motor; this duration effectively ensures that the three-dimensional lidar rotates through more than 360 degrees, so that a corresponding three-dimensional laser point can be found for the pixel at each angle of the VR panorama.
Preferably, the local point cloud map is generated in step S2 as follows:
S21, taking the rotation angle θ_0 of the first frame as the reference angle and the corresponding pose of the turntable as the fixed coordinate system, i.e. as the origin of the coordinate system of the local point cloud map; all subsequent point clouds are stitched into the local point cloud map in this reference coordinate system;
S22, aligning the timestamps of the current point cloud frame and of the rotation-angle frame, to ensure that the rotation angle and the current point cloud frame are acquired at the same time;
s23, determining the conversion relation between the current radar frame and the fixed coordinate system according to the relation between the current rotation angle and the reference rotation angle, as shown in the following formula:
T_t = R(θ_t - θ_0), the rotation matrix corresponding to a rotation of (θ_t - θ_0) about the turntable's rotation axis    (3)
where θ_t is the rotation angle of the current rotation-angle frame and T_t is the rotation matrix between the current radar frame and the fixed coordinate system;
after the rotation matrix is obtained, the current point cloud frame is transformed into the fixed coordinate system as follows:
(p_fx, p_fy, p_fz)^T = T_t · (p_x, p_y, p_z)^T    (4)
where (p_fx, p_fy, p_fz) are the coordinates of the transformed point in the fixed coordinate system and (p_x, p_y, p_z) are the coordinates of the point in the current radar coordinate system;
and S24, transforming all points of the current radar frame into the fixed coordinate system according to equation (4) in step S23, then stitching the transformed current frame point cloud directly into the existing local point cloud map according to the positional relation, and repeating this process in a loop until the motor stops rotating, thereby completing the local point cloud map.
Preferably, in step S22, the specific method for aligning the timestamps of the current point cloud frame and of the rotation-angle frame is as follows:
First, the rotation-angle frame timestamps stored in the angle container are compared with the timestamp of the current radar frame, and the first rotation-angle frame whose timestamp is greater than that of the current radar frame is taken as a candidate; the timestamps of this candidate frame and of the frame immediately before it are then compared with the timestamp of the current radar frame, and the frame closer in time to the current radar frame is selected as the rotation-angle frame to be used.
Preferably, in step S3, the method for generating a depth image capable of scale measurement according to the VR panorama and the local point cloud map is as follows:
s31, obtaining mapping between local point cloud map coordinates and spherical VR panorama coordinates;
s32, obtaining mapping between the spherical VR panorama coordinates and the depth map coordinates;
and S33, indirectly acquiring the mapping relation between the local point cloud map coordinates and the depth image coordinates according to S31 and S32, and attaching the depth in the depth image.
Further, the method for obtaining the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates in step S31 is as follows:
s311, converting the point cloud from the fixed coordinate system in the step S22 to a camera coordinate system, wherein the conversion formula is as follows:
(p_cx, p_cy, p_cz)^T = cT_f · (p_fx, p_fy, p_fz)^T    (5)
where (p_cx, p_cy, p_cz) is the point in the camera coordinate system and cT_f is the rotation matrix from the fixed coordinate system to the camera coordinate system, which can be obtained by directly measuring the positional relation between the two coordinate systems with a ruler;
s312, calculating the longitude and latitude of the points in the point cloud under the longitude and latitude coordinate system, wherein the calculation formula is as follows:
θ_longitude = atan2(p_y, p_x)    (6)
θ_latitude = atan2(p_z, √(p_x² + p_y²))    (7)
where θ_longitude and θ_latitude are the longitude and latitude of the point in the spherical longitude-latitude coordinate system.
Preferably, in step S32, the method of obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates is as follows:
S321, the spherical VR panorama is divided into pixels at equal intervals of longitude and latitude, so that it can be expanded into a two-dimensional planar image, with the horizontal direction divided at equal longitude intervals and the vertical direction divided at equal latitude intervals. The mapping relationship is as follows:
(c_x, c_y)^T = (θ_longitude ÷ d_long, θ_latitude ÷ d_lati)^T    (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image and d_long and d_lati are the angular intervals per pixel in the longitude and latitude directions, respectively.
Preferably, in step S33, the method for attaching the depth in the depth image is as follows:
and (3) calculating the depth of each point in the local point cloud map, wherein the calculation formula is as follows:
r = √(p_x² + p_y² + p_z²)    (9)
where r is the required depth; the depth of each point is computed with this formula and attached to the corresponding pixel of the depth image, which completes the attachment of the point cloud to the depth image.
Preferably, in step S3, after the depth map capable of scale measurement is generated, the scale measurement method is as follows:
s341, acquiring pixel coordinates and depth:
clicking two points in the VR panorama, mapping to coordinates of two pixels in the depth image through a formula (8), and reading depth values of the two pixels attached to the depth image according to the coordinates of the two pixels;
s342, point cloud reduction:
firstly, mapping to longitude and latitude coordinates according to pixel coordinates as follows:
(θ'_longitude, θ'_latitude)^T = (c'_x · d_long, c'_y · d_lati)^T    (10)
where (θ'_longitude, θ'_latitude) are the mapped longitude and latitude and (c'_x, c'_y) are the pixel coordinates obtained in step S341;
then, the space point is restored from the longitude, the latitude and the depth value:
p'_x = r' · cos(θ'_latitude) · cos(θ'_longitude), p'_y = r' · cos(θ'_latitude) · sin(θ'_longitude), p'_z = r' · sin(θ'_latitude)    (11)
where (p'_x, p'_y, p'_z) is the restored three-dimensional space point and r' is the depth value obtained in step S341;
S343, distance measurement:
each point clicked by the user is converted into a three-dimensional point in real space according to equations (10) and (11), and the Euclidean distance between the two spatial points is then computed as
d = √((p'_x1 - p'_x2)² + (p'_y1 - p'_y2)² + (p'_z1 - p'_z2)²)    (12)
which realizes the measurement of scale; in the formula, (p'_x1, p'_y1, p'_z1) and (p'_x2, p'_y2, p'_z2) are the two clicked points in the VR panorama.
Compared with the prior art, the invention has the advantages that:
(1) a portable, easy-to-operate data acquisition device is designed;
(2) fusion of the point cloud data and the VR panorama is realized;
(3) measurement of scale within the VR diagram is realized.
Drawings
FIG. 1 is a schematic diagram of the data acquisition device according to the present invention;
FIG. 2 is a flow chart of local point cloud map generation according to the present invention;
FIG. 3 is a block diagram of the time alignment process between the radar frame and the rotation-angle frame according to the present invention;
FIG. 4 shows the spherical VR panorama and its expanded planar view according to the present invention.
Detailed Description
In order that those skilled in the art can understand and implement the present invention, the following embodiments of the present invention will be further described with reference to the accompanying drawings.
Referring to fig. 1, the present invention provides a data acquisition device comprising: a panoramic VR camera 1, a three-dimensional lidar 2, a tripod 3, a turntable 4, and a stepper motor 5. The panoramic VR camera 1 is mounted on top of the tripod 3, the three-dimensional lidar 2 is carried on the turntable 4, which is mounted on the tripod 3, and the turntable 4 is driven by the stepper motor 5.
Based on the data acquisition device, the invention provides a VR image generation method with scale measurement, which comprises the following steps:
and S1, collecting the VR panorama and the three-dimensional laser point set by using a data collection device, so that the corresponding three-dimensional laser point can be found for the pixels at each angle of the VR panorama.
In the data acquisition process, the speed of the stepper motor 5 and its rotation duration must be chosen so that every pixel in the panorama can find a corresponding laser point in the three-dimensional point cloud map; otherwise, some pixel positions cannot effectively acquire scale information. The output speed of the stepper motor 5 is calculated as follows:
v_M = 360 ÷ W ÷ f_l    (1)
where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional lidar; this formula effectively ensures that a corresponding three-dimensional laser point can be found for the panorama pixel corresponding to each rotation angle of the three-dimensional lidar;
the rotation duration of the stepper motor is calculated as follows:
t_M > 360 ÷ v_M    (2)
where t_M is the rotation duration of the motor; this duration effectively ensures that the three-dimensional lidar rotates through more than 360 degrees, so that a corresponding three-dimensional laser point can be found for the pixel at each angle of the VR panorama.
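For a concrete sense of these two quantities, a minimal sketch follows that evaluates equations (1) and (2) exactly as printed; the panorama width and lidar update frequency in the example call are assumed values for illustration only.

```python
def motor_parameters(panorama_width_px: int, lidar_update_hz: float):
    """Evaluate Eq. (1) (motor speed v_M) and Eq. (2) (minimum rotation
    duration t_M) exactly as printed above."""
    v_m = 360.0 / panorama_width_px / lidar_update_hz  # Eq. (1)
    t_m_min = 360.0 / v_m                              # Eq. (2): run the motor longer than this
    return v_m, t_m_min

# Assumed example values: an 8192-pixel-wide panorama and a 10 Hz lidar.
v_m, t_m_min = motor_parameters(8192, 10.0)
print(v_m, t_m_min)
```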
S2, generating a local point cloud map.
Specifically, the local point cloud map is generated as shown in fig. 2:
(1) First, the rotation angle θ_0 of the first frame is taken as the reference angle and the corresponding pose of the turntable as the fixed coordinate system, i.e. as the origin of the coordinate system of the local point cloud map; all subsequent point clouds are stitched into the local point cloud map in this reference coordinate system;
(2) The timestamps of the current point cloud frame and of the rotation-angle frame are aligned, to ensure that the rotation angle and the current point cloud frame are acquired at the same time. The specific alignment method is as follows (see fig. 3):
First, the rotation-angle frame timestamps stored in the angle container are compared with the timestamp of the current radar frame, and the first rotation-angle frame whose timestamp is greater than that of the current radar frame is taken as a candidate; the timestamps of this candidate frame and of the frame immediately before it are then compared with the timestamp of the current radar frame, and the frame closer in time to the current radar frame is selected as the rotation-angle frame to be used.
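Read as an algorithm, this alignment rule can be sketched as follows, assuming the angle container is a time-ordered list of (timestamp, angle) pairs; the function name and data layout are illustrative only.

```python
def match_angle_frame(angle_frames, radar_stamp):
    """Return the (timestamp, angle) pair whose timestamp is closest to the
    current radar frame, following the rule described above."""
    for i, (stamp, angle) in enumerate(angle_frames):
        if stamp > radar_stamp:                      # first angle frame newer than the radar frame
            if i == 0:
                return angle_frames[0]
            prev = angle_frames[i - 1]               # also consider the frame just before it
            return prev if radar_stamp - prev[0] <= stamp - radar_stamp else (stamp, angle)
    return angle_frames[-1]                          # every angle frame is older: take the latest
```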
(3) After the radar frame and the rotation-angle frame are aligned, the transformation between the current radar frame and the fixed coordinate system is determined from the relation between the current rotation angle and the reference rotation angle, as shown in the following formula:
T_t = R(θ_t - θ_0), the rotation matrix corresponding to a rotation of (θ_t - θ_0) about the turntable's rotation axis    (3)
where θ_t is the rotation angle of the current rotation-angle frame and T_t is the rotation matrix between the current radar frame and the fixed coordinate system;
after the rotation matrix is obtained, the current point cloud frame is transformed into the fixed coordinate system as follows:
(p_fx, p_fy, p_fz)^T = T_t · (p_x, p_y, p_z)^T    (4)
where (p_fx, p_fy, p_fz) are the coordinates of the transformed point in the fixed coordinate system and (p_x, p_y, p_z) are the coordinates of the point in the current radar coordinate system;
(4) All points of the current radar frame are transformed into the fixed coordinate system according to equation (4), the transformed current frame point cloud is then stitched directly into the existing local point cloud map according to the positional relation, and this process is repeated in a loop until the motor stops rotating, at which point the complete local point cloud map is finished.
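The transformation and stitching of steps (3) and (4) can be sketched as follows; the assumption that the turntable rotates about the z-axis of the fixed coordinate system is ours (the text does not state the axis convention), and each point cloud is represented as an N x 3 NumPy array.

```python
import numpy as np

def stitch_frame(local_map, cloud_xyz, theta_t_deg, theta_0_deg):
    """Rotate one radar frame into the fixed coordinate system (Eqs. (3)-(4),
    with the turntable axis assumed to be z) and append it to the local map."""
    a = np.deg2rad(theta_t_deg - theta_0_deg)
    T_t = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])   # Eq. (3), assumed z-axis rotation
    cloud_fixed = cloud_xyz @ T_t.T                  # Eq. (4), applied to every point
    if local_map is None or local_map.size == 0:
        return cloud_fixed
    return np.vstack([local_map, cloud_fixed])       # stitch into the existing local map
```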
S3, generating a depth map capable of scale measurement according to the VR panorama and the local point cloud map.
The generation of the depth map comprises three parts: mapping between the local point cloud map coordinates and the spherical VR panorama coordinates, mapping between the spherical VR panorama coordinates and the depth image coordinates, and attachment of depth in the depth image.
(1) Mapping between the local point cloud map coordinates and the spherical VR panorama coordinates. The VR panoramic camera captures a 360° panoramic image, which is represented as a complete spherical image whose pixels are divided by longitude and latitude, as shown in the left image of fig. 4; the mapping between the point cloud and the pixels of the spherical image can therefore be performed from the angular information of each point in space relative to the sphere. The mapping is computed as follows:
A. converting the point cloud from the fixed coordinate system in step S2 (2) to the camera coordinate system, wherein the conversion formula is as follows:
(p_cx, p_cy, p_cz)^T = cT_f · (p_fx, p_fy, p_fz)^T    (5)
where (p_cx, p_cy, p_cz) is the point in the camera coordinate system and cT_f is the rotation matrix from the fixed coordinate system to the camera coordinate system, which can be obtained by directly measuring the positional relation between the two coordinate systems with a ruler;
B. calculating the longitude and latitude of each point of the point cloud in the longitude-latitude coordinate system:
θ_longitude = atan2(p_y, p_x)    (6)
θ_latitude = atan2(p_z, √(p_x² + p_y²))    (7)
where θ_longitude and θ_latitude are the longitude and latitude of the point in the spherical longitude-latitude coordinate system. Through these two formulas the local point cloud map can be mapped to every angle of the longitude-latitude coordinate system, realizing the mapping between the local point cloud map and the VR panorama.
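Equations (5) to (7) (with (7) in the reconstructed form above) amount to the following sketch, where cT_f is assumed to be the measured 3 x 3 rotation matrix and the point cloud is an N x 3 array in the fixed coordinate system.

```python
import numpy as np

def cloud_to_lonlat(cloud_fixed, T_cam_from_fixed):
    """Map local-map points to spherical longitude/latitude (Eqs. (5)-(7))."""
    p_cam = cloud_fixed @ T_cam_from_fixed.T                    # Eq. (5)
    lon = np.arctan2(p_cam[:, 1], p_cam[:, 0])                  # Eq. (6)
    lat = np.arctan2(p_cam[:, 2],
                     np.hypot(p_cam[:, 0], p_cam[:, 1]))        # Eq. (7), as reconstructed
    return lon, lat, p_cam
```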
(2) Mapping of spherical VR panorama coordinates and depth map coordinates
The spherical VR panorama is divided into pixels at equal intervals of longitude and latitude, so the spherical panorama can be expanded into the two-dimensional planar image shown on the right side of fig. 4, with the horizontal direction divided at equal longitude intervals and the vertical direction divided at equal latitude intervals. The mapping relationship is as follows:
(c_x, c_y)^T = (θ_longitude ÷ d_long, θ_latitude ÷ d_lati)^T    (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image, and d_long and d_lati are the angular intervals per pixel in the longitude and latitude directions; they can be set manually, and the smaller they are set, the higher the resolution of the depth image and the higher the precision of subsequent measurements. This mapping takes longitude-latitude coordinates in the spherical coordinate system to pixel coordinates in the two-dimensional planar image, and this planar image is the required depth image.
(3) Attachment of depth in depth images
So far, the mapping relationship between the local point cloud map coordinates and the latitude and longitude coordinates of the spherical panoramic image and the mapping relationship between the latitude and longitude coordinate system and the depth image coordinate system are established, so that the mapping relationship between the local point cloud map coordinates and the depth image coordinates is indirectly obtained. The relationship can then be used to attach the depth of each point in the point cloud into the depth image. The depth of the point is calculated as follows:
r = √(p_x² + p_y² + p_z²)    (9)
where r is the required depth; the depth of each point is computed with this formula and attached to the corresponding pixel of the depth image, which completes the attachment of the point cloud to the depth image. The depth image is thus completed.
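Putting equations (8) and (9) together, the depth image can be assembled as in the sketch below. Here d_long and d_lati are given in radians per pixel, and the shifts of longitude by pi and latitude by pi/2 are an assumption of ours about the pixel-origin convention (the text leaves it implicit), chosen so that pixel indices are non-negative; points falling on the same pixel simply overwrite one another.

```python
import numpy as np

def build_depth_image(lon, lat, p_cam, d_long, d_lati, width, height):
    """Attach the depth of every point (Eq. (9)) to the pixel given by Eq. (8)."""
    depth = np.zeros((height, width), dtype=np.float32)
    r = np.linalg.norm(p_cam, axis=1)                            # Eq. (9)
    cx = ((lon + np.pi) / d_long).astype(int) % width            # Eq. (8): longitude -> column
    cy = np.clip(((lat + np.pi / 2) / d_lati).astype(int),
                 0, height - 1)                                  # latitude -> row
    depth[cy, cx] = r                                            # attach depth to each pixel
    return depth
```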
Dimension measurement
After the depth image is completed, the dimension measurement can be performed, that is, the distance between any two points can be measured in the VR panorama, and the specific method is as follows:
the process is divided into three aspects: obtaining pixel coordinates and depth, restoring point cloud and measuring distance
(1) Acquisition of pixel coordinates and depth
The user clicks two points in the panorama; these are mapped to two pixel coordinates in the depth image by formula (8), and the depth values attached to those two pixels of the depth image are read from the pixel coordinates.
(2) Restoration of point clouds
Firstly, mapping to longitude and latitude coordinates according to pixel coordinates as follows:
(θ'_longitude, θ'_latitude)^T = (c'_x · d_long, c'_y · d_lati)^T    (10)
where (θ'_longitude, θ'_latitude) are the mapped longitude and latitude and (c'_x, c'_y) are the pixel coordinates acquired in (1) above.
Then, restoring the space points according to the longitude, the latitude and the depth value:
p'_x = r' · cos(θ'_latitude) · cos(θ'_longitude), p'_y = r' · cos(θ'_latitude) · sin(θ'_longitude), p'_z = r' · sin(θ'_latitude)    (11)
where (p'_x, p'_y, p'_z) is the restored three-dimensional space point and r' is the depth value obtained in (1) above;
(3) Distance measurement:
Each point clicked by the user is converted into a three-dimensional point in real space by equations (10) and (11), and the Euclidean distance between the two spatial points is then computed as
d = √((p'_x1 - p'_x2)² + (p'_y1 - p'_y2)² + (p'_z1 - p'_z2)²)    (12)
which realizes the measurement of scale; in the formula, (p'_x1, p'_y1, p'_z1) and (p'_x2, p'_y2, p'_z2) are the two clicked points in the VR panorama.
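Combining equations (10) to (12) as reconstructed above, the measurement step can be sketched as follows; it undoes the same pixel-origin shift assumed in the depth-image sketch, and d_long and d_lati are again radians per pixel.

```python
import numpy as np

def measure_distance(px1, px2, depth_image, d_long, d_lati):
    """Restore the two clicked pixels to 3D points (Eqs. (10)-(11)) and return
    their Euclidean distance (Eq. (12))."""
    def restore(cx, cy):
        lon = cx * d_long - np.pi                   # Eq. (10), undoing the assumed longitude shift
        lat = cy * d_lati - np.pi / 2               # Eq. (10), undoing the assumed latitude shift
        r = depth_image[cy, cx]                     # depth value read at the clicked pixel
        return np.array([r * np.cos(lat) * np.cos(lon),
                         r * np.cos(lat) * np.sin(lon),
                         r * np.sin(lat)])          # Eq. (11)
    p1, p2 = restore(*px1), restore(*px2)
    return float(np.linalg.norm(p1 - p2))           # Eq. (12): Euclidean distance
```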
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A data acquisition device, characterized in that it comprises: a panoramic VR camera, a three-dimensional lidar, a tripod, a turntable, and a stepper motor; the panoramic VR camera is mounted on top of the tripod, the three-dimensional lidar is carried on the turntable, which is mounted on the tripod, and the turntable is driven by the stepper motor.
2. A VR diagram generation method with scale measurement, characterized by comprising the following steps:
S1, collecting a VR panorama and a three-dimensional laser point set by using the data acquisition device of claim 1, so that a corresponding three-dimensional laser point can be found for the pixel at each angle of the VR panorama;
s2, generating a local point cloud map;
and S3, generating a depth map capable of carrying out scale measurement according to the VR panoramic image and the local point cloud map.
3. The VR graph generation method with scale measurement according to claim 2, wherein in step S1, the method for finding the corresponding three-dimensional laser point for each angle pixel of the VR panorama is as follows:
selecting a speed of the stepper motor and determining a stepper motor rotation time, wherein:
the stepper motor speed calculation formula is as follows:
v_M = 360 ÷ W ÷ f_l    (1)
where W is the horizontal pixel count of the VR panorama and f_l is the update frequency of the three-dimensional lidar; this formula effectively ensures that a corresponding three-dimensional laser point can be found for the panorama pixel corresponding to each rotation angle of the three-dimensional lidar;
the rotation duration of the stepper motor is calculated as follows:
t_M > 360 ÷ v_M    (2)
where t_M is the rotation duration of the motor; this duration effectively ensures that the three-dimensional lidar rotates through more than 360 degrees, so that a corresponding three-dimensional laser point can be found for the pixel at each angle of the VR panorama.
4. The VR map generation method with scale measurement according to claim 3, wherein the local point cloud map is generated in step S2 as follows:
S21, taking the rotation angle θ_0 of the first frame as the reference angle and the corresponding pose of the turntable as the fixed coordinate system, i.e. as the origin of the coordinate system of the local point cloud map; all subsequent point clouds are stitched into the local point cloud map in this reference coordinate system;
S22, aligning the timestamps of the current point cloud frame and of the rotation-angle frame, to ensure that the rotation angle and the current point cloud frame are acquired at the same time;
s23, determining the conversion relation between the current radar frame and the fixed coordinate system according to the relation between the current rotation angle and the reference rotation angle, as shown in the following formula:
T_t = R(θ_t - θ_0), the rotation matrix corresponding to a rotation of (θ_t - θ_0) about the turntable's rotation axis    (3)
where θ_t is the rotation angle of the current rotation-angle frame and T_t is the rotation matrix between the current radar frame and the fixed coordinate system;
after the rotation matrix is obtained, the current point cloud frame is transformed into the fixed coordinate system as follows:
(p_fx, p_fy, p_fz)^T = T_t · (p_x, p_y, p_z)^T    (4)
where (p_fx, p_fy, p_fz) are the coordinates of the transformed point in the fixed coordinate system and (p_x, p_y, p_z) are the coordinates of the point in the current radar coordinate system;
and S24, transforming all points of the current radar frame into the fixed coordinate system according to equation (4) in step S23, then stitching the transformed current frame point cloud directly into the existing local point cloud map according to the positional relation, and repeating this process in a loop until the motor stops rotating, thereby completing the local point cloud map.
5. The VR graph generation method with scale measurement as claimed in claim 4, wherein in step S22, the specific method for aligning the timestamps of the current point cloud frame and of the rotation-angle frame is as follows:
First, the rotation-angle frame timestamps stored in the angle container are compared with the timestamp of the current radar frame, and the first rotation-angle frame whose timestamp is greater than that of the current radar frame is taken as a candidate; the timestamps of this candidate frame and of the frame immediately before it are then compared with the timestamp of the current radar frame, and the frame closer in time to the current radar frame is selected as the rotation-angle frame to be used.
6. The VR map generation method with scale measurement according to claim 5, wherein in step S3, a depth image capable of scale measurement is generated according to the VR panorama and the local point cloud map as follows:
s31, obtaining mapping between local point cloud map coordinates and spherical VR panorama coordinates;
s32, obtaining mapping between the spherical VR panorama coordinates and the depth map coordinates;
and S33, indirectly acquiring the mapping relation between the local point cloud map coordinates and the depth image coordinates according to S31 and S32, and attaching the depth in the depth image.
7. The VR map generation method with scale measurement as claimed in claim 6, wherein the mapping between the local point cloud map coordinates and the spherical VR panorama coordinates is obtained in step S31 as follows:
S311, converting the point cloud from the fixed coordinate system in step S22 to the camera coordinate system, the conversion formula being:
(p_cx, p_cy, p_cz)^T = cT_f · (p_fx, p_fy, p_fz)^T    (5)
where (p_cx, p_cy, p_cz) is the point in the camera coordinate system and cT_f is the rotation matrix from the fixed coordinate system to the camera coordinate system, which can be obtained by directly measuring the positional relation between the two coordinate systems with a ruler;
S312, calculating the longitude and latitude of each point of the point cloud in the longitude-latitude coordinate system, the calculation formulas being:
θ_longitude = atan2(p_y, p_x)    (6)
θ_latitude = atan2(p_z, √(p_x² + p_y²))    (7)
where θ_longitude and θ_latitude are the longitude and latitude of the point in the spherical longitude-latitude coordinate system.
8. The VR map generation method with scale measurement of claim 7, wherein in step S32, the method for obtaining the mapping between the spherical VR panorama coordinates and the depth map coordinates is as follows:
S321, the spherical VR panorama is divided into pixels at equal intervals of longitude and latitude, so that it can be expanded into a two-dimensional planar image, with the horizontal direction divided at equal longitude intervals and the vertical direction divided at equal latitude intervals; the mapping relationship is as follows:
(c_x, c_y)^T = (θ_longitude ÷ d_long, θ_latitude ÷ d_lati)^T    (8)
where (c_x, c_y) are the pixel coordinates in the two-dimensional planar image and d_long and d_lati are the angular intervals per pixel in the longitude and latitude directions, respectively.
9. The VR map generation method with scale measurement according to claim 8, wherein in step S33, the method of attaching the depth in the depth image is as follows:
and (3) calculating the depth of each point in the local point cloud map, wherein the calculation formula is as follows:
r = √(p_x² + p_y² + p_z²)    (9)
where r is the required depth; the depth of each point is computed with this formula and attached to the corresponding pixel of the depth image, which completes the attachment of the point cloud to the depth image.
10. The VR map generation method with scale measurement according to claim 9, wherein in step S3, after the depth map capable of scale measurement is generated, the scale measurement method is as follows:
s341, acquiring pixel coordinates and depth:
clicking two points in the VR panorama, mapping to coordinates of two pixels in the depth image through a formula (8), and reading depth values of the two pixels attached to the depth image according to the coordinates of the two pixels;
s342, point cloud reduction:
firstly, mapping to longitude and latitude coordinates according to pixel coordinates as follows:
(θ'_longitude, θ'_latitude)^T = (c'_x · d_long, c'_y · d_lati)^T    (10)
where (θ'_longitude, θ'_latitude) are the mapped longitude and latitude and (c'_x, c'_y) are the pixel coordinates obtained in step S341;
then, restoring the space points according to the longitude, the latitude and the depth value:
p'_x = r' · cos(θ'_latitude) · cos(θ'_longitude), p'_y = r' · cos(θ'_latitude) · sin(θ'_longitude), p'_z = r' · sin(θ'_latitude)    (11)
where (p'_x, p'_y, p'_z) is the restored three-dimensional space point and r' is the depth value obtained in step S341;
S343, distance measurement:
each point clicked by the user is converted into a three-dimensional point in real space according to equations (10) and (11), and the Euclidean distance between the two spatial points is then computed as
d = √((p'_x1 - p'_x2)² + (p'_y1 - p'_y2)² + (p'_z1 - p'_z2)²)    (12)
which realizes the measurement of scale; in the formula, (p'_x1, p'_y1, p'_z1) and (p'_x2, p'_y2, p'_z2) are the two clicked points in the VR panorama.
CN201911362662.2A 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device Active CN111145095B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911362662.2A CN111145095B (en) 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911362662.2A CN111145095B (en) 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device

Publications (2)

Publication Number Publication Date
CN111145095A true CN111145095A (en) 2020-05-12
CN111145095B CN111145095B (en) 2023-10-10

Family

ID=70520228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911362662.2A Active CN111145095B (en) 2019-12-25 2019-12-25 VR (virtual reality) graph generation method with scale measurement and data acquisition device

Country Status (1)

Country Link
CN (1) CN111145095B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103020962A (en) * 2012-11-27 2013-04-03 武汉海达数云技术有限公司 Rapid level detection method of mouse applied to three-dimensional panoramic picture
CN103729883A (en) * 2013-12-30 2014-04-16 浙江大学 Three-dimensional environmental information collection and reconstitution system and method
US20150022555A1 (en) * 2011-07-20 2015-01-22 Google Inc. Optimization of Label Placements in Street Level Images
CN106959080A (en) * 2017-04-10 2017-07-18 上海交通大学 A kind of large complicated carved components three-dimensional pattern optical measuring system and method
CN108288292A (en) * 2017-12-26 2018-07-17 中国科学院深圳先进技术研究院 A kind of three-dimensional rebuilding method, device and equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150022555A1 (en) * 2011-07-20 2015-01-22 Google Inc. Optimization of Label Placements in Street Level Images
CN103020962A (en) * 2012-11-27 2013-04-03 武汉海达数云技术有限公司 Rapid level detection method of mouse applied to three-dimensional panoramic picture
CN103729883A (en) * 2013-12-30 2014-04-16 浙江大学 Three-dimensional environmental information collection and reconstitution system and method
CN106959080A (en) * 2017-04-10 2017-07-18 上海交通大学 A kind of large complicated carved components three-dimensional pattern optical measuring system and method
CN108288292A (en) * 2017-12-26 2018-07-17 中国科学院深圳先进技术研究院 A kind of three-dimensional rebuilding method, device and equipment

Also Published As

Publication number Publication date
CN111145095B (en) 2023-10-10

Similar Documents

Publication Publication Date Title
CN112894832B (en) Three-dimensional modeling method, three-dimensional modeling device, electronic equipment and storage medium
CN105096382B (en) A kind of method and device that real-world object information is associated in video monitoring image
Teller et al. Calibrated, registered images of an extended urban area
CN112184890B (en) Accurate positioning method of camera applied to electronic map and processing terminal
JP5538667B2 (en) Position / orientation measuring apparatus and control method thereof
CN107507274A (en) A kind of quick restoring method of public security criminal-scene three-dimensional live based on cloud computing
CN103533235B (en) Towards quick based on line array CCD the digital panoramic device that great cases is on-the-spot
Honkamaa et al. Interactive outdoor mobile augmentation using markerless tracking and GPS
CN111442721A (en) Calibration equipment and method based on multi-laser ranging and angle measurement
CN103438864A (en) Real-time digital geological record system for engineering side slope
CN112254670B (en) 3D information acquisition equipment based on optical scanning and intelligent vision integration
CN104463969A (en) Building method of model of aviation inclined shooting geographic photos
Reitinger et al. Augmented reality scouting for interactive 3d reconstruction
CN112082486B (en) Handheld intelligent 3D information acquisition equipment
JP6928217B1 (en) Measurement processing equipment, methods and programs
Schneider et al. Combined bundle adjustment of panoramic and central perspective images
CN111145095B (en) VR (virtual reality) graph generation method with scale measurement and data acquisition device
US20160349409A1 (en) Photovoltaic shade impact prediction
Wahbeh et al. Toward the Interactive 3D Modelling Applied to Ponte Rotto in Rome
Mahinda et al. Development of an effective 3D mapping technique for heritage structures
CN112304250B (en) Three-dimensional matching equipment and method between moving objects
KR102458559B1 (en) Construction management system and method using mobile electric device
US11418716B2 (en) Spherical image based registration and self-localization for onsite and offsite viewing
CN112254677A (en) Multi-position combined 3D acquisition system and method based on handheld device
Meierhold et al. Line-based referencing between images and laser scanner data for image-based point cloud interpretation in a CAD-environment

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20230904

Address after: 518000 Room 601, building r2-b, Gaoxin industrial village, No. 020, Gaoxin South seventh Road, Gaoxin community, Yuehai street, Nanshan District, Shenzhen, Guangdong

Applicant after: Shenzhen Nuoda Communication Technology Co.,Ltd.

Address before: 518000 a501, 5th floor, Shanshui building, Nanshan cloud Valley Innovation Industrial Park, 4093 Liuxian Avenue, Taoyuan Street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: SHENZHEN WUJING INTELLIGENT ROBOT Co.,Ltd.

GR01 Patent grant
GR01 Patent grant