CN115937477A - Virtual model display method, calculation method and readable storage medium

Virtual model display method, calculation method and readable storage medium

Info

Publication number
CN115937477A
Authority
CN
China
Prior art keywords
data
rtk
positioning
tracking
virtual model
Prior art date
Legal status
Granted
Application number
CN202211667093.4A
Other languages
Chinese (zh)
Other versions
CN115937477B (en)
Inventor
刘琛
赵志伟
胡涛
Current Assignee
Shanghai Xunzhi Technology Co ltd
Original Assignee
Shanghai Xunzhi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Xunzhi Technology Co ltd filed Critical Shanghai Xunzhi Technology Co ltd
Priority to CN202211667093.4A priority Critical patent/CN115937477B/en
Publication of CN115937477A publication Critical patent/CN115937477A/en
Application granted granted Critical
Publication of CN115937477B publication Critical patent/CN115937477B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a virtual model display method, a calculation method and a readable storage medium. The virtual model display method comprises the following steps: acquiring RTK data and AR tracking data based on an RTK device and an AR device which are rigidly connected; acquiring positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data; and adjusting display parameters of a virtual model based on the positioning adjustment reference information so that the virtual model coincides with the corresponding real object from the observation angle, wherein the display parameters comprise a three-dimensional translation vector and a three-dimensional rotation angle of the virtual model. With this configuration, the measurement accuracy of the RTK device compensates for the insufficient tracking accuracy of the AR device, yielding a higher-precision virtual-real fusion, while placing only modest demands on computing power and operator skill. This reduces the cost of achieving virtual-real fusion and solves the technical problems existing in the prior art.

Description

Virtual model display method, calculation method and readable storage medium
Technical Field
The invention relates to the technical field of augmented reality, in particular to a virtual model display method, a virtual model calculation method and a readable storage medium.
Background
Existing AR devices lack centimeter-level high-precision positioning capability. When a virtual model or scene is presented, it must either be fused with reality manually, or be fused by acquiring features such as real images and point clouds.
In the prior art, virtual-real fusion is therefore realized either manually or via image and point-cloud features, which has the following defects.
1. When a virtual model or scene is manually fused with reality, the operation is cumbersome.
2. When a virtual model or scene is manually fused with reality, errors are inevitably introduced during positioning and alignment by the operator.
3. When a virtual model or scene is manually fused with reality, the operator needs extensive relevant experience.
4. When a virtual model or scene is fused with reality by acquiring real images and point-cloud features, real images or point-cloud data must be collected, and collecting point-cloud data requires specific and expensive Lidar (laser radar) devices such as laser scanners. Common AR devices have no point-cloud scanning capability.
5. When a virtual model or scene is fused with reality by acquiring real images and point-cloud features, processing the data requires large amounts of data transmission and computing power. Common AR devices lack this data-transfer and computing capability.
6. When a virtual model or scene is fused with reality by acquiring real images and point-cloud features, the real world changes dynamically, which may introduce errors into the fusion or cause it to fail.
In short, when a virtual model or scene is fused with reality in the prior art, the error is large or the cost is high, and the virtual-real fusion is prone to failure.
Disclosure of Invention
The invention aims to provide a virtual model display method, a calculation method and a readable storage medium, so as to solve the prior-art problems that virtual-real fusion has a large error or a high cost and is prone to failure.
In order to solve the above technical problem, according to a first aspect of the present invention, there is provided a virtual model display method, including: acquiring a ready instruction indicating that an RTK device and an AR device have been rigidly connected, the ready instruction being input externally; acquiring RTK data and AR tracking data until a preset condition is met, wherein the RTK data is acquired based on an RTK positioning function of the RTK device and the AR tracking data is acquired based on the AR device, the RTK data comprising positioning data and acquisition times, and the AR tracking data comprising positioning data and acquisition times; acquiring positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data; and adjusting display parameters of a virtual model based on the positioning adjustment reference information so that the virtual model coincides with the corresponding real object from the observation angle, wherein the display parameters comprise a three-dimensional translation vector and a three-dimensional rotation angle of the virtual model.
Optionally, the step of acquiring the positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data includes: transmitting the RTK data and the AR tracking data to a computing platform, the computing platform being located remotely or locally; and acquiring the positioning adjustment reference information fed back by the computing platform.
Optionally, after acquiring the RTK data and the AR tracking data, the computing platform performs the following steps: matching the positioning data of the RTK data with the positioning data of the AR tracking data based on the acquisition times; performing coordinate conversion on the matched RTK positioning data to obtain an RTK positioning trajectory; performing coordinate conversion on the matched AR tracking positioning data to obtain an AR tracking trajectory; acquiring a transformation relation between the RTK positioning trajectory and the AR tracking trajectory; and setting the transformation relation as the positioning adjustment reference information and transmitting it back to the data source of the AR tracking data.
Optionally, the step of acquiring the transformation relation between the RTK positioning trajectory and the AR tracking trajectory includes: varying the parameters and/or structure of the transformation relation, calculating the deviation between the AR tracking trajectory transformed by the transformation relation and the RTK positioning trajectory, and taking the parameters and/or structure yielding the minimum deviation as the calculation result.
Optionally, the step of adjusting the display parameters of the virtual model based on the positioning adjustment reference information includes: and performing transformation calculation on the current display parameters based on the transformation relation.
Optionally, the preset conditions include: the acquisition time of the RTK data and the AR tracking data exceeds a preset time and/or the displacement of the RTK equipment and the AR equipment exceeds a preset distance.
Optionally, the preset conditions include: and acquiring an acquisition completion instruction, wherein the acquisition completion instruction is input from the outside.
Optionally, the AR tracking data is generated based on a machine vision algorithm.
In order to solve the above technical problem, according to a second aspect of the present invention, there is provided a calculation method including: acquiring RTK data and AR tracking data, wherein the RTK data is acquired based on an RTK positioning function of an RTK device and the AR tracking data is acquired based on an AR device, the RTK device and the AR device being rigidly connected, the RTK data comprising positioning data and acquisition times, and the AR tracking data comprising positioning data and acquisition times; matching the positioning data of the RTK data with the positioning data of the AR tracking data based on the acquisition times; performing coordinate conversion on the matched RTK positioning data to obtain an RTK positioning trajectory; performing coordinate conversion on the matched AR tracking positioning data to obtain an AR tracking trajectory; acquiring a transformation relation between the RTK positioning trajectory and the AR tracking trajectory; and setting the transformation relation as the positioning adjustment reference information and transmitting it back to the data source of the AR tracking data.
In order to solve the above technical problem, according to a third aspect of the present invention, there is provided a readable storage medium having a first program and/or a second program stored thereon; when the first program runs, the virtual model display method is executed; when the second program runs, the calculation method is executed.
Compared with the prior art, in the virtual model display method, the calculation method and the readable storage medium provided by the invention, the virtual model display method comprises: acquiring RTK data and AR tracking data based on an RTK device and an AR device which are rigidly connected; acquiring positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data; and adjusting display parameters of a virtual model based on the positioning adjustment reference information so that the virtual model coincides with the corresponding real object from the observation angle, wherein the display parameters comprise a three-dimensional translation vector and a three-dimensional rotation angle of the virtual model. With this configuration, the measurement accuracy of the RTK device compensates for the insufficient tracking accuracy of the AR device, yielding a higher-precision virtual-real fusion, while placing only modest demands on computing power and operator skill. This reduces the cost of achieving virtual-real fusion and solves the technical problems existing in the prior art.
Drawings
It will be appreciated by those skilled in the art that the drawings are provided for a better understanding of the invention and do not constitute any limitation to the scope of the invention. Wherein:
fig. 1 is a schematic flow chart of a virtual model display system according to an embodiment of the present invention.
In the drawings:
1 - virtual model display method; 2 - calculation method.
Detailed Description
To further clarify the objects, advantages and features of the present invention, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It is to be noted that the drawings are in greatly simplified form and are not to scale, but are merely intended to facilitate and clarify the explanation of the embodiments of the present invention. Further, the structures illustrated in the drawings are often part of actual structures. In particular, the drawings may have different emphasis points and may sometimes be scaled differently.
As used in this application, the singular forms "a", "an" and "the" include plural referents, and the term "or" is generally employed in a sense including "and/or". The terms "a" and "an" are generally employed in a sense including "at least one", and "at least two" is generally employed in a sense including "two or more". The terms "first", "second" and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or the number of technical features indicated; thus, features defined by "first", "second" or "third" may explicitly or implicitly include one or at least two of such features. "One end" and "the other end", as well as "proximal end" and "distal end", generally refer to the corresponding two parts, which include not only the end points. The terms "mounted", "connected" and "coupled" should be understood broadly: for example, as a fixed connection, a detachable connection, or an integral part; as a mechanical or electrical connection; and as a direct connection or an indirect connection through an intervening medium, or a relationship internal to two elements. Furthermore, as used in the present invention, the disposition of one element with another generally only means that there is a connection, coupling, fit or driving relationship between the two elements, which may be direct or indirect through intermediate elements, and it should not be understood as indicating or implying any spatial positional relationship between them; that is, an element may be inside, outside, above, below or to one side of another element, unless the content clearly indicates otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The core idea of the invention is to provide a virtual model display method, a calculation method and a readable storage medium, so as to solve the prior-art problems that virtual-real fusion has a large error or a high cost and is prone to failure.
The following description refers to the accompanying drawings.
Referring to fig. 1, the present embodiment provides a virtual model display system, which specifically includes a virtual model display method 1 and a calculation method 2.
The virtual model display method comprises the following steps:
s10: a ready instruction is obtained that indicates that an RTK (Real Time Kinematic) device and an AR (Augmented Reality) device have been rigidly connected. Rigid connection is understood to mean that both are constrained to each other, have the same displacement both when the screen is turned and when disturbed by a certain external force, maintain the above-mentioned synchronization. The readiness command is inputted from the outside, for example, after the operator rigidly connects the RTK device and the AR device, the start fusion button is pressed, which represents that the readiness command is inputted. The RTK device is a device implementing an RTK technique, which is one of GNSS (Global Navigation Satellite System) relative positioning techniques, and realizes high-precision dynamic relative positioning mainly by a real-time data link between a reference station and a rover station and a carrier relative positioning fast resolving technique. The AR technology is a technology for enhancing the real world perceived by a user using additional information generated around a computer, and the generated information is superimposed on a real scene in a manner that physiological senses such as vision, hearing, taste, smell, and touch are fused.
S20: RTK data and AR tracking data are acquired until a preset condition is met, wherein the RTK data is acquired based on the RTK positioning function of the RTK device, and the AR tracking data is acquired based on the AR device. In one embodiment, the AR tracking data is generated by a machine vision algorithm. The RTK data comprises positioning data and acquisition times, and the AR tracking data likewise comprises positioning data and acquisition times. The preset condition includes: the acquisition duration of the RTK data and the AR tracking data exceeds a preset time, and/or the displacement of the RTK device and the AR device exceeds a preset distance. And/or the preset condition includes: an acquisition completion instruction is received, the acquisition completion instruction being input externally. Under the former logic the device proceeds to the subsequent step automatically; under the latter logic, when the operator considers that the necessary information has been collected, the relevant button is pressed to proceed to the subsequent step. During the execution of step S20, the operator needs to carry and move the RTK device and the AR device, but there is no strict requirement on the moving direction or trajectory, as long as a certain amount of movement occurs.
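To make the data flow of step S20 concrete, the following sketch shows one possible shape of the collected samples and of the automatic stop check. The record types (RtkSample, ArSample) and the thresholds MIN_DURATION_S and MIN_DISPLACEMENT_M are illustrative assumptions, not names or values taken from the patent; Python is used purely for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RtkSample:            # hypothetical record for one RTK fix
    t: float                # acquisition time (s), RTK receiver clock
    lat: float              # WGS84 latitude (deg)
    lon: float              # WGS84 longitude (deg)
    alt: float              # ellipsoidal height (m)

@dataclass
class ArSample:             # hypothetical record for one AR tracking pose
    t: float                # acquisition time (s), AR device clock
    x: float                # position in the AR tracking frame (m)
    y: float
    z: float

MIN_DURATION_S = 30.0       # assumed "preset time"
MIN_DISPLACEMENT_M = 5.0    # assumed "preset distance"

def preset_condition_met(rtk: List[RtkSample], ar: List[ArSample]) -> bool:
    """Automatic stop logic: enough time has passed and/or enough movement occurred."""
    if len(rtk) < 2 or len(ar) < 2:
        return False
    duration = min(rtk[-1].t - rtk[0].t, ar[-1].t - ar[0].t)
    dx, dy, dz = ar[-1].x - ar[0].x, ar[-1].y - ar[0].y, ar[-1].z - ar[0].z
    displacement = (dx * dx + dy * dy + dz * dz) ** 0.5
    return duration >= MIN_DURATION_S or displacement >= MIN_DISPLACEMENT_M
```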
S30: positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data is acquired. Specifically, step S30 further includes: step S31: transmitting the RTK data and the AR tracking data to a computing platform, the computing platform being located remotely or locally; and step S32: acquiring the positioning adjustment reference information fed back by the computing platform. The calculation method is deployed on the computing platform; its specific flow is introduced below.
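As an illustration of steps S31 and S32, the sketch below packages the collected samples as JSON, posts them to the computing platform, and reads back the positioning adjustment reference information. The endpoint URL and the response fields ("R", "t") are hypothetical; the patent does not specify a transport protocol or data format.

```python
import json
import urllib.request

def send_to_computing_platform(rtk_samples, ar_samples,
                               url="http://localhost:8080/solve"):  # hypothetical endpoint
    """Transmit the RTK and AR samples and return the platform's feedback, assumed here
    to be a transformation relation encoded as a rotation matrix "R" and a translation "t"."""
    payload = json.dumps({
        "rtk": [vars(s) for s in rtk_samples],   # dataclass instances to plain dicts
        "ar": [vars(s) for s in ar_samples],
    }).encode("utf-8")
    req = urllib.request.Request(url, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())           # positioning adjustment reference information
```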
S40: display parameters of a virtual model are adjusted based on the positioning adjustment reference information so that the virtual model coincides with the corresponding real object from the observation angle, wherein the display parameters comprise a three-dimensional translation vector and a three-dimensional rotation angle of the virtual model. Step S40 specifically performs a transformation calculation on the current display parameters based on the transformation relation, so as to convert the current display parameters into the expected display parameters.
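A minimal sketch of the transformation calculation in step S40, assuming the positioning adjustment reference information is delivered as a rotation matrix R and a translation vector t, and that the current display parameters are represented as a position vector plus a rotation matrix. A real AR engine will use its own pose representation (for example quaternions or Euler angles for the three-dimensional rotation angle), and depending on which frame the virtual model is authored in, the inverse transform may be the one to apply; this is an illustrative assumption, not the patent's prescribed implementation.

```python
import numpy as np

def adjust_display_parameters(R, t, model_position, model_rotation):
    """Apply the solved transformation relation to the current display parameters.

    R: 3x3 rotation matrix, t: length-3 translation (the transformation relation).
    model_position: length-3 vector, model_rotation: 3x3 rotation matrix.
    Returns the expected display parameters (new position, new rotation)."""
    R = np.asarray(R, dtype=float)
    t = np.asarray(t, dtype=float)
    new_position = R @ np.asarray(model_position, dtype=float) + t  # 3D translation part
    new_rotation = R @ np.asarray(model_rotation, dtype=float)      # 3D rotation part
    return new_position, new_rotation
```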
Correspondingly, the computing platform is used for executing a calculation method. The method specifically comprises the following steps:
S110: acquiring the RTK data and the AR tracking data;
S120: matching the positioning data of the RTK data with the positioning data of the AR tracking data based on the acquisition times;
S130: performing coordinate conversion on the matched RTK positioning data to obtain an RTK positioning trajectory;
S140: performing coordinate conversion on the matched AR tracking positioning data to obtain an AR tracking trajectory;
S150: acquiring a transformation relation between the RTK positioning trajectory and the AR tracking trajectory;
S160: setting the transformation relation as the positioning adjustment reference information and transmitting it back to the data source of the AR tracking data. In one embodiment, the data source is the AR device.
Specifically, in step S120, the RTK data and the AR data are not time-synchronized; a delay error of the RTK data is first identified by an algorithm, after which the RTK data and the AR data are mapped to each other. The two streams also have different acquisition frequencies: in one embodiment, the AR data is acquired at 30 Hz and the RTK data at 1-5 Hz. The correspondence is established algorithmically based on the acquisition frequencies.
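One way to build the correspondence of step S120 is a nearest-timestamp match between the low-rate RTK stream (1-5 Hz) and the high-rate AR stream (30 Hz) after compensating an estimated constant delay. The sketch below assumes the delay has already been identified (delay_s is a placeholder) and that both streams are sorted by time; the patent does not fix a particular matching algorithm.

```python
def match_by_timestamp(rtk_samples, ar_samples, delay_s=0.0, max_gap_s=0.1):
    """Pair each RTK sample with the AR sample closest in (delay-corrected) time.

    delay_s: estimated latency of the RTK stream relative to the AR clock.
    max_gap_s: pairs further apart in time than this are discarded.
    Returns a list of (rtk_sample, ar_sample) pairs."""
    pairs = []
    ar_times = [a.t for a in ar_samples]
    j = 0
    for r in rtk_samples:
        t_corr = r.t - delay_s  # shift the RTK timestamp onto the AR clock
        # advance while the next AR sample is at least as close to the corrected time
        while j + 1 < len(ar_times) and abs(ar_times[j + 1] - t_corr) <= abs(ar_times[j] - t_corr):
            j += 1
        if ar_times and abs(ar_times[j] - t_corr) <= max_gap_s:
            pairs.append((r, ar_samples[j]))
    return pairs
```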
In addition, noise reduction processing is also required for the RTK data and the AR data.
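The coordinate conversion of steps S130 and S140 has to bring both trajectories into a common metric frame; for the RTK side this typically means converting WGS84 geodetic fixes into a local East-North-Up (ENU) frame anchored at the first fix. The sketch below is a standard geodetic-to-ENU conversion and is an illustrative assumption; the patent does not prescribe a specific coordinate system.

```python
import math

WGS84_A = 6378137.0          # WGS84 semi-major axis (m)
WGS84_E2 = 6.69437999014e-3  # WGS84 first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """WGS84 latitude/longitude/height to Earth-centred Earth-fixed coordinates."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + alt_m) * math.cos(lat) * math.cos(lon)
    y = (n + alt_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + alt_m) * math.sin(lat)
    return x, y, z

def geodetic_to_enu(lat_deg, lon_deg, alt_m, ref):
    """Convert a WGS84 fix to local East-North-Up metres about ref = (lat0, lon0, alt0)."""
    lat0, lon0 = math.radians(ref[0]), math.radians(ref[1])
    x, y, z = geodetic_to_ecef(lat_deg, lon_deg, alt_m)
    x0, y0, z0 = geodetic_to_ecef(*ref)
    dx, dy, dz = x - x0, y - y0, z - z0
    east = -math.sin(lon0) * dx + math.cos(lon0) * dy
    north = (-math.sin(lat0) * math.cos(lon0) * dx
             - math.sin(lat0) * math.sin(lon0) * dy
             + math.cos(lat0) * dz)
    up = (math.cos(lat0) * math.cos(lon0) * dx
          + math.cos(lat0) * math.sin(lon0) * dy
          + math.sin(lat0) * dz)
    return east, north, up
```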
In step S150, the step of acquiring the transformation relation between the RTK positioning trajectory and the AR tracking trajectory includes: varying the parameters and/or structure of the transformation relation, calculating the deviation between the AR tracking trajectory transformed by the transformation relation and the RTK positioning trajectory, and taking the parameters and/or structure yielding the minimum deviation as the calculation result. The parameters are the coefficients of the calculation, such as the values in a rotation matrix; the structure is the transformation form, such as changing a 3 × 3 matrix into a 4 × 4 matrix, or otherwise changing the calculation procedure of the transformation relation. The deviation is calculated using the data pairs placed in correspondence, and is a statistical indicator such as a variance or weighted variance. In one embodiment, step S150 obtains the transformation relation by the least squares method.
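The patent only requires that the deviation between the transformed AR trajectory and the RTK trajectory be minimised, mentioning least squares as one option. A common concrete choice for a rigid (rotation plus translation) transformation relation is the SVD-based Kabsch/Umeyama solution sketched below; this specific algorithm, and the mean-squared-residual deviation measure, are assumptions rather than details given in the patent.

```python
import numpy as np

def fit_rigid_transform(ar_pts, rtk_pts):
    """Least-squares rigid transform (R, t) minimising ||R*ar + t - rtk||^2.

    ar_pts, rtk_pts: (N, 3) arrays of corresponding trajectory points
    (AR tracking trajectory and RTK positioning trajectory, already matched in time).
    Returns R (3x3) and t (3,) such that rtk is approximately R @ ar + t."""
    ar = np.asarray(ar_pts, dtype=float)
    rtk = np.asarray(rtk_pts, dtype=float)
    ar_c, rtk_c = ar.mean(axis=0), rtk.mean(axis=0)      # centroids
    H = (ar - ar_c).T @ (rtk - rtk_c)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = 1.0 if np.linalg.det(Vt.T @ U.T) > 0 else -1.0   # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = rtk_c - R @ ar_c
    return R, t

def deviation(R, t, ar_pts, rtk_pts):
    """Statistical deviation (mean squared residual) used to compare candidate solutions."""
    residual = np.asarray(rtk_pts) - (np.asarray(ar_pts) @ R.T + t)
    return float((residual ** 2).sum(axis=1).mean())
```

Applied to the matched, ENU-converted trajectories, the returned (R, t) pair is the kind of transformation relation that step S160 could send back as the positioning adjustment reference information, and deviation() is the statistic one would compare when varying the parameters or structure of the transformation.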
Although the present embodiment discloses a virtual model display system, either the virtual model display method or the calculation method may be replaced by another similar method, so each of them stands on its own.
The embodiment also provides a readable storage medium on which a first program and/or a second program is stored; when the first program runs, the virtual model display method is executed; when the second program runs, the calculation method is executed.
The embodiment has the following advantages:
1. Virtual-real fusion is performed through RTK positioning and AR tracking and can be completed automatically, so the operation is simple and convenient.
2. Virtual-real fusion is performed through RTK positioning and AR tracking, avoiding manual operation and therefore the introduction of human error.
3. Virtual-real fusion is performed through RTK positioning and AR tracking and is completed automatically, without requiring extensive experience from the operator.
4. Virtual-real fusion is performed through RTK positioning and AR tracking, using RTK positioning data and AR tracking pose data without collecting point clouds or images, so no laser scanner or other Lidar device is needed.
5. Virtual-real fusion is performed through RTK positioning and AR tracking, using RTK positioning data and AR tracking pose data without transmitting or processing point clouds or images, so an ordinary AR device can complete it.
6. Virtual-real fusion is performed through RTK positioning and AR tracking, using real RTK positioning data together with AR tracking pose data. Compared with image and point-cloud data, RTK data is highly stable, which reduces the fusion error and improves the success rate of virtual-real fusion.
In summary, the present embodiment provides a virtual model display system, which specifically includes a virtual model display method, a calculation method and a readable storage medium. The virtual model display method comprises: acquiring RTK data and AR tracking data based on an RTK device and an AR device which are rigidly connected; acquiring positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data; and adjusting display parameters of a virtual model based on the positioning adjustment reference information so that the virtual model coincides with the corresponding real object from the observation angle, wherein the display parameters comprise a three-dimensional translation vector and a three-dimensional rotation angle of the virtual model. With this configuration, the measurement accuracy of the RTK device compensates for the insufficient tracking accuracy of the AR device, yielding a higher-precision virtual-real fusion, while placing only modest demands on computing power and operator skill. This reduces the cost of achieving virtual-real fusion and solves the technical problems existing in the prior art.
The above description is only for the purpose of describing the preferred embodiments of the present invention, and is not intended to limit the scope of the present invention, and any variations and modifications made by those skilled in the art according to the above disclosure are within the scope of the present invention.

Claims (10)

1. A virtual model display method is characterized by comprising the following steps:
acquiring a ready instruction indicating that the RTK device and the AR device are rigidly connected, the ready instruction being input externally;
acquiring RTK data and AR tracking data until a preset condition is met, wherein the RTK data is acquired based on an RTK positioning function of the RTK device, and the AR tracking data is acquired based on the AR device; the RTK data comprises positioning data and acquisition times, and the AR tracking data comprises positioning data and acquisition times;
acquiring positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data;
adjusting display parameters of a virtual model based on the positioning adjustment reference information so that the virtual model and the corresponding real object are overlapped in the observation angle, wherein the display parameters comprise a three-dimensional translation vector and a three-dimensional rotation angle of the virtual model.
2. The virtual model display method of claim 1, wherein the step of acquiring the positioning adjustment reference information obtained by resolving based on the RTK data and the AR tracking data comprises:
transmitting the RTK data and the AR tracking data to a computing platform, the computing platform being located remotely or locally;
and acquiring the positioning adjustment reference information fed back by the computing platform.
3. The virtual model display method of claim 1, wherein the computing platform, after acquiring the RTK data and the AR tracking data, performs the following steps:
matching the positioning data of the RTK data with the positioning data of the AR tracking data based on the acquisition times;
performing coordinate conversion on the matched RTK positioning data to obtain an RTK positioning trajectory;
performing coordinate conversion on the matched AR tracking positioning data to obtain an AR tracking trajectory;
acquiring a transformation relation between the RTK positioning trajectory and the AR tracking trajectory;
and setting the transformation relation as the positioning adjustment reference information and transmitting it back to the data source of the AR tracking data.
4. The virtual model display method of claim 3, wherein the step of acquiring the transformation relation between the RTK positioning trajectory and the AR tracking trajectory comprises:
varying the parameters and/or structure of the transformation relation, calculating the deviation between the AR tracking trajectory transformed by the transformation relation and the RTK positioning trajectory, and taking the parameters and/or structure yielding the minimum deviation as the calculation result.
5. The virtual model display method according to claim 3, wherein the step of adjusting the display parameters of the virtual model based on the positioning adjustment reference information comprises:
and performing transformation calculation on the current display parameters based on the transformation relation.
6. The virtual model display method according to claim 1, wherein the preset condition comprises: the acquisition time of the RTK data and the AR tracking data exceeds a preset time and/or the displacement of the RTK equipment and the AR equipment exceeds a preset distance.
7. The virtual model display method according to claim 1, wherein the preset condition comprises: and acquiring an acquisition completion instruction, wherein the acquisition completion instruction is input from the outside.
8. The virtual model display method of claim 1, characterized in that the AR tracking data is generated based on a machine vision algorithm.
9. A calculation method, characterized in that the calculation method comprises:
acquiring RTK data and AR tracking data, wherein the RTK data is acquired based on an RTK positioning function of an RTK device, and the AR tracking data is acquired based on an AR device; the RTK device and the AR device are rigidly connected; the RTK data comprises positioning data and acquisition times, and the AR tracking data comprises positioning data and acquisition times;
matching the positioning data of the RTK data with the positioning data of the AR tracking data based on the acquisition times;
performing coordinate conversion on the matched RTK positioning data to obtain an RTK positioning trajectory;
performing coordinate conversion on the matched AR tracking positioning data to obtain an AR tracking trajectory;
acquiring a transformation relation between the RTK positioning trajectory and the AR tracking trajectory;
and setting the transformation relation as the positioning adjustment reference information and transmitting it back to the data source of the AR tracking data.
10. A readable storage medium, characterized in that the readable storage medium has stored thereon a first program and/or a second program; when the first program runs, the virtual model display method according to any one of claims 1 to 8 is executed; when the second program runs, the calculation method according to claim 9 is executed.
CN202211667093.4A 2022-12-22 2022-12-22 Virtual model presentation method, virtual model calculation method, and readable storage medium Active CN115937477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211667093.4A CN115937477B (en) 2022-12-22 2022-12-22 Virtual model presentation method, virtual model calculation method, and readable storage medium

Publications (2)

Publication Number Publication Date
CN115937477A (en) 2023-04-07
CN115937477B CN115937477B (en) 2024-02-09

Family

ID=86654175

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211667093.4A Active CN115937477B (en) 2022-12-22 2022-12-22 Virtual model presentation method, virtual model calculation method, and readable storage medium

Country Status (1)

Country Link
CN (1) CN115937477B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110988947A (en) * 2019-02-20 2020-04-10 以见科技(上海)有限公司 Augmented reality positioning method based on real-time dynamic carrier phase difference technology
CN111045063A (en) * 2018-10-15 2020-04-21 广东星舆科技有限公司 Continuous high-precision positioning method, memory and system in RTK field

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111045063A (en) * 2018-10-15 2020-04-21 广东星舆科技有限公司 Continuous high-precision positioning method, memory and system in RTK field
CN110988947A (en) * 2019-02-20 2020-04-10 以见科技(上海)有限公司 Augmented reality positioning method based on real-time dynamic carrier phase difference technology

Also Published As

Publication number Publication date
CN115937477B (en) 2024-02-09

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant