CN114998556A - Virtual-real fusion method for mixed reality flight simulation system - Google Patents

Virtual-real fusion method for mixed reality flight simulation system

Info

Publication number
CN114998556A
Authority
CN
China
Prior art keywords
cockpit
model
positioning
tracker
visual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210547026.2A
Other languages
Chinese (zh)
Other versions
CN114998556B (en)
Inventor
郝天宇
赵永嘉
雷小永
戴树岭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Research Institute Of Beijing University Of Aeronautics And Astronautics
Original Assignee
Jiangxi Research Institute Of Beijing University Of Aeronautics And Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Research Institute Of Beijing University Of Aeronautics And Astronautics filed Critical Jiangxi Research Institute Of Beijing University Of Aeronautics And Astronautics
Priority to CN202210547026.2A priority Critical patent/CN114998556B/en
Publication of CN114998556A publication Critical patent/CN114998556A/en
Application granted granted Critical
Publication of CN114998556B publication Critical patent/CN114998556B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T90/00Enabling technologies or technologies with a potential or indirect contribution to GHG emissions mitigation

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a virtual-real fusion method for a mixed reality flight simulation system, belonging to the technical field of flight simulation. The method comprises the following steps: in the offline stage, a virtual cockpit model is built and positioning points are selected on it to form a fixed-position structure, and trackers are installed or visual features are selected at the corresponding positions on the physical cockpit to establish a correspondence with the positioning points; in the online stage, the three-dimensional coordinates of the trackers and the visual features on the physical cockpit are acquired in real time, tracker positioning and visual-feature positioning are performed, and the two positioning results are fused by Kalman filtering to obtain a more accurate registration position of the virtual cockpit; a spatial positioning template is then formed by scanning a point cloud of the cockpit's surroundings with a binocular camera, which enables cross-session reloading and establishes the link between the virtual and the real cockpit. The method is computationally lightweight, depends little on the environment around the physical cockpit, avoids a large amount of extra image transmission, feature extraction and rendering work, produces no visual fracture, and delivers a better sense of immersion.

Description

Virtual-real fusion method for mixed reality flight simulation system
Technical Field
The invention belongs to the field of flight simulation and provides a virtual-real fusion method for a flight simulation system based on mixed reality technology, and in particular relates to a virtual-real registration technique for a semi-physical cockpit model.
Background
In fields such as simulation and digital twins, where the real world must be reproduced with digital models, mixed reality technology shows great advantages, and flight simulation is one of its important application directions. A conventional flight simulation system mainly uses a physical cockpit equipped with various instruments, a control stick and the like as the operating input; a computer processes the input and simulates the flight state in real time, and a visual system built mainly around conventional display screens then feeds the information back to the pilot. However, traditional display screens not only occupy a large area but also offer a limited viewing angle, cannot adjust the view according to the pilot's physical movement, and lack immersion and a realistic operating experience. With advances in hardware, creating an immersive flight simulation experience by means of mixed reality technology has become feasible; it strongly promotes the efficiency of pilot training and has gradually become a research hotspot.
At present, several mixed reality solutions for flight simulation systems have been proposed at home and abroad. A relatively mature scheme based on interaction with the physical cockpit uses a head-mounted binocular camera to capture images of the real experimental environment, obtains the pose of the physical cockpit in the image through methods such as foreground extraction and recognition and matching of distinguishable features, and displays the result after fusing and rendering it with the virtual scene. In other words, the cockpit the pilot sees comes from a three-dimensional reconstruction of the real image, while everything outside the cockpit is the virtual flight environment. Because the real image and the virtual scene are superimposed directly, this scheme places high demands on positioning accuracy and rendering quality: positioning errors of the cockpit cause visible cracks along the virtual-real fusion boundary and spoil the sense of immersion, and the large amount of dynamic illumination in the flight environment forces the real-image part to compute illumination in real time for rendering, which imposes a heavy system load.
Besides the problems of visual effect, accuracy and computational cost, this scheme has many other shortcomings that affect the real-time performance and stability of the system, and it leaves considerable room for improvement.
Specifically, the technical scheme that estimates the pose of the cockpit by capturing images with a binocular camera, performing foreground extraction and distinguishable-feature recognition and matching, and then superimposing the cockpit image onto the rendered virtual environment has the following problems:
(1) Obtaining the pose of the cockpit places high demands on the complexity of the experimental environment, and a cluttered experimental site degrades the image foreground extraction; in addition, the large amount of real-time image transmission, feature recognition and virtual-real fusion rendering directly affects the real-time performance of the whole system.
(2) Extracting the attitude and position of the cockpit from visual information alone for virtual-real fusion relies on a single information source whose error is hard to characterize, so the positioning accuracy is low; moreover, when the camera faces a direction in which no visual features are present, positioning is lost.
(3) If several kinds of sensing information are combined to improve the positioning accuracy, designing and developing such a hybrid positioning scheme is difficult: the usual problems of describing heterogeneous sensor data, joint calibration and asynchronous data fusion lead to complex algorithm designs that are hard to implement.
Disclosure of Invention
Aiming at the problems of existing applications of mixed reality technology in the field of flight simulation, the invention provides a scheme that performs virtual-real registration with a digital model of the cockpit. The image acquired by the camera is not superimposed directly onto the virtual scene; instead, the three-dimensional pose of the cockpit is computed from the image and the tracking devices, and this information is used to transform the cockpit model to be registered to the correct position.
Specifically, the invention provides a virtual-real fusion method for a mixed reality flight simulation system, which comprises the following steps:
(1) In the offline stage, a virtual cockpit model is built in advance, and positioning points are selected on the cockpit model for tracking the pose of the physical cockpit.
First, n (n ≥ 4) non-collinear positioning points are selected on the virtual cockpit model for the trackers and for the visual markers respectively, and the relative positions of the positioning points are kept unchanged. Then trackers are installed, or visual features are selected, at the corresponding locations on the physical cockpit. The selected positioning points form a registration template with a fixed position structure; every positioning point is represented by three-dimensional coordinates and is placed in correspondence with the installed tracker or selected visual feature.
On the physical cockpit, each tracker obtains its own three-dimensional coordinates through the positioning system of the VR device, and the coordinates of the visual features are obtained by recognition in the images captured by the binocular stereo camera.
(2) And in the online stage, the three-dimensional coordinates of the tracker and the visual features on the entity cockpit are obtained in real time, and the registered position of the cockpit model is positioned. Which comprises the following steps:
establishing a reference coordinate system and a camera coordinate system by taking the initial positions of the VR equipment and the binocular stereo camera as references respectively; positioning the registered position of the cockpit model in a reference coordinate system;
beforehand, the coordinates of the visual-feature positioning points acquired by the camera are taken as output values and the coordinates of the same points estimated from the trackers as input values, and a fitted curve polynomial for data compensation is solved;
when the binocular stereo camera is used for positioning, the obtained visual characteristic coordinates are subjected to data compensation by using the fitting curve polynomial, and then the registration position of the cockpit model is positioned by using the compensated three-dimensional coordinates;
and fusing the registration position of the cockpit model positioned according to the tracker and the visual characteristics through Kalman filtering to obtain the final registration position of the cockpit model.
(3) The method comprises the steps of presetting a space positioning template of the surrounding environment of the cockpit, storing a registration position of a cockpit model in the space positioning template, and registering the cockpit model in a virtual flight environment according to the stored position when loading the cockpit model.
In the step (1), positioning points for installing the trackers are selected at the peripheral edge positions of the cockpit, and visual features are selected at the positions in the cockpit.
In step (2), when the trackers are used for positioning, the three-dimensional coordinates of the trackers are obtained in real time; the three-dimensional coordinates of the tracker positioning points on the cockpit model are fitted to the corresponding actual tracker coordinates by minimizing the sum of Euclidean distances, the coordinates of each tracker positioning point on the cockpit model are obtained, the transformation matrix is computed, and the position of the cockpit model is transformed.
In step (2), when the visual features are used for positioning, the visual features are identified in every frame of the real-time video captured by the binocular stereo camera; the two-dimensional coordinates of the identified visual features are converted by the PnP method into three-dimensional coordinates in the camera coordinate system and then transformed into three-dimensional coordinates in the reference coordinate system, and the transformed coordinates are compensated with the fitted curve polynomial. The three-dimensional coordinates of the visual-feature positioning points on the cockpit model are fitted to the corresponding actual visual-feature coordinates by minimizing the sum of Euclidean distances, the three-dimensional coordinates of the visual-feature positioning points of the cockpit model are obtained, the transformation matrix is computed, and the position of the cockpit model is transformed.
And in the step (3), manually adjusting the registered position of the cockpit model.
The invention provides a virtual-real fusion method oriented to a mixed reality flight simulation system that is computationally lightweight, depends little on the environment around the physical cockpit, and fuses multi-sensor information for high-precision three-dimensional registration. Compared with the prior art, it has the following advantages and positive effects:
(1) The method selects positioning points on the physical cockpit, ignores useless information in the scene, and recognizes and matches only the selected features, reducing the feature information to point information; the feature points can be matched accurately and quickly to achieve three-dimensional registration of the virtual cockpit model in the real world, with limited dependence on the experimental site. Only the pose information is computed from the image, and rendering is carried out entirely within the virtual environment, which avoids a large amount of extra image transmission, feature extraction and rendering work, saves system resources, helps reduce system latency, produces no visual fracture, and brings a better sense of immersion.
(2) A single sensor cannot meet the requirement of high-precision positioning, and different sensors differ in the scenes they suit and the accuracy they achieve; the hybrid registration method that combines computer vision and hardware sensors therefore improves the registration accuracy. At the same time, the hardware sensors (trackers) are placed where visual features are hard to capture, so positioning loss caused by missing visual features is avoided and the two positioning schemes complement each other.
(3) The method describes the heterogeneous sensing data uniformly as feature points, represents them by their three-dimensional coordinates, and fuses them according to the constraint structure that exists among them, which greatly reduces the design and development difficulty of the hybrid registration scheme, resolves the problems of large data volume, inconsistent data dimensions and even data conflicts during data fusion, and facilitates system deployment and extension. The extrinsic parameters between the sensors are computed from the structural relationships among the positioning points, and an asynchronous data compensation curve is fitted, which lowers the calibration difficulty and allows the positions of feature points obtained from low-frequency data to be estimated within the high-frequency data, thereby achieving asynchronous data fusion.
(4) The method has the advantages of small calculated amount, low delay, low requirement on hardware equipment and no specific limitation on applicable environment.
(5) The method has high expansibility and universality, can easily replace or add other sensing data for fusion, and can be applied to scenes needing virtual-real fusion, such as automobile driving simulation, virtual disassembly and assembly and the like.
Drawings
Fig. 1 is a flowchart of a virtual-real fusion method for a mixed reality-oriented flight simulation system according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating selected anchor points according to an embodiment of the present invention;
FIG. 3 shows, in grayscale, several example images from the experiment, in which: (a) is an example view of the experimental environment, (b) is a depth map of the experimental environment, (c) is a point cloud of the experimental environment, and (d) is an example of the reconstructed point cloud data and the registration result.
Detailed Description
The present invention will be described in further detail with reference to the following drawings and examples, which are only some examples of the present invention.
According to the virtual-real fusion method for the mixed reality flight simulation system, the point coordinates formed by the tracker and the visual features on the entity cockpit are obtained, the registration position of the virtual cockpit is calculated through Kalman filtering algorithm fusion, and the accurate superposition of the virtual cockpit and the real cockpit is achieved.
In the virtual-real fusion method for the mixed reality flight simulation system, as shown in fig. 1, the whole process can be divided into an off-line part and an on-line part.
In the offline stage, the cockpit model is built, suitable positions on the model are selected as positioning points, and the fixed position structure formed by these points serves as the registration template that provides the pose transformation relationship for subsequent virtual-real registration; trackers and visual features are then arranged on the real cockpit in correspondence with the selected positioning points. To guarantee the accuracy of the positioning information, the tracker and the binocular camera must also be calibrated and a data compensation relationship obtained at this stage.
Specifically, the operation process in the offline stage includes the following:
step 1.1, a 1:1 digital three-dimensional model of the entity cockpit is manufactured, n (n is more than or equal to 4) positions which are not collinear are selected as positioning points for the tracker and the visual mark on the model respectively, and the relative positions of the positioning points are kept unchanged and are all represented by three-dimensional coordinates.
Preferably, in step 1.1, the quality of the digital model of the cockpit can be adjusted according to simulation requirements on the premise of ensuring the reduction of the real shape of the cockpit. The number n of tracker anchor points and visual marker anchor points may not be equal, and are here represented for simplicity by the same character. As shown in fig. 2, is an example of a selected anchor point.
And step 1.2, setting a tracker at a corresponding position on the entity cockpit and selecting visual features according to the position of the positioning point selected on the cockpit model in the step 1.1. The tracker converts an optical signal, an electric signal and the like into three-dimensional point coordinate information of the tracker itself in the space depending on a positioning system of a VR (virtual reality) device; the visual features are identified and reconstructed by a binocular stereo camera, and the self pose is represented by the coordinates of the geometric feature points of the visual features. The trackers are typically placed in locations that are difficult to capture by the camera, complementary to the visual features, so that when one fails, such as when the visual markers are obscured, the other continues to work preventing the loss of position.
Preferably, in step 1.2, the tracker setting position can be selected from the peripheral edge of the physical cockpit, and the visual feature is selected from the internal position of the cockpit. In order to improve the speed and the precision of visual positioning, artificial or natural visual features with certain specificity and obvious features can be selected.
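To make the correspondence established in steps 1.1 and 1.2 concrete, the following minimal sketch shows one way the registration template could be held in memory; the class and field names and the example coordinates are illustrative assumptions, not part of the invention.

```python
# Illustrative sketch of the step 1.1/1.2 registration template (hypothetical names).
from dataclasses import dataclass
import numpy as np

@dataclass
class AnchorPoint:
    model_xyz: np.ndarray   # fixed 3D coordinate on the 1:1 cockpit model
    source: str             # "tracker" (cockpit rim) or "visual" (inside the cockpit)
    sensor_id: str          # tracker serial number or visual-marker identifier

# n >= 4 non-collinear points per source; their relative positions never change.
template = [
    AnchorPoint(np.array([0.00, 0.00, 0.00]), "tracker", "tracker_0"),
    AnchorPoint(np.array([1.20, 0.00, 0.05]), "tracker", "tracker_1"),
    AnchorPoint(np.array([0.60, 0.40, 0.30]), "visual",  "marker_A"),
    # ... further tracker anchors on the rim and visual anchors inside the cockpit
]
```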
Step 1.3, calibrating the binocular stereo camera, wherein the calibration process specifically comprises the following steps:
and 1.3.1, calculating respective internal parameters and external parameters of the left camera and the right camera by using the calibration images. The internal parameters comprise a camera matrix reflecting the relation between a camera coordinate system and an image coordinate system and a distortion vector, and the external parameters are transformation matrixes between a world coordinate system and the camera coordinate system. Let left camera extrinsic parameter be recorded as M l And the external parameter of the right camera is recorded as M r
Step 1.3.2, obtaining a transformation relation M from the left camera coordinate system to the right camera according to the external parameters of the left camera and the right camera obtained in step 1.3.1, wherein M is M r M l -1
Step 1.4, carrying out combined calibration on the positioning system of the VR equipment and the binocular stereo camera according to the relative position relation between the tracker positioning point and the vision positioning point determined in the step 1.1, and solving an external parameter M of the camera relative to the VR equipment o
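A minimal sketch of steps 1.3.2 and 1.4, assuming each extrinsic parameter is available as a 4×4 homogeneous matrix that maps a common world frame into the respective sensor frame; the function names are illustrative, not part of the invention.

```python
import numpy as np

def relative_extrinsic(M_l: np.ndarray, M_r: np.ndarray) -> np.ndarray:
    """Step 1.3.2: transformation from the left-camera frame to the
    right-camera frame, M = M_r · M_l^{-1}."""
    return M_r @ np.linalg.inv(M_l)

def camera_to_vr_extrinsic(M_cam: np.ndarray, M_vr: np.ndarray) -> np.ndarray:
    """Step 1.4 (sketch): extrinsic M_o of the camera relative to the VR
    positioning system, assuming both sensors have been expressed against the
    same positioning-point structure, i.e. against a common world frame."""
    return M_vr @ np.linalg.inv(M_cam)
```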
Step 1.5, record the data captured by the trackers and the camera over a period of time, and fit the data compensation according to the structural relationship between the tracker positioning points and the visual positioning points determined in step 1.1 and the extrinsic parameters obtained in step 1.3. With the coordinates V of the visual positioning points obtained from the camera data as output values and the coordinates P of the same points estimated from the tracker positioning points as input values, a fitted curve polynomial V = β_0 + β_1·P + β_2·P² + … + β_m·P^m is solved, where β_i are the compensation coefficients and m is the highest power of the polynomial.
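A minimal sketch of the step 1.5 fit, assuming the camera-derived coordinates V and the tracker-estimated coordinates P of the visual positioning points have been recorded as (N, 3) arrays over a period of time and that each coordinate axis is fitted with an independent degree-m polynomial; the per-axis treatment and the function names are assumptions of this sketch.

```python
import numpy as np

def fit_compensation(P: np.ndarray, V: np.ndarray, m: int = 3) -> np.ndarray:
    """Fit one degree-m polynomial per axis; returns coefficients beta with
    shape (3, m+1), highest power first, as produced by numpy.polyfit."""
    return np.stack([np.polyfit(P[:, k], V[:, k], m) for k in range(3)])

def compensate(P: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """Apply the fitted polynomials to tracker-estimated coordinates."""
    return np.stack([np.polyval(beta[k], P[:, k]) for k in range(3)], axis=1)
```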
In the online stage, a reference coordinate system and a camera coordinate system are established with the initial positions of the VR device and of the camera as references, the transformation from the camera coordinate system to the reference coordinate system is solved, and the camera data are transformed into the reference coordinate system. The three-dimensional coordinate sets of the trackers and of the visual features in the reference coordinate system are obtained separately, the coordinates from the two information sources are fitted to the corresponding positions selected on the cockpit model to obtain positioning information, and the positioning information is further fused by Kalman filtering to obtain a more accurate virtual-real registration position. If the registered position of the cockpit shows a slight error, it can be adjusted manually. After the accurate registration position of the cockpit model is obtained, a spatial positioning template is formed by scanning a point cloud of the real-world surroundings of the cockpit with the binocular camera, which enables cross-session reloading and establishes the link with the real cockpit.
Specifically, the online stage is mainly divided into a plurality of parts of positioning by using a tracker, positioning by using visual features, multi-positioning information fusion registration, manual adjustment and registration information storage.
The operation process of using the tracker for positioning is as follows:
and 2.1, establishing a reference coordinate system by taking the initial position of the VR equipment as a reference, and acquiring and recording the three-dimensional coordinates of the tracker and the relative position relationship of the three-dimensional coordinates and the three-dimensional coordinates under the current coordinate system.
Preferably, in step 2.1, the tracker positions can be smoothed from the tracker information in several consecutive frames, by means of buffers, taking gaussian distribution expectation values, etc.
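One possible realization of the smoothing mentioned above, assuming a fixed-length buffer of recent tracker samples with Gaussian weights that favour the newest frames; the window size and sigma are illustrative choices, not values prescribed by the invention.

```python
from collections import deque
import numpy as np

class TrackerSmoother:
    """Buffer-based smoothing of one tracker's 3D position (sketch)."""
    def __init__(self, window: int = 8, sigma: float = 2.0):
        self.buf = deque(maxlen=window)
        self.sigma = sigma

    def update(self, p) -> np.ndarray:
        self.buf.append(np.asarray(p, dtype=float))
        n = len(self.buf)
        # Gaussian weights over the buffer; the most recent sample gets the largest weight.
        w = np.exp(-0.5 * (np.arange(n)[::-1] / self.sigma) ** 2)
        w /= w.sum()
        return (w[:, None] * np.stack(list(self.buf))).sum(axis=0)
```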
Step 2.2, acquire and record the three-dimensional coordinates of the positioning points on the virtual cockpit model that correspond to the trackers, denoted {p_i}, together with their relative positional relationships in the reference coordinate system, forming a data structure in one-to-one correspondence with the tracker coordinates recorded in step 2.1.
Step 2.3, compute the centroids of the tracker coordinate set recorded in step 2.1 and of the coordinate set recorded in step 2.2, denoted c_1 and c_2 respectively.
Step 2.4, match the centroid c_2 of the virtual cockpit model with the centroid c_1 computed from the trackers, evaluate the sum of Euclidean distances between the coordinates of the tracker positioning points on the cockpit model and the coordinates of the corresponding trackers, and take the relative pose at which this sum is minimal as the best fit between the trackers and the corresponding positions on the cockpit model. The fitted tracker position coordinates are denoted {p_i'}.
Step 2.5, compute the transformation matrix between {p_i} and {p_i'}.
The calculation of the transformation matrix specifically includes the following steps:
and 2.5.1, respectively calculating the offset of each positioning point relative to the mass center. { p i The centroid offset of each position is recorded as o i ,{p i ' } the centroid offset for each position is denoted as o i ′。
Step 2.5.2, constructing a matrix H by using the centroid offset, wherein
Figure BDA0003649566760000061
Step 2.5.3, the matrix H is subjected to singular value decomposition with svd (H) ═ U, S, V.
Step 2.5.4, depending on R ═ VU T A rotation matrix R is calculated.
Step 2.5.5, according to T ═ c 1 -Rc 2 And calculating a displacement matrix T.
And 2.6, transforming the position of the cockpit model by using the transformation matrix.
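A minimal sketch of steps 2.3 to 2.6 under the formulas above, assuming the model-side positioning points {p_i} and the fitted tracker coordinates {p_i'} are given as matched (n, 3) arrays; the reflection guard is a common safeguard that the text does not spell out.

```python
import numpy as np

def fit_model_to_trackers(p_model: np.ndarray, p_tracker: np.ndarray):
    """Return R, T such that cockpit-model points map to R @ p + T."""
    c2 = p_model.mean(axis=0)          # centroid c_2 of the model anchor points
    c1 = p_tracker.mean(axis=0)        # centroid c_1 of the tracker coordinates
    o = p_model - c2                   # offsets o_i   (step 2.5.1)
    o_p = p_tracker - c1               # offsets o_i'
    H = o.T @ o_p                      # H = sum_i o_i · o_i'^T   (step 2.5.2)
    U, S, Vt = np.linalg.svd(H)        # step 2.5.3
    R = Vt.T @ U.T                     # R = V · U^T              (step 2.5.4)
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    T = c1 - R @ c2                    # step 2.5.5
    return R, T

# Step 2.6: every point q of the cockpit model is moved to R @ q + T.
```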
The procedure for using visual feature localization is as follows:
and 3.1, establishing a camera reference coordinate system by taking the initial position of the camera as a reference, determining the three-dimensional coordinate of the left camera under the coordinate system, and further obtaining the three-dimensional coordinate of the right camera under the coordinate system by utilizing the transformation relation M from the left camera coordinate system to the right camera obtained in the step 1.3.2 of calibrating the binocular stereo phase in an off-line stage.
And 3.2, acquiring an experimental environment image in real time, and identifying visual features from each frame of image in the image.
Step 3.3, estimate the pose of each identified visual feature with the PnP (Perspective-n-Point) method; the specific process is as follows:
3.3.1) inputting a three-dimensional coordinate set of n characteristic points in the visual characteristic in a world coordinate system and a two-dimensional coordinate set of corresponding positions in the image.
3.3.2) Using the matrix A formed by the camera intrinsic parameters obtained in step 1.3.1 of the binocular stereo camera calibration in the offline stage together with the perspective projection relation, the two-dimensional image coordinates [u, v] are converted into the set of three-dimensional coordinates [X_c, Y_c, Z_c] in the camera coordinate system. The relationship between the two-dimensional pixel coordinate system and the camera coordinate system can be expressed as:
Z_c·[u, v, 1]ᵀ = A·[X_c, Y_c, Z_c]ᵀ
3.3.3) The rotation and displacement matrices are solved from the set of camera-coordinate points [X_c, Y_c, Z_c] and the corresponding set of world-coordinate points [X_w, Y_w, Z_w]. The relationship between the camera coordinate system and the world coordinate system can be expressed as:
[X_c, Y_c, Z_c, 1]ᵀ = ᶜT_w·[X_w, Y_w, Z_w, 1]ᵀ
where ᶜT_w is the transformation matrix relating the camera coordinate system and the world coordinate system.
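A minimal sketch of steps 3.3.1 to 3.3.3 using OpenCV's solvePnP (one possible implementation, not necessarily the one used by the inventors), assuming the visual feature provides n ≥ 4 known 3D feature points in its own/world frame together with their detected pixel positions; A and dist denote the left-camera intrinsic matrix and distortion vector from step 1.3.1.

```python
import cv2
import numpy as np

def estimate_feature_pose(object_pts, image_pts, A, dist):
    """Return the 4x4 transform cTw from the world (feature) frame to the
    camera frame, or None if the PnP solve fails."""
    object_pts = np.asarray(object_pts, dtype=np.float64).reshape(-1, 3)
    image_pts = np.asarray(image_pts, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, A, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)      # rotation part of the pose
    cTw = np.eye(4)
    cTw[:3, :3] = R
    cTw[:3, 3] = tvec.ravel()
    return cTw
```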
And 3.4, applying the solved visual characteristic pose information matrix to the coordinate system established in the step 2.1 of using the tracker to position in the online stage, namely solving the coordinate of the visual characteristic point in the virtual space where the cockpit model is located.
And 3.5, compensating the coordinate information of the visual characteristic points according to the sampling frequency of the tracker by using the compensation curve obtained in the step 1.5 in the off-line stage, and acquiring the three-dimensional coordinates of the compensated visual characteristic positions.
And 3.6, repeating the operation process of positioning by using the tracker in the online stage by using the compensated three-dimensional coordinates of the visual features, replacing the position of the tracker with the position of the visual features in the process, and solving the position of the cockpit model positioned by using the visual features.
The operation process of the multi-positioning information fusion is as follows:
step 4.1, establishing a position state equation x of the cockpit model k =Sx k-1k-1 Wherein x is k For the current k moment model position state of the cockpit, x k-1 The position state of the cockpit model at the last moment is represented, and the state quantity is formed by three-dimensional coordinates of each positioning point; s is a state transition matrix; omega k-1 Is a noise impact matrix.
Step 4.2, establishing an observation equation z for positioning by using a tracker k =Ox kk Wherein z is k Registering the position of the cockpit model for the current time using the tracker for positioning; o is a transformation matrix from an observed value to a state value, upsilon k Noise is measured for the tracker combination.
Step 4.3, use of
Figure BDA0003649566760000071
Updating the cockpit model position state to x', where K k In order to be the basis of the kalman gain,
Figure BDA0003649566760000072
and the prior state estimated value represents the position state of the cockpit model at the moment k.
Step 4.4, establishing an observation equation z 'using visual feature localization' k =Ox k +υ′ k Wherein z is k ' registration position of the cockpit model for the current time using the visual marker for positioning; upsilon is k ' is visual measurement noise.
Step 4.5, reuse of x ═ x' + K k (z k '-Ox') the position status of the updated cockpit model is x.
Preferably, in step 4.5, the position and attitude information of the cockpit model can be smoothly updated by multiple frames, so that the jitter in the positioning process is reduced.
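A minimal sketch of the sequential fusion in steps 4.1 to 4.5, assuming the state x stacks the 3n positioning-point coordinates, that S and O are identity matrices (static model, direct observation) and that the noise covariances are isotropic; these simplifications and the parameter values are assumptions of the sketch, and a real system would tune S, O and the covariances.

```python
import numpy as np

def fuse_positions(x_prev, P_prev, z_tracker, z_visual,
                   q=1e-4, r_tracker=1e-3, r_visual=1e-2):
    """One Kalman step that first absorbs the tracker observation z_k and then
    the visual observation z'_k, returning the fused state and covariance."""
    n = x_prev.size
    S = O = np.eye(n)
    x_hat = S @ x_prev                              # prior state estimate at time k
    P = S @ P_prev @ S.T + q * np.eye(n)
    # steps 4.2-4.3: update with the tracker observation
    K = P @ O.T @ np.linalg.inv(O @ P @ O.T + r_tracker * np.eye(n))
    x_p = x_hat + K @ (z_tracker - O @ x_hat)       # x'
    P = (np.eye(n) - K @ O) @ P
    # steps 4.4-4.5: update again with the visual observation
    K = P @ O.T @ np.linalg.inv(O @ P @ O.T + r_visual * np.eye(n))
    x = x_p + K @ (z_visual - O @ x_p)
    P = (np.eye(n) - K @ O) @ P
    return x, P
```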
The operation process of manually adjusting the registration position is as follows:
and 5.1, acquiring the position of the cockpit model under the reference coordinate system.
And 5.2, transforming the position coordinates of the cockpit model under the reference coordinate system to a camera coordinate system.
Step 5.3, superimpose the cockpit model on the image of the physical cockpit for display.
Step 5.4, manually adjust parameters such as the displacement and rotation of the cockpit model according to the physical-cockpit image so that the model and the physical cockpit coincide as closely as possible.
Preferably, in step 5.4, visual features can be set at the boundary of the physical cockpit, and the point cloud of the physical cockpit is obtained by scanning, and the grid model of the point cloud is reconstructed to serve as auxiliary reference information for manually adjusting the model.
And 5.5, transforming the adjusted cockpit model to the reference coordinate system again, and storing the coordinates of the cockpit model in the virtual space.
The operation process of the registration information storage is as follows:
and 6.1, scanning the surrounding environment of the entity cockpit by using a stereo camera based on the reference coordinate system, and generating point cloud data of the region by using the depth information. As shown in fig. 3, (a) is an experimental environment, and (b) and (c) are a depth map and a cloud map of the experimental environment, respectively.
And 6.2, defining a space anchor point under a virtual space coordinate system, and storing the point cloud data and the position relation of the anchor point to form a preset space positioning template.
And 6.3, storing the position information of the cockpit model in the space positioning template.
And 6.4, when the registration information needs to be loaded, scanning the same area by using the binocular camera again to obtain the point cloud, performing point cloud feature matching with a preset space template to determine the current position, and registering the cockpit model to the storage position.
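One possible implementation of step 6.4, assuming the Open3D library and that the new scan starts roughly aligned with the stored template (ICP stands in here for the point-cloud feature matching; a feature-based global registration could replace it). The function and parameter names are illustrative.

```python
import numpy as np
import open3d as o3d

def relocate(template: o3d.geometry.PointCloud,
             scan: o3d.geometry.PointCloud,
             saved_model_pose: np.ndarray,
             voxel: float = 0.05) -> np.ndarray:
    """Match the current scan against the stored template and carry the saved
    4x4 cockpit-model pose into the current session's coordinate frame."""
    src = scan.voxel_down_sample(voxel)
    dst = template.voxel_down_sample(voxel)
    src.estimate_normals()
    dst.estimate_normals()
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_correspondence_distance=2 * voxel,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    # result.transformation maps the current scan into the template frame, so its
    # inverse carries the stored cockpit pose back into the current session.
    return np.linalg.inv(result.transformation) @ saved_model_pose
```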
Fig. 3(d) shows the point cloud reconstruction result of the experimental environment and the registration result of the cockpit model.
The embodiment realizes lightweight virtual-real registration for a semi-physical interactive flight simulation system; the deployment site of the system is not specifically limited, and the method adapts to most indoor environments. The hardware sensors and the selected visual features are needed only for the initial positioning registration; as long as the experimental environment does not change significantly afterwards, the registration information can be recovered from natural features in the environment, which further improves the usability of the system. The method has a small computational load, low latency and low hardware requirements, so developers can devote more effort to building a high-quality flight simulation environment without worrying about excessive system overhead from the positioning and registration algorithm. In addition, the method fuses multiple kinds of sensor data, achieves higher positioning and registration accuracy, is free of positioning limitations such as occlusion and viewing angle, produces no visual fracture, keeps vision and body sensation consistent, and to a great extent reproduces the real flying experience.
Technical features not described in this specification are known to those skilled in the art. Descriptions of well-known components and techniques are omitted so as not to obscure the present invention unnecessarily. The embodiments described above do not represent all embodiments consistent with the present application, and modifications or changes made by those skilled in the art on the basis of the technical solution of the invention without inventive effort still fall within its scope of protection.

Claims (9)

1. A virtual-real fusion method for a mixed reality flight simulation system is characterized by comprising the following steps:
step 1: in the off-line stage, a virtual cockpit model is manufactured in advance, and positioning points are selected on the cockpit model;
firstly, n (n ≥ 4) non-collinear positioning points are selected on the virtual cockpit model for the trackers and for the visual features respectively, the relative positions of the positioning points are kept unchanged, and the positioning points are expressed by three-dimensional coordinates; then a tracker is arranged, or a visual feature is selected, at the corresponding positioning point on the physical cockpit; the selected positioning points form a registration template with a fixed position structure, and the correspondence between each positioning point and the installed tracker or the selected visual feature is established;
step 2: in the online stage, the position of a tracker is obtained by using VR equipment, visual features are identified by using a binocular stereo camera, three-dimensional coordinates of the tracker and the visual features on the entity cockpit are obtained in real time, and the registration position of the cockpit model is positioned; wherein:
establishing a reference coordinate system and a camera coordinate system by taking the initial positions of the VR equipment and the binocular stereo camera as references respectively; positioning the registered position of the cockpit model under a reference coordinate system;
beforehand, the coordinates of the visual-feature positioning points acquired by the camera are taken as output values and the coordinates of the same points estimated from the trackers as input values, and a fitted curve polynomial for data compensation is solved;
when the binocular stereo camera is used for positioning, the obtained visual characteristic coordinates are subjected to data compensation by using the fitting curve polynomial, and then the compensated three-dimensional coordinates are used for positioning the registered position of the cockpit model;
performing Kalman filtering fusion on the registration position of the cockpit model obtained according to the tracker and the visual characteristics to obtain the final registration position of the cockpit model;
and step 3: the method comprises the steps of presetting a space positioning template of the surrounding environment of the cockpit, storing a registered position of the cockpit model in the space positioning template, and registering the cockpit model in the virtual flight environment according to the stored position when the cockpit model is loaded.
2. The method of claim 1, wherein in step 1, the location points for installing the trackers are selected at the peripheral edge of the cockpit, and the location points for the visual features are selected at the interior of the cockpit.
3. The method according to claim 1 or 2, wherein the step 1 of calibrating the visual features and the tracker on the physical cockpit comprises: identifying visual characteristics by a binocular stereo camera, acquiring three-dimensional coordinates of the characteristics, calibrating the binocular stereo camera, and calibrating respective internal parameters and external parameters of a left camera and a right camera; and acquiring the three-dimensional coordinates of the tracker by using a positioning system of the VR equipment, and calibrating external parameters of the binocular stereo camera relative to the VR equipment.
4. The method of claim 1 or 2, wherein the step 2 of obtaining the tracker positions and locating the registered position of the cockpit model comprises: acquiring the three-dimensional coordinates of the trackers in real time and calculating the centroid coordinate c_1 of this coordinate set; obtaining the three-dimensional coordinates of the tracker positioning points on the current cockpit model in the reference coordinate system and calculating the centroid coordinate c_2 of this coordinate set; matching the centroid coordinates c_2 and c_1; when the sum of Euclidean distances between the coordinates of each tracker positioning point on the cockpit model and the coordinates of the corresponding actual tracker position is minimal, taking the resulting coordinates of the tracker positioning points on the cockpit model and, from these coordinates, calculating the rotation matrix and the displacement matrix that bring the cockpit model to the registered position.
5. The method according to claim 1 or 2, wherein the step 2 of obtaining the visual features and locating the registered position of the cockpit model comprises: identifying the visual features in every frame of the real-time video captured by the binocular stereo camera; converting the two-dimensional coordinates of the visual features identified in the image into three-dimensional coordinates in the camera coordinate system with the PnP method, and converting these into three-dimensional coordinates in the reference coordinate system according to the transformation from the camera coordinate system to the reference coordinate system; compensating the converted visual-feature three-dimensional coordinates with the fitted curve polynomial to obtain the compensated actual visual-feature three-dimensional coordinates; fitting the three-dimensional coordinates of the visual-feature positioning points on the cockpit model to the corresponding actual visual-feature three-dimensional coordinates by minimizing the sum of Euclidean distances to obtain the three-dimensional coordinates of the visual-feature positioning points of the cockpit model, and then calculating, from these coordinates, the rotation matrix and the displacement matrix that bring the cockpit model to the registered position.
6. The method according to claim 1, wherein the step 2 of obtaining the final registered position of the cockpit model through kalman filter fusion comprises:
(1) establishing the position state equation of the cockpit model: x_k = S·x_{k-1} + ω_{k-1}, where x_k is the position state of the cockpit model at the current time k, x_{k-1} is the position state of the cockpit model at the previous time and consists of the three-dimensional coordinates of the positioning points on the cockpit model, S is the state transition matrix, and ω_{k-1} is noise;
(2) establishing the observation equation for positioning with the trackers: z_k = O·x_k + υ_k, where z_k is the registered position of the cockpit model located using the trackers at time k, O is the transformation matrix, and υ_k is the tracker measurement noise;
(3) updating the position state of the cockpit model according to z_k to x' = x̂_k + K_k·(z_k − O·x̂_k), where K_k is the Kalman gain and x̂_k is the prior state estimate of the position state of the cockpit model at time k;
(4) establishing the observation equation for positioning with the visual features: z'_k = O·x_k + υ'_k, where z'_k is the registration position of the cockpit model located using the visual features at time k and υ'_k is the visual measurement noise;
(5) updating the position state of the cockpit model according to z'_k: x = x' + K_k·(z'_k − O·x').
7. The method according to claim 1 or 2, wherein step 3 further comprises manually adjusting the registered position of the cockpit model, specifically: transforming the obtained registration position of the cockpit model from the reference coordinate system into the camera coordinate system, superimposing the cockpit model on the image of the physical cockpit for display, and manually adjusting the displacement and rotation parameters of the cockpit model so that the cockpit model and the physical cockpit coincide as closely as possible; and finally transforming the position coordinates of the manually adjusted cockpit model back into the reference coordinate system and storing them.
8. The method according to claim 7, wherein in the step 3, visual features are set at the boundary of the physical cockpit, point clouds of the physical cockpit are scanned, and a mesh model of the physical cockpit is reconstructed as auxiliary reference information for manually adjusting the model.
9. The method according to claim 1 or 2, wherein the step 3 of storing the registration information comprises: firstly, scanning the surrounding environment of an entity cockpit by using a binocular stereo camera based on a reference coordinate system, and generating point cloud data of the surrounding area of the cockpit by using depth information; secondly, defining a space anchor point under a virtual space coordinate system, and storing the point cloud data and the position relation of the point cloud data and the space anchor point to form a preset space positioning template; then, storing the registered position of the cockpit model in a space positioning template; and finally, when the registered position of the cockpit model needs to be loaded, scanning the same area by using the binocular stereo camera again to obtain point cloud, performing point cloud feature matching with a preset spatial positioning template, determining the current position, and registering the cockpit model to a storage position.
CN202210547026.2A 2022-05-18 2022-05-18 Virtual-real fusion method for mixed reality flight simulation system Active CN114998556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210547026.2A CN114998556B (en) 2022-05-18 2022-05-18 Virtual-real fusion method for mixed reality flight simulation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210547026.2A CN114998556B (en) 2022-05-18 2022-05-18 Virtual-real fusion method for mixed reality flight simulation system

Publications (2)

Publication Number Publication Date
CN114998556A true CN114998556A (en) 2022-09-02
CN114998556B CN114998556B (en) 2024-07-05

Family

ID=83026660

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210547026.2A Active CN114998556B (en) 2022-05-18 2022-05-18 Virtual-real fusion method for mixed reality flight simulation system

Country Status (1)

Country Link
CN (1) CN114998556B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228991A (en) * 2023-05-08 2023-06-06 河北光之翼信息技术股份有限公司 Coordinate conversion method and device, electronic equipment and storage medium
CN118171503A (en) * 2024-05-16 2024-06-11 南京航空航天大学 Method for coordinating canopy based on point cloud measured data virtual assembly

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383596A (en) * 2016-11-15 2017-02-08 北京当红齐天国际文化发展集团有限公司 VR (virtual reality) dizzy prevention system and method based on space positioning
CN112669671A (en) * 2020-12-28 2021-04-16 北京航空航天大学江西研究院 Mixed reality flight simulation system based on physical interaction
CN113269100A (en) * 2021-05-27 2021-08-17 南京航空航天大学 Vision-based aircraft offshore platform landing flight visual simulation system and method
WO2021258327A1 (en) * 2020-06-22 2021-12-30 拓攻(南京)机器人有限公司 Unmanned aerial vehicle visual semi-physical simulation system and simulation method thereof

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106383596A (en) * 2016-11-15 2017-02-08 北京当红齐天国际文化发展集团有限公司 VR (virtual reality) dizzy prevention system and method based on space positioning
WO2021258327A1 (en) * 2020-06-22 2021-12-30 拓攻(南京)机器人有限公司 Unmanned aerial vehicle visual semi-physical simulation system and simulation method thereof
CN112669671A (en) * 2020-12-28 2021-04-16 北京航空航天大学江西研究院 Mixed reality flight simulation system based on physical interaction
CN113269100A (en) * 2021-05-27 2021-08-17 南京航空航天大学 Vision-based aircraft offshore platform landing flight visual simulation system and method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116228991A (en) * 2023-05-08 2023-06-06 河北光之翼信息技术股份有限公司 Coordinate conversion method and device, electronic equipment and storage medium
CN116228991B (en) * 2023-05-08 2023-07-14 河北光之翼信息技术股份有限公司 Coordinate conversion method and device, electronic equipment and storage medium
CN118171503A (en) * 2024-05-16 2024-06-11 南京航空航天大学 Method for coordinating canopy based on point cloud measured data virtual assembly

Also Published As

Publication number Publication date
CN114998556B (en) 2024-07-05

Similar Documents

Publication Publication Date Title
CN114998556B (en) Virtual-real fusion method for mixed reality flight simulation system
CN109165680B (en) Single-target object dictionary model improvement method in indoor scene based on visual SLAM
CN112505065B (en) Method for detecting surface defects of large part by indoor unmanned aerial vehicle
JP4679033B2 (en) System and method for median fusion of depth maps
CN109658461A (en) A kind of unmanned plane localization method of the cooperation two dimensional code based on virtual simulation environment
CN108053437A (en) Three-dimensional model acquiring method and device based on figure
CN106846467A (en) Entity scene modeling method and system based on the optimization of each camera position
CN109920000B (en) Multi-camera cooperation-based dead-corner-free augmented reality method
CN107862733B (en) Large-scale scene real-time three-dimensional reconstruction method and system based on sight updating algorithm
CN110349249B (en) Real-time dense reconstruction method and system based on RGB-D data
CN115471534A (en) Underwater scene three-dimensional reconstruction method and equipment based on binocular vision and IMU
WO2023116430A1 (en) Video and city information model three-dimensional scene fusion method and system, and storage medium
CN111260765B (en) Dynamic three-dimensional reconstruction method for microsurgery field
JP2961264B1 (en) Three-dimensional object model generation method and computer-readable recording medium recording three-dimensional object model generation program
CN109215128B (en) Object motion attitude image synthesis method and system
CN114255279A (en) Binocular vision three-dimensional reconstruction method based on high-precision positioning and deep learning
CN116597080A (en) Complete scene 3D fine model construction system and method for multi-source spatial data
CN111161143A (en) Optical positioning technology-assisted operation visual field panoramic stitching method
CN113662663B (en) AR holographic surgery navigation system coordinate system conversion method, device and system
CN110689625B (en) Automatic generation method and device for customized face mixed expression model
CN112416124A (en) Dance posture feedback method and device
CN113066188A (en) Three-dimensional simulation method and equipment for outdoor construction operation
CN104574475A (en) Fine animation manufacturing method based on secondary controllers
CN111750849B (en) Target contour positioning and attitude-fixing adjustment method and system under multiple visual angles
CN115457220B (en) Simulator multi-screen visual simulation method based on dynamic viewpoint

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant