CN111127661A - Data processing method and device and electronic equipment - Google Patents

Data processing method and device and electronic equipment

Info

Publication number
CN111127661A
Authority
CN
China
Prior art keywords
target
projected
equipment
coordinate system
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911301161.3A
Other languages
Chinese (zh)
Other versions
CN111127661B (en)
Inventor
钟耳顺
黄科佳
颜鹏鹏
陈国雄
王晨亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Supermap Software Co ltd
Original Assignee
Supermap Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Supermap Software Co ltd filed Critical Supermap Software Co ltd
Priority to CN201911301161.3A
Publication of CN111127661A
Application granted
Publication of CN111127661B
Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention provides a data processing method, a data processing device, and electronic equipment. When the target content of a target to be projected is projected, a scene coordinate system containing both the current position of the equipment and the target position of the target to be projected is constructed; that is, the two positions are placed in the same coordinate system so that they can be compared. This makes the relative position relationship between the current position of the equipment and the target position of the target to be projected more accurate, so that during projection the target content can be accurately projected to the target display position, ensuring projection accuracy.

Description

Data processing method and device and electronic equipment
Technical Field
The invention relates to the field of Augmented Reality (AR), and in particular to a data processing method and device, and to electronic equipment.
Background
Augmented Reality (AR) is a direct or indirect real-time view of a physical, real-world environment whose elements are "augmented" by computer-generated sensory information, ideally spanning multiple sensory modalities, including vision, hearing, touch, body sensation, and smell. Augmented reality is, simply, a mixture of real-world objects and computer-generated objects, and it provides an immersive experience for the user. AR fuses and integrates with the real world, projecting virtual objects into the real world.
Although an AR product may provide an immersive experience for the user, the position of the projected virtual object relative to the projection area in the AR product is inaccurate, and thus the virtual object cannot be projected onto its desired position.
Disclosure of Invention
In view of the above, the present invention provides a data processing method, an apparatus, and an electronic device to solve the problem that the position of the projected virtual object relative to the projection area is inaccurate, so that the virtual object cannot be projected onto the desired position.
In order to solve the technical problems, the invention adopts the following technical scheme:
a method of data processing, comprising:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and according to the attitude information of the equipment, determining a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference, and displaying the target content at the target display position.
Optionally, the target position of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
correspondingly, determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, including:
determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
calculating a relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and determining the display coordinates of the target content of the target to be projected in a scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relationship.
Optionally, determining, according to the attitude information of the device, a target display position where the target content of the target to be projected is located in a camera coordinate system with reference to the device, includes:
generating an attitude matrix corresponding to the attitude information of the equipment;
and taking the product of the display coordinates and the attitude matrix of the equipment as a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference.
Optionally, the target position of the target to be projected is a coordinate in a camera coordinate system based on the device; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
correspondingly, determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, including:
determining display coordinates of the target position of the target to be projected in a scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
Optionally, the obtaining of the attitude information of the device includes:
acquiring gravity data, acceleration data and magnetic field data of the equipment;
correcting the acceleration data by using the gravity data to obtain corrected acceleration data;
and integrating the magnetic field data and the corrected acceleration data to determine the attitude information of the equipment.
Optionally, obtaining the current location of the device includes:
acquiring initial position information of the equipment;
and filtering the initial position information to obtain the current position.
A data processing apparatus comprising:
the data acquisition module is used for acquiring the target position and the target content of the target to be projected, and acquiring the attitude information and the current position of the equipment;
the coordinate determination module is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and the data display module is used for determining a target display position of the target content of the target to be projected in a camera coordinate system based on the equipment according to the attitude information of the equipment, and displaying the target content at the target display position.
Optionally, the target position of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
accordingly, the coordinate determination module comprises:
the distance determining submodule is used for determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
the relation determination submodule is used for calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and the coordinate determination submodule is used for determining the display coordinate of the target content of the target to be projected in a scene coordinate system according to the coordinate of the target content of the target to be projected and the relative position relation.
Optionally, the data display module is configured to, when determining, according to the attitude information of the device, that the target content of the target to be projected is located at a target display position in a camera coordinate system based on the device, specifically:
generate an attitude matrix corresponding to the attitude information of the equipment, and take the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference.
An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor calls a program and is used to:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and according to the attitude information of the equipment, determining a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference, and displaying the target content at the target display position.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a data processing method, a data processing device and electronic equipment, wherein when the target content of a target to be projected is projected, a scene coordinate system comprising the current position of the equipment and the target position of the target to be projected is constructed, namely the current position of the equipment and the target position of the target to be projected are placed in the same coordinate system for comparison, so that the relative position relationship between the current position of the equipment and the target position of the target to be projected is more accurate, the target content can be accurately projected to a target display position during projection, and the projection accuracy is further ensured.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only embodiments of the present invention, and that those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart of a data processing method according to an embodiment of the present invention;
FIG. 2 is a flow chart of another data processing method according to an embodiment of the present invention;
fig. 3 is a scene schematic diagram of an AR scene according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention; it is obvious that the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
The embodiment of the invention relates to a data processing method that combines Augmented Reality (AR) with GIS (geographic information system) applications. The technical terms used in the embodiments are explained first:
1. Augmented Reality (AR) is a direct or indirect real-time view of a physical, real-world environment whose elements are "augmented" by computer-generated sensory information, ideally spanning multiple sensory modalities, including vision, hearing, touch, body sensation, and smell. Augmented reality is, simply, a mixture of real-world objects and computer-generated objects, and it provides the user with an immersive experience. The difference between AR and VR is that VR does not fuse with the real world but rather mimics it, whereas AR fuses and integrates with the real world, projecting virtual objects into it.
2. POI is an abbreviation for "Point of Interest". In map software, a POI may be a house, a shop, a mailbox, a bus station, and so on. Typically, each POI contains four pieces of information: name, category, coordinates, and classification. Comprehensive POI information is essential for enriching a navigation map; timely POI data reminds users of detailed road conditions and surrounding buildings, makes it convenient to search for a desired place during navigation, and helps select the most convenient, unobstructed road for path planning. The POI information of a navigation map therefore directly affects the usability of navigation.
3. "Immersive application" is a new term used to describe applications that provide the user with an immersive experience. However, the GIS applications currently on the market cannot directly provide an immersive experience and must rely on AR or VR technologies: VR provides a fully immersive virtual-reality experience, while AR provides an immersive experience in an environment where the real world and the virtual world are integrated.
Although an AR product can provide an immersive experience for a user, the position of the projected virtual object in the AR product is a position relative to the photographing device, and its position relative to the projection area is inaccurate, so the virtual object cannot be projected onto its desired position. Moreover, because the position of the projected virtual object is relative to the photographing device rather than based on a real geographic position, the projected virtual object and the photographing device cannot be accurately positioned; the virtual object to be projected and the device are then difficult to relate to real geographic positions, and the AR technology cannot be combined with a GIS (geographic information system). The embodiment of the invention provides a scheme combining AR (augmented reality) and GIS (geographic information system) to display POIs panoramically while the equipment (a photographing device such as a camera, mobile phone, or tablet) is stationary or moving. In addition, the method can meet urgent requirements of the GIS industry for applications such as immersive data acquisition and enhanced display of two- and three-dimensional virtual objects, and can be applied to fields such as navigation and military command.
Referring to fig. 1, the data processing method may include:
and S11, acquiring the target position and the target content of the target to be projected, and acquiring the attitude information and the current position of the equipment.
The target to be projected may be referred to as a POI, and it may be obtained from different sources, for example from a file (e.g., a local JSON file), a workspace dataset, a database, or a network. Each POI includes four pieces of information: name, category, coordinates, and classification. In this embodiment, for an AR scene, the POI needs to include the coordinates of the target to be projected, which may be coordinates in a real geographic coordinate system, as well as the target content of the target to be projected. The target content may be a picture, a two-dimensional map, a three-dimensional model in obj format, and so on; in this embodiment it may carry the name, category, and classification, where the name is the name of the target content (such as picture A), the category may be two-dimensional or three-dimensional, and the classification may be picture, map, and so on.
It should be noted that if the position information and attribute information of the data are to be displayed visually, the category of the target content may be set to two-dimensional; if more detail about the target to be projected is to be displayed, the target to be projected may be modeled as a three-dimensional model for display.
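For illustration, a POI record of the kind described above might be represented as follows. This is a hedged sketch only: the field names and JSON layout are assumptions, not the patent's actual data schema.

import json
from dataclasses import dataclass

@dataclass
class Poi:
    name: str            # name of the target content, e.g. "picture A"
    category: str        # "2D" or "3D"
    classification: str  # e.g. "picture", "map", "model"
    lon: float           # target position in the geographic coordinate system
    lat: float
    alt: float
    content_uri: str     # the picture, two-dimensional map, or obj model

def load_pois(path):
    """Load POIs from a local JSON file, one of the sources named above."""
    with open(path, encoding="utf-8") as f:
        return [Poi(**record) for record in json.load(f)]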
After the target content is obtained, the AR may not be able to identify it directly; in that case the target content needs to be converted, through format conversion performed by a manager such as an image manager, into content that the AR can identify. In addition, if the target position and target content are stored as transient data, the data can be converted into persistent data by the renderer, enabling long-term storage.
In addition to directly acquiring the target position of the target to be projected, a POI point can also be selected directly in the camera coordinate system. Such a POI point is based on the camera coordinate system; if it is to be stored later, it must be converted into the geographic coordinate system to obtain its coordinates in that system.
In this embodiment, the coordinate system where the target content of the target to be projected is located is a local coordinate system based on the target content itself.
The equipment in this embodiment is now described. It may be a photographing device, such as a camera, a mobile phone, or a tablet, and its state is not limited: it may be stationary or moving.
At present, most mobile devices have built-in sensors, and the motion direction, attitude, and so on of the device can be calculated from the high-precision raw data they measure (such as the gravity data collected by a gravity sensor, the magnetic field data collected by a magnetic field sensor, and the acceleration data collected by an acceleration sensor). Multi-sensor fusion filters, combines, and optimizes the measurement results of multiple sensors according to a certain algorithm to obtain a consistent explanation and description of a target, and assists the system in environment judgment, path planning, verification, and so on, forming a higher-level comprehensive decision; it is, in effect, a simulation of the human brain for a robotic system.
In this embodiment, the attitude of the device is determined using multi-sensor fusion; specifically, the attitude may be determined from data collected by built-in sensors such as the gravity sensor, magnetic field sensor, and acceleration sensor. To ensure the accuracy of the data, after the gravity data and the acceleration data are obtained, high-pass filtering can be applied to the gravity data and low-pass filtering to the acceleration data to reduce noise.
The attitude of the equipment is obtained by fusing the magnetic field, acceleration, and gravity sensor data. Specifically, the data collected by the acceleration sensor is affected by gravity; to reduce this influence, in this embodiment the gravity data of the equipment is used to correct the acceleration, giving corrected acceleration data. The specific correction process is as follows: let x', y', z' be the components of the actual acceleration and G0 the local gravitational acceleration value; they satisfy the constraint x'^2 + y'^2 + z'^2 = G0^2.
The measured value x and the corrected value x' are related by x' = ax + b; the coefficients a and b are obtained by substituting several groups of measured values and solving with least squares.
The magnetic field data acquired by the magnetic field sensor is a magnetic field quaternion, and the corrected acceleration data and the magnetic field quaternion are integrated to obtain the equipment attitude. Specifically, the magnetic field quaternion is converted into a magnetic field vector; the magnetic field vector and the corrected acceleration are cross-multiplied to obtain a vector V0; V0 is cross-multiplied with the gravity vector to obtain a vector V1; and the device attitude matrix is composed of V0, V1, and the gravity vector.
After the equipment attitude is determined, the attitude matrix corresponding to it can be determined. The attitude information consists of the three angles of the equipment, while the attitude matrix is an Euler matrix used for coordinate conversion and for calculating the position of a POI in the AR scene; the attitude information and the attitude matrix are different representations of the same attitude.
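A minimal numerical sketch of the construction just described follows. It is an illustration only, not the patent's implementation: numpy, the unit-normalization of the sensor vectors, and the pairing of measured and reference acceleration samples for the least-squares fit are all assumptions.

import numpy as np

def fit_correction(measured, reference):
    """Least-squares fit of the per-axis correction x' = a*x + b,
    using several groups of measured values as described above."""
    a, b = np.polyfit(measured, reference, 1)
    return lambda x: a * x + b

def attitude_matrix(magnetic, accel_corrected, gravity):
    """Compose the device attitude matrix from V0, V1 and the gravity vector."""
    g = np.asarray(gravity, dtype=float)
    g = g / np.linalg.norm(g)
    v0 = np.cross(magnetic, accel_corrected)  # magnetic field vector x corrected acceleration
    v0 = v0 / np.linalg.norm(v0)
    v1 = np.cross(v0, g)                      # V0 x gravity vector
    v1 = v1 / np.linalg.norm(v1)
    return np.vstack([v0, v1, g])             # rows: V0, V1, gravity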
The determination of the current location of the device may be:
the method includes the steps of firstly obtaining initial position information of the device measured by a GNSS (global navigation satellite system) or a SLAM (simultaneous localization and mapping), and then filtering the initial position information, such as Kalman filtering, to obtain a current position of the device, wherein the current position is used for describing a position of the device in a geographic coordinate system in an AR scene.
It should be noted that high-precision positioning (such as GNSS or SLAM) need not be turned on while the device is far from the target position of the POI, to save unnecessary computing power; it can be turned on to assist within a certain range near the POI. The target position of the POI in the geographic coordinate system is fixed, but the position of the device varies, and with it the relative position of the device and the POI; therefore the change of the camera's position in the geographic coordinate system needs to be tracked in real time to achieve an immersive augmented reality experience.
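As an illustrative sketch only (the patent does not fix a particular filter design; the random-walk model and noise values below are assumptions), a one-dimensional Kalman filter of the kind mentioned above, applied per coordinate axis, could look like:

class PositionFilter:
    """Smooths one coordinate axis of the GNSS/SLAM position stream."""
    def __init__(self, process_noise=1e-4, measurement_noise=1e-2):
        self.x = None               # filtered position estimate
        self.p = 1.0                # estimate variance
        self.q = process_noise
        self.r = measurement_noise

    def update(self, z):
        if self.x is None:          # first measurement initializes the state
            self.x = z
            return self.x
        self.p += self.q                    # predict (random-walk model)
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct with the new measurement
        self.p *= (1.0 - k)
        return self.x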
S12, determining the display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment, and the current position as references.
In practical application, the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected. After the scene coordinate system is constructed on this basis, the display coordinates of the target content in the scene coordinate system need to be determined, that is, the position in the scene coordinate system at which the target content is displayed. In a specific implementation, the implementation of step S12 depends on the reference coordinate system in which the target position of the target to be projected is given; the two cases are described separately below.
1. The target position of the target to be projected is a coordinate in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system with the target content as a reference.
The target position of the target to be projected may be determined based on human experience or may be obtained based on the position of the virtual object to be projected currently relative to the photographing apparatus, which is not limited herein.
Specifically, referring to fig. 2, step S12 may include:
S21, determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected.
Because the current position of the equipment and the target position of the target to be projected are both given in the geographic coordinate system, the relative distance between them can be obtained directly by taking the difference of the two positions.
S22, calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance.
In practical application, the relative position relationship between the target position of the target to be projected and the current position of the device is calculated from the attitude information of the device and the relative distance between the device and the target to be projected. Specifically, an attitude angle is obtained from the attitude information (represented in the form of an attitude matrix), and the relative offset of the target position of the target to be projected with respect to the equipment is calculated from the attitude angle and the relative distance, giving the relative position relationship.
The relative position relationship is a position deviation of a target position of the target to be projected relative to the current position of the equipment, and can be represented by a model matrix.
S23, determining the display coordinates of the target content of the target to be projected in the scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relationship.
In practical application, the display coordinates of the target to be projected in the scene coordinate system can be obtained by adding the relative offset to the coordinates of the equipment in the scene coordinate system.
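A hedged sketch of steps S21 to S23 follows. The local east/north approximation, the WGS-84 radius, and the axis mapping are illustrative assumptions — the patent does not fix a particular geodetic formula — and for simplicity the attitude handling is folded into step S13 below, so only the geographic offset is computed here.

import math

EARTH_RADIUS = 6378137.0  # metres, WGS-84 semi-major axis (assumption)

def relative_offset(device_lon, device_lat, poi_lon, poi_lat):
    """S21/S22: east/north offset of the POI from the device, in metres."""
    east = math.radians(poi_lon - device_lon) * EARTH_RADIUS * math.cos(math.radians(device_lat))
    north = math.radians(poi_lat - device_lat) * EARTH_RADIUS
    return east, north

def display_coordinates(content_xyz, east, north, up=0.0):
    """S23: shift the target content's local-model coordinates by the offset,
    with the device at the origin of the scene coordinate system."""
    x, y, z = content_xyz
    return (x + east, y + up, z - north)  # the axis mapping is an assumption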
2. The target position of the target to be projected is a coordinate in a camera coordinate system taking the equipment as a reference; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system with the target content as a reference.
Unlike the above embodiments, the target position of the target to be projected in the present embodiment is directly selected in the camera coordinate system, that is, the target position of the target to be projected is a coordinate point in the camera coordinate system.
Specifically, step S12 may include:
determining display coordinates of the target position of the target to be projected in a scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
In this embodiment, a scene coordinate system containing the target position of the target to be projected and the current position of the device needs to be determined, so the target position given in the camera coordinate system must be converted into the scene coordinate system. The previous embodiment converts from the scene coordinate system to the camera coordinate system according to the attitude information of the device; performing the inverse transformation gives the conversion from the camera coordinate system to the scene coordinate system, from which the display coordinates of the target position in the scene coordinate system can be determined.
It should be noted that if the coordinates of the target position of the target to be projected in the geographic coordinate system are to be recorded, the scene coordinate system needs to be converted into the geographic coordinate system; this is the inverse of the geographic-to-scene conversion.
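Since the attitude matrix is a rotation (orthonormal), its inverse is simply its transpose. A one-line sketch of the inverse conversion, assuming the forward mapping p_camera = R · p_scene of step S13 below (column-vector convention is an assumption):

import numpy as np

def camera_to_scene(point_camera, attitude_matrix):
    """Inverse of the scene-to-camera conversion: R is orthonormal, so R^-1 = R^T."""
    return attitude_matrix.T @ np.asarray(point_camera, dtype=float)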
S13, determining the target display position of the target content of the target to be projected in the camera coordinate system with the equipment as the reference according to the attitude information of the equipment, and displaying the target content at the target display position.
If the device is moving, the relative position of the device and the target to be projected in the scene coordinate system changes constantly, and with it the target display position of the target to be projected in the camera coordinate system. The target display position of the target content in the camera coordinate system with the device as reference therefore needs to be determined in real time, and the target content continuously displayed at that position.
Referring to fig. 3, fig. 3 is a schematic diagram of a POI display. In fig. 3, the target content is the "spot type" part: the POI content is suspended in midair and has coordinates in a detailed geographic coordinate system, and the picture taken by the device also has coordinates in a detailed geographic coordinate system. By judging whether the land corresponding to each of the two coordinates is "parking land", it can be determined whether the land is illegally occupied; that is, by means of this embodiment it can be verified that land is being used legally. The POI display has uses beyond "parking land": a pipeline operation-and-maintenance function (by placing pipeline data at a specified position, construction personnel can inspect, overhaul, and report on site) and a library exhibition function (by placing book information at a specified position, visitors can conveniently locate and inspect books).
In a specific implementation process, step S13 may specifically include:
and generating a posture matrix corresponding to the posture information of the equipment, and taking the product of the display coordinates and the posture matrix of the equipment as a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference.
In practical application, the attitude information and the attitude matrix are two different representations of the attitude. In this embodiment, the attitude information is converted to obtain the attitude matrix, and the display coordinates of the target content of the target to be projected in the scene coordinate system are multiplied by the attitude matrix; the result is the target display position of the target content in the camera coordinate system based on the device.
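A minimal sketch of step S13 under the same conventions as above (3x3 attitude matrix and column vectors — assumptions for illustration):

import numpy as np

def target_display_position(display_xyz, attitude_matrix):
    """Product of the display coordinates and the attitude matrix, giving the
    target display position in the camera coordinate system of the device."""
    return attitude_matrix @ np.asarray(display_xyz, dtype=float)

# With a moving device this is re-evaluated continuously: the scene
# coordinates stay fixed while the attitude matrix tracks the device.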
Through the description of the above embodiments, a panoramic POI augmented-reality model based on multi-sensor fusion can be established by means of the embodiment of the invention, and immersive GIS industry applications can be built on that basis: with a high-precision positioning basis, two- and three-dimensional integrated rendering capability, and data position management in the AR scene, immersive GIS industry applications and solutions can easily be established. For example, an indoor map can be generated by combining trajectory data with the high-precision position information acquired through AR and visual-inertial odometry (AR indoor trajectory acquisition), dynamic effects such as situation deduction can be added to a map displayed in the AR scene, and gesture operations, AR measurement, and the like can be performed on a three-dimensional model.
In this embodiment, when the target content of the target to be projected is projected, a scene coordinate system containing both the current position of the device and the target position of the target to be projected is established; that is, the two positions are placed in the same coordinate system so that they can be compared. This makes the relative position relationship between the current position of the device and the target position of the target to be projected more accurate, so that during projection the target content can be accurately projected to the target display position, ensuring projection accuracy.
In addition, the embodiment of the invention also has the following effects:
1. Immersive AR experience: an immersive AR experience is brought to two-dimensional maps and three-dimensional scenes through the fusion of multiple kinds of sensor data and real-time calculation.
2. Displaying POIs in the whole scene: POIs can be displayed on any plane and at any position in space, and can be derived from multiple types of data, such as local JSON files, the user's map spatial datasets, or multi-source online data.
3. POI rendering in multiple categories: POIs of various categories can be displayed, whether pictures, two-dimensional maps, or three-dimensional models. Data of different types, such as two-dimensional and three-dimensional data, are effectively managed through node management and placed into the scene, effectively achieving two- and three-dimensional integration.
4. Immersive applications based on GIS: high-precision positioning and two- and three-dimensional integrated rendering provide a foundation for GIS applications. By manually selecting the POI position in the camera coordinate system anywhere in the scene, immersive data acquisition, processing, and display can be realized, and an immersive ground dynamic effect can be added to the original map to enrich its display function.
Optionally, on the basis of the embodiment of the data processing method, another embodiment of the present invention provides a data processing apparatus, and with reference to fig. 4, the data processing apparatus may include:
the data acquisition module 11 is used for acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of the equipment;
a coordinate determination module 12, configured to determine, by using the target position of the target to be projected, the posture information of the device, and the current position as references, a display coordinate in which the target content of the target to be projected is located in a scene coordinate system; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and the data display module 13 is configured to determine, according to the posture information of the device, a target display position where the target content of the target to be projected is located in a camera coordinate system based on the device, and display the target content at the target display position.
Further, the target position of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
accordingly, the coordinate determination module comprises:
the distance determining submodule is used for determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
the relation determination submodule is used for calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and the coordinate determination submodule is used for determining the display coordinate of the target content of the target to be projected in a scene coordinate system according to the coordinate of the target content of the target to be projected and the relative position relation.
Further, the data display module is configured to, when determining, according to the attitude information of the device, that the target content of the target to be projected is located at a target display position in a camera coordinate system based on the device, specifically:
generate an attitude matrix corresponding to the attitude information of the equipment, and take the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference.
Further, the target position of the target to be projected is a coordinate in a camera coordinate system with the equipment as a reference; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
correspondingly, the coordinate determination module 12 is configured to, when determining, by taking the target position of the target to be projected, the posture information of the device, and the current position as references, that the target content of the target to be projected is located at a display coordinate in a scene coordinate system, specifically:
determining display coordinates of the target position of the target to be projected in a scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
Further, when the data obtaining module 11 is configured to obtain the attitude information of the device, it is specifically configured to:
acquiring gravity data, acceleration data and magnetic field data of the equipment;
correcting the acceleration data by using the gravity data to obtain corrected acceleration data;
and integrating the magnetic field data and the corrected acceleration data to determine the attitude information of the equipment.
Further, when the data obtaining module 11 is configured to obtain the current location of the device, it is specifically configured to:
and acquiring initial position information of the equipment, and filtering the initial position information to obtain the current position.
In this embodiment, when the target content of the target to be projected is projected, a scene coordinate system containing both the current position of the device and the target position of the target to be projected is established; that is, the two positions are placed in the same coordinate system so that they can be compared. This makes the relative position relationship between the current position of the device and the target position of the target to be projected more accurate, so that during projection the target content can be accurately projected to the target display position, ensuring projection accuracy.
It should be noted that, for the working processes of each module and sub-module in this embodiment, please refer to the corresponding description in the above embodiments, which is not described herein again.
Optionally, on the basis of the embodiment of the data processing method, another embodiment of the present invention provides an electronic device, including: a memory and a processor;
wherein the memory is used for storing programs;
the processor calls a program and is used to:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and according to the attitude information of the equipment, determining a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference, and displaying the target content at the target display position.
In this embodiment, when the target content of the target to be projected is projected, a scene coordinate system containing both the current position of the device and the target position of the target to be projected is established; that is, the two positions are placed in the same coordinate system so that they can be compared. This makes the relative position relationship between the current position of the device and the target position of the target to be projected more accurate, so that during projection the target content can be accurately projected to the target display position, ensuring projection accuracy.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A data processing method, comprising:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and according to the attitude information of the equipment, determining a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference, and displaying the target content at the target display position.
2. The data processing method according to claim 1, wherein the target position of the target to be projected is a coordinate located in a geographical coordinate system; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
correspondingly, determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, including:
determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
calculating a relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and determining the display coordinates of the target content of the target to be projected in a scene coordinate system according to the coordinates of the target content of the target to be projected and the relative position relationship.
3. The data processing method of claim 2, wherein determining, according to the attitude information of the device, a target display position where the target content of the target to be projected is located in a camera coordinate system with reference to the device comprises:
generating an attitude matrix corresponding to the attitude information of the equipment;
and taking the product of the display coordinates and the attitude matrix of the equipment as a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference.
4. The data processing method according to claim 1, wherein the target position of the target to be projected is a coordinate located in a camera coordinate system with reference to the device; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
correspondingly, determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references, including:
determining display coordinates of the target position of the target to be projected in a scene coordinate system according to the attitude information of the equipment; the scene coordinate system is constructed from the current position of the equipment and the target position of the target to be projected.
5. The data processing method of claim 1, wherein the obtaining of the attitude information of the device comprises:
acquiring gravity data, acceleration data and magnetic field data of the equipment;
correcting the acceleration data by using the gravity data to obtain corrected acceleration data;
and integrating the magnetic field data and the corrected acceleration data to determine the attitude information of the equipment.
6. The data processing method of claim 1, wherein obtaining the current location of the device comprises:
acquiring initial position information of the equipment;
and filtering the initial position information to obtain the current position.
7. A data processing apparatus, comprising:
the data acquisition module is used for acquiring the target position and the target content of the target to be projected, and acquiring the attitude information and the current position of the equipment;
the coordinate determination module is used for determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and the data display module is used for determining a target display position of the target content of the target to be projected in a camera coordinate system based on the equipment according to the attitude information of the equipment, and displaying the target content at the target display position.
8. The data processing apparatus of claim 7, wherein the target location of the target to be projected is a coordinate located in a geographic coordinate system; the coordinates of the target content of the target to be projected are coordinates in a local coordinate system based on the target content;
accordingly, the coordinate determination module comprises:
the distance determining submodule is used for determining the relative distance between the target position of the target to be projected and the current position of the equipment according to the current position of the equipment and the target position of the target to be projected;
the relation determination submodule is used for calculating the relative position relation between the target position of the target to be projected and the current position of the equipment based on the attitude information of the equipment and the relative distance;
and the coordinate determination submodule is used for determining the display coordinate of the target content of the target to be projected in a scene coordinate system according to the coordinate of the target content of the target to be projected and the relative position relation.
9. The data processing apparatus according to claim 8, wherein the data display module, when determining, according to the attitude information of the device, that the target content of the target to be projected is located at a target display position in a camera coordinate system with reference to the device, is specifically configured to:
generate an attitude matrix corresponding to the attitude information of the equipment, and take the product of the display coordinates and the attitude matrix of the equipment as the target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference.
10. An electronic device, comprising: a memory and a processor;
wherein the memory is used for storing programs;
the processor calls a program and is used to:
acquiring a target position and target content of a target to be projected, and acquiring attitude information and a current position of equipment;
determining display coordinates of the target content of the target to be projected in a scene coordinate system by taking the target position of the target to be projected, the attitude information of the equipment and the current position as references; the scene coordinate system is constructed by the current position of the equipment and the target position of the target to be projected;
and according to the attitude information of the equipment, determining a target display position of the target content of the target to be projected in a camera coordinate system with the equipment as a reference, and displaying the target content at the target display position.
CN201911301161.3A 2019-12-17 2019-12-17 Data processing method and device and electronic equipment Active CN111127661B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911301161.3A CN111127661B (en) 2019-12-17 2019-12-17 Data processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911301161.3A CN111127661B (en) 2019-12-17 2019-12-17 Data processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN111127661A 2020-05-08
CN111127661B 2023-08-29

Family

ID=70498391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911301161.3A Active CN111127661B (en) 2019-12-17 2019-12-17 Data processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111127661B (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140118339A1 (en) * 2012-10-31 2014-05-01 The Boeing Company Automated frame of reference calibration for augmented reality
CN105354820A (en) * 2015-09-30 2016-02-24 深圳多新哆技术有限责任公司 Method and apparatus for regulating virtual reality image
CN106791784A (en) * 2016-12-26 2017-05-31 深圳增强现实技术有限公司 Augmented reality display methods and device that a kind of actual situation overlaps
CN107463261A (en) * 2017-08-11 2017-12-12 北京铂石空间科技有限公司 Three-dimensional interaction system and method
CN108335365A (en) * 2018-02-01 2018-07-27 张涛 Image-guided virtual-real fusion processing method and device
CN108537889A (en) * 2018-03-26 2018-09-14 广东欧珀移动通信有限公司 Method of adjustment, device, storage medium and the electronic equipment of augmented reality model
CN108876900A (en) * 2018-05-11 2018-11-23 重庆爱奇艺智能科技有限公司 A kind of virtual target projective techniques merged with reality scene and system
CN109961522A (en) * 2019-04-02 2019-07-02 百度国际科技(深圳)有限公司 Image projecting method, device, equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112379344A (en) * 2020-11-09 2021-02-19 中国科学院电子学研究所 Signal compensation method and device, equipment and storage medium
CN112379344B (en) * 2020-11-09 2024-04-02 中国科学院电子学研究所 Signal compensation method and device, equipment and storage medium
JP2023510474A (en) * 2020-12-25 2023-03-14 シェンチェン テトラス.エーアイ テクノロジー カンパニー リミテッド POINT CLOUD MAP CONSTRUCTION METHOD AND DEVICE, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM
JP7316456B2 (en) 2020-12-25 2023-07-27 シェンチェン テトラス.エーアイ テクノロジー カンパニー リミテッド POINT CLOUD MAP CONSTRUCTION METHOD AND DEVICE, ELECTRONIC DEVICE, STORAGE MEDIUM AND PROGRAM

Also Published As

Publication number Publication date
CN111127661B 2023-08-29

Similar Documents

Publication Publication Date Title
US11315308B2 (en) Method for representing virtual information in a real environment
US9996982B2 (en) Information processing device, authoring method, and program
US9558581B2 (en) Method for representing virtual information in a real environment
CN108958462A (en) A kind of methods of exhibiting and device of virtual objects
Fukuda et al. Improvement of registration accuracy of a handheld augmented reality system for urban landscape simulation
CN111127661B (en) Data processing method and device and electronic equipment
CN111815783A (en) Virtual scene presenting method and device, electronic equipment and storage medium
Wither et al. Using aerial photographs for improved mobile AR annotation
Afif et al. Orientation control for indoor virtual landmarks based on hybrid-based markerless augmented reality
WO2023088127A1 (en) Indoor navigation method, server, apparatus and terminal
CN111521193A (en) Live-action navigation method, live-action navigation device, storage medium and processor
CN115187709A (en) Geographic model processing method and device, electronic equipment and readable storage medium
Bednarczyk The use of augmented reality in geomatics
US11561669B2 (en) Systems and methods of using a digital twin for interacting with a city model
CN115240140A (en) Equipment installation progress monitoring method and system based on image recognition
Moares et al. Inter ar: Interior decor app using augmented reality technology
CN113888709A (en) Electronic sand table generation method and device and non-transient storage medium
Dekker et al. MARWind: mobile augmented reality wind farm visualization
Hew et al. Markerless Augmented Reality for iOS Platform: A University Navigational System
Thomas et al. 3D modeling for mobile augmented reality in unprepared environment
CN113836249B (en) Map information point management method, related device and equipment
CN115129213A (en) Data processing method and device, electronic equipment and storage medium
He et al. Research on underground pipeline augmented reality system based on ARToolKit
Blanco Pons Analysis and development of augmented reality applications for the dissemination of cultural heritage
Abdulazeez et al. Hoshang Kolivand, Abdennour El Rhalibi, Mostafa Tajdini

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant