WO2024045854A1 - A virtual digital content display system, method and electronic device - Google Patents

A virtual digital content display system, method and electronic device

Info

Publication number
WO2024045854A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual digital
target virtual
electronic device
scene
digital content
Prior art date
Application number
PCT/CN2023/104001
Other languages
English (en)
French (fr)
Inventor
郑亚
王征宇
魏记
温裕祥
冯艳妮
Original Assignee
华为云计算技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为云计算技术有限公司
Publication of WO2024045854A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics

Definitions

  • the present application relates to the technical field of electronic equipment, and in particular to a virtual digital content display system, method and electronic equipment.
  • Digital content can be divided into two major categories: User Generated Content (UGC) and Professional Generated Content (PGC).
  • Users generally display UGC, PGC and other digital content on Internet platforms or provide it to other users.
  • Users can enhance their interaction with digital content such as UGC and PGC through AR technology.
  • For example, digital content such as UGC and PGC is displayed as virtual objects in a real-world scene, so that users can watch the real-world scene in the AR scene shown on the display screen of the AR device and, at the same time, watch the virtual digital content displayed in that real-world scene.
  • Embodiments of the present application provide a virtual digital content display system, method and electronic device to solve the problem that users who are not present and users who are present are unable to simultaneously view virtual digital content displayed in a real-world scene.
  • the present application provides a virtual digital content display system, which includes a first electronic device and a second electronic device.
  • The first electronic device may: in response to a first operation triggered by a user, determine a target virtual digital scene from at least one candidate virtual digital scene; and, in response to a second operation triggered by the user, determine a first target virtual digital content from at least one candidate virtual digital content and display the first target virtual digital content at a first position of the target virtual digital scene.
  • The second electronic device may: in response to a third operation triggered by the user, collect and display an image of a first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at a first position of the first real scene.
  • In this way, the first electronic device can display the first target virtual digital content at the first position of the target virtual digital scene; that is, through the target virtual digital scene, the first electronic device can simulate the display effect that the first target virtual digital content would have in the corresponding real scene, so that a user can view that display effect without being on site.
  • The second electronic device can collect and display an image of the first real scene and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at the first position of the first real scene; that is, the second electronic device can reproduce, in the real scene corresponding to the target virtual digital scene, the display effect that the first electronic device simulated through the target virtual digital scene, so that on-site users and off-site users can simultaneously view how the first target virtual digital content appears in the real-world scene.
  • The first position of the first real scene is the position in the first real scene corresponding to the first position of the target virtual digital scene; alternatively, the distance between the first position of the first real scene and the position in the first real scene corresponding to the first position of the target virtual digital scene is less than or equal to a first threshold.
  • Keeping this distance within the first threshold ensures that the difference between the effect of the second electronic device displaying the first target virtual digital content at the first position of the first real scene and the effect of the first electronic device displaying it at the first position of the target virtual digital scene is small, so that on-site users and off-site users can simultaneously view essentially the same display effect of the first target virtual digital content in the real-world scene.
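  • To make the position-correspondence rule concrete, here is a minimal sketch of the threshold check, assuming a shared world coordinate frame between the two devices and using the 100-centimeter threshold mentioned later in the description; the function names are illustrative, not from the patent:

```python
import math

# Assumed first threshold (100 cm, as the description later suggests).
FIRST_THRESHOLD_M = 1.0


def distance(p, q):
    """Euclidean distance between two 3D points (meters)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))


def resolve_display_position(mapped_virtual_pos, candidate_real_pos):
    """Decide where the second device anchors the content.

    mapped_virtual_pos: the first position of the target virtual digital
        scene, mapped into the first real scene's coordinates.
    candidate_real_pos: the anchor position proposed by the second device.
    Returns the candidate if it is within the first threshold of the mapped
    position; otherwise snaps back to the mapped position.
    """
    if distance(mapped_virtual_pos, candidate_real_pos) <= FIRST_THRESHOLD_M:
        return candidate_real_pos
    return mapped_virtual_pos


# Example: a proposed anchor 0.3 m away is accepted, so both devices show
# a near-identical display effect.
print(resolve_display_position((2.0, 0.0, 5.0), (2.3, 0.0, 5.0)))
```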
  • The second electronic device may also: in response to a fourth operation triggered by the user, determine a second target virtual digital content from at least one candidate virtual digital content, and display the second target virtual digital content at a second position of the first real scene. The first electronic device may also: when the first real scene is the real scene corresponding to the target virtual digital scene, display the second target virtual digital content at the second position of the target virtual digital scene, where the second position of the first real scene is the position in the first real scene corresponding to the second position of the target virtual digital scene, or the distance between the second position of the first real scene and the position in the first real scene corresponding to the second position of the target virtual digital scene is less than or equal to a second threshold.
  • In this way, after the second electronic device displays the second target virtual digital content in the first real scene, the first electronic device can display the second target virtual digital content at the second position of the target virtual digital scene; that is, the first electronic device can simulate, through the virtual digital scene corresponding to the first real scene, the display effect of the second target virtual digital content in the first real-world scene, so that on-site users and off-site users can simultaneously view the display effect of the second target virtual digital content in the real-world scene.
  • The first electronic device may also, in response to a fifth operation triggered by the user, perform any one or more of the following operations: adjust the position of the first target virtual digital content in the target virtual digital scene; adjust the size of the first target virtual digital content; adjust the orientation of the first target virtual digital content; or delete the first target virtual digital content. The second electronic device may also, in response to a sixth operation triggered by the user, perform any one or more of the following operations: adjust the position of the first target virtual digital content in the first real scene; adjust the size of the first target virtual digital content; adjust the orientation of the first target virtual digital content; or delete the first target virtual digital content.
  • In this way, the first electronic device or the second electronic device can edit the displayed first target virtual digital content, for example by adjusting its position, size, or orientation in the target virtual digital scene or the first real scene, or by deleting it, so that off-site users and on-site users can not only view the display effect of the first target virtual digital content in the real-world scene but also interact with it.
  • The first electronic device may also, in response to the fifth operation triggered by the user, send first editing information to the second electronic device, where the first editing information describes the edits the first electronic device made to the first target virtual digital content it displays. The second electronic device may also, upon receiving the first editing information from the first electronic device, edit the displayed first target virtual digital content according to the first editing information and display the edited first target virtual digital content in the first real scene.
  • In this way, after the first electronic device edits the first target virtual digital content it displays, the second electronic device can receive the first editing information and edit the displayed first target virtual digital content accordingly, updating the content displayed in the real scene in real time.
  • The second electronic device may also, in response to the sixth operation triggered by the user, send second editing information to the first electronic device, where the second editing information describes the edits the second electronic device made to the first target virtual digital content it displays. The first electronic device may also, upon receiving the second editing information from the second electronic device, edit the displayed first target virtual digital content according to the second editing information and display the edited first target virtual digital content in the target virtual digital scene.
  • In this way, after the second electronic device edits the first target virtual digital content it displays in the real scene, the first electronic device can receive the second editing information and edit the displayed first target virtual digital content accordingly, updating the first target virtual digital content in the virtual digital scene corresponding to the real scene in real time.
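  • A minimal sketch of this bidirectional edit synchronization follows; the message fields, operation names, and JSON transport are illustrative assumptions, since the patent does not specify a wire format:

```python
import json
from dataclasses import dataclass, asdict


@dataclass
class EditingInfo:
    """Hypothetical editing-information record exchanged between devices."""
    content_id: str            # identifies the target virtual digital content
    operation: str             # "move" | "resize" | "rotate" | "delete"
    position: tuple = None     # new position, for "move"
    scale: float = None        # new size factor, for "resize"
    orientation: float = None  # new heading in degrees, for "rotate"


def apply_edit(displayed, info):
    """Apply received editing information to the locally displayed content."""
    if info.operation == "delete":
        displayed.pop(info.content_id, None)
    elif info.content_id in displayed:
        item = displayed[info.content_id]
        if info.operation == "move":
            item["position"] = info.position
        elif info.operation == "resize":
            item["scale"] = info.scale
        elif info.operation == "rotate":
            item["orientation"] = info.orientation
    return displayed


# One device serializes its edit and sends it; the other device applies it
# so both views of the first target virtual digital content stay in sync.
edit = EditingInfo("cartoon-1", "move", position=(1.0, 0.0, 2.0))
payload = json.dumps(asdict(edit))
received = EditingInfo(**json.loads(payload))
scene = {"cartoon-1": {"position": (0, 0, 0), "scale": 1.0, "orientation": 0.0}}
print(apply_edit(scene, received))
```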
  • The first electronic device may also, before responding to the first operation triggered by the user, display any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene; in that case, the first electronic device determines the target virtual digital scene in response to the first operation in which the user selects any one or more of the two-dimensional map or text.
  • In this way, the first electronic device can display the at least one candidate virtual digital scene in the form of a two-dimensional map or text on its display screen, so that the user can directly browse multiple candidate virtual digital scenes and select a target virtual digital scene among them.
  • the present application also provides a virtual digital content display method, applied to a first electronic device.
  • The method includes: in response to a first operation triggered by a user, the first electronic device determines a target virtual digital scene from at least one candidate virtual digital scene; in response to a second operation triggered by the user, the first electronic device determines a first target virtual digital content from at least one candidate virtual digital content and displays the first target virtual digital content at a first position of the target virtual digital scene; in response to a third operation triggered by the user, the first electronic device collects and displays an image of a first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, the first electronic device displays the first target virtual digital content at a first position of the first real scene, where the first target virtual digital content is the virtual digital content displayed by the first electronic device at the first position of the target virtual digital scene, and the first position of the first real scene is the position in the first real scene corresponding to the first position of the target virtual digital scene, or the distance between the first position of the first real scene and the position in the first real scene corresponding to the first position of the target virtual digital scene is less than or equal to a first threshold.
  • The first electronic device may also: in response to a fourth operation triggered by the user, determine a second target virtual digital content from at least one candidate virtual digital content and display the second target virtual digital content at a second position of the first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the second target virtual digital content at the second position of the target virtual digital scene, where the second target virtual digital content is the virtual digital content displayed by the first electronic device at the second position of the first real scene, and the second position of the first real scene is the position in the first real scene corresponding to the second position of the target virtual digital scene, or the distance between the second position of the first real scene and the position in the first real scene corresponding to the second position of the target virtual digital scene is less than or equal to a second threshold.
  • The first electronic device may also, in response to a fifth operation triggered by the user, edit the first target virtual digital content of the target virtual digital scene by performing any one or more of the following operations: adjust the position of the first target virtual digital content in the target virtual digital scene; adjust the size of the first target virtual digital content; adjust the orientation of the first target virtual digital content; or delete the first target virtual digital content.
  • In response to a sixth operation triggered by the user, the first electronic device may perform any one or more of the following operations: adjust the position of the first target virtual digital content in the first real scene; adjust the size of the first target virtual digital content; adjust the orientation of the first target virtual digital content; or delete the first target virtual digital content.
  • The first electronic device may also generate and save first editing information in response to the fifth operation triggered by the user, where the first editing information describes the edits made by the first electronic device to the first target virtual digital content of the target virtual digital scene; when the first editing information is generated and saved, the first electronic device edits the first target virtual digital content displayed in the first real scene according to the first editing information and displays the edited first target virtual digital content in the first real scene.
  • The first electronic device may also generate and save second editing information in response to a sixth operation triggered by the user, where the second editing information describes the edits made by the first electronic device to the first target virtual digital content of the first real scene; when the second editing information is generated and saved, the first electronic device edits the displayed first target virtual digital content according to the second editing information and displays the edited first target virtual digital content in the first real scene.
  • The first electronic device may also, before responding to the first operation triggered by the user, display any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene; determining the target virtual digital scene from the at least one candidate virtual digital scene then includes: determining the target virtual digital scene in response to the first operation in which the user selects any one or more of the two-dimensional map or text.
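  • To make the sequence of operations in this method concrete, here is a minimal, non-authoritative sketch of the flow on the first electronic device; all class, field, and function names are illustrative assumptions:

```python
class VirtualContentDisplayMethod:
    """Illustrative walk-through of the first, second, and third operations."""

    def __init__(self, candidate_scenes, candidate_contents):
        self.candidate_scenes = candidate_scenes      # at least one candidate scene
        self.candidate_contents = candidate_contents  # at least one candidate content
        self.target_scene = None
        self.placed = {}  # first position -> content placed in the target scene

    def first_operation(self, scene_name):
        """User picks the target virtual digital scene from the candidates."""
        self.target_scene = self.candidate_scenes[scene_name]

    def second_operation(self, content_name, first_position):
        """User picks the first target virtual digital content and places it
        at a first position of the target virtual digital scene."""
        self.placed[first_position] = self.candidate_contents[content_name]

    def third_operation(self, captured_scene_id):
        """Device collects an image of the first real scene; if that scene
        corresponds to the target scene, the placed content is displayed at
        the corresponding (or within-threshold) real-scene positions."""
        if self.target_scene and captured_scene_id == self.target_scene["id"]:
            return self.placed
        return {}
```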
  • The present application also provides an electronic device, which includes a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and include instructions that, when executed by the processor, cause the electronic device to perform the method described in the above second aspect or any possible design of the second aspect.
  • The present application also provides a computer-readable storage medium for storing a computer program which, when run on a computer, causes the computer to execute the method described in the above second aspect or any possible design of the second aspect.
  • The present application also provides a computer program product including a computer program which, when run on a computer, causes the computer to execute the method described in the above second aspect or any possible design of the second aspect.
  • Figure 1 is a schematic diagram of an AR device provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of an AR scene provided by an embodiment of the present application.
  • Figure 3 is a schematic structural diagram of a virtual digital content display system provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the hardware structure of a first electronic device provided by an embodiment of the present application.
  • Figure 5 is a schematic diagram of the software structure of a first electronic device provided by an embodiment of the present application.
  • Figure 6a is a schematic diagram of an application initialization interface provided by an embodiment of the present application.
  • Figure 6b is a schematic diagram of a target virtual digital scene determination interface provided by an embodiment of the present application.
  • Figure 6c is a schematic diagram of another target virtual digital scene determination interface provided by an embodiment of the present application.
  • Figure 6d is a schematic diagram of a target virtual digital scene generation interface provided by an embodiment of the present application.
  • Figure 6e is a schematic diagram of a realistic scene of a shooting target provided by an embodiment of the present application.
  • Figure 6f is a schematic diagram of another realistic scene of a shooting target provided by an embodiment of the present application.
  • Figure 6g is a schematic diagram of a target virtual digital content determination interface provided by an embodiment of the present application.
  • Figure 6h is a schematic diagram of a virtual digital scene provided by an embodiment of the present application.
  • Figure 6i is a schematic diagram of a virtual digital content display interface provided by an embodiment of the present application.
  • Figure 6j is a schematic diagram of another virtual digital content display interface provided by an embodiment of the present application.
  • Figure 6k is a schematic diagram of virtual digital content interaction provided by an embodiment of the present application.
  • Figure 6l is a schematic diagram of virtual digital content editing provided by an embodiment of the present application.
  • Figure 6m is a schematic diagram of yet another virtual digital content interaction provided by an embodiment of the present application.
  • Figure 6n is a schematic diagram of yet another virtual digital content editing provided by an embodiment of the present application.
  • Figure 7 is a schematic flow chart of a virtual digital content display method provided by an embodiment of the present application.
  • Figure 8 is a schematic flow chart of another virtual digital content display method provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of the hardware structure of another first electronic device provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of the hardware structure of a second electronic device provided by an embodiment of the present application.
  • In the embodiments of the present application, "at least one" means one or more, and "multiple" means two or more.
  • Words such as "first" and "second" are used only to distinguish the description and cannot be understood as expressing or implying relative importance or order.
  • the first object and the second object do not represent the importance of the two or the order of the two, but are only used to distinguish the description.
  • "and/or” only describes the association relationship, indicating that three relationships can exist, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone. these three situations.
  • the character "/" in this article generally indicates that the related objects are an "or” relationship.
  • "Connection" can be a detachable connection or a non-detachable connection, and can be a direct connection or an indirect connection through an intermediate medium.
  • Orientation terms are used to describe and understand the embodiments of the present application more clearly, and do not indicate or imply that the device or component referred to must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present application.
  • “Plural” means at least two.
  • Digital content can be divided into two categories: User Generated Content (UGC) and Professional Generated Content (PGC). Users generally display UGC, PGC and other digital content on Internet platforms or provide it to other users.
  • Augmented reality (AR) technology refers to superimposing computer-generated virtual objects onto real-world scenes to enhance the real world.
  • AR technology needs to collect real-world scenes and then add a virtual environment on top of the real world. The difference between virtual reality (VR) technology and AR technology is therefore that VR technology creates a completely virtual environment in which everything the user sees is virtual, whereas AR technology superimposes virtual objects on the real world, so the user sees both real-world objects and virtual objects.
  • users wear transparent glasses through which they can see the real environment around them, and virtual objects can also be displayed on the glasses. In this way, the user can see both real objects and virtual objects.
  • FIG. 1 is a schematic diagram of an AR device provided by an embodiment of the present application.
  • the AR device includes an AR wearable device, as well as a host (such as an AR host) or a server (such as an AR server).
  • the AR wearable device is connected to the AR host or AR server (wired connection or wireless connection).
  • the AR host or AR server can be a device with large computing power.
  • the AR host can be a mobile phone, tablet, laptop, etc.
  • the AR server can be a cloud server, etc.
  • the AR host or AR server is responsible for image generation, image rendering, etc., and then sends the rendered image to the AR wearable device for display.
  • the user can see the image by wearing the AR wearable device.
  • the AR wearable device may be a head-mounted display (HMD), such as glasses, helmets, etc.
  • HMD head-mounted display
  • the AR device in Figure 1 may not include an AR host or AR server.
  • AR wearable devices have local image generation and rendering capabilities, without the need to obtain images from the AR host or AR server for display.
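  • A minimal sketch of this split-rendering arrangement follows; the socket transport, length-prefixed frame format, and function names are illustrative assumptions rather than a protocol from the patent (error handling omitted for brevity):

```python
import socket
import struct

# Hypothetical split rendering: the AR host or AR server renders frames and
# streams them to the AR wearable device, which only displays them.


def send_frame(sock: socket.socket, frame: bytes) -> None:
    """AR host side: length-prefix each rendered frame so the wearable can
    recover frame boundaries from the byte stream."""
    sock.sendall(struct.pack(">I", len(frame)) + frame)


def recv_frame(sock: socket.socket) -> bytes:
    """AR wearable side: read exactly one length-prefixed frame to display."""
    header = b""
    while len(header) < 4:
        header += sock.recv(4 - len(header))
    (length,) = struct.unpack(">I", header)
    frame = b""
    while len(frame) < length:
        frame += sock.recv(length - len(frame))
    return frame
```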
  • FIG. 2 is a schematic diagram of an AR scene provided by an embodiment of the present application.
  • The ground and road in Figure 2 are real-world images captured in real time by the camera of the AR device, and the virtual cartoon character on the road is the virtual digital content added by the user to the current AR scene.
  • the user can simultaneously observe the ground, roads and virtual cartoon characters in the real world on the display screen of the AR device.
  • Users can also edit the virtual digital content in the AR scene displayed on the display screen of the AR device, for example, edit the size, position, and orientation of the virtual cartoon character in Figure 2.
  • FIG. 3 is a schematic structural diagram of a virtual digital content display system provided by an embodiment of the present application.
  • the virtual digital content display system may include a first electronic device and a second electronic device.
  • FIG. 3 illustrates a virtual digital content display system for ease of understanding only, but this should not constitute any limitation on the present application.
  • The virtual digital content display system may also include a greater number of first electronic devices and a greater number of second electronic devices. The second electronic devices that interact with different first electronic devices may be the same second electronic device or different second electronic devices, and the number of second electronic devices that interact with different first electronic devices may be the same or different. In the embodiments of the present application, the first electronic device and the second electronic device may also be the same electronic device; this is not specifically limited in the embodiments of the present application.
  • The first electronic device is configured to: in response to a first operation triggered by a user, determine a target virtual digital scene from at least one candidate virtual digital scene; and, in response to a second operation triggered by the user, determine a first target virtual digital content from at least one candidate virtual digital content, superimpose the first target virtual digital content on a first position of the target virtual digital scene, and display the first target virtual digital content at the first position of the target virtual digital scene.
  • the second electronic device is configured to collect and display an image of the first real scene in response to the third operation triggered by the user.
  • When the first real scene is the real scene corresponding to the target virtual digital scene, the second electronic device displays the first target virtual digital content at the first position of the first real scene, where the first position of the first real scene is the position in the first real scene corresponding to the first position of the target virtual digital scene, or the distance between the first position of the first real scene and the position in the first real scene corresponding to the first position of the target virtual digital scene is less than or equal to a first threshold.
  • For example, the first threshold may be 100 centimeters. Because the first electronic device can simulate, through the target virtual digital scene, the display effect of the first target virtual digital content in the real scene corresponding to the target virtual digital scene, the user can view the display effect of the first target virtual digital content in the real-world scene without being on site; and because the second electronic device can reproduce, in the real scene corresponding to the target virtual digital scene, the display effect simulated by the first electronic device through the target virtual digital scene, on-site users and off-site users can simultaneously view the display effect of the first target virtual digital content in the real-world scene.
  • the first electronic device may be a device with a wireless connection function.
  • the second electronic device may be the AR device shown in FIG. 1 .
  • the first electronic device may be a device equipped with a display screen, a camera, and a sensor.
  • For example, the first electronic device may be a portable device, such as a mobile phone, a tablet, a wearable device with wireless communication functions (for example, a watch, bracelet, helmet, or headset), a vehicle-mounted terminal device, an AR/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), etc.
  • The first electronic device may also be a smart home device (for example, a smart TV or a smart speaker), a smart car, a smart robot, workshop equipment, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or flight equipment (for example, intelligent robots, hot air balloons, drones, or airplanes), etc.
  • the first electronic device may also be a portable terminal device that also includes other functions such as a personal digital assistant and/or a music player function.
  • Exemplary portable terminal devices include, but are not limited to, portable terminal devices carrying various operating systems.
  • the above-mentioned portable terminal device may also be other portable terminal devices, such as a laptop computer (Laptop) with a touch-sensitive surface (such as a touch panel).
  • the above-mentioned first electronic device may not be a portable terminal device, but a desktop computer with a touch-sensitive surface (such as a touch panel).
  • FIG. 4 is a schematic diagram of the hardware structure of a first electronic device provided by an embodiment of the present application.
  • The first electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the processor 110 may include one or more processing units.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the first electronic device 100 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • The memory in the processor 110 is a cache memory. This memory may hold instructions or data that the processor 110 has recently used or cycled through. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the first electronic device 100, and can also be used to transmit data between the first electronic device 100 and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, etc.
  • the wireless communication function of the first electronic device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the first electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • The mobile communication module 150 may provide wireless communication solutions, including 2G/3G/4G/5G, applied to the first electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • at least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • The wireless communication module 160 can provide wireless communication solutions applied to the first electronic device 100, such as wireless local area networks (WLAN) (for example, wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the first electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the first electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the display screen 194 is used to display a display interface of an application, such as displaying a display page of an application installed on the first electronic device 100 .
  • Display 194 includes a display panel.
  • The display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the first electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the first electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the camera 193 can be used to capture a panoramic view. If the user holds the first electronic device 100 and rotates it horizontally 360 degrees, the camera 193 can capture a panoramic view corresponding to the location of the first electronic device 100 .
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the first electronic device 100 .
  • the internal memory 121 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, software code of at least one application program, etc.
  • the storage data area may store data generated during use of the first electronic device 100 (such as captured images, recorded videos, etc.).
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the first electronic device 100.
  • The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, saving files such as pictures and videos on the external memory card.
  • the first electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the sensor module 180 may include a pressure sensor 180A, an acceleration sensor 180B, a touch sensor 180C, etc.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • pressure sensor 180A may be disposed on display screen 194 .
  • The touch sensor 180C is also known as a "touch panel".
  • the touch sensor 180C can be disposed on the display screen 194.
  • the touch sensor 180C and the display screen 194 form a touch screen, which is also called a "touch screen”.
  • the touch sensor 180C is used to detect a touch operation on or near the touch sensor 180C.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 194 .
  • the touch sensor 180C may also be disposed on the surface of the first electronic device 100 in a position different from that of the display screen 194 .
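  • A minimal sketch of this touch pipeline follows; the event fields, the tap-versus-long-press heuristic, and all names are illustrative assumptions rather than the device's actual dispatch API:

```python
from dataclasses import dataclass


@dataclass
class TouchEvent:
    """Hypothetical raw touch sample reported by the touch sensor 180C."""
    x: float
    y: float
    pressure: float
    timestamp_ms: int


def classify_touch(samples):
    """Application-processor side: determine the touch event type from the
    raw samples (a crude duration-based heuristic for illustration)."""
    if not samples:
        return "none"
    duration = samples[-1].timestamp_ms - samples[0].timestamp_ms
    return "tap" if duration < 300 else "long_press"


# The display screen 194 can then provide visual output for the event.
print(classify_touch([TouchEvent(10, 20, 0.5, 0), TouchEvent(10, 20, 0.4, 120)]))
```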
  • the buttons 190 include a power button, a volume button, etc.
  • Key 190 may be a mechanical key. It can also be a touch button.
  • the first electronic device 100 may receive key input and generate key signal input related to user settings and function control of the first electronic device 100 .
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications (such as taking pictures, audio playback, etc.) can correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 195 is used to connect a SIM card. The SIM card can be connected to and separated from the first electronic device 100 by inserting it into the SIM card interface 195 or pulling it out from the SIM card interface 195 .
  • the components shown in FIG. 4 do not constitute a specific limitation on the first electronic device 100.
  • The first electronic device 100 may also include more or fewer components than shown in the figure, may combine certain components, may split certain components, or may arrange the components differently.
  • the combination/connection relationship between the components in Figure 4 can also be adjusted and modified.
  • FIG. 5 is a schematic diagram of the software structure of a first electronic device provided by an embodiment of the present application.
  • the software structure of the first electronic device may be a layered architecture.
  • the software may be divided into several layers, and each layer has a clear role and division of labor.
  • the layers communicate through software interfaces.
  • the operating system is divided into four layers, from top to bottom: application layer, application framework layer (framework, FWK), runtime (runtime) and system library, and kernel layer.
  • The application layer can include a series of application packages. As shown in Figure 5, the application layer can include camera, settings, skin module, user interface (UI), third-party applications, etc. The third-party applications can include gallery, calendar, calls, maps, navigation, WLAN, Bluetooth, music, video, short messages, etc.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer can include some predefined functions. As shown in Figure 5, the application framework layer can include window manager, content provider, view system, phone manager, resource manager, and notification manager.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make this data accessible to applications. Said data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • Telephone managers are used to provide communication functions of electronic devices. For example, call status management (including connected, hung up, etc.).
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • The notification manager can also present notifications that appear in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications for applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • the runtime includes core libraries and virtual machines.
  • the runtime is responsible for the scheduling and management of the operating system.
  • the core library contains two parts: one part is the functional functions that need to be called by the Java language, and the other part is the core library of the operating system.
  • the application layer and application framework layer run in virtual machines.
  • The virtual machine converts the Java files of the application layer and the application framework layer into binary files and executes them.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • System libraries can include multiple functional modules. For example: surface manager (surface manager), media library (media libraries), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the hardware layer can include various types of sensors, such as acceleration sensors, gravity sensors, touch sensors, etc.
  • embodiments of the present application also provide a virtual digital content display method.
  • the solutions provided by the embodiments of the present application will be described below with reference to specific examples.
  • the solution provided by the embodiment of the present application may include virtual digital scene display and virtual digital content display. After displaying the virtual digital content, it may further include virtual digital content interaction, virtual digital content roaming, etc. Detailed explanation below.
  • the user can log in to the AR application or VR application.
  • For example, the user can enter login information on the application login interface, or trigger "one-click login with mobile phone number", to log in to the AR application or VR application.
  • the first electronic device may display an application initialization interface as shown in Figure 6a on the display screen.
  • Figure 6a shows an application initialization interface, which can display a "virtual digital scene library” icon 601 and a thumbnail of at least one candidate virtual digital scene, such as a thumbnail corresponding to the virtual digital scene "Underwater World” .
  • the user can view candidate virtual digital scenes by selecting the "Virtual Digital Scene Library” icon 601.
  • the first electronic device detects the user's operation of selecting the "Virtual Digital Scene Library” icon 601, in response to the operation, the first electronic device Any one or more of the two-dimensional map or text corresponding to the at least one candidate virtual digital scene may be displayed on the display screen of the first electronic device.
  • The user can determine the target virtual digital scene from the at least one candidate virtual digital scene displayed on the display screen of the first electronic device. When the first electronic device detects the user's operation of selecting any one or more of the two-dimensional map or text, in response to this operation, the first electronic device may display the target virtual digital scene determined by the user on the display screen.
  • For example, when the first electronic device detects the user's operation of selecting the "virtual digital scene library" icon 601, in response to the operation, the first electronic device may display on the display screen the target virtual digital scene determination interface shown in Figure 6b, which presents a two-dimensional map with icons of at least one candidate virtual digital scene, such as the icons of the candidate virtual digital scenes "Scene 1", "Scene 2", "Scene 3", "Scene 4", "Scene 5", "Scene 6", "Scene 7", "Scene 8", and "Scene 9".
  • the user can view the corresponding target virtual digital scene by selecting the icon of the candidate virtual digital scene in the two-dimensional map.
  • When the first electronic device detects the user's operation of selecting the icon of a candidate virtual digital scene displayed on its display screen, in response to the operation, the first electronic device may display the corresponding target virtual digital scene.
  • For another example, when the first electronic device detects the user's operation of selecting the "virtual digital scene library" icon 601, in response to the operation, the first electronic device may also display on the display screen the target virtual digital scene determination interface shown in Figure 6c, which displays icons for different regions, such as "Beijing", "Shanghai", "Hebei Province", "Shanxi Province", "Zhejiang Province", "Fujian Province", and "Jiangxi Province".
  • the user can view the icons of the areas that are not currently displayed in the interface by sliding the scroll bar on the right side of the area icon up or down.
  • the user can also view the corresponding candidate virtual digital scene by selecting the icon of the area.
  • When the first electronic device detects the user's operation of selecting the icon of a region displayed on its display screen, in response to the operation, the first electronic device can display the candidate virtual digital scenes corresponding to that region. For example, when the first electronic device detects the user's operation of selecting the "Beijing" icon displayed on its display screen, in response to the operation, the first electronic device may display the icons of the candidate virtual digital scenes corresponding to "Beijing", such as "Capital Museum", "Beijing Tongzhou Grand Canal (Wharf)", "Chang Ying Tian Street", "Lize Tian Street", "Tsinghua University History Museum", "Beijing Research Institute", "Beijing Fang", and other candidate virtual digital scene icons.
  • the user can slide up and down the scroll bar on the right side of the icon of the candidate virtual digital scene of "Beijing City” to view the icon of the candidate virtual digital scene corresponding to "Beijing City” that is not displayed in the current interface.
  • the user can also view the corresponding target virtual digital scene by selecting the icon of the candidate virtual digital scene corresponding to "Beijing City”.
  • the first electronic device detects that the user selects the icon corresponding to "Beijing City” displayed on the display screen of the first electronic device
  • the first electronic device may display the corresponding target virtual digital scene.
It should be noted that the above-mentioned candidate virtual digital scenes may be virtual digital scenes stored by the first electronic device, such as officially preset virtual digital scenes, virtual digital scenes obtained by the first electronic device from the cloud or a server, or virtual digital scenes created and uploaded by users; this is not specifically limited in this application.
In other embodiments, the user can also generate a target virtual digital scene through the first electronic device. For example, Figure 6a shows an application initialization interface in which a "Start Creation" icon 602 may be displayed, and the user may select the "Start Creation" icon 602 to generate a target virtual digital scene. When the first electronic device detects the user's operation of selecting the "Start Creation" icon 602, in response to the operation, the first electronic device may display an interface as shown in Figure 6d.
As shown in Figure 6d, the graphical user interface (GUI) of the first electronic device may include an operation button 603. The user can trigger the shooting of the target real scene by selecting the operation button 603: when the first electronic device detects this operation, it may display a shooting interface, and the user can operate the first electronic device to shoot the target real scene and obtain the corresponding target virtual digital scene.
Figure 6e is a schematic diagram of shooting a target real scene provided by an embodiment of this application. The interface may include the shooting interface of the camera of the first electronic device, an operation button that prompts the user to continue shooting, such as the "Continue Scanning" operation button 604, and an operation button for triggering the stop of shooting, such as the "Stop Acquisition" operation button 605. The user can shoot by moving the first electronic device. When the user determines to continue shooting, the user can select the operation button that prompts continued shooting, such as the "Continue Scanning" operation button 604, and the first electronic device continues shooting; when the user determines to stop shooting, the user can select the operation button that triggers the stop of shooting, such as the "Stop Acquisition" operation button 605, and the first electronic device then stops shooting and obtains the corresponding target virtual digital scene based on the captured content.
When the user operates the first electronic device to capture the target real scene, the user can hold the first electronic device and rotate it to capture the scene. The first electronic device can also display information prompting the user to continue shooting; when the user instructs the first electronic device to continue shooting by selecting the operation button that prompts continued shooting, such as the "Continue Scanning" operation button 604, the user can rotate the first electronic device so that it continues to capture the target real scene. When the user instructs the first electronic device to stop shooting, the first electronic device can stop shooting the target real scene and, based on the captured data of the target real scene, generate the corresponding target virtual digital scene.
When the user operates the first electronic device to capture the target real scene as described above, based on the captured data of the target real scene, the first electronic device can obtain N panoramas of the corresponding target virtual digital scene and the pose information of each panorama. The pose information of a panorama can be the position and orientation in the real world of the shooting device (such as the first electronic device or an official device) at the time the panorama was shot: the position represented by the pose information is determined by global positioning system (GPS) positioning of the shooting device, and the orientation represented by the pose information is determined by measurements of the inertial measurement unit (IMU) of the shooting device. The first electronic device can also obtain a white model (i.e., a simplified model) of each building in the target virtual digital scene and the pose information of each building, where the pose information of a building can be the position and orientation of the building in the real world.
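For illustration only, the panorama and building records described above might be organized as in the following minimal sketch; the field names and coordinate conventions are assumptions of this sketch, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """Position from GPS (assumed WGS-84 here) and orientation from the IMU."""
    lat: float    # degrees, from GPS
    lon: float    # degrees, from GPS
    alt: float    # meters, from GPS
    yaw: float    # degrees, from IMU
    pitch: float  # degrees, from IMU
    roll: float   # degrees, from IMU

@dataclass
class Panorama:
    image_path: str  # stitched panorama image
    pose: Pose       # pose of the shooting device when the panorama was captured

@dataclass
class Building:
    white_model_path: str  # simplified ("white") mesh of the building
    pose: Pose             # position and orientation of the building in the real world
```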
The interface shown in Figure 6e can display any panorama slice of a panorama of the target virtual digital scene captured by the camera of the first electronic device, and the user can move the first electronic device to shoot a wider space. When the user selects the operation button shown in Figure 6e for triggering the stop of shooting, such as the "Stop Acquisition" operation button 605, the first electronic device can stop shooting and obtain the N panoramas of the target virtual digital scene based on the captured content.
For example, when the user rotates the first electronic device to shoot the target real scene, the first electronic device can acquire multi-frame panorama slices of a panorama of the target virtual digital scene during the rotation, and can display any frame of any panorama slice captured by the camera on the display screen in real time. When the first electronic device displays information prompting the user to continue shooting, the user can operate the first electronic device to continue shooting any panorama of the target virtual digital scene; the first electronic device continues to acquire panorama slices and splices the multiple frames of slices of each panorama to obtain that panorama, until the user instructs the first electronic device to stop shooting.
When acquiring each panorama of the target virtual digital scene, the first electronic device can determine its position in the real world at the time the panorama was shot through GPS positioning, and determine its orientation in the real world at that time through IMU measurements, thereby obtaining the pose information of the panorama.
When the first electronic device captures the target real scene, the first electronic device can obtain multiple environmental images reflecting the target virtual digital scene, determine the boundary vector data of each building in the target virtual digital scene from these images, and determine the white model data of each building based on the boundary vector data, thereby obtaining the white model of the building, the pose information of the white model, and the pose information of the building. The pose information of the white model of a building can be the position and orientation of the white model in the corresponding three-dimensional space. It should be noted that the way the position and orientation of the white model are acquired in the corresponding three-dimensional space is consistent with the above-described acquisition of the position and orientation of a panorama in the real world, and is not described again here.
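The patent does not specify how the white model data is derived from the boundary vector data; one common simplification, shown here purely as an assumed illustration, is to extrude the building footprint polygon vertically to an estimated height:

```python
def extrude_footprint(footprint, height):
    """Extrude a 2D building footprint into a simple white-model mesh.

    footprint: list of (x, y) vertices of the building boundary, in order.
    height:    estimated building height in meters.
    Returns (vertices, faces): wall quads plus a roof polygon,
    each face given as a list of vertex indices.
    """
    n = len(footprint)
    # Bottom ring of vertices, then top ring.
    vertices = [(x, y, 0.0) for x, y in footprint] + \
               [(x, y, height) for x, y in footprint]
    faces = []
    # One quad per wall segment, wrapping around the footprint.
    for i in range(n):
        j = (i + 1) % n
        faces.append([i, j, n + j, n + i])
    # Roof polygon over the top ring.
    faces.append(list(range(n, 2 * n)))
    return vertices, faces

# Example: a 10 m x 6 m rectangular footprint extruded to 15 m.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 15.0)
```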
In some embodiments of this application, when the first electronic device detects the user's operation of determining the target virtual digital scene from at least one candidate virtual digital scene, in response to the operation, the first electronic device can obtain the N panoramas of the target virtual digital scene, the pose information of each panorama, the white model of each building in the target virtual digital scene, and the pose information of each building. For example, the first electronic device can obtain these from a panorama gallery and a white model library stored on the first electronic device, where the panoramas in the panorama gallery and the white models of the buildings in the white model library are obtained by using official equipment to shoot complex scenes. The first electronic device can also obtain, by itself, the N panoramas of the target virtual digital scene, the pose information of each panorama, the white model of each building, and the pose information of each building. It should be noted that the target virtual digital scene may include one or more virtual digital scenes, which is not specifically limited in the embodiments of this application.
The following description covers virtual digital scene display, virtual digital content display, and virtual digital content interaction, for which the target virtual digital scene is taken to include only one virtual digital scene; the target virtual digital scene in virtual digital content roaming may include multiple virtual digital scenes.
After the above-mentioned first electronic device obtains the N panoramas of the target virtual digital scene, the pose information of each panorama, the white model of each building in the target virtual digital scene, and the pose information of each building, the first electronic device can display the target virtual digital scene and any one or more of the images or text corresponding to at least one candidate virtual digital content on the display screen. The user can determine the first target virtual digital content from the at least one candidate virtual digital content and move it to the first position of the target virtual digital scene. When the first electronic device detects the user's operation of selecting any one or more of the images or text corresponding to the first target virtual digital content, in response to the operation, the first electronic device may superimpose the first target virtual digital content determined by the user on the first position of the target virtual digital scene, so that the first target virtual digital content is displayed at the first position of the target virtual digital scene. The user can also edit the first target virtual digital content, for example, move it to another position of the target virtual digital scene, perform any one or more of enlarging, reducing, flipping, or rotating it, adjust its orientation, and so on. It should be noted that candidate virtual digital content may be virtual digital content stored in the first electronic device, such as officially preset virtual digital content, or virtual digital content created and uploaded by a user, which is not specifically limited in this application.
For example, the first electronic device can display a virtual digital scene and content display interface as shown in (1) of Figure 6g on the display screen, in which the target virtual digital scene and a "Virtual Digital Content Library" icon 606 are displayed. The user can view the candidate virtual digital content by selecting the "Virtual Digital Content Library" icon 606: when the first electronic device detects the user's operation of selecting the icon 606, in response to the operation, the first electronic device may display any one or more of the images or text corresponding to at least one candidate virtual digital content on its display screen, such as icons for "Virtual Digital Content 1", "Virtual Digital Content 2", "Virtual Digital Content 3", and "Virtual Digital Content 4". The user may determine the first target virtual digital content from the displayed candidates and move it to the first position of the target virtual digital scene: when the first electronic device detects the user's operation of selecting and moving the first target virtual digital content, in response to the operation, the first electronic device can display the first target virtual digital content determined by the user at the first position of the target virtual digital scene. For example, when the first electronic device detects that the user selects the icon of "Virtual Digital Content 1" and moves it to the first position of the target virtual digital scene, the first electronic device can superimpose "Virtual Digital Content 1" on the first position of the target virtual digital scene and display the result on its display screen.
When the first electronic device displays the first target virtual digital content determined by the user at the first position of the target virtual digital scene, the first electronic device may determine the pose information of the first target virtual digital content. The first position of the target virtual digital scene may be represented by the coordinate information of the first target virtual digital content in the three-dimensional coordinate system of the target virtual digital scene, and the pose information of the first target virtual digital content may be the position and orientation of the first target virtual digital content in the real world. The three-dimensional coordinate system of the target virtual digital scene and the three-dimensional coordinate system of the real world may have a mapping relationship; based on this mapping relationship, the first electronic device can determine the pose information of the first target virtual digital content from the coordinate information of the first position of the target virtual digital scene. The first electronic device can also use the three-dimensional coordinate system corresponding to the pose information of each panorama of the target virtual digital scene as a reference coordinate system, and adjust the pose information of each building in the target virtual digital scene and the pose information of the first target virtual digital content to obtain the pose information of each building in the reference coordinate system and the pose information of the first target virtual digital content in the reference coordinate system. Then, according to the pose information of each panorama, of each building, and of the first target virtual digital content in the reference coordinate system, the first electronic device can determine the relative poses of the first panorama, of each building in the first panorama, and of the first target virtual digital content, and can accordingly render the white model of each building in the first panorama and superimpose the first target virtual digital content on the first panorama for display on the display screen of the first electronic device.
For example, Figure 6h is a schematic diagram of a virtual digital scene provided by an embodiment of this application, including a panorama 1 of the first virtual digital scene, where panorama 1 includes building 1 and building 2. The pose information of panorama 1 may be position A1 and orientation B1 in the real world of the shooting device of panorama 1 when panorama 1 was captured; the pose information of building 1 may be position A2 and orientation B2 of building 1 in the real world; and the pose information of building 2 may be position A3 and orientation B3 of building 2 in the real world. The first electronic device may determine the pose information of the first target virtual digital content, namely position A4 and orientation B4 in the real world. Since the real-world three-dimensional coordinate systems of the shooting device (when shooting panorama 1), building 1, building 2, and the first target virtual digital content may differ, the first electronic device may use the three-dimensional coordinate system of the shooting device as the reference coordinate system and adjust the pose information of panorama 1, building 1, and building 2 as well as the pose information of the first target virtual digital content, to obtain: the pose information of panorama 1 in the reference coordinate system (for example, indicating position A11 and orientation B11), the pose information of building 1 (for example, indicating position A12 and orientation B12), the pose information of building 2 (for example, indicating position A13 and orientation B13), and the pose information of the first target virtual digital content (for example, indicating position A14 and orientation B14). Based on A11, A12, A13, and A14 and on B11, B12, B13, and B14, the first electronic device can then determine that the relative position of the shooting device, building 1, building 2, and the first target virtual digital content is C1 and the relative orientation is D1. According to C1 and D1, the first electronic device can render and superimpose the white model of building 1, the white model of building 2, and panorama 1 to obtain a second virtual digital scene, and can render the first target virtual digital content superimposed at the first position of the second virtual digital scene.
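For illustration, the adjustment into a common reference coordinate system and the derivation of relative poses such as C1 and D1 can be sketched with 4x4 homogeneous transforms; the matrix representation, the yaw-only rotation, and the numbers below are assumptions of this sketch, not values from the patent:

```python
import numpy as np

def pose_to_matrix(position, yaw_deg):
    """Build a 4x4 homogeneous transform from a position and a yaw angle.

    A full implementation would use yaw, pitch, and roll; yaw alone keeps
    the sketch short.
    """
    t = np.radians(yaw_deg)
    m = np.eye(4)
    m[:3, :3] = [[np.cos(t), -np.sin(t), 0],
                 [np.sin(t),  np.cos(t), 0],
                 [0,          0,         1]]
    m[:3, 3] = position
    return m

# World-frame poses: the panorama shooting device, a building, the content.
T_world_pano = pose_to_matrix([10.0, 5.0, 0.0], 30.0)      # cf. A1 / B1
T_world_bldg = pose_to_matrix([12.0, 7.0, 0.0], 90.0)      # cf. A2 / B2
T_world_content = pose_to_matrix([11.0, 6.0, 0.0], 45.0)   # cf. A4 / B4

# Use the shooting device's frame as the reference coordinate system:
# inverse(T_world_pano) @ T_world_x expresses each pose in that frame.
T_ref_bldg = np.linalg.inv(T_world_pano) @ T_world_bldg        # cf. (A12, B12)
T_ref_content = np.linalg.inv(T_world_pano) @ T_world_content  # cf. (A14, B14)

# Relative pose between building and content (cf. C1 / D1): express one
# pose in the frame of the other.
T_bldg_content = np.linalg.inv(T_ref_bldg) @ T_ref_content
relative_position = T_bldg_content[:3, 3]
```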
For example, Figure 6i is a schematic diagram of a virtual digital content display interface provided by an embodiment of this application. The interface shown in Figure 6i includes the white model of building 1, the white model of building 2, the first target virtual digital content, and panorama 1, where the white model of building 1 and the white model of building 2 essentially occlude building 1 and building 2 in panorama 1.
It should be noted that there is an error between the position and orientation in the real world that the algorithm determines for the panorama shooting device at the time of shooting and the actual position and orientation of the shooting device at that time; this error is generally at the centimeter level. Therefore, to prevent the error from affecting the rendered superimposition of the first target virtual digital content and the first panorama, the first electronic device can use the pose information of the first panorama and the pose information of each building in the second virtual digital scene to determine the pose difference of each building in the second virtual digital scene, where the pose difference of a building represents the difference between the position and orientation of the building in the real world and the position and orientation of the white model of the building in the corresponding three-dimensional space. The first electronic device may then determine, based on the pose difference of each building in the second virtual digital scene, whether to display the second virtual digital scene on its display screen: if the pose difference of each building is within a preset range, the first electronic device determines that the second virtual digital scene meets the externally provided accuracy requirements and can display the second virtual digital scene on the display screen; if the pose difference of any building is not within the preset range, the first electronic device determines that the second virtual digital scene does not meet the externally provided accuracy requirements and cannot display it, in which case the second virtual digital scene needs to be eliminated or reacquired.
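A minimal sketch of such an accuracy gate, with illustrative threshold values that are assumptions rather than values taken from the patent, might look as follows:

```python
def scene_meets_accuracy(buildings, max_pos_diff_m=0.10, max_angle_diff_deg=3.0):
    """Accept the second virtual digital scene only if every building's
    pose difference (real-world pose vs. white-model pose) is in range.

    buildings: iterable of dicts with 'pos_diff_m' and 'angle_diff_deg'.
    """
    for b in buildings:
        if b["pos_diff_m"] > max_pos_diff_m or b["angle_diff_deg"] > max_angle_diff_deg:
            return False  # the scene must be eliminated or reacquired
    return True

# Example: one building drifts 4 cm / 1 deg, another 20 cm / 5 deg.
ok = scene_meets_accuracy([
    {"pos_diff_m": 0.04, "angle_diff_deg": 1.0},
    {"pos_diff_m": 0.20, "angle_diff_deg": 5.0},
])  # -> False: the second building exceeds the preset range
```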
In some embodiments, the first electronic device may first display on the display screen an interface as shown in (1) of Figure 6j, which can display the second virtual digital scene and the first target virtual digital content shown in Figure 6i, and can also display an operation button 607 that prompts the user to close the white models of the buildings. The user can close the white model of building 1 and the white model of building 2 in the second virtual digital scene shown in Figure 6i by selecting the operation button 607. When the first electronic device detects the user's operation of selecting the operation button 607, in response to the operation, the first electronic device may display a virtual digital content display interface as shown in (2) of Figure 6j on the display screen, in which the first virtual digital scene and the first target virtual digital content are displayed; an operation button 608 prompting the user to open the white models of the buildings may also be displayed, and the user can open the white model of building 1 and the white model of building 2 by selecting the operation button 608.
By superimposing the first target virtual digital content on the first position of the target virtual digital scene as described above, the first electronic device can display the first target virtual digital content at the first position of the target virtual digital scene, and can display the target virtual digital scene superimposed with the first target virtual digital content on its display screen. This simulates the display effect of the first target virtual digital content in the real scene corresponding to the target virtual digital scene, so that the user can view the display effect of the first target virtual digital content in the real-world scene without arriving on site.
The user can also operate the second electronic device to interact with the virtual digital content. Taking as an example the case where the first electronic device displays the first target virtual digital content at the first position of the target virtual digital scene: the user can enter the AR scene through the second electronic device, and when the second electronic device detects the user's operation of selecting to enter the AR scene, in response to the operation, the second electronic device can collect and display the image of the first real scene. The user can operate the first electronic device to place the first target virtual digital content at the first position of the target virtual digital scene; when the first real scene is the real scene corresponding to the target virtual digital scene, the second electronic device can display the placed first target virtual digital content at the first position of the first real scene, where the first position of the first real scene is the position in the first real scene corresponding to the first position of the target virtual digital scene. Specifically, the first electronic device may send first request information to the second electronic device, where the first request information is used to request the second electronic device to display the placed first target virtual digital content at the first position of the first real scene when the first real scene displayed by the second electronic device is the real scene corresponding to the target virtual digital scene. When the second electronic device receives the first request information and determines that the currently displayed first real scene is the real scene corresponding to the target virtual digital scene currently displayed by the first electronic device, the second electronic device displays the first target virtual digital content at the first position of the first real scene.
It should be noted that the first position of the first real scene may be the position in the first real scene corresponding to the first position of the target virtual digital scene, or the distance between the first position of the first real scene and that corresponding position may be less than or equal to a first threshold; for example, the first threshold may be 100 centimeters. For example, the second electronic device can obtain the three-dimensional coordinate 1 of the first position of the first real scene, the first electronic device can obtain the three-dimensional coordinate 2 of the first position of the target virtual digital scene, and the distance between the two positions can then be obtained from coordinate 1 and coordinate 2. Likewise, there may be an error between the orientation of the first target virtual digital content placed at the first position of the first real scene and the orientation of the first target virtual digital content placed at the first position of the target virtual digital scene, and the error may be less than or equal to a first angle threshold, for example, 3 degrees. For example, the second electronic device can obtain rotation value 1 of the first target virtual digital content at the first position of the first real scene, the first electronic device can obtain rotation value 2 of the first target virtual digital content at the first position of the target virtual digital scene, and the angle difference between the two rotation values can be obtained from rotation value 1 and rotation value 2. The first electronic device can obtain three-dimensional coordinate 2 and rotation value 2 of the virtual digital content from the server, and the second electronic device can obtain three-dimensional coordinate 1 and rotation value 1 of the virtual digital content from the server; this application does not specifically limit this.
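As a sketch of the consistency checks implied here, using the 100 centimeter and 3 degree examples above as thresholds (the function names and inputs are assumptions of this sketch):

```python
import math

def positions_consistent(coord1, coord2, threshold_m=1.0):
    """Euclidean distance between the real-scene position (coord1) and the
    virtual-scene position mapped into the real scene (coord2)."""
    return math.dist(coord1, coord2) <= threshold_m

def orientations_consistent(rot1_deg, rot2_deg, angle_threshold_deg=3.0):
    """Angular error between the two placements, wrapped to [-180, 180]."""
    diff = (rot1_deg - rot2_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= angle_threshold_deg

# Example: about 37 cm apart and 2 degrees apart, both within threshold.
print(positions_consistent((1.0, 2.0, 0.0), (1.3, 2.2, 0.1)))  # True
print(orientations_consistent(358.0, 0.0))                      # True (2 deg)
```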
For example, when the second electronic device detects the user's operation of selecting to enter the AR scene, in response to the operation, the second electronic device can display the real scene display interface shown in (1) of Figure 6k on the display screen. A first real scene is displayed in the interface, in which the curtains, sofa, walls, door, and so on are part of the picture of the first real scene captured in real time by the second electronic device. The first electronic device may display a target virtual digital scene display interface on its display screen, which may include the buildings in the target virtual digital scene, such as the sofa, walls, and door. When the first electronic device detects the user's operation of placing the first target virtual digital content at the first position of the target virtual digital scene, the first electronic device may display the first target virtual digital content at the first position of the target virtual digital scene, as shown in (2) of Figure 6k, and may also send the first request information to the second electronic device. When the second electronic device receives the first request information from the first electronic device and determines that the currently displayed first real scene is the real scene corresponding to the target virtual digital scene currently displayed by the first electronic device, for example in the real scene display interface shown in (3) of Figure 6k, the second electronic device can display, in the interface of its display screen, the first target virtual digital content (i.e., the whale) at the first position of the first real scene. The first real scene shown in (3) of Figure 6k is the real scene corresponding to the target virtual digital scene shown in (2) of Figure 6k, and the first position of the first real scene shown in (3) of Figure 6k is the position in the first real scene corresponding to the first position of the target virtual digital scene shown in (2) of Figure 6k. The distance between these two positions may be less than or equal to the first threshold, for example, 100 centimeters, and there may be an error, less than or equal to the first angle threshold, between the orientation of the first target virtual digital content displayed at the first position of the first real scene shown in (3) of Figure 6k and the orientation of the first target virtual digital content shown in (2) of Figure 6k.
The first electronic device can edit the target virtual digital content displayed in the target virtual digital scene and synchronously update the edited target virtual digital content in the first real scene displayed on the display screen of the second electronic device. Specifically, the user may edit the first target virtual digital content; when the first electronic device detects the user's operation of editing the first target virtual digital content, in response to the operation, the first electronic device may send first editing information to the second electronic device, where the first editing information includes the information with which the first electronic device edits the first target virtual digital content displayed on its display screen. When the second electronic device receives the first editing information from the first electronic device, the second electronic device can edit the first target virtual digital content displayed on its display screen according to the first editing information, so that the first target virtual digital content displayed on the display screen of the first electronic device and the first target virtual digital content displayed on the display screen of the second electronic device are updated synchronously.
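The patent does not define a message format for the first editing information; the following sketch assumes a simple JSON encoding purely for illustration of the synchronization idea:

```python
import json

def make_edit_message(content_id, operation, new_position=None, new_rotation_deg=None):
    """Build the 'first editing information' as a JSON message (assumed format)."""
    return json.dumps({
        "content_id": content_id,        # which virtual digital content was edited
        "operation": operation,          # e.g. "move", "rotate", "scale", "delete"
        "new_position": new_position,    # target-scene coordinates, if any
        "new_rotation_deg": new_rotation_deg,
    })

def apply_edit_message(scene_contents, message):
    """On the second electronic device: replay the edit on the local copy."""
    edit = json.loads(message)
    if edit["operation"] == "delete":
        scene_contents.pop(edit["content_id"], None)
        return
    content = scene_contents[edit["content_id"]]
    if edit["new_position"] is not None:
        content["position"] = edit["new_position"]
    if edit["new_rotation_deg"] is not None:
        content["rotation_deg"] = edit["new_rotation_deg"]

# Example: the first device moves the whale; the second device replays it.
contents = {"whale": {"position": [0, 0, 0], "rotation_deg": 0}}
msg = make_edit_message("whale", "move", new_position=[2.0, 1.0, 0.0])
apply_edit_message(contents, msg)
```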
For example, the first electronic device can display the target virtual digital scene display interface shown in (1) of Figure 6l on the display screen, where the sofa, walls, door, and so on in the interface are buildings in the target virtual digital scene and the whale is the first target virtual digital content displayed at the first position of the target virtual digital scene. The user can move the whale from the first position of the target virtual digital scene to a second position of the target virtual digital scene. When the first electronic device detects the user's operation of moving the whale, in response to the operation, the first electronic device can display the target virtual digital scene display interface shown in (2) of Figure 6l on the display screen, and may send the first editing information to the second electronic device, where the first editing information includes the information that the first electronic device moves the whale from the first position of the target virtual digital scene to the second position of the target virtual digital scene. When the second electronic device receives the first editing information from the first electronic device, the second electronic device can, according to the first editing information, move the whale from the first position of the first real scene to the second position of the first real scene, and can display the real scene display interface shown in (3) of Figure 6l on the display screen, where the second position of the first real scene is the position in the first real scene corresponding to the second position of the target virtual digital scene. It should be noted that the distance between the second position of the first real scene shown in (3) of Figure 6l and the position in the first real scene corresponding to the second position of the target virtual digital scene shown in (2) of Figure 6l may be less than or equal to a second threshold, for example, 100 centimeters; likewise, there may be an error between the orientation of the first target virtual digital content placed at the second position of the first real scene shown in (3) of Figure 6l and the orientation of the first target virtual digital content placed at the second position of the target virtual digital scene shown in (2) of Figure 6l, and the error may be less than or equal to a second angle threshold, such as 3 degrees. For details, refer to the relevant descriptions of the other embodiments above, which are not repeated here.
In other embodiments, when the first electronic device detects the user's operation of editing the first target virtual digital content, in response to the operation, the first electronic device may also generate and save the first editing information, where the first editing information includes the information with which the first electronic device edits the first target virtual digital content displayed on its display screen. In this way, when the first electronic device itself displays the first real scene, it can edit the first target virtual digital content displayed in the first real scene according to the saved first editing information and display the edited first target virtual digital content.
In still other embodiments, after the above-mentioned second electronic device collects and displays the image of the first real scene, it can also display any one or more of the images or text corresponding to at least one candidate virtual digital content on its display screen. The user may determine second target virtual digital content from the at least one candidate virtual digital content displayed on the display screen of the second electronic device and move the second target virtual digital content to a third position of the first real scene. When the second electronic device detects the user's operation of selecting and moving any one or more of the images or text, in response to the operation, the second electronic device can display the second target virtual digital content determined by the user at the third position of the first real scene. It should be noted that candidate virtual digital content may be virtual digital content stored in the second electronic device, such as officially preset virtual digital content, or virtual digital content created and uploaded by a user, which is not specifically limited in this application.
For example, the second electronic device may display a real scene display interface as shown in (1) of Figure 6m on the display screen, in which the first real scene and a "Virtual Digital Content Library" icon 609 are displayed. The user can view the candidate virtual digital content by selecting the "Virtual Digital Content Library" icon 609: when the second electronic device detects the user's operation of selecting the icon 609, in response to the operation, the second electronic device may display any one or more of the images or text corresponding to at least one candidate virtual digital content on its display screen, such as icons for "Virtual Digital Content 1", "Virtual Digital Content 2", "Virtual Digital Content 3", and "Virtual Digital Content 4". The user may determine the second target virtual digital content from the displayed candidates and move it to the third position of the first real scene; when the second electronic device detects the user's operation of selecting and moving any one or more of the images or text, in response to the operation, the second electronic device can display the second target virtual digital content determined by the user at the third position of the first real scene. For example, when the second electronic device detects the operation of the user selecting the icon of "Virtual Digital Content 1" and moving it to the third position of the first real scene, in response to the operation, the second electronic device can display "Virtual Digital Content 1" at the third position of the first real scene and obtain the real scene display interface shown in (2) of Figure 6m, in which the puppy is the second target virtual digital content displayed in the first real scene.
The second electronic device may also send second request information to the first electronic device displaying the target virtual digital scene, where the second request information is used to request the first electronic device to display the second target virtual digital content at a third position of the target virtual digital scene when the first real scene is the real scene corresponding to the target virtual digital scene. The third position of the target virtual digital scene is the position in the target virtual digital scene corresponding to the third position of the first real scene, or the distance between the two corresponding positions may be less than or equal to a third threshold, for example, 100 centimeters, and the orientation error may be less than or equal to a third angle threshold, such as 3 degrees.
For example, when the second electronic device detects the user's operation of placing the second target virtual digital content at the third position of the first real scene, in response to the operation, the second electronic device may display the real scene display interface shown in (2) of Figure 6m on the display screen; this interface includes the first real scene, and the puppy in the interface is the second target virtual digital content displayed at the third position of the first real scene. The second electronic device may also send the second request information to the first electronic device. When the first electronic device receives the second request information from the second electronic device and determines that the target virtual digital scene currently displayed by the first electronic device is the virtual digital scene corresponding to the first real scene currently displayed by the second electronic device, the first electronic device can display the target virtual digital scene and the second target virtual digital content in the interface of its display screen, as shown in (3) of Figure 6m, where the puppy in the interface is the second target virtual digital content displayed at the third position of the target virtual digital scene. The third position of the target virtual digital scene shown in (3) of Figure 6m is the position in the target virtual digital scene corresponding to the third position of the first real scene shown in (2) of Figure 6m. It should be noted that the distance between the third position of the first real scene shown in (2) of Figure 6m and the position corresponding to the third position of the target virtual digital scene shown in (3) of Figure 6m may be less than or equal to the third threshold, for example, 100 centimeters; and there may be an error between the orientation of the second target virtual digital content placed at the third position of the first real scene shown in (2) of Figure 6m and the orientation of the second target virtual digital content placed at the third position of the target virtual digital scene shown in (3) of Figure 6m, and the error may be less than or equal to the third angle threshold, such as 3 degrees.
The second electronic device can also edit the second target virtual digital content displayed in the first real scene and synchronously update the edited second target virtual digital content in the target virtual digital scene displayed on the display screen of the first electronic device. The specific implementation is similar to the way in which the first electronic device edits the first target virtual digital content displayed in the target virtual digital scene and synchronously updates the edited first target virtual digital content in the first real scene displayed on the display screen of the second electronic device, and is not repeated here.
For example, the second electronic device can display the real scene display interface shown in (1) of Figure 6n on the display screen, where the first real scene is displayed in the interface and the puppy in the interface is the second target virtual digital content displayed in the first real scene. The user can delete the puppy from the first real scene: when the second electronic device detects this operation, it can display the real scene display interface shown in (2) of Figure 6n on the display screen, and may send second editing information to the first electronic device, where the second editing information includes the information that the second electronic device deletes the puppy from the first real scene. The first electronic device, which may currently display the target virtual digital scene display interface shown in (3) of Figure 6m, receives the second editing information from the second electronic device and can delete the puppy from the target virtual digital scene according to the second editing information; the first electronic device can then display the target virtual digital scene display interface shown in (3) of Figure 6n on the display screen.
In other embodiments, when the second electronic device detects the user's operation of editing the second target virtual digital content, in response to the operation, the second electronic device can also generate and save the second editing information, where the second editing information includes the information with which the second electronic device edits the second target virtual digital content displayed on its display screen. In this way, when the second electronic device displays the target virtual digital scene, it can edit the second target virtual digital content displayed there according to the saved second editing information and display the edited second target virtual digital content.
In this way, a user who is not on site can view, through the first electronic device, virtual digital content that is updated in real time in the panorama of the virtual digital scene, and a user on site can view, through the second electronic device, virtual digital content that is updated in real time in the real-world scene, thereby improving the user experience.
In still other embodiments, the target virtual digital scene may include multiple scenes. When the first electronic device displays the target virtual digital scene on its display screen, it can also display an operation button for starting the switching of the target virtual digital scene. The user can start switching the target virtual digital scene by selecting this operation button: when the first electronic device detects the user's operation of selecting the button, in response to the operation, the first electronic device can switch the target virtual digital scene, for example from scene 1 to scene 2. The first electronic device may display the switched target virtual digital scene on the display screen, and may also display an operation button for redetermining the pose information of the first target virtual digital content. The user can redetermine the pose information of the first target virtual digital content by selecting this operation button: when the first electronic device detects the user's operation of selecting the button, in response to the operation, the first electronic device can, according to the pose information of each panorama of the switched target virtual digital scene, the pose information of each building in the switched target virtual digital scene, and the redetermined pose information of the first target virtual digital content, superimpose the first target virtual digital content on the first panorama of the switched target virtual digital scene and display the result on the display screen of the first electronic device, thereby enabling the virtual digital content to roam across different scenes. Alternatively, when the first electronic device displays the switched target virtual digital scene on the display screen, it can also directly superimpose the first target virtual digital content on the first panorama of the switched target virtual digital scene and display the result on the display screen of the first electronic device, likewise enabling the virtual digital content to roam across different scenes.
Based on the above embodiments, Figure 7 is a schematic flowchart of a virtual digital content display method provided by an embodiment of this application. As shown in Figure 7, the process of the method may include the following steps. First, the first electronic device, in response to a first operation triggered by the user, determines the target virtual digital scene from at least one candidate virtual digital scene; for the specific method, see the description in "1. Virtual digital scene display", which is not repeated here. Second, the first electronic device, in response to the user's operation, determines the first target virtual digital content from at least one candidate virtual digital content and displays the first target virtual digital content at the first position of the target virtual digital scene; for the specific method, see the description in "2. Virtual digital content display", which is not repeated here. Third, the second electronic device, in response to the user's operation, collects and displays the image of the first real scene and, when the first real scene is the real scene corresponding to the target virtual digital scene, displays the first target virtual digital content at the first position of the first real scene; for the specific method, see the description in "3. Virtual digital content interaction", which is not repeated here. In addition, the first electronic device can also edit the first target virtual digital content, and the second electronic device can synchronously display the edited first target virtual digital content; for details, see the description in "3. Virtual digital content interaction", which is not repeated here.
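As an informal, runnable illustration of the Figure 7 flow (the Device class and the message fields below are assumptions of this sketch, not part of the patent):

```python
class Device:
    """Toy stand-in for an electronic device in the Figure 7 flow."""
    def __init__(self, name):
        self.name = name
        self.display = {}  # what the device currently shows
        self.inbox = []    # received request/edit messages

    def show(self, label, item):
        self.display[label] = item
        print(f"{self.name} displays {item} as {label}")

    def send(self, other, message):
        other.inbox.append(message)

first_device = Device("first electronic device")
second_device = Device("second electronic device")

# Step 1: the first device determines the target virtual digital scene.
first_device.show("target virtual digital scene", "scene 1")

# Step 2: the first device places the first target virtual digital content
# at the first position of the scene, then sends the first request information.
first_device.show("content at first position", "whale")
first_device.send(second_device,
                  {"request": "show", "content": "whale", "position": "first"})

# Step 3: the second device captures the first real scene; if it corresponds
# to the target virtual digital scene, it replays the request.
second_device.show("first real scene", "real scene of scene 1")
for msg in second_device.inbox:
    if msg["request"] == "show":
        second_device.show(f"content at {msg['position']} position",
                           msg["content"])
```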
The first electronic device and the second electronic device can also implement another virtual digital content display method, shown in Figure 8. The process of this method may include the following steps. First, the second electronic device, in response to the user's operation, determines second target virtual digital content from at least one candidate virtual digital content and displays the second target virtual digital content at a second position of the first real scene. Then, the first electronic device displays the second target virtual digital content at the second position of the target virtual digital scene; for the specific method, see the description in "3. Virtual digital content interaction", which is not repeated here. In addition, the second electronic device can also edit the second target virtual digital content, and the first electronic device can synchronously display the edited second target virtual digital content; for details, see the description in "3. Virtual digital content interaction", which is not repeated here.
Based on the above embodiments, an embodiment of this application further provides a first electronic device, which is used to implement the method performed by the first electronic device in the embodiments of this application. As shown in Figure 9, the first electronic device 900 may include a memory 901, one or more processors 902, and one or more computer programs (not shown in the figure); the above devices may be coupled through one or more communication buses 903. The first electronic device 900 may also include a display screen 904. The one or more computer programs are stored in the memory 901 and include computer instructions; the one or more processors 902 invoke the computer instructions stored in the memory 901 so that the first electronic device 900 executes the virtual digital content display method provided by the embodiments of this application. The display screen 904 is used to display images, videos, application interfaces, and other related user interfaces. The memory 901 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 901 can store an operating system (hereinafter referred to as the system), such as an embedded operating system such as ANDROID, IOS, WINDOWS, or LINUX. The memory 901 can be used to store the implementation program of the embodiments of this application, and can also store a network communication program that may be used to communicate with one or more additional devices, one or more user devices, and one or more network devices. The one or more processors 902 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application. It should be noted that Figure 9 is only one implementation of the first electronic device 900 provided by the embodiments of this application; in practical applications, the first electronic device 900 may also include more or fewer components, which is not limited here.
Based on the above embodiments, an embodiment of this application further provides a second electronic device, which is used to implement the method performed by the second electronic device in the embodiments of this application. As shown in Figure 10, the second electronic device 1000 may include a memory 1001, one or more processors 1002, and one or more computer programs (not shown in the figure); the above devices may be coupled through one or more communication buses 1003. The second electronic device 1000 may also include a display screen 1004. The one or more computer programs are stored in the memory 1001 and include computer instructions; the one or more processors 1002 invoke the computer instructions stored in the memory 1001 so that the second electronic device 1000 executes the virtual digital content display method provided by the embodiments of this application. The display screen 1004 is used to display images, videos, application interfaces, and other related user interfaces. The memory 1001 may include high-speed random access memory, and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 1001 can store an operating system (hereinafter referred to as the system), such as an embedded operating system such as ANDROID, IOS, WINDOWS, or LINUX. The memory 1001 can be used to store the implementation program of the embodiments of this application, and can also store a network communication program that may be used to communicate with one or more additional devices, one or more user devices, and one or more network devices. The one or more processors 1002 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solutions of this application. It should be noted that Figure 10 is only one implementation of the second electronic device 1000 provided by the embodiments of this application; in practical applications, the second electronic device 1000 may also include more or fewer components, which is not limited here.
Based on the above embodiments, an embodiment of this application also provides a computer-readable storage medium. The computer-readable storage medium stores a computer program; when the computer program is run on a computer, the computer is caused to execute the method performed by the first electronic device or the second electronic device provided in the above embodiments. An embodiment of this application also provides a computer program product. The computer program product includes a computer program or instructions; when the computer program or instructions are run on a computer, the computer is caused to execute the method performed by the first electronic device or the second electronic device provided in the above embodiments.
  • The methods provided by the embodiments of this application can be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • A computer program product includes one or more computer instructions.
  • When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are generated in whole or in part.
  • The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user equipment, or another programmable apparatus.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wired means (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or by wireless means (for example, infrared, radio, or microwave).
  • A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. Available media can be magnetic media (for example, floppy disks, hard disks, tapes), optical media (for example, digital video discs (DVD)), or semiconductor media (for example, SSDs).


Abstract

This application discloses a virtual digital content display system and method, and an electronic device. In the system, a first electronic device is configured to determine a target virtual digital scene from at least one candidate virtual digital scene, determine first target virtual digital content from at least one candidate virtual digital content, and display the first target virtual digital content at a first position in the target virtual digital scene; a second electronic device is configured to capture and display an image of a first real scene and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at the first position in the first real scene, thereby solving the problem that off-site users and on-site users cannot synchronously view virtual digital content displayed in a real-world scene.

Description

Virtual digital content display system and method, and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202211052147.6, filed with the Chinese Patent Office on August 31, 2022 and entitled "Virtual digital content display system, method and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of electronic device technologies, and in particular to a virtual digital content display system and method, and an electronic device.
Background
Digital content falls into two broad categories: user generated content (UGC) and professional generated content (PGC). UGC is content generated by users, while PGC is content generated officially. Users generally present digital content such as UGC and PGC on Internet platforms or provide it to other users. With the development of terminal display technologies, augmented reality (AR) has found more and more application scenarios, and users can use AR to enhance their interaction with digital content such as UGC and PGC. For example, digital content such as UGC and PGC can be displayed as virtual objects in a real-world scene, so that a user can watch the real-world scene in the AR scene shown on the display of an AR device while viewing, and interacting with, the virtual digital content displayed in that real-world scene. However, owing to the limitations of AR technology, a user who is not on site can neither view nor interact with the virtual digital content displayed in the real-world scene, so off-site users and on-site users cannot synchronously view virtual digital content displayed in a real-world scene.
Summary
Embodiments of this application provide a virtual digital content display system and method, and an electronic device, to solve the problem that off-site users and on-site users cannot synchronously view virtual digital content displayed in a real-world scene.
According to a first aspect, this application provides a virtual digital content display system that includes a first electronic device and a second electronic device. The first electronic device may: in response to a first operation triggered by a user, determine a target virtual digital scene from at least one candidate virtual digital scene; and, in response to a second operation triggered by the user, determine first target virtual digital content from at least one candidate virtual digital content and display the first target virtual digital content at a first position in the target virtual digital scene. The second electronic device may: in response to a third operation triggered by a user, capture and display an image of a first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at the first position in the first real scene.
Based on the foregoing system, the first electronic device can display the first target virtual digital content at the first position in the target virtual digital scene; that is, the first electronic device can use the target virtual digital scene to simulate how the first target virtual digital content would be displayed in the real scene corresponding to that scene, so that a user can view the display effect of the first target virtual digital content in the real-world scene without going on site. The second electronic device can capture and display an image of the first real scene and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at the first position in the first real scene; that is, the second electronic device can reproduce, in the real scene, the display effect simulated by the first electronic device in the virtual scene, so that on-site and off-site users can synchronously view the display effect of the first target virtual digital content in the real-world scene.
In a possible design, the first position in the first real scene is the position, in the first real scene, corresponding to the first position in the target virtual digital scene; or the distance between the first position in the first real scene and the position, in the first real scene, corresponding to the first position in the target virtual digital scene is less than or equal to a first threshold.
With this design, a distance may exist between the first position in the first real scene and the first position in the target virtual digital scene, and the distance may be less than or equal to the first threshold, so that the effect of the second electronic device displaying the first target virtual digital content at the first position in the first real scene differs little from the effect of the first electronic device displaying it at the first position in the target virtual digital scene. On-site and off-site users can therefore synchronously view the display effect of the first target virtual digital content in the real-world scene.
In a possible design, the second electronic device may further: in response to a fourth operation triggered by a user, determine second target virtual digital content from at least one candidate virtual digital content and display the second target virtual digital content at a second position in the first real scene. The first electronic device may further: when the first real scene is the real scene corresponding to the target virtual digital scene, display the second target virtual digital content at a second position in the target virtual digital scene, where the second position in the first real scene is the position, in the first real scene, corresponding to the second position in the target virtual digital scene, or the distance between the second position in the first real scene and the position, in the first real scene, corresponding to the second position in the target virtual digital scene is less than or equal to a second threshold.
With this design, after the second electronic device displays the second target virtual digital content at the second position in the first real scene, the first electronic device can display that content at the second position in the target virtual digital scene when the first real scene is the real scene corresponding to the target virtual digital scene. In other words, after the second electronic device places the second target virtual digital content in the first real scene, the first electronic device can use the virtual digital scene corresponding to the first real scene to simulate how that content is displayed in the first real scene, so that on-site and off-site users can synchronously view the display effect of the second target virtual digital content in the real-world scene.
In a possible design, the first electronic device may further perform, in response to a fifth operation triggered by a user, any one or more of the following operations: adjusting the position of the first target virtual digital content in the target virtual digital scene; adjusting the size of the first target virtual digital content; adjusting the orientation of the first target virtual digital content; or deleting the first target virtual digital content. The second electronic device may further perform, in response to a sixth operation triggered by a user, any one or more of the following operations: adjusting the position of the first target virtual digital content in the first real scene; adjusting the size of the first target virtual digital content; adjusting the orientation of the first target virtual digital content; or deleting the first target virtual digital content.
With this design, the first or second electronic device can edit the displayed first target virtual digital content, for example adjust its position, size or orientation in the target virtual digital scene or the first real scene, or delete it, so that off-site and on-site users can not only view the display effect of the first target virtual digital content in the real-world scene but also interact with it.
In a possible design, the first electronic device may further: in response to the fifth operation triggered by the user, send first editing information to the second electronic device, where the first editing information includes information about the editing performed by the first electronic device on the first target virtual digital content displayed by the first electronic device. The second electronic device may further: when receiving the first editing information from the first electronic device, edit the displayed first target virtual digital content according to the first editing information, and display the edited first target virtual digital content in the first real scene.
With this design, the second electronic device can receive the first editing information from the first electronic device and edit the displayed first target virtual digital content accordingly, so that after the first electronic device edits the first target virtual digital content it displays, the second electronic device can update, in real time, the first target virtual digital content displayed in the real scene.
In a possible design, the second electronic device may further: in response to the sixth operation triggered by the user, send second editing information to the first electronic device, where the second editing information includes information about the editing performed by the second electronic device on the first target virtual digital content displayed by the second electronic device. The first electronic device may further: when receiving the second editing information from the second electronic device, edit the displayed first target virtual digital content according to the second editing information, and display the edited first target virtual digital content in the target virtual digital scene.
With this design, the first electronic device can receive the second editing information from the second electronic device and edit the displayed first target virtual digital content accordingly, so that after the second electronic device edits the first target virtual digital content displayed in the real scene, the first electronic device can update, in real time, the first target virtual digital content in the virtual digital scene corresponding to that real scene.
In a possible design, the first electronic device may further display, before responding to the first operation triggered by the user, any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene. When determining the target virtual digital scene from the at least one candidate virtual digital scene in response to the first operation, the first electronic device may determine the target virtual digital scene in response to the first operation in which the user selects any one or more of the two-dimensional map or the text.
With this design, the first electronic device can present the at least one candidate virtual digital scene on its display in the form of a two-dimensional map or text, so that the user can directly browse multiple candidate virtual digital scenes and select the target virtual digital scene from them.
According to a second aspect, this application further provides a virtual digital content display method applied to a first electronic device. The method includes: in response to a first operation triggered by a user, the first electronic device may determine a target virtual digital scene from at least one candidate virtual digital scene; in response to a second operation triggered by the user, the first electronic device may determine first target virtual digital content from at least one candidate virtual digital content and display the first target virtual digital content at a first position in the target virtual digital scene; in response to a third operation triggered by the user, the first electronic device captures and displays an image of a first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, the first electronic device displays the first target virtual digital content at a first position in the first real scene, where the first target virtual digital content is the virtual digital content displayed by the first electronic device at the first position in the target virtual digital scene, and the first position in the first real scene is the position, in the first real scene, corresponding to the first position in the target virtual digital scene, or the distance between the first position in the first real scene and the position, in the first real scene, corresponding to the first position in the target virtual digital scene is less than or equal to a first threshold.
In a possible design, the first electronic device may further: in response to a fourth operation triggered by the user, determine second target virtual digital content from at least one candidate virtual digital content and display the second target virtual digital content at a second position in the first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the second target virtual digital content at a second position in the target virtual digital scene, where the second target virtual digital content is the virtual digital content displayed by the first electronic device at the second position in the first real scene, and the second position in the first real scene is the position, in the first real scene, corresponding to the second position in the target virtual digital scene, or the distance between the second position in the first real scene and the position, in the first real scene, corresponding to the second position in the target virtual digital scene is less than or equal to a second threshold.
In a possible design, the first electronic device may further: in response to a fifth operation triggered by the user, edit the first target virtual digital content in the target virtual digital scene, so that the first electronic device can perform any one or more of the following operations: adjusting the position of the first target virtual digital content in the target virtual digital scene; adjusting its size; adjusting its orientation; or deleting it; and, in response to a sixth operation triggered by the user, edit the first target virtual digital content in the first real scene, so that the first electronic device can perform any one or more of the following operations: adjusting the position of the first target virtual digital content in the first real scene; adjusting its size; adjusting its orientation; or deleting it.
In a possible design, the first electronic device may further: in response to the fifth operation triggered by the user, generate and save first editing information, where the first editing information includes information about the editing performed by the first electronic device on the first target virtual digital content in the target virtual digital scene; and, when generating and saving the first editing information, edit the first target virtual digital content displayed in the first real scene according to the first editing information, and display the edited first target virtual digital content in the first real scene.
In a possible design, the first electronic device may further: in response to the sixth operation triggered by the user, generate and save second editing information, where the second editing information includes information about the editing performed by the first electronic device on the first target virtual digital content in the first real scene; and, when generating and saving the second editing information, edit the displayed first target virtual digital content according to the second editing information, and display the edited first target virtual digital content in the first real scene.
In a possible design, the first electronic device may further display, before responding to the first operation triggered by the user, any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene; and determining the target virtual digital scene from the at least one candidate virtual digital scene includes: determining the target virtual digital scene in response to the first operation in which the user selects any one or more of the two-dimensional map or the text.
According to a third aspect, this application further provides an electronic device that includes a processor, a memory, and one or more programs, where the one or more programs are stored in the memory and include instructions that, when executed by the processor, cause the electronic device to perform the method described in the second aspect or any possible design of the second aspect.
According to a fourth aspect, this application provides a computer-readable storage medium configured to store a computer program that, when run on a computer, causes the computer to perform the method described in the second aspect or any possible design of the second aspect.
According to a fifth aspect, this application provides a computer program product, including a computer program that, when run on a computer, causes the computer to perform the method described in the second aspect or any possible design of the second aspect.
For the beneficial effects of the foregoing second to fifth aspects and their possible designs, reference may be made to the foregoing descriptions of the beneficial effects of the method in the first aspect and any of its possible designs.
Brief Description of Drawings
FIG. 1 is a schematic diagram of an AR device according to an embodiment of this application;
FIG. 2 is a schematic diagram of an AR scene according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a virtual digital content display system according to an embodiment of this application;
FIG. 4 is a schematic diagram of a hardware structure of a first electronic device according to an embodiment of this application;
FIG. 5 is a schematic diagram of a software structure of a first electronic device according to an embodiment of this application;
FIG. 6a is a schematic diagram of an application initialization interface according to an embodiment of this application;
FIG. 6b is a schematic diagram of a target virtual digital scene determination interface according to an embodiment of this application;
FIG. 6c is a schematic diagram of another target virtual digital scene determination interface according to an embodiment of this application;
FIG. 6d is a schematic diagram of a target virtual digital scene generation interface according to an embodiment of this application;
FIG. 6e is a schematic diagram of shooting a target real scene according to an embodiment of this application;
FIG. 6f is a schematic diagram of another way of shooting a target real scene according to an embodiment of this application;
FIG. 6g is a schematic diagram of a target virtual digital content determination interface according to an embodiment of this application;
FIG. 6h is a schematic diagram of a virtual digital scene according to an embodiment of this application;
FIG. 6i is a schematic diagram of a virtual digital content display interface according to an embodiment of this application;
FIG. 6j is a schematic diagram of another virtual digital content display interface according to an embodiment of this application;
FIG. 6k is a schematic diagram of virtual digital content interaction according to an embodiment of this application;
FIG. 6l is a schematic diagram of virtual digital content editing according to an embodiment of this application;
FIG. 6m is a schematic diagram of further virtual digital content interaction according to an embodiment of this application;
FIG. 6n is a schematic diagram of further virtual digital content editing according to an embodiment of this application;
FIG. 7 is a schematic flowchart of a virtual digital content display method according to an embodiment of this application;
FIG. 8 is a schematic flowchart of another virtual digital content display method according to an embodiment of this application;
FIG. 9 is a schematic diagram of a hardware structure of another first electronic device according to an embodiment of this application;
FIG. 10 is a schematic diagram of a hardware structure of a second electronic device according to an embodiment of this application.
Detailed Description of Embodiments
Some terms used in the embodiments of this application are explained below to facilitate understanding by those skilled in the art.
(1) "At least one" in the embodiments of this application includes one or more, where "multiple" means two or more. In addition, in the descriptions of this specification, terms such as "first" and "second" are used only to distinguish between the objects being described and shall not be understood as indicating or implying relative importance or order. For example, a first object and a second object do not differ in importance or order; the terms merely distinguish them for the purpose of description. In the embodiments of this application, "and/or" describes only an association relationship and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" in this document generally indicates an "or" relationship between the associated objects.
In the descriptions of the embodiments of this application, unless otherwise expressly specified and limited, the terms "mount" and "connect" should be understood broadly; for example, "connect" may be a detachable or a non-detachable connection, and may be a direct connection or an indirect connection through an intermediary. Orientation terms mentioned in the embodiments of this application, such as "up", "down", "left", "right", "inner" and "outer", refer only to the directions in the accompanying drawings; they are used for better and clearer description and understanding of the embodiments and do not indicate or imply that the indicated apparatus or element must have a specific orientation or be constructed and operated in a specific orientation, and therefore shall not be understood as limiting the embodiments of this application. "Multiple" means at least two.
Reference to "an embodiment", "some embodiments" or the like in this specification means that a particular feature, structure or characteristic described in connection with the embodiment is included in one or more embodiments of this specification. Thus, statements such as "in an embodiment", "in some embodiments", "in some other embodiments" and "in still other embodiments" appearing in different places in this specification do not necessarily all refer to the same embodiment, but rather mean "one or more but not all embodiments", unless otherwise specifically emphasized. The terms "include", "comprise", "have" and their variants all mean "including but not limited to", unless otherwise specifically emphasized.
(2) Digital content falls into two broad categories: user generated content (UGC) and professional generated content (PGC). UGC is content generated by users, while PGC is content generated officially. Users generally present digital content such as UGC and PGC on Internet platforms or provide it to other users.
(3) Augmented reality (AR) technology superimposes computer-generated virtual objects on real-world scenes, thereby augmenting the real world. In other words, AR requires capturing the real-world scene and then adding a virtual environment on top of the real world. The difference between virtual reality (VR) and AR is therefore that VR creates a completely virtual environment in which everything the user sees is virtual, whereas AR superimposes virtual objects on the real world, so the user sees both real objects and virtual objects. For example, a user wearing transparent glasses can see the surrounding real environment through the glasses, and virtual objects can also be displayed on the glasses, so the user sees both real and virtual objects.
For example, FIG. 1 is a schematic diagram of an AR device according to an embodiment of this application. As shown in FIG. 1, the AR device includes an AR wearable device and a host (for example, an AR host) or a server (for example, an AR server); the AR wearable device is connected to the AR host or AR server by wire or wirelessly. The AR host or AR server may be a device with relatively large computing power. For example, the AR host may be a mobile phone, a tablet computer, a notebook computer or the like, and the AR server may be a cloud server or the like. The AR host or AR server is responsible for image generation, image rendering and so on, and then sends the rendered images to the AR wearable device for display; a user wearing the AR wearable device can see the images. For example, the AR wearable device may be a head mounted display (HMD), such as glasses or a helmet.
Optionally, the AR device in FIG. 1 may not include the AR host or AR server. For example, the AR wearable device may itself have image generation and rendering capabilities and need not obtain images from an AR host or AR server for display.
In the embodiments of this application, users can use AR technology to enhance their interaction with digital content. When a user shoots the real world in real time through the camera of the AR device (such as the camera of the AR wearable device in FIG. 1), the user can add digital content as virtual objects to the AR scene displayed on the display of the AR device (such as the display of the AR wearable device in FIG. 1), that is, display the virtual digital content in the real-world scene, so that the user can watch the real-world scene in the AR scene shown on the display while viewing, and interacting with, the virtual digital content displayed in it. For example, FIG. 2 is a schematic diagram of an AR scene according to an embodiment of this application. As shown in FIG. 2, the ground and road are real-world images captured in real time by the camera of the AR device, while the virtual cartoon character on the road is virtual digital content added by the user to the current AR scene; the user can simultaneously observe the real-world ground and road and the virtual cartoon character on the display of the AR device. The user can also edit the virtual digital content in the AR scene displayed on the AR device, for example edit the size, position and orientation of the virtual cartoon character in FIG. 2.
However, owing to the limitations of AR technology, a user who is not on site can neither view nor interact with the virtual digital content displayed in the real-world scene, so off-site users and on-site users cannot synchronously view the virtual digital content displayed in the real-world scene.
In view of the foregoing problem, an embodiment of this application provides a virtual digital content display system, to solve the problem that off-site users and on-site users cannot synchronously view virtual digital content displayed in a real-world scene, which results in a poor user experience. FIG. 3 is a schematic structural diagram of a virtual digital content display system according to an embodiment of this application. As shown in FIG. 3, the virtual digital content display system may include a first electronic device and a second electronic device.
It should be understood that FIG. 3 shows one virtual digital content display system merely as an example for ease of understanding, and this shall not constitute any limitation on this application. The system may include a larger number of first electronic devices and a larger number of second electronic devices; the second electronic devices interacting with different first electronic devices may be the same or different, and their numbers may be the same or different. In the embodiments of this application, the first electronic device and the second electronic device may also be the same electronic device; this is not specifically limited here.
In the embodiments of this application, the first electronic device is configured to: in response to a first operation triggered by a user, determine a target virtual digital scene from at least one candidate virtual digital scene; and, in response to a second operation triggered by the user, determine first target virtual digital content from at least one candidate virtual digital content, superimpose the first target virtual digital content on a first position in the target virtual digital scene, and thereby display the first target virtual digital content at that first position. The second electronic device is configured to: in response to a third operation triggered by a user, capture and display an image of a first real scene; and, when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at a first position in the first real scene, where the first position in the first real scene is the position, in the first real scene, corresponding to the first position in the target virtual digital scene, or the distance between the two is less than or equal to a first threshold; for example, the first threshold may be 100 centimeters. Because the first electronic device can use the target virtual digital scene to simulate the display effect of the first target virtual digital content in the corresponding real scene, the user can view that display effect without going on site; and because the second electronic device can reproduce, in the corresponding real scene, the display effect simulated by the first electronic device, on-site and off-site users can synchronously view the display effect of the first target virtual digital content in the real-world scene.
It should be understood that the first electronic device may be a device with a wireless connection function, and the second electronic device may be the AR device shown in FIG. 1. In some embodiments of this application, the first electronic device may be a device equipped with a display, a camera and sensors.
In some embodiments of this application, the first electronic device may be a portable device, such as a mobile phone, a tablet computer, a wearable device with wireless communication capability (for example, a watch, a wristband, a helmet or earphones), an in-vehicle terminal device, an AR/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook or a personal digital assistant (PDA). The first electronic device may also be a smart home device (for example, a smart TV or a smart speaker), a smart car, a smart robot, workshop equipment, or a wireless terminal in self driving, remote medical surgery, a smart grid, transportation safety, a smart city or a smart home, or a flying device (for example, a smart robot, a hot-air balloon, a drone or an aircraft).
In some embodiments of this application, the first electronic device may also be a portable terminal device that additionally includes other functions such as a personal digital assistant and/or a music player. Exemplary embodiments of portable terminal devices include, but are not limited to, portable terminal devices running … or other operating systems. The portable terminal device may also be another portable terminal device, such as a laptop computer with a touch-sensitive surface (for example, a touch panel). It should also be understood that, in some other embodiments of this application, the first electronic device may not be a portable terminal device but a desktop computer with a touch-sensitive surface (for example, a touch panel).
FIG. 4 is a schematic diagram of a hardware structure of a first electronic device according to an embodiment of this application. As shown in FIG. 4, the first electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent components or may be integrated in one or more processors. The controller may be the nerve center and command center of the first electronic device 100; it can generate operation control signals based on instruction operation codes and timing signals to control instruction fetching and execution. A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache that can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thus improves system efficiency.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 can be used to connect a charger to charge the first electronic device 100, and can also be used to transfer data between the first electronic device 100 and peripheral devices. The charging management module 140 is configured to receive charging input from a charger. The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display 194, the camera 193, the wireless communication module 160, and so on.
The wireless communication function of the first electronic device 100 may be implemented by antenna 1, antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on. Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the first electronic device 100 can be used to cover a single communication band or multiple communication bands. Different antennas can also be multiplexed to improve antenna utilization; for example, antenna 1 can be multiplexed as a diversity antenna for a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communications, including 2G/3G/4G/5G, applied on the first electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and so on. The mobile communication module 150 can receive electromagnetic waves through antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the same component as at least some modules of the processor 110.
The wireless communication module 160 can provide solutions for wireless communications applied on the first electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 160 may be one or more components integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, perform frequency modulation and amplification on them, and convert them into electromagnetic waves for radiation through antenna 2.
In some embodiments, antenna 1 of the first electronic device 100 is coupled to the mobile communication module 150, and antenna 2 is coupled to the wireless communication module 160, so that the first electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The display 194 is used to display the display interface of an application, for example the display pages of applications installed on the first electronic device 100. The display 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), and so on. In some embodiments, the first electronic device 100 may include 1 or N displays 194, where N is a positive integer greater than 1.
The camera 193 is used to capture still images or video. An object passes through the lens to generate an optical image that is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the first electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1. In the embodiments of this application, the camera 193 can be used to shoot panoramas; for example, if a user holds the first electronic device 100 and rotates it horizontally through 360 degrees, the camera 193 can capture a panorama corresponding to the position of the first electronic device 100.
The internal memory 121 can be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 performs the various functional applications and data processing of the first electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and the software code of at least one application, and the data storage area can store data generated during use of the first electronic device 100 (for example, captured images and recorded videos). In addition, the internal memory 121 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS).
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the first electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement data storage functions, for example saving files such as pictures and videos in the external memory card.
The first electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.
The sensor module 180 may include a pressure sensor 180A, an acceleration sensor 180B, a touch sensor 180C, and so on.
The pressure sensor 180A is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180A may be provided on the display 194.
The touch sensor 180C is also called a "touch panel". The touch sensor 180C may be provided on the display 194; together, the touch sensor 180C and the display 194 form a touchscreen, also called a "touch screen". The touch sensor 180C is used to detect touch operations acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation can be provided through the display 194. In some other embodiments, the touch sensor 180C may also be provided on the surface of the first electronic device 100 at a position different from that of the display 194.
The buttons 190 include a power button, volume buttons, and so on. The buttons 190 may be mechanical buttons or touch buttons. The first electronic device 100 can receive button input and generate key signal input related to user settings and function control of the first electronic device 100. The motor 191 can generate vibration alerts and can be used for incoming-call vibration alerts as well as touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) can correspond to different vibration feedback effects, and the touch vibration feedback effects can also be customized. The indicator 192 may be an indicator light and can be used to indicate charging status and battery changes, as well as messages, missed calls, notifications, and so on. The SIM card interface 195 is used to connect a SIM card; a SIM card can be inserted into or removed from the SIM card interface 195 to make contact with or be separated from the first electronic device 100.
It should be understood that the components shown in FIG. 4 do not constitute a specific limitation on the first electronic device 100; the first electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. In addition, the combinations/connections between the components in FIG. 4 can also be adjusted and modified.
FIG. 5 is a schematic diagram of a software structure of a first electronic device according to an embodiment of this application. As shown in FIG. 5, the software structure of the first electronic device may be a layered architecture; for example, the software may be divided into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, the operating system is divided into four layers, from top to bottom: the application layer, the application framework layer (FWK), the runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages. As shown in FIG. 5, the application layer may include Camera, Settings, a skin module, the user interface (UI), third-party applications, and so on. Third-party applications may include Gallery, Calendar, Phone, Maps, Navigation, WLAN, Bluetooth, Music, Video, Messages, and so on.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer, and may include some predefined functions. As shown in FIG. 5, the application framework layer may include a window manager, content providers, a view system, a telephony manager, a resource manager, and a notification manager.
The window manager is used to manage window programs; it can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on. Content providers are used to store and retrieve data and make it accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, contacts, and so on.
The view system includes visual controls, such as controls for displaying text and controls for displaying pictures, and can be used to build applications. A display interface may consist of one or more views; for example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.
The telephony manager is used to provide the communication functions of the electronic device, for example management of call states (including connected, hung up, and so on).
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The notification manager enables applications to display notification information in the status bar; it can be used to convey informational messages that can disappear automatically after a short stay without user interaction, for example notifications of download completion or message reminders. The notification manager may also present notifications in the top status bar of the system in the form of charts or scrolling text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows, for example text prompts in the status bar, prompt sounds, device vibration, and blinking indicator lights.
The runtime includes core libraries and a virtual machine, and is responsible for the scheduling and management of the operating system.
The core libraries contain two parts: the functions that the Java language needs to call, and the core libraries of the operating system. The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files, and performs functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
The system libraries may include multiple functional modules, for example: a surface manager, media libraries, a 3D graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).
The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of multiple common audio and video formats, as well as still image files. The media libraries can support multiple audio and video coding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, layer processing, and so on.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software, and contains at least the display driver, the camera driver, the audio driver, and the sensor drivers.
The hardware layer may include various sensors, such as an acceleration sensor, a gravity sensor, and a touch sensor.
Based on the foregoing virtual digital content display system, an embodiment of this application further provides a virtual digital content display method. The solutions provided by the embodiments of this application are described below with reference to specific embodiments.
The solutions provided by the embodiments of this application may include virtual digital scene display and virtual digital content display. After virtual digital content is displayed, virtual digital content interaction, virtual digital content roaming and so on may further be included. Detailed descriptions follow.
I. Virtual digital scene display
In the embodiments of this application, a user can log in to an AR application or a VR application; for example, the user can enter login information on the application login interface, or trigger "one-tap login with phone number", to log in. When the user logs in to the AR or VR application on the first electronic device, the first electronic device can display on its screen an application initialization interface as shown in FIG. 6a.
FIG. 6a shows an application initialization interface that may display a "virtual digital scene library" icon 601 and thumbnails of at least one candidate virtual digital scene, for example the thumbnail corresponding to the virtual digital scene "Undersea World". The user can view the candidate virtual digital scenes by selecting the "virtual digital scene library" icon 601. When the first electronic device detects the user's operation of selecting icon 601, in response the first electronic device can display, on its screen, any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene. The user can determine the target virtual digital scene from the candidate virtual digital scenes shown on the display; when the first electronic device detects the user's operation of selecting any one or more of the two-dimensional map or the text, in response it can display the target virtual digital scene determined by the user.
For example, when the first electronic device detects the user's operation of selecting the "virtual digital scene library" icon 601, in response it can display the target virtual digital scene determination interface shown in FIG. 6b, which displays a two-dimensional map of the icons of at least one candidate virtual digital scene, such as "Scene 1" through "Scene 9". The user can view a target virtual digital scene by selecting the corresponding candidate scene icon on the map; when the first electronic device detects such a selection, in response it can display the corresponding target virtual digital scene.
For example, when the first electronic device detects the user's operation of selecting the "virtual digital scene library" icon 601, in response it can also display the target virtual digital scene determination interface shown in FIG. 6c, which displays icons of different regions, such as "Beijing", "Shanghai", "Hebei Province", "Shanxi Province", "Zhejiang Province", "Fujian Province" and "Jiangxi Province". The user can slide the scroll bar to the right of the region icons up and down to view region icons not currently shown, and can select a region icon to view the corresponding candidate virtual digital scenes; when the first electronic device detects such a selection, in response it can display the candidate virtual digital scenes of that region. For example, when the first electronic device detects the user's operation of selecting the "Beijing" icon, in response it can display the icons of the candidate virtual digital scenes corresponding to Beijing, such as "Capital Museum", "Beijing Tongzhou Grand Canal (Wharf)", "Changying Tianjie", "Lize Tianjie", "Tsinghua University History Museum", "Beijing Research Institute" and "Beijing Fang". The user can slide the scroll bar to the right of these icons to view icons not currently shown, and can select one of them to view the corresponding target virtual digital scene; when the first electronic device detects such a selection, in response it can display the corresponding target virtual digital scene.
It should be understood that the candidate virtual digital scenes may be virtual digital scenes stored by the first electronic device, such as officially preset virtual digital scenes, virtual digital scenes obtained by the first electronic device from the cloud or a server, or virtual digital scenes created and uploaded by users; this application imposes no specific limitation.
In the embodiments of this application, the user can also generate the target virtual digital scene through the first electronic device; how this is done is introduced next.
FIG. 6a shows an application initialization interface that may display a "start creating" icon 602; the user can generate a target virtual digital scene by selecting icon 602. When the first electronic device detects the user's operation of selecting the "start creating" icon 602, in response it can display the interface shown in FIG. 6d.
FIG. 6d shows a graphical user interface (GUI) of the first electronic device that may include an operation button 603; the user can trigger shooting of the target real scene by selecting button 603. When the first electronic device detects this selection, in response it can display a shooting interface. The user can operate the first electronic device to shoot the target real scene and obtain the corresponding target virtual digital scene. For example, FIG. 6e is a schematic diagram of shooting a target real scene according to an embodiment of this application; the interface may include the shooting interface of the camera of the first electronic device, an operation button prompting the user to continue shooting, such as the "continue scanning" button 604, and an operation button for triggering the end of shooting, such as the "stop capturing" button 605. The user can shoot by moving the first electronic device; to continue shooting, the user selects the "continue scanning" button 604 and the first electronic device keeps shooting; to stop, the user selects the "stop capturing" button 605, whereupon the first electronic device stops shooting and can derive the corresponding target virtual digital scene from what has been shot.
For example, as shown in FIG. 6f, when shooting the target real scene, the user can hold the first electronic device and rotate it while shooting. The first electronic device can also display information prompting the user to continue shooting; when the user instructs the device to continue and selects the "continue scanning" button 604, the user can keep rotating the device to shoot the target real scene. When the user instructs the device to stop and selects the "stop capturing" button 605, in response the first electronic device can stop shooting and, based on the captured data of the target real scene, generate the corresponding target virtual digital scene.
In the embodiments of this application, when the user operates the first electronic device to shoot the target real scene, the first electronic device can obtain, from the captured data, N panoramas of the corresponding target virtual digital scene and the pose information of each panorama, where the pose information of a panorama is the position and orientation in the real world of the shooting device (for example, the first electronic device or an official device) at the time the panorama was shot; the position indicated by the pose information is determined by the shooting device through global positioning system (GPS) positioning, and the orientation is determined by the shooting device through inertial measurement unit (IMU) measurement. The first electronic device can also obtain the white model (that is, the simplified model) of each building in the target virtual digital scene and the pose information of each building, where the pose information of a building is its position and orientation in the real world.
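Viewed as data, this capture step yields N posed panoramas plus one posed white model per building. The minimal Python sketch below shows one way such records could be organized for the later superimposition steps; every class and field name here (Pose, Panorama, BuildingWhiteModel, VirtualDigitalScene) is an illustrative assumption, not an identifier from the embodiments.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple[float, float, float]     # GPS-derived position in the real world
    orientation: tuple[float, float, float]  # IMU-derived orientation (e.g. yaw/pitch/roll, degrees)

@dataclass
class Panorama:
    image_path: str  # the stitched 360-degree image
    pose: Pose       # pose of the shooting device when the panorama was captured

@dataclass
class BuildingWhiteModel:
    mesh_path: str   # simplified ("white") mesh of the building
    pose: Pose       # pose of the building in the real world

@dataclass
class VirtualDigitalScene:
    panoramas: list[Panorama]            # the N panoramas of the target scene
    buildings: list[BuildingWhiteModel]  # one white model per building
```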
For example, the interface shown in FIG. 6e can display any frame (a panorama slice) of one panorama of the target virtual digital scene captured by the camera of the first electronic device. The user can shoot a larger space by moving the first electronic device; when the user decides to stop, the user can select the button for triggering the end of shooting shown in FIG. 6e, such as the "stop capturing" button 605, whereupon the first electronic device stops shooting and derives the N panoramas of the target virtual digital scene from what has been shot.
For example, as shown in FIG. 6f, when the user rotates the first electronic device to shoot the target real scene, the device can acquire multiple panorama slices of a panorama of the target virtual digital scene during the rotation, and can display any slice of any panorama on its screen in real time.
In the embodiments of this application, when the first electronic device displays information prompting the user to continue shooting, the user can operate the device to continue shooting any panorama of the target virtual digital scene; the first electronic device can keep acquiring panorama slices and can stitch the multiple slices of a panorama together to obtain that panorama, until the user instructs the device to stop shooting.
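The embodiments do not name the algorithm used to stitch the panorama slices into a panorama. Purely as a hedged illustration, an off-the-shelf stitcher such as OpenCV's can perform this kind of slice-to-panorama assembly:

```python
import cv2

def stitch_slices(slice_paths: list[str]):
    """Stitch sequential panorama slices into one panorama.

    Assumes neighbouring slices overlap enough for feature matching;
    the patent itself does not specify a stitching method.
    """
    slices = [cv2.imread(p) for p in slice_paths]
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(slices)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama
```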
For example, when acquiring each panorama of the target virtual digital scene, the first electronic device can determine, through GPS positioning, its position in the real world at the time of shooting and, through IMU measurement, its orientation at that time, thereby obtaining the pose information of that panorama.
In the embodiments of this application, when the first electronic device shoots the target virtual digital scene, it can obtain multiple environment images reflecting the scene; from these it can determine the boundary vector data of each building in the scene, determine the white-model data of the building from the boundary vector data, and then obtain the building's white model, the pose information of the white model, and the pose information of the building, where the pose information of the white model is its position and orientation in the corresponding three-dimensional space. It should be noted that the way of obtaining the position and orientation of a building's white model in the corresponding three-dimensional space is the same as the way, described above, of obtaining the position and orientation of a panorama in the real world, and is not repeated here.
In the embodiments of this application, when the first electronic device detects the user's operation of determining the target virtual digital scene from the at least one candidate virtual digital scene, in response the first electronic device can obtain the N panoramas of the target virtual digital scene, the pose information of each panorama, the white model of each building in the scene, and the pose information of each building.
It should be understood that the first electronic device can obtain the N panoramas, their pose information, the building white models and the building pose information from a panorama library and a white-model library stored on the device, where the panoramas and white models in these libraries were obtained officially by shooting complex scenes with official devices; the first electronic device can also obtain them by itself. The embodiments of this application impose no specific limitation.
It should be understood that the target virtual digital scene may include one or more virtual digital scenes; the embodiments of this application impose no specific limitation. For ease of description, in what follows, the target virtual digital scene in virtual digital scene display, virtual digital content display and virtual digital content interaction includes only one virtual digital scene, while the target virtual digital scene in virtual digital content roaming may include multiple virtual digital scenes.
II. Virtual digital content display
In the embodiments of this application, after the first electronic device obtains the N panoramas of the target virtual digital scene, the pose information of each panorama, the white model of each building and the pose information of each building, the first electronic device can display on its screen the target virtual digital scene and any one or more of the images or text corresponding to at least one candidate virtual digital content. The user can determine the first target virtual digital content from the candidates shown on the display and move it to a first position in the target virtual digital scene; when the first electronic device detects the user's operation of selecting any one or more of the images or text corresponding to the first target virtual digital content, in response it can superimpose the user's chosen content on the first position in the target virtual digital scene and display it there.
In some embodiments, after the user places the first target virtual digital content at the first position in the target virtual digital scene, the user can edit it, for example move its position in the scene, perform any one or more of enlarging, shrinking, flipping or rotating it, adjust its orientation, and so on.
It should be understood that the candidate virtual digital content may be content stored by the first electronic device, such as officially preset content, or content created and uploaded by users; this application imposes no specific limitation.
For example, the first electronic device can display the virtual digital scene and content display interface shown in (1) of FIG. 6g, which displays the target virtual digital scene and a "virtual digital content library" icon 606. The user can view candidate content by selecting icon 606; when the first electronic device detects this selection, in response it can display on its screen any one or more of the images or text corresponding to at least one candidate virtual digital content, for example icons named "virtual digital content 1" through "virtual digital content 4". The user can pick the first target virtual digital content from the candidates and move it to the first position in the target virtual digital scene; when the first electronic device detects the selection and movement, in response it can display the chosen content at the first position. For example, as shown in (2) of FIG. 6g, when the first electronic device detects the user selecting the "virtual digital content 1" icon and moving it to the first position in the target virtual digital scene, in response it can superimpose "virtual digital content 1" on the first position and display the result on its screen.
In the embodiments of this application, when the first electronic device displays the user's chosen first target virtual digital content at the first position in the target virtual digital scene, the first electronic device can determine the pose information of the content. The first position in the target virtual digital scene may be the coordinate information of the content in the three-dimensional coordinate system of the scene; the pose information of the content may be its position and orientation in the real world; and a mapping may exist between the scene's three-dimensional coordinate system and that of the real world, on the basis of which the first electronic device can determine the content's pose information from the coordinates of the first position. The first electronic device can also take the three-dimensional coordinate system associated with the pose information of each panorama of the scene as a reference coordinate system, adjust the pose information of each building in the scene and of the first target virtual digital content, and obtain their pose information in the reference coordinate system. The first electronic device can further determine, from the reference-frame pose information of each panorama, of each building and of the content, the relative pose information of a first panorama of the scene, of each building in the first panorama, and of the first target virtual digital content, where the relative pose information is the relative positions and relative orientations in the real world of the device that shot the first panorama, of each building in the first panorama, and of the content. Finally, on the basis of this relative pose information, the first electronic device can superimpose the white model of each building in the first panorama and the first target virtual digital content onto the first panorama and display the result on its screen.
For example, FIG. 6h is a schematic diagram of a virtual digital scene according to an embodiment of this application, including panorama 1 of a first virtual digital scene, where panorama 1 includes building 1 and building 2. The pose information of panorama 1 may be position A1 and orientation B1 of the shooting device in the real world when panorama 1 was shot; the pose information of building 1 may be position A2 and orientation B2 in the real world; and that of building 2 may be position A3 and orientation B3. When the first electronic device detects the user selecting the icon of the first target virtual digital content and moving it to the first position of the first virtual digital scene, in response the first electronic device can determine the pose information of the content, for example position A4 and orientation B4 in the real world.
Because the three-dimensional coordinate system of the camera of the first electronic device may differ from the real-world coordinate systems of the device that shot panorama 1, of building 1, of building 2 and of the first target virtual digital content, the first electronic device can take the camera's three-dimensional coordinate system as the reference coordinate system and adjust the pose information of panorama 1, of buildings 1 and 2, and of the content, obtaining the pose information in the reference system of panorama 1 (for example, position A11 and orientation B11), of building 1 (position A12 and orientation B12), of building 2 (position A13 and orientation B13), and of the first target virtual digital content (position A14 and orientation B14). From A11 through A14 and B11 through B14, the first electronic device can determine that the relative position in the real world of the shooting device, buildings 1 and 2, and the content is C1 and their relative orientation is D1. On the basis of C1 and D1, the first electronic device can render and superimpose the white models of buildings 1 and 2 onto panorama 1 to obtain a second virtual digital scene, and can then superimpose the first target virtual digital content on the first position of the second virtual digital scene. For example, FIG. 6i is a schematic diagram of a virtual digital content display interface according to an embodiment of this application, which includes the white models of buildings 1 and 2, the first target virtual digital content, and panorama 1, where the white models essentially occlude buildings 1 and 2 in the panorama.
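The alignment described in this example amounts to expressing every world-frame pose (A1/B1 through A4/B4) in one shared reference frame and then reading off relative transforms (C1, D1). The numpy sketch below shows that computation under one simple interpretation; representing poses as 4x4 homogeneous matrices with a yaw-only rotation is an assumption made for illustration, not the embodiments' actual pose representation.

```python
import numpy as np

def pose_to_matrix(position, yaw_deg):
    """Build a 4x4 homogeneous transform from a position and a yaw angle.

    Simplification: only yaw is modelled; a full implementation would use
    the complete IMU rotation (e.g. a quaternion).
    """
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = position
    return T

def to_reference_frame(T_world_obj, T_world_ref):
    """Re-express an object's world pose in the chosen reference frame."""
    return np.linalg.inv(T_world_ref) @ T_world_obj

# World-frame poses: panorama 1's shooting device, building 1, and the content.
T_pano = pose_to_matrix([0.0, 0.0, 0.0], 0.0)    # A1 / B1
T_bldg = pose_to_matrix([5.0, 2.0, 0.0], 90.0)   # A2 / B2
T_item = pose_to_matrix([3.0, 1.0, 0.0], 45.0)   # A4 / B4

# Relative pose of the content with respect to the shooting device,
# analogous to the relative position/orientation (C1, D1) above.
T_rel = to_reference_frame(T_item, T_pano)
print(T_rel[:3, 3])  # relative position component
```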
In some embodiments, because the pose information of a panorama is determined by the shooting device through GPS positioning and IMU measurement, an error, generally on the order of centimeters, exists between the algorithm-determined position and orientation of the shooting device at the time of shooting and the actual position and orientation. To prevent this error from affecting the result of rendering and superimposing the first target virtual digital content on the first panorama, the first electronic device can determine a pose difference for each building in the second virtual digital scene from the pose information of the first panorama and the pose information of each building in the second virtual digital scene, where the pose difference of a building is the difference between the building's position and orientation in the real world and the position and orientation of its white model in the corresponding three-dimensional space. The first electronic device can then decide, from the pose differences of the buildings, whether to display the second virtual digital scene on its screen. For example, if the pose difference of every building in the second virtual digital scene falls within a preset range, the first electronic device determines that the scene meets the accuracy requirement for external provision and can display it; if the pose difference of any building does not fall within the preset range, the first electronic device determines that the scene does not meet the accuracy requirement and must not display it, and the second virtual digital scene needs to be discarded or re-acquired.
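A sketch of this accuracy gate follows, assuming the pose difference of a building is measured as one position offset plus one orientation offset. The metric and the preset range are not fixed by the embodiments; the 0.5 m and 5-degree limits below are placeholders only.

```python
import numpy as np

def pose_delta(real_pose, white_model_pose):
    """Position and yaw differences between a building and its white model."""
    dp = np.linalg.norm(np.asarray(real_pose["position"]) -
                        np.asarray(white_model_pose["position"]))
    dyaw = abs(real_pose["yaw_deg"] - white_model_pose["yaw_deg"]) % 360
    return dp, min(dyaw, 360 - dyaw)  # wrap the angle into [0, 180]

def scene_meets_accuracy(buildings, max_pos_m=0.5, max_yaw_deg=5.0):
    """Display the scene only if every building's pose difference is in range;
    otherwise the scene is discarded or re-acquired."""
    for b in buildings:
        dp, dyaw = pose_delta(b["real_pose"], b["white_model_pose"])
        if dp > max_pos_m or dyaw > max_yaw_deg:
            return False
    return True
```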
For example, when the first electronic device determines that the pose difference of every building in the second virtual digital scene falls within the preset range, it can first display the virtual digital content display interface shown in (1) of FIG. 6j, which can display the second virtual digital scene and the first target virtual digital content shown in FIG. 6i, as well as an operation button 607 prompting the user to turn off the building white models. The user can select button 607 to turn off the white models of buildings 1 and 2 in the second virtual digital scene shown in FIG. 6i; when the first electronic device detects this selection, in response it can display the interface shown in (2) of FIG. 6j, which can display the first virtual digital scene and the first target virtual digital content, as well as an operation button 608 prompting the user to turn the white models back on; the user can select button 608 to turn the white models of buildings 1 and 2 back on.
It can be seen that, by superimposing the first target virtual digital content on the first position in the target virtual digital scene, displaying the content at that position, and showing the superimposed scene on its screen, the first electronic device can simulate the display effect of the first target virtual digital content in the real scene corresponding to the target virtual digital scene, so the user can view that display effect in the real-world scene without going on site.
III. Virtual digital content interaction
In the embodiments of this application, after the first electronic device displays virtual digital content in the target virtual digital scene, the user can operate the second electronic device to interact with the content. The following description takes as an example the first electronic device displaying the first target virtual digital content at the first position in the target virtual digital scene.
The user can enter the AR scene; when the second electronic device detects the user's operation of choosing to enter the AR scene, in response it can capture and display an image of the first real scene. The user can operate the first electronic device to place the first target virtual digital content at the first position in the target virtual digital scene; when the first real scene is the real scene corresponding to the target virtual digital scene, the second electronic device can display the placed content at the first position in the first real scene, where the first position in the first real scene is the position, in the first real scene, corresponding to the first position in the target virtual digital scene. For example, when the first electronic device detects the user's placement operation, in response it can send first request information to the second electronic device, the first request information being used to request that the second electronic device display the placed content at the first position in the first real scene when the first real scene it displays is the real scene corresponding to the target virtual digital scene; upon receiving the first request information and determining that the currently displayed first real scene is the real scene corresponding to the target virtual digital scene currently displayed by the first electronic device, the second electronic device can display the first target virtual digital content at the first position in the first real scene.
It should be understood that the first position in the first real scene may be the position, in the first real scene, corresponding to the first position in the target virtual digital scene, or a distance may exist between them that is less than or equal to the first threshold, for example 100 centimeters. For example, based on the real-world three-dimensional coordinate system, the second electronic device can obtain three-dimensional coordinate 1 of the first position in the first real scene, the first electronic device can obtain three-dimensional coordinate 2 of the first position in the target virtual digital scene, and the distance between the two positions can then be derived from coordinates 1 and 2. Likewise, an error may exist between the orientation of the content placed at the first position in the first real scene and the orientation of the content placed at the first position in the target virtual digital scene, and the error may be less than or equal to a first angle threshold, for example 3 degrees. For example, based on the real-world three-dimensional coordinate system, the second electronic device can obtain rotation value 1 of the content at the first position in the first real scene, the first electronic device can obtain rotation value 2 of the content at the first position in the target virtual digital scene, and the angle difference between the two rotation values can then be derived from them. The first electronic device can obtain coordinate 2 and rotation value 2 of the virtual digital content from a server, and the second electronic device can obtain coordinate 1 and rotation value 1 from the server; this application imposes no specific limitation.
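These two consistency checks reduce to a Euclidean distance between three-dimensional coordinates 1 and 2 and an angular difference between rotation values 1 and 2. A minimal sketch under that reading (the helper names are assumptions, and rotation is simplified to a single yaw angle):

```python
import numpy as np

FIRST_DISTANCE_THRESHOLD_M = 1.0  # the "first threshold", e.g. 100 centimetres
FIRST_ANGLE_THRESHOLD_DEG = 3.0   # the "first angle threshold", e.g. 3 degrees

def positions_consistent(coord1, coord2, threshold=FIRST_DISTANCE_THRESHOLD_M):
    """coord1: first position in the real scene; coord2: first position in the
    virtual scene, both as real-world (x, y, z) coordinates."""
    return np.linalg.norm(np.asarray(coord1) - np.asarray(coord2)) <= threshold

def orientations_consistent(rot1_deg, rot2_deg, threshold=FIRST_ANGLE_THRESHOLD_DEG):
    """rot1/rot2: rotation values of the content in the real and virtual scenes."""
    diff = abs(rot1_deg - rot2_deg) % 360
    return min(diff, 360 - diff) <= threshold

# Both checks must pass for the placement on the two devices to count as synchronized.
ok = (positions_consistent((1.0, 2.0, 0.0), (1.3, 2.2, 0.0))
      and orientations_consistent(42.0, 44.0))
```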
For example, as shown in FIG. 6a, when the second electronic device detects the user's operation of selecting "Shazhou World covered", in response it can display the real scene display interface shown in (1) of FIG. 6k, which displays the first real scene; the curtains, sofa, walls and door are images of the first real scene shot in real time by the second electronic device. When the first electronic device detects the user's operation of determining the target virtual digital scene, in response it can display the target virtual digital scene display interface shown in (2) of FIG. 6k, which may include the buildings of the target virtual digital scene, such as the sofa, walls and door. When the first electronic device detects the user placing the first target virtual digital content at the first position in the target virtual digital scene, in response it can display the content there, for example the whale in (2) of FIG. 6k, and it can also send first request information to the second electronic device. The second electronic device can receive the first request information from the first electronic device and, when it determines that the currently displayed first real scene is the real scene corresponding to the target virtual digital scene currently displayed by the first electronic device, as in the real scene display interface shown in (3) of FIG. 6k, the second electronic device can display the first real scene on its screen and display the first target virtual digital content (the whale) at the first position in the first real scene, where the first real scene in (3) of FIG. 6k is the real scene corresponding to the target virtual digital scene in (2) of FIG. 6k, and the first position in (3) of FIG. 6k is the position, in the first real scene, corresponding to the first position in (2) of FIG. 6k.
It should be understood that a distance may exist between the first position in the first real scene shown in (3) of FIG. 6k and the first position in the target virtual digital scene shown in (2) of FIG. 6k, and the distance may be less than or equal to the first threshold, for example 100 centimeters; an error may also exist between the orientations of the first target virtual digital content displayed at the two positions, and the error may be less than or equal to the first angle threshold, for example 3 degrees; this application imposes no specific limitation.
In the embodiments of this application, the first electronic device can edit the target virtual digital content displayed in the target virtual digital scene and synchronously update the edited content in the first real scene displayed on the screen of the second electronic device. For example, the user can edit the first target virtual digital content; when the first electronic device detects the editing operation, in response it can send first editing information to the second electronic device, the first editing information including information about the editing performed by the first electronic device on the first target virtual digital content displayed on its own screen. Upon receiving the first editing information from the first electronic device, the second electronic device can synchronize the edit to the first target virtual digital content displayed on its own screen, so that the content displayed on the screens of the two devices is updated synchronously.
For example, the first electronic device can display the target virtual digital scene display interface shown in (1) of FIG. 6l, where the sofa, walls and door are buildings of the target virtual digital scene and the whale is the first target virtual digital content displayed at the first position of the scene. The user can move the whale from the first position of the target virtual digital scene to a second position of the scene; when the first electronic device detects the move, in response it can display the interface shown in (2) of FIG. 6l and send the first editing information to the second electronic device, the information including the fact that the first electronic device moved the whale from the first position to the second position of the target virtual digital scene. Upon receiving the first editing information, the second electronic device can move the whale from the first position of the first real scene to the second position of the first real scene and display the real scene display interface shown in (3) of FIG. 6l, where the second position in the first real scene is the position, in the first real scene, corresponding to the second position in the target virtual digital scene.
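The editing synchronization described here is, in effect, a small message protocol: the editing device serializes what changed (the "first editing information"), and the peer replays the change on its own displayed copy. The sketch below shows what such a message and its replay could look like; the schema, field names and transport are assumptions, since the embodiments do not define a wire format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EditInfo:
    content_id: str                    # which virtual digital content was edited
    action: str                        # "move" | "resize" | "rotate" | "delete"
    new_position: tuple | None = None  # target coordinates after a move
    new_scale: float | None = None
    new_yaw_deg: float | None = None

def send_edit(sock, edit: EditInfo):
    """Editing device: serialize the edit and push it to the peer device."""
    sock.sendall(json.dumps(asdict(edit)).encode() + b"\n")

def apply_edit(displayed_contents, edit: EditInfo):
    """Peer device: replay the edit on its locally displayed content."""
    item = displayed_contents[edit.content_id]
    if edit.action == "move":
        item["position"] = edit.new_position
    elif edit.action == "resize":
        item["scale"] = edit.new_scale
    elif edit.action == "rotate":
        item["yaw_deg"] = edit.new_yaw_deg
    elif edit.action == "delete":
        del displayed_contents[edit.content_id]
```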
It should be understood that a distance may exist between the second position in the first real scene shown in (3) of FIG. 6l and the position, in the first real scene, corresponding to the second position in the target virtual digital scene shown in (2) of FIG. 6l, and the distance may be less than or equal to a second threshold, for example 100 centimeters; an error may also exist between the orientation of the content placed at the second position in (3) of FIG. 6l and that of the content placed at the second position in (2) of FIG. 6l, and the error may be less than or equal to a second angle threshold, for example 3 degrees; for details, see the related descriptions of the other embodiments above, which are not repeated here.
In some embodiments, when the first electronic device detects the user's operation of editing the first target virtual digital content, in response the first electronic device can also generate and save the first editing information, which includes information about the editing performed by the first electronic device on the content displayed on its own screen. The first electronic device can edit, according to the first editing information, the first target virtual digital content displayed in the first real scene; when the first electronic device displays the first real scene, it can display the edited first target virtual digital content there.
In the embodiments of this application, after the second electronic device captures and displays an image of the first real scene, it can also display on its screen any one or more of the images or text corresponding to at least one candidate virtual digital content. The user can determine second target virtual digital content from the candidates shown on the display and move it to a third position in the first real scene; when the second electronic device detects the user's operation of selecting and moving any one or more of the images or text, in response it can display the user's chosen second target virtual digital content at the third position in the first real scene.
It should be understood that the candidate virtual digital content may be content stored by the second electronic device, such as officially preset content, or content created and uploaded by users; this application imposes no specific limitation.
For example, the second electronic device can display the real scene display interface shown in (1) of FIG. 6m, which displays the first real scene and a "virtual digital content library" icon 609. The user can view candidate content by selecting icon 609; when the second electronic device detects this selection, in response it can display on its screen any one or more of the images or text corresponding to at least one candidate virtual digital content, for example icons named "virtual digital content 1" through "virtual digital content 4". The user can pick the second target virtual digital content from the candidates and move it to the third position in the first real scene; when the second electronic device detects the selection and move, in response it can display the chosen content at the third position. For example, when the second electronic device detects the user selecting the "virtual digital content 1" icon and moving it to the third position of the first real scene, in response it can display "virtual digital content 1" at that position, obtaining the real scene display interface shown in (2) of FIG. 6m, where the puppy is the second target virtual digital content displayed at the third position of the first real scene.
In the embodiments of this application, after the user operates the second electronic device to place the second target virtual digital content at the third position of the first real scene, when the second electronic device detects the placement operation, in response it can also send second request information to the first electronic device displaying the target virtual digital scene. The second request information is used to request that the first electronic device display the second target virtual digital content at a third position of the target virtual digital scene when the first real scene is the real scene corresponding to the target virtual digital scene, where the third position of the target virtual digital scene is the position, in the target virtual digital scene, corresponding to the third position of the first real scene.
It should be understood that a distance may exist between the third position in the first real scene and the position, in the first real scene, corresponding to the third position in the target virtual digital scene, and the distance may be less than or equal to a third threshold, for example 100 centimeters; an error may exist between the orientations of the second target virtual digital content placed at the two positions, and the error may be less than or equal to a third angle threshold, for example 3 degrees; for details, see the related descriptions, in the other embodiments above, of the first position in the first real scene and the first position in the target virtual digital scene, which are not repeated here.
For example, when the second electronic device detects the user placing the second target virtual digital content at the third position of the first real scene, in response it can display the real scene display interface shown in (2) of FIG. 6m, which includes the first real scene, where the puppy is the second target virtual digital content at the third position. It can also send the second request information to the first electronic device. The first electronic device receives the second request information from the second electronic device and, when it determines that the target virtual digital scene it currently displays is the virtual digital scene corresponding to the first real scene currently displayed by the second electronic device, as in the target virtual digital scene display interface shown in (3) of FIG. 6m, the first electronic device can display the target virtual digital scene and the second target virtual digital content on its screen, where the puppy is the second target virtual digital content displayed at the third position of the target virtual digital scene, and the third position of the target virtual digital scene in (3) of FIG. 6m is the position corresponding to the third position of the first real scene in (2) of FIG. 6m.
It should be understood that a distance may exist between the third position of the first real scene shown in (2) of FIG. 6m and the position, in the first real scene, corresponding to the third position of the target virtual digital scene shown in (3) of FIG. 6m, and the distance may be less than or equal to the third threshold, for example 100 centimeters; an error may exist between the orientations of the second target virtual digital content placed at the two positions, and the error may be less than or equal to the third angle threshold, for example 3 degrees; for details, see the related descriptions above, which are not repeated here.
In the embodiments of this application, the second electronic device can also edit the second target virtual digital content displayed in the first real scene and synchronously update the edited content in the target virtual digital scene displayed on the screen of the first electronic device; the specific implementation is the same as that, described above, in which the first electronic device edits the first target virtual digital content displayed in the target virtual digital scene and synchronously updates it in the first real scene displayed by the second electronic device, and is not repeated here.
For example, the second electronic device can display the real scene display interface shown in (1) of FIG. 6n, which displays the first real scene, where the puppy is the second target virtual digital content displayed at the second position of the first real scene. The user can delete the puppy from the first real scene; when the second electronic device detects the deletion operation, in response it can display the interface shown in (2) of FIG. 6n and send second editing information to the first electronic device, the information including the fact that the second electronic device deleted the puppy from the first real scene. The first electronic device can display the target virtual digital scene display interface shown in (3) of FIG. 6m; upon receiving the second editing information from the second electronic device, the first electronic device can delete the puppy from the target virtual digital scene according to the second editing information and display the interface shown in (3) of FIG. 6n.
In some embodiments, when the second electronic device detects the user's operation of editing the second target virtual digital content, in response the second electronic device can also generate and save the second editing information, which includes information about the editing performed by the second electronic device on the content displayed on its own screen. The second electronic device can edit, according to the second editing information, the second target virtual digital content displayed in the first real scene; when the second electronic device displays the target virtual digital scene, it can display the edited second target virtual digital content there.
In this way, after the on-site user at the second electronic device edits the virtual digital content displayed in the real-world scene, the first electronic device updates in real time the virtual digital content displayed in the panorama of the virtual digital scene; or, after the off-site user at the first electronic device edits the virtual digital content displayed in the panorama of the virtual digital scene, the second electronic device updates in real time the virtual digital content displayed in the real-world scene, which improves the user experience.
IV. Virtual digital content roaming
In the embodiments of this application, the target virtual digital scene includes multiple scenes. After displaying the target virtual digital scene on its screen, the first electronic device can display an operation button for starting a switch of the target virtual digital scene. The user can start the switch by selecting this button; when the first electronic device detects the selection, in response it can switch the target virtual digital scene, for example from scene 1 to scene 2.
In some embodiments, after switching the target virtual digital scene, the first electronic device can display the switched scene on its screen and can also display an operation button for re-determining the pose information of the first target virtual digital content. The user can re-determine the pose information by selecting this button; when the first electronic device detects the selection, in response it can superimpose the first target virtual digital content on the first panorama of the switched scene, according to the pose information of each panorama of the switched scene, the pose information of each building in the switched scene and the re-determined pose information of the content, and display the result on its screen, thereby enabling the virtual digital content to roam among different scenes. For the specific execution of these steps, refer to the related descriptions in the virtual digital content display section above, which are not repeated here.
In some embodiments, when displaying the switched target virtual digital scene on its screen, the first electronic device can also directly superimpose the first target virtual digital content on the first panorama of the switched scene and display the result on its screen, thereby enabling the virtual digital content to roam among different scenes.
FIG. 7 is a schematic flowchart of a virtual digital content display method according to an embodiment of this application. As shown in FIG. 7, the flow of the method may include:
S701: In response to a first operation triggered by a user, the first electronic device determines a target virtual digital scene from at least one candidate virtual digital scene.
For the way the first electronic device responds to the user's operation and determines the target virtual digital scene from at least one candidate virtual digital scene, see the description in "I. Virtual digital scene display", which is not repeated here.
S702: In response to a second operation triggered by the user, the first electronic device determines first target virtual digital content from at least one candidate virtual digital content and displays the first target virtual digital content at a first position in the target virtual digital scene.
For the way the first electronic device responds to the user's operation, determines the first target virtual digital content from at least one candidate virtual digital content and displays it at the first position in the target virtual digital scene, see the description in "II. Virtual digital content display", which is not repeated here.
S703: In response to a third operation triggered by the user, the second electronic device captures and displays an image of a first real scene and, when the first real scene is the real scene corresponding to the target virtual digital scene, displays the first target virtual digital content at a first position in the first real scene.
For the way the second electronic device responds to the user's operation, captures and displays the image of the first real scene and, when the first real scene is the real scene corresponding to the target virtual digital scene, displays the first target virtual digital content at the first position in the first real scene, see the description in "III. Virtual digital content interaction", which is not repeated here.
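Read end to end, S701 to S703 form a simple two-device orchestration. The sketch below strings the steps together in that order; the device classes and method names are illustrative assumptions only, standing in for the interfaces the two electronic devices would actually expose.

```python
def run_display_flow(first_device, second_device, scene_library, content_library):
    """Illustrative end-to-end flow for S701-S703."""
    # S701: the user picks the target virtual digital scene on the first device.
    target_scene = first_device.pick_scene(scene_library)

    # S702: the user picks the first target virtual digital content and places
    # it at a first position in the target virtual digital scene.
    content = first_device.pick_content(content_library)
    first_device.place(content, position=target_scene.first_position)

    # S703: the second device captures the first real scene and, once that real
    # scene is recognized as corresponding to target_scene, displays the same
    # content at the corresponding first position.
    real_scene = second_device.capture_real_scene()
    if second_device.matches(real_scene, target_scene):
        second_device.show(content, position=real_scene.first_position)
```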
In the embodiments of this application, the first electronic device can also edit the first target virtual digital content; when it does, the second electronic device can synchronously display the edited first target virtual digital content; see the description in "III. Virtual digital content interaction", which is not repeated here.
It should be noted that the specific implementation flows provided in the above examples merely illustrate the method flows applicable to the embodiments of this application; the execution order of the steps can be adjusted according to actual needs, and other steps can be added or some steps removed.
When the second electronic device displays the first real scene, the first and second electronic devices can also implement another virtual digital content display method, shown in FIG. 8. As shown in FIG. 8, the flow of the method may include:
S801: In response to a fourth operation triggered by a user, the second electronic device determines second target virtual digital content from at least one candidate virtual digital content and displays the second target virtual digital content at a second position in the first real scene.
For the way the second electronic device responds to the user's operation, determines the second target virtual digital content from at least one candidate virtual digital content and displays it at the second position in the first real scene, see the description in "III. Virtual digital content interaction", which is not repeated here.
S802: When the first real scene is the real scene corresponding to the target virtual digital scene, the first electronic device displays the second target virtual digital content at a second position in the target virtual digital scene.
For the way the first electronic device displays the second target virtual digital content at the second position in the target virtual digital scene, see the description in "III. Virtual digital content interaction", which is not repeated here.
In the embodiments of this application, the second electronic device can also edit the second target virtual digital content; when it does, the first electronic device can synchronously display the edited second target virtual digital content; see the description in "III. Virtual digital content interaction", which is not repeated here.
It should be noted that the specific implementation flows provided in the above examples merely illustrate the method flows applicable to the embodiments of this application; the execution order of the steps can be adjusted according to actual needs, and other steps can be added or some steps removed.
Based on the above embodiments and the same concept, an embodiment of this application further provides a first electronic device, which is used to implement the method performed by the first electronic device provided by the embodiments of this application.
As shown in FIG. 9, the first electronic device 900 may include a memory 901, one or more processors 902, and one or more computer programs (not shown in the figure). The above components may be coupled through one or more communication buses 903. Optionally, when the first electronic device 900 is used to implement the method performed by the first electronic device provided by the embodiments of this application, the first electronic device 900 may also include a display 904.
One or more computer programs (code) are stored in the memory 901, and the one or more computer programs include computer instructions; the one or more processors 902 call the computer instructions stored in the memory 901, so that the first electronic device 900 performs the virtual digital content display method provided by the embodiments of this application. The display 904 is used to display images, videos, application interfaces and other related user interfaces.
In specific implementations, the memory 901 may include high-speed random access memory and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices. The memory 901 can store an operating system (hereinafter referred to as the system), for example an embedded operating system such as ANDROID, IOS, WINDOWS or LINUX. The memory 901 can be used to store the implementation programs of the embodiments of this application, and can also store a network communication program that can be used to communicate with one or more additional devices, one or more user devices and one or more network devices. The one or more processors 902 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the programs of the solutions of this application.
It should be noted that FIG. 9 is merely one implementation of the first electronic device 900 provided by the embodiments of this application; in practical applications, the first electronic device 900 may also include more or fewer components, which is not limited here.
Based on the above embodiments and the same concept, an embodiment of this application further provides a second electronic device, which is used to implement the method performed by the second electronic device provided by the embodiments of this application.
As shown in FIG. 10, the second electronic device 1000 may include a memory 1001, one or more processors 1002, and one or more computer programs (not shown in the figure). The above components may be coupled through one or more communication buses 1003. Optionally, when the second electronic device 1000 is used to implement the method performed by the second electronic device provided by the embodiments of this application, the second electronic device 1000 may also include a display 1004.
One or more computer programs (code) are stored in the memory 1001, and the one or more computer programs include computer instructions; the one or more processors 1002 call the computer instructions stored in the memory 1001, so that the second electronic device 1000 performs the virtual digital content display method provided by the embodiments of this application. The display 1004 is used to display images, videos, application interfaces and other related user interfaces.
In specific implementations, the memory 1001 may include high-speed random access memory and may also include non-volatile memory, such as one or more disk storage devices, flash memory devices or other non-volatile solid-state storage devices. The memory 1001 can store an operating system (hereinafter referred to as the system), for example an embedded operating system such as ANDROID, IOS, WINDOWS or LINUX. The memory 1001 can be used to store the implementation programs of the embodiments of this application, and can also store a network communication program that can be used to communicate with one or more additional devices, one or more user devices and one or more network devices. The one or more processors 1002 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the programs of the solutions of this application.
It should be noted that FIG. 10 is merely one implementation of the second electronic device 1000 provided by the embodiments of this application; in practical applications, the second electronic device 1000 may also include more or fewer components, which is not limited here.
Based on the above embodiments and the same concept, an embodiment of this application further provides a computer-readable storage medium that stores a computer program; when the computer program is run on a computer, it causes the computer to execute the method, among the methods provided in the above embodiments, that is performed by the first electronic device or the second electronic device.
Based on the above embodiments and the same concept, an embodiment of this application further provides a computer program product that includes a computer program or instructions; when the computer program or instructions are run on a computer, the computer is caused to execute the method, among the methods provided in the above embodiments, that is performed by the first electronic device or the second electronic device.
The methods provided by the embodiments of this application can be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present invention are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network device, a user equipment, or another programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wired means (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or by wireless means (for example, infrared, radio, or microwave). A computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. Available media can be magnetic media (for example, floppy disks, hard disks, tapes), optical media (for example, digital video discs (DVD)), or semiconductor media (for example, SSDs).
Obviously, those skilled in the art can make various changes and modifications to this application without departing from the scope of this application. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their technical equivalents, this application is also intended to include these changes and variations.

Claims (16)

  1. A virtual digital content display system, wherein the virtual digital content display system comprises a first electronic device and a second electronic device;
    the first electronic device is configured to: in response to a first operation triggered by a user, determine a target virtual digital scene from at least one candidate virtual digital scene;
    the first electronic device is further configured to: in response to a second operation triggered by a user, determine first target virtual digital content from at least one candidate virtual digital content, and display the first target virtual digital content at a first position in the target virtual digital scene;
    the second electronic device is configured to: in response to a third operation triggered by a user, capture and display an image of a first real scene;
    the second electronic device is further configured to: when the first real scene is the real scene corresponding to the target virtual digital scene, display the first target virtual digital content at a first position in the first real scene.
  2. The system according to claim 1, wherein
    the first position in the first real scene is the position, in the first real scene, corresponding to the first position in the target virtual digital scene; or
    the distance between the first position in the first real scene and the position, in the first real scene, corresponding to the first position in the target virtual digital scene is less than or equal to a first threshold.
  3. The system according to claim 1, wherein
    the second electronic device is further configured to: in response to a fourth operation triggered by a user, determine second target virtual digital content from at least one candidate virtual digital content, and display the second target virtual digital content at a second position in the first real scene;
    the first electronic device is further configured to: when the first real scene is the real scene corresponding to the target virtual digital scene, display the second target virtual digital content at a second position in the target virtual digital scene, wherein the second position in the first real scene is the position, in the first real scene, corresponding to the second position in the target virtual digital scene, or the distance between the second position in the first real scene and the position, in the first real scene, corresponding to the second position in the target virtual digital scene is less than or equal to a second threshold.
  4. The system according to any one of claims 1 to 3, wherein
    the first electronic device is further configured to perform, in response to a fifth operation triggered by a user, any one or more of the following operations:
    adjusting the position of the first target virtual digital content in the target virtual digital scene; or
    adjusting the size of the first target virtual digital content; or
    adjusting the orientation of the first target virtual digital content; or
    deleting the first target virtual digital content;
    the second electronic device is further configured to perform, in response to a sixth operation triggered by a user, any one or more of the following operations:
    adjusting the position of the first target virtual digital content in the first real scene; or
    adjusting the size of the first target virtual digital content; or
    adjusting the orientation of the first target virtual digital content; or
    deleting the first target virtual digital content.
  5. The system according to claim 4, wherein
    the first electronic device is further configured to: in response to the fifth operation triggered by the user, send first editing information to the second electronic device, wherein the first editing information comprises information about the editing performed by the first electronic device on the first target virtual digital content displayed by the first electronic device;
    the second electronic device is further configured to: when receiving the first editing information from the first electronic device, edit the displayed first target virtual digital content according to the first editing information, and display the edited first target virtual digital content in the first real scene.
  6. The system according to claim 4, wherein
    the second electronic device is further configured to: in response to the sixth operation triggered by the user, send second editing information to the first electronic device, wherein the second editing information comprises information about the editing performed by the second electronic device on the first target virtual digital content displayed by the second electronic device;
    the first electronic device is further configured to: when receiving the second editing information from the second electronic device, edit the displayed first target virtual digital content according to the second editing information, and display the edited first target virtual digital content in the target virtual digital scene.
  7. The system according to any one of claims 1 to 6, wherein
    the first electronic device is further configured to: before responding to the first operation triggered by the user, display any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene;
    that the first electronic device is configured to determine the target virtual digital scene from the at least one candidate virtual digital scene in response to the first operation triggered by the user comprises: determining the target virtual digital scene in response to the first operation in which the user selects any one or more of the two-dimensional map or the text.
  8. A virtual digital content display method, applied to a first electronic device, wherein the method comprises:
    in response to a first operation triggered by a user, determining, by the first electronic device, a target virtual digital scene from at least one candidate virtual digital scene;
    in response to a second operation triggered by the user, determining, by the first electronic device, first target virtual digital content from at least one candidate virtual digital content, and displaying the first target virtual digital content at a first position in the target virtual digital scene;
    in response to a third operation triggered by the user, capturing and displaying, by the first electronic device, an image of a first real scene;
    when the first real scene is the real scene corresponding to the target virtual digital scene, displaying, by the first electronic device, the first target virtual digital content at a first position in the first real scene, wherein the first target virtual digital content is the virtual digital content displayed by the first electronic device at the first position in the target virtual digital scene, and the first position in the first real scene is the position, in the first real scene, corresponding to the first position in the target virtual digital scene, or the distance between the first position in the first real scene and the position, in the first real scene, corresponding to the first position in the target virtual digital scene is less than or equal to a first threshold.
  9. The method according to claim 8, wherein the method further comprises:
    in response to a fourth operation triggered by the user, determining, by the first electronic device, second target virtual digital content from at least one candidate virtual digital content, and displaying the second target virtual digital content at a second position in the first real scene;
    when the first real scene is the real scene corresponding to the target virtual digital scene, displaying, by the first electronic device, the second target virtual digital content at a second position in the target virtual digital scene, wherein the second target virtual digital content is the virtual digital content displayed by the first electronic device at the second position in the first real scene, and the second position in the first real scene is the position, in the first real scene, corresponding to the second position in the target virtual digital scene, or the distance between the second position in the first real scene and the position, in the first real scene, corresponding to the second position in the target virtual digital scene is less than or equal to a second threshold.
  10. The method according to claim 8 or 9, wherein the method further comprises:
    in response to a fifth operation triggered by the user, editing, by the first electronic device, the first target virtual digital content in the target virtual digital scene, so that the first electronic device performs any one or more of the following operations:
    adjusting the position of the first target virtual digital content in the target virtual digital scene; or
    adjusting the size of the first target virtual digital content; or
    adjusting the orientation of the first target virtual digital content; or
    deleting the first target virtual digital content;
    in response to a sixth operation triggered by the user, editing, by the first electronic device, the first target virtual digital content in the first real scene, so that the first electronic device performs any one or more of the following operations:
    adjusting the position of the first target virtual digital content in the first real scene; or
    adjusting the size of the first target virtual digital content; or
    adjusting the orientation of the first target virtual digital content; or
    deleting the first target virtual digital content.
  11. The method according to claim 10, wherein the method further comprises:
    in response to the fifth operation triggered by the user, generating and saving, by the first electronic device, first editing information, wherein the first editing information comprises information about the editing performed by the first electronic device on the first target virtual digital content in the target virtual digital scene;
    the first electronic device is further configured to: when generating and saving the first editing information, edit, according to the first editing information, the first target virtual digital content displayed in the first real scene, and display the edited first target virtual digital content in the first real scene.
  12. The method according to claim 10, wherein the method further comprises:
    in response to the sixth operation triggered by the user, generating and saving, by the first electronic device, second editing information, wherein the second editing information comprises information about the editing performed by the first electronic device on the first target virtual digital content in the first real scene;
    the first electronic device is further configured to: when generating and saving the second editing information, edit the displayed first target virtual digital content according to the second editing information, and display the edited first target virtual digital content in the first real scene.
  13. The method according to any one of claims 8 to 12, wherein the method further comprises:
    before responding to the first operation triggered by the user, displaying any one or more of a two-dimensional map or text corresponding to the at least one candidate virtual digital scene;
    wherein determining, by the first electronic device, the target virtual digital scene from the at least one candidate virtual digital scene comprises: determining the target virtual digital scene in response to the first operation in which the user selects any one or more of the two-dimensional map or the text.
  14. An electronic device, wherein the electronic device comprises:
    a processor, a memory, and one or more programs;
    wherein the one or more programs are stored in the memory, the one or more programs comprise instructions, and the instructions, when executed by the processor, cause the electronic device to perform the method according to any one of claims 8 to 13.
  15. A computer-readable storage medium, wherein the computer-readable storage medium is configured to store a computer program, and the computer program, when run on a computer, causes the computer to perform the method according to any one of claims 8 to 13.
  16. A computer program product, comprising a computer program, wherein the computer program, when run on a computer, causes the computer to perform the method according to any one of claims 8 to 13.
PCT/CN2023/104001 2022-08-31 2023-06-29 Virtual digital content display system and method, and electronic device WO2024045854A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211052147.6A 2022-08-31 2022-08-31 Virtual digital content display system and method, and electronic device
CN202211052147.6 2022-08-31

Publications (1)

Publication Number Publication Date
WO2024045854A1 true WO2024045854A1 (zh) 2024-03-07

Family

ID=90073861

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/104001 WO2024045854A1 (zh) 2022-08-31 2023-06-29 Virtual digital content display system and method, and electronic device

Country Status (2)

Country Link
CN (1) CN117671203A (zh)
WO (1) WO2024045854A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571532A (zh) * 2015-02-04 2015-04-29 NetEase Youdao Information Technology (Beijing) Co., Ltd. Method and apparatus for realizing augmented reality or virtual reality
CN107111996A (zh) * 2014-11-11 2017-08-29 Bent Image Lab, LLC Real-time shared augmented reality experience
CN108479060A (zh) * 2018-03-29 2018-09-04 Lenovo (Beijing) Co., Ltd. Display control method and electronic device
CN111078003A (zh) * 2019-11-27 2020-04-28 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Data processing method and apparatus, electronic device and storage medium
WO2021190280A1 (en) * 2020-03-24 2021-09-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. System and method for augmented tele-cooperation
CN113672087A (zh) * 2021-08-10 2021-11-19 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Remote interaction method, apparatus and system, electronic device and storage medium


Also Published As

Publication number Publication date
CN117671203A (zh) 2024-03-08

Similar Documents

Publication Publication Date Title
US11715268B2 (en) Video clip object tracking
KR102635373B1 (ko) Image processing method and apparatus, terminal, and computer-readable storage medium
US10659684B2 (en) Apparatus and method for providing dynamic panorama function
WO2021036571A1 (zh) Desktop editing method and electronic device
US11762529B2 (en) Method for displaying application icon and electronic device
CN109191549B (zh) Method and apparatus for displaying animation
WO2021000841A1 (zh) Method for generating user avatar and electronic device
CN115297200A (zh) Touch control method for a device with a foldable screen, and foldable-screen device
KR20150083636A (ko) Method and apparatus for operating images in an electronic device
CN114666427B (zh) Image display method, electronic device and storage medium
CN116095413B (zh) Video processing method and electronic device
WO2022033272A1 (zh) Image processing method and electronic device
CN115442509B (zh) Shooting method, user interface and electronic device
CN114708289A (zh) Image frame prediction method and electronic device
CN112822544A (zh) Video material file generation method, video synthesis method, device and medium
CN113448658A (zh) Screenshot processing method, graphical user interface and terminal
WO2023005751A1 (zh) Rendering method and electronic device
US20220264176A1 (en) Digital space management method, apparatus, and device
WO2024045854A1 (zh) Virtual digital content display system and method, and electronic device
CN113485596B (zh) Virtual model processing method and apparatus, electronic device and storage medium
CN115032640A (zh) Gesture recognition method and terminal device
KR20170027136A (ko) Mobile terminal and control method therefor
US20230114178A1 (en) Image display method and electronic device
KR102069228B1 (ko) Image processing method and apparatus for painterly expression of an image
CN114968423B (zh) Long-page screenshot method and electronic device

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 23858892

Country of ref document: EP

Kind code of ref document: A1